Research suggests that developing diverse disciplines fuels creativity and productivity. For example, studies show Nobel Prize-winning scientists are about 25 times more likely to sing, dance, or act than the average scientist, 17 times more likely to create visual art, 12 times more likely to write poetry, and four times more likely to be a musician. Talk about multi-faceted over-achievers.
Experts believe it stems from cross-pollination. Ideas from one discipline can inspire another. World-renowned chef Massimo Bottura, for instance, has said that art inspires his Michelin-starred cooking. What’s more, shifting mental gears can give your brain a much-needed rest, clearing the space for creative breakthroughs.
I could not have said it better. Here is a link to the full article.
New research on large language models suggests LLMs are exhibiting certain human-like qualities, notably in how they organize information geometrically and conceptually. If this is true, how we prompt a generative AI system might also need to change. If these systems respond well to more abstract, metaphorical or conceptual prompts, that would imply we need people who can think this way.
Whether or not you accept the premise that some LLMs are developing in this way, we still have our own experiences using generative AI and know that the creative possibilities appear to be endless. It might, therefore, be a good time to ask ourselves if, indeed, we are entering the age of the “creative generalist.”
This is something that Daniel Pink suggested in his 2006 book A Whole New Mind, when he said that in the future there might come a time when we need more “right-brained” individuals to piece together all the disparate technologies in sensible, meaningful ways. In such a world, vision, philosophy, history, art, morality, ethics and musical knowledge might hold a place equal to that of the engineer, scientist or technologist. The question is, is anyone else noticing this shift? Are people seeing the potential? Are companies making strides to bring systems thinkers, philosophers and ethicists into their AI departments? Are schools of liberal arts taking note and adjusting their curricula?
Should we, for example, make changes at schools and universities such that AI is not taught merely as an extension of computer science, but is taught in schools for the liberal arts?
Certainly, there are many examples we can look at. One that immediately comes to mind is the creative use of datasets. For example, something I’ve started exploring as an AI content strategist is loading datasets of personality and intelligence inventories, like Myers-Briggs and Gardner’s multiple intelligences, and then asking the AI to help me better understand how an audience or a company is most likely to process information. Other things come to mind too, such as training AI on a piece of music, like the finale of Shostakovich’s Symphony No. 7, and then asking it to translate that into an equally powerful conclusion to a piece of writing. What about a sustainable business model that resembles a natural process like photosynthesis, or documenting the ecosystem of a forest and modeling the design of a community college on it? These are all things, by the way, that AI can do right now.
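To make the dataset idea a little more concrete, here is a minimal sketch of how such a prompt could be assembled before sending it to an LLM. The framework summaries, the audience description and the `build_prompt` helper are all illustrative placeholders of my own, not a real inventory dataset or a specific tool:

```python
# Minimal sketch: combine brief summaries of personality/intelligence
# frameworks with an audience description to build one analysis prompt.
# The entries below are abbreviated placeholders, not full inventories.

frameworks = {
    "Myers-Briggs": "16 types built from four axes: E/I, S/N, T/F, J/P.",
    "Gardner's multiple intelligences": (
        "linguistic, logical-mathematical, spatial, musical, "
        "bodily-kinesthetic, interpersonal, intrapersonal, naturalistic"
    ),
}

audience = "Mid-career community-college faculty exploring generative AI."

def build_prompt(frameworks, audience):
    """Assemble the framework summaries and the audience into one prompt."""
    lines = [
        "Using the frameworks below, describe how this audience is "
        "most likely to process information.",
        "",
    ]
    for name, summary in frameworks.items():
        lines.append(f"{name}: {summary}")
    lines.append("")
    lines.append(f"Audience: {audience}")
    return "\n".join(lines)

print(build_prompt(frameworks, audience))
```

In practice the framework summaries would come from a richer dataset, and the resulting prompt would be pasted into (or sent programmatically to) whichever model you are working with.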
I asked GPT-4o: “If certain LLMs are organizing information more geometrically and conceptually, is there a good argument for recruiting more people with creative, conceptually minded brains into the field? Should we, therefore, make changes at schools and universities such that AI is not taught merely as an extension of computer science, but is taught in schools for the liberal arts?”
It replied:
The research paper, The Geometry of Concepts: Sparse Autoencoder Feature Structure, suggests that large language models (LLMs) are organizing information in ways that resemble the functional and structural organization of the human brain. By analyzing the geometry of sparse autoencoders (SAEs), the authors found three levels of conceptual structure: (1) “atomic” small-scale structures, such as parallelograms and trapezoids that encode relationships (e.g., man:woman::king:queen); (2) “brain-like” intermediate-scale modularity, where clusters of related concepts, like math and code, form localized “lobes” similar to those in neural fMRI images; and (3) “galaxy-scale” large-scale structures, where the overall feature space exhibits non-random, fractal-like clustering and power-law eigenvalue distributions. These findings indicate that LLMs are more than linear systems — they are evolving toward complex, hierarchical representations of knowledge, akin to human cognition.
This research has implications for how we approach AI development and education. If LLMs are creating conceptually rich and geometrically organized knowledge spaces, there is an argument for involving more creative, interdisciplinary thinkers — those trained to recognize patterns, connections, and abstract relationships — in the field. This could mean recruiting individuals from liberal arts, philosophy, and design backgrounds, alongside traditional computer scientists. Moreover, it raises the question of whether AI should be taught differently, not just as a technical discipline but as a conceptual, creative practice. Perhaps universities should integrate AI into liberal arts programs, encouraging the development of “creative generalists” who can bridge conceptual thinking and technical expertise. This shift might not only enrich AI research but also lead to more intuitive and human-centered prompt engineering, better suited to the conceptual landscapes LLMs are now navigating.
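The “atomic” parallelogram structure that reply describes is the same geometry made famous by word-embedding analogies, where man:woman::king:queen becomes vector arithmetic. A toy sketch with hand-made 2-D vectors (not real sparse-autoencoder features) shows the idea:

```python
# Toy illustration of the "atomic" parallelogram structure.
# These are hand-made 2-D vectors, not real SAE features: the first
# axis loosely stands for "royalty", the second for "gender".

man   = [0.0, 1.0]    # non-royal, masculine
woman = [0.0, -1.0]   # non-royal, feminine
king  = [5.0, 1.0]    # royal, masculine
queen = [5.0, -1.0]   # royal, feminine

# If the four concepts form a parallelogram, the analogy
# man : woman :: king : queen is simple vector arithmetic:
# king - man + woman should land on queen.
predicted_queen = [k - m + w for k, m, w in zip(king, man, woman)]

print(predicted_queen)  # [5.0, -1.0], i.e. exactly the "queen" vector
```

Real embedding spaces are high-dimensional and noisy, so the analogy only holds approximately there, but the parallelogram intuition is the same.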
Perhaps we have entered the age of the creative generalist.
If you’ve made it this far, you are a good candidate to learn about an upcoming presentation and discussion. On December 21st, I will explore this topic and why I believe there is a good opportunity for creatives, generalists and especially “hyper-generalists” to enter the field of generative AI.
For a deeper exploration of this topic, please see the following:
Could AI demonstrate all the ways that it is not being used? Could it find solutions for some of the biggest problems in the world, like what is happening in Ukraine or Israel, for example? Would it understand what I meant when I said that music might help us to communicate things in the world where language fails?
Before I decided to make the video public, I shared this with a colleague in education. Her response made me realize there is some deeper potential in generative AI than most people realize. In fact, I’m going to include her whole response because when you’re out here experimenting, it’s great to get this kind of validation and I am truly thankful for this.
Note: she agreed to let me share this anonymously.
G’morning Lee ☀️ Thank you so much for sharing your rough cut! 🤓
I was unaware that we were at this level of AI, and the example of its application to “discuss” world issues is nuts. I fully understand the vital importance of prompt engineering. You are a master.
The other lightbulb moment for me was how the expansion of knowledge you offered for consideration went beyond humanity to nature. I just believe so strongly that the inclusion of knowledge beyond our 5 senses is critical to world peace. Opening to the natural world brings us to the root and the once strong connection to the beginning of human-hood and connection to each other and that which connects us all. It is who we really are: spiritual beings living a human existence for a short while. Nature is the path to the spiritual. This is what indigenous cultures exemplified, and that which our greed and ego bastardized.
You were also right on in terms of trauma responses. That is the work of Dr. Gabor Maté (Where Olive Trees Weep), and it runs through discussions amongst his other work: the understanding that we are all acting out our trauma, and what we can do about it. SO much rests on our ability to examine ourselves; our thoughts, biases, justifications, and behaviors.
To include all of these aspects in this AI tool, knowing this is just the start... wow!
Here’s what I’ll offer for consideration: I found that I was initially so wowed by what the AI was offering and its ability that when I actually really cued in and listened for solutions offered, there wasn’t much more there than what you had originally prompted. It summarized and/or restated differently without saying much more. I’m sure that will improve over time but it was my one-time observation.
I say thanks, but there’s a reason the AI acted this way: it had been instructed to do exactly that in the very first sentence of the prompt, i.e., “can you put this together into a coherent narrative?”
But she is right that I was never refuted. The premise was accepted easily, with no pushback. Had the prompt been, “discuss this issue with me as though you are a professor, professional journalist or television news host,” the result would have been a lot different.
Another friend said this:
I watched the whole thing. The music is cool. While I understand your friend’s observation that ChatGPT isn’t necessarily adding new insights, it’s still impressive that it can handle this level of complexity (and with a voice interface, no less). I also like the fact you can interrupt/redirect it midstream to augment your prompt.
The real test, and perhaps my next video segment, will be to see what it does when asked to provide counter arguments or to go into more of a dialogue or discussion mode. Will we have to wait for AGI for that? I’ll find out soon.
If you are curious to know how this will become part of my Summer workshop, you can visit the slideshow here.
If you are interested in setting up a similar workshop, please sign up on my Calendly page.
Take a look at my FlowGPT page and you'll understand what I mean. Things are very different than just a year ago at this time. Not many people then were talking about ChatGPT, LLMs or generative AI. If you mentioned that you were a "prompt engineer," you would have received strange looks. But, it's all here now...and it's not going back in the bottle.
Most of what you see here, on my FlowGPT page and elsewhere, has been free to use for a long time.
Deep creativity takes work, a certain degree of anguish and a lifetime of trial and error.
Explore the courses offered by the AI Imagination Lab, ranging from introductory sessions to deeper dives and hands-on workshops in generative AI. Tailor the learning experience to your style and interests, collaborating with AI for creative breakthroughs.
Introduction to Generative AI: a live 50-minute session exploring the basics of AI creativity, problem-solving and productivity. Cost: $90.
A Deeper Dive Into AI: two 50-minute sessions delving deeper into generative AI, focusing on prompt crafting and personalized use cases. Cost: $165.
The 4-Hour Hands-On Workshop: a full AI workshop offering hands-on experience crafting prompts and learning new tools.
I was a teacher and transitioned to project work and consulting. This created a problem that I was not fully prepared for. Sales! I mean, students show up in your class no matter what. Right? But clients? Yeah, that’s a tougher sale.
So, I started fiddling around with AI and figured out a way for GPT to take over that function. One problem (at least with ChatGPT) was that it would not insert links in the output or conversation. There was no way to transition someone out of the conversation to my Calendly or Stripe apps so they could sign up for my classes or pay for these services.
But, this morning… problem solved. Using the Poe AI platform and Claude, it worked like a breeze. Take a look for yourself.
Now watch as I transition here into sales (haha): if you’d like some training on this or, perhaps, someone to build you a GPT like the one I created (for thousands less than what you’d normally pay), sign up for some time here or contact me directly at lee@glidercell.com.