New research on large language models suggests that LLMs are exhibiting certain human-like qualities, notably in how they organize information geometrically and conceptually. If this is true, how we prompt a generative AI system might also need to change. If these systems respond well to more abstract, metaphorical or conceptual prompts, that would imply we need people who can think in those terms.
Whether or not you accept the premise that some LLMs are developing in this way, our own experience with generative AI tells us that the creative possibilities appear to be endless. It might, therefore, be a good time to ask whether we are, indeed, entering the age of the “creative generalist.”
This is something that Daniel Pink suggested in his 2006 book A Whole New Mind, when he said that there might come a time when we need more “right-brained” individuals to piece together all the disparate technologies in sensible, meaningful ways. In such a world, vision, philosophy, history, art, morality, ethics and musical knowledge might hold a place equal to that of the engineer, scientist or technologist. The question is: is anyone else noticing this shift? Are people seeing the potential? Are companies making strides to bring systems thinkers, philosophers and ethicists into their AI departments? Are schools of liberal arts taking note and adjusting their curricula?
Should we, for example, change how schools and universities teach AI, so that it is not treated merely as an extension of computer science but is also taught in the liberal arts?
Certainly, there are many examples we can look at. One that immediately comes to mind is the creative use of datasets. For example, something I have started exploring as an AI content strategist is loading datasets of personality and intelligence inventories, such as Myers-Briggs results and Gardner’s multiple intelligences, and then asking the AI to help me understand how an audience or a company is most likely to process information (a rough sketch of this workflow appears below). Other ideas come to mind as well, such as training an AI on a piece of music, say the finale of Shostakovich’s Symphony No. 7, and asking it to translate that structure into an equally powerful conclusion to a piece of writing. What about a sustainable business model that resembles a natural process like photosynthesis, or documenting the ecosystem of a forest and modeling the design of a community college on it? These are all things, by the way, that AI can do right now.
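To make the first of these concrete, here is a minimal sketch of the dataset-plus-prompt workflow in Python. It assumes the OpenAI Python client and a hypothetical CSV file, audience_mbti.csv, with a hypothetical mbti_type column of self-reported Myers-Briggs types; the file name, column name and prompt wording are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: summarize a (hypothetical) audience personality dataset
# and ask GPT-4o how that audience is likely to process information.
import csv
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Tally self-reported Myers-Briggs types from the hypothetical CSV.
with open("audience_mbti.csv", newline="") as f:
    type_counts = Counter(row["mbti_type"] for row in csv.DictReader(f))

summary = ", ".join(f"{t}: {n}" for t, n in type_counts.most_common())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are an audience analyst for a content strategist."},
        {"role": "user",
         "content": (
             f"My audience's Myers-Briggs distribution is: {summary}. "
             "How is this audience most likely to process information, "
             "and how should I structure content for them?"
         )},
    ],
)

print(response.choices[0].message.content)
```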
I asked GPT-4o: “If certain LLMs are organizing information more geometrically and conceptually, is there a good argument for recruiting more people with creative, conceptually minded brains into the field? Should we, therefore, make changes at schools and universities such that AI is not taught merely as an extension of computer science, but is taught in schools for the liberal arts?”
It replied:
“The research paper, The Geometry of Concepts: Sparse Autoencoder Feature Structure, suggests that large language models (LLMs) are organizing information in ways that resemble the functional and structural organization of the human brain. By analyzing the geometry of sparse autoencoders (SAEs), the authors found three levels of conceptual structure: (1) “atomic” small-scale structures, such as parallelograms and trapezoids that encode relationships (e.g., man:woman::king:queen); (2) “brain-like” intermediate-scale modularity, where clusters of related concepts, like math and code, form localized “lobes” similar to those in neural fMRI images; and (3) “galaxy-scale” large-scale structures, where the overall feature space exhibits non-random, fractal-like clustering and power-law eigenvalue distributions. These findings indicate that LLMs are more than linear systems — they are evolving toward complex, hierarchical representations of knowledge, akin to human cognition.
This research has implications for how we approach AI development and education. If LLMs are creating conceptually rich and geometrically organized knowledge spaces, there is an argument for involving more creative, interdisciplinary thinkers — those trained to recognize patterns, connections, and abstract relationships — in the field. This could mean recruiting individuals from liberal arts, philosophy, and design backgrounds, alongside traditional computer scientists. Moreover, it raises the question of whether AI should be taught differently, not just as a technical discipline but as a conceptual, creative practice. Perhaps universities should integrate AI into liberal arts programs, encouraging the development of “creative generalists” who can bridge conceptual thinking and technical expertise. This shift might not only enrich AI research but also lead to more intuitive and human-centered prompt engineering, better suited to the conceptual landscapes LLMs are now navigating.”
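The paper’s “atomic” parallelogram structures echo the familiar word-vector analogy arithmetic. As a toy illustration only, and not the paper’s sparse-autoencoder analysis, here is a small numpy sketch with invented vectors showing how man:woman::king:queen can fall out of simple geometry.

```python
# Toy illustration (invented vectors, not the paper's SAE features):
# the man:woman::king:queen "parallelogram" as vector arithmetic.
import numpy as np

vectors = {
    "man":   np.array([0.0,  1.0, 0.1]),
    "woman": np.array([0.0, -1.0, 0.1]),
    "king":  np.array([1.0,  1.0, 0.1]),
    "queen": np.array([1.0, -1.0, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the four concepts form a parallelogram, king - man + woman
# should land nearest to queen.
candidate = vectors["king"] - vectors["man"] + vectors["woman"]

for word, vec in vectors.items():
    print(f"{word:>6}: cosine similarity = {cosine(candidate, vec):.3f}")
# "queen" scores highest, which is the kind of relationship the paper's
# small-scale "atomic" structures are said to encode.
```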
Perhaps we have entered the age of the creative generalist.
If you’ve made it this far, you are a good candidate to learn about an upcoming presentation and discussion. On December 21st, I will explore this topic and why I believe there is a good opportunity for creatives, generalists and especially “hyper-generalists” to enter the field of generative AI.
For a deeper exploration of this topic, please see the following:
- When Everything Became Possible / An AI Podcast
- The Soul of AI (Ep. 12: Lee Chazen)
- The Geometry of Concepts: Sparse Autoencoder Feature Structure
- Discussion around The Geometry of Concepts Paper
- I asked GPT 4o the most complex question I could think of. Here's what happened.
- Free Copy: When Everything Became Possible: The Story of Three Universal Characters, Set Free to Help us Understand our Future with AI by Lee Chazen