How applying cognitive diversity to LLMs could transform the user experience
Date:
Tue, 27 Jan 2026 09:50:20 +0000
Description:
Exploring how to get the answers and information we want from LLMs, in the
way that we need.
FULL STORY ======================================================================
As AI continues to evolve, so too does the experience of the people it serves.
Research by McKinsey shows that in 2025, 62% of organizations are at least experimenting with AI agents, whilst almost 9 in 10 now say they are
regularly using them.
Despite talk of an AI bubble, the market is currently booming: global
adoption is predicted to be valued at $15 billion before the end of the decade, and ChatGPT alone reportedly reaches over 500 million users worldwide every month.
Used properly, AI tools and LLMs can be invaluable. In fact, the same
McKinsey survey reports that 39% of respondents attribute some level of operational income to AI, with other benefits including improvements across innovation (64%), customer satisfaction (45%), and profitability (36%).
While these figures are encouraging, concerns about the technology remain consistent, especially around the quality and reliability of data and the inaccurate answers AI tools can generate. Inaccuracy is the risk most organizations are working to mitigate, according to McKinsey.
So, is there a way to improve the output of LLMs and get the answers and information we want, in the way that we need? Currently, the answer is simply to tell users to get better at writing prompts, but if we look at how
humans interact with one another, there could be another solution.
Introducing cognitive diversity and why it matters for LLMs
In humans, cognitive diversity refers to differences in how individuals
think, solve problems, generate ideas and make decisions.
The KAI inventory suggests this diversity comes in the form of a natural, innate preference for the amount of structure we use as we generate
solutions, organize our environment as we implement them, and respond to
rules and group norms.
Adaption-Innovation Theory, on which the KAI is based, describes a spectrum that ranges from highly adaptive to highly innovative, with infinite variations in between.
Generally speaking, more adaptive individuals prefer more structure and
prefer to leverage clear and consistent rules, while more innovative people prefer less structure and are more likely to ignore or change the rules to stay engaged.
One's preference for more adaption or more innovation is not related to one's intelligence or motivation, and because of this there is no ideal position to have on the KAI spectrum.
Decades of research by Dr. M. J. Kirton into Adaption-Innovation Theory suggests that, when individuals understand their cognitive styles, solutions can be reached in more effective, actionable, and efficient ways both alone and in teams.
But how can we apply this theory to technology, and can we train LLMs to work in a similar way? Research suggests the answer is yes.
What the research suggests
A recent paper by researchers at Carnegie Mellon University and Penn State University - Putting the Ghost in the Machine: Emulating Cognitive Style in Large Language Models - explored a fundamental question: can LLMs emulate cognitive styles if we teach them how?
The researchers taught an LLM about Adaption-Innovation Theory, giving it an understanding of cognitive diversity and how more adaptive and more innovative people behave. It was then tasked with solving three design problems using two different prompts, each specially designed with a different cognitive style in mind.
One prompt was adaptively framed - mirroring the thinking style of someone
who is meticulous, attentive to details and thrives when working with clear expectations; the other prompt was innovatively framed - mirroring the thinking style of someone who is energized when the expectations are more ambiguous and there is greater flexibility.
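To make the setup concrete, the two framings can be sketched as simple prompt templates. This is a minimal, hypothetical illustration of the approach the article describes; the wording and helper names below are assumptions, not the researchers' actual prompts.

```python
# Hypothetical prompt framings inspired by Adaption-Innovation Theory.
# The exact wording used in the Carnegie Mellon/Penn State study differs;
# these strings are illustrative only.

ADAPTIVE_FRAMING = (
    "You prefer structure and clear expectations. Work within the "
    "existing framework, pay close attention to detail, and propose "
    "a solution that refines what already exists."
)

INNOVATIVE_FRAMING = (
    "You are energized by ambiguity and flexibility. Feel free to "
    "question the existing framework and propose a solution that "
    "departs from established approaches."
)

def frame_prompt(problem: str, style: str) -> str:
    """Prepend a cognitive-style framing to a design problem.

    style: "adaptive" or "innovative" (assumed labels).
    """
    framing = ADAPTIVE_FRAMING if style == "adaptive" else INNOVATIVE_FRAMING
    return f"{framing}\n\nDesign problem: {problem}"

# The same problem, framed two ways, would then be sent to the LLM.
problem = "Reduce food waste in a university cafeteria."
adaptive_prompt = frame_prompt(problem, "adaptive")
innovative_prompt = frame_prompt(problem, "innovative")
```

In the study's design, the only variable that changes between the two runs is this framing; the underlying problem stays identical, which is what lets the researchers attribute differences in feasibility and paradigm-relatedness to cognitive style alone.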
Answers were evaluated on feasibility (how workable and realistic the solutions were) and paradigm-relatedness (whether the ideas stayed within existing frameworks or shifted away from them).
The results revealed that the adaptive prompt resulted in more feasible, structured, traditional solutions. In contrast, the innovative prompt
produced less feasible but more paradigm-challenging solutions.
Simply put, the LLM wasn't just generating solutions or answers; it was generating the right kinds of solutions based on its knowledge of cognitive diversity and the cognitive style of the individual asking the question.
As a result, it provided a more innovative or more adaptive solution depending on how
it was prompted and what the asker needed. But what does this all mean for
the future of LLMs?
In short, we're wasting the power of LLMs if we don't take cognitive
diversity into account. If we want to get better, more relevant and more
productive solutions from AI, and get them more efficiently, the next generation of the technology must have an understanding of cognitive
diversity embedded into it.
In real life, we rarely preface a question by explaining in detail how we think or approach problems, yet we know when an answer matches our way of thinking and whether it is the type of answer we are seeking. If LLMs can offer the same range of possible answers that the cognitive style spectrum represents, we could eliminate the endless cycle of re-prompting until
we stumble on the answer we need.
Research shows that, by integrating an understanding of human cognitive
styles into the technology itself, we're giving ourselves, and our AI tools, a head start. From there, productivity, efficiency and user satisfaction all have the potential to skyrocket.
======================================================================
Link to news story:
https://www.techradar.com/pro/how-applying-cognitive-diversity-to-llms-could-transform-the-user-experience
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)