Creativity is a unique and complex aspect of human nature that can manifest in countless ways, from artistic expression to scientific innovation and practical problem-solving. Whether it’s painting a masterpiece, inventing a groundbreaking technology, or finding a clever solution to a common problem, creativity touches every part of our lives.
Because creativity is so diverse, measuring it is challenging. Researchers have developed a variety of methods, including tests that evaluate a person’s ability to generate many ideas, connect unrelated concepts, or solve problems in unique ways.
Creativity assessments can be subjective, like self-reports. Self-reports offer a quick way to gauge how people view their own creativity, though they can be biased by factors like self-esteem. Creative self-belief, which reflects a person’s confidence in their creative abilities, is linked to greater creative engagement and overall well-being. Personality also influences creative self-belief, with traits like openness to new experiences being strongly linked to creativity. While questionnaires such as the Creative Self-Belief scale, which asks just one question, can be quick and easy to use, they might not capture the full complexity of creativity.
Assessments can also be more objective, scored against specific criteria. One example is the Alternate Uses Task (AUT), in which participants think of different ways to use common objects (e.g., a brick). While useful, the AUT is labor-intensive to score manually, and scores can vary because they depend on human judgment. Even with safeguards such as using multiple raters, manual scoring limits the test’s use in large studies, creating a roadblock to studying creativity. Newer automated methods, such as AI models, have been developed to score the AUT and make the process faster and more consistent.
How do these methods compare to one another? We conducted a study to find out, focusing on two key questions. First, we looked at how well a simple self-assessment question, “How creative do you consider yourself?” aligned with more complex tests like the AUT. If these results match closely, the self-assessment could offer a quick alternative to longer tests.
Second, we evaluated whether AI-based scoring methods could reliably match traditional human ratings. If AI proves reliable, it could eventually replace manual scoring, streamlining creativity research.
How we did the study
Our study included 485 participants. They rated their own creativity with the Creative Self-Belief (CSB) questionnaire by answering the question, “In general, how creative do you consider yourself?” using a slider that ranged from “Not at all creative” (0) to “Very creative” (100).
They also completed the AUT, in which they were shown one of four randomly selected images of common items (a newspaper, brick, envelope, or wire clothes hanger). They had 2 minutes to list as many different uses for the item as possible. Three reviewers manually scored a subset of the AUT responses based on these criteria: how many ideas they came up with (fluency), how many different categories their ideas fell into (flexibility), how detailed their ideas were (elaboration), and how unique or unusual their ideas were (originality).
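To make these criteria concrete, here is a minimal Python sketch of how fluency and flexibility can be tallied for one participant’s list of ideas. The category labels below are invented for illustration; they are not the rubric our reviewers actually used.

```python
# Minimal sketch of fluency and flexibility scoring for one AUT response.
# The categories mapping is a hypothetical rubric for illustration only;
# it is not the coding scheme the study's reviewers used.

responses = ["doorstop", "paperweight", "build a wall", "crush garlic"]

# Hypothetical mapping from each idea to a broad use category.
categories = {
    "doorstop": "holding things in place",
    "paperweight": "holding things in place",
    "build a wall": "construction",
    "crush garlic": "food preparation",
}

fluency = len(responses)                                     # number of ideas
flexibility = len({categories[idea] for idea in responses})  # distinct categories

print(f"fluency = {fluency}, flexibility = {flexibility}")
# fluency = 4, flexibility = 3
```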
The AUT responses were also scored automatically on two criteria, elaboration and originality, using an AI tool called Open Creativity Scoring with Artificial Intelligence (OCSAI).
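As a rough illustration of the general idea behind automated originality scoring, an idea can be rated by how semantically distant it is from the prompt object, computed from text embeddings. The sketch below uses a generic open-source embedding model and is not OCSAI’s actual model or scoring pipeline.

```python
# Rough illustration of semantic-distance originality scoring: ideas whose
# embeddings sit farther from the prompt object score as more original.
# This is a generic sketch, not OCSAI's actual model or pipeline.

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

prompt = "brick"
ideas = ["build a wall", "doorstop", "grind into pigment for paint"]

vectors = model.encode([prompt] + ideas)
prompt_vec, idea_vecs = vectors[0], vectors[1:]

for idea, vec in zip(ideas, idea_vecs):
    cosine = np.dot(prompt_vec, vec) / (np.linalg.norm(prompt_vec) * np.linalg.norm(vec))
    print(f"{idea!r}: semantic distance = {1 - cosine:.2f}")
```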
Since personality traits can influence creative self-belief, we also measured personality traits using the Big Five Inventory – 10 (BFI-10), which assesses the five major personality traits: extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience.
Simple is not always better
We first explored how well the CSB self-assessment aligned with results from the AUT. CSB showed a weak correlation with manually scored fluency (number of ideas) and originality but was not significantly related to flexibility or elaboration. While people who rated themselves as more creative generated more ideas and more unique responses, the connection wasn’t strong. Automated scores for elaboration and originality also didn’t align significantly with CSB.
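For readers curious what “aligned” means in practice: relationships like these are summarized with correlation coefficients. Below is a minimal Python sketch showing how such a correlation is computed; the numbers are invented for illustration and are not our data.

```python
# Minimal sketch of how a correlation between CSB and an AUT score is
# computed. The numbers below are invented for illustration; they are
# not the study's data.

from scipy.stats import pearsonr

csb = [72, 40, 88, 55, 63, 30, 91, 47]  # self-rated creativity, 0-100
fluency = [6, 4, 7, 5, 5, 3, 9, 6]      # ideas generated on the AUT

r, p = pearsonr(csb, fluency)
print(f"r = {r:.2f}, p = {p:.3f}")  # r: strength of the relationship;
                                    # p: whether it is statistically significant
```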
Next, we examined how CSB related to personality traits and how consistent it was over time. CSB was strongly linked to openness to experience and positively correlated with extraversion, agreeableness, and conscientiousness, while negatively related to neuroticism (the inverse of the emotional stability trait measured by the BFI-10). CSB scores also remained stable, showing little change between test sessions, which is important for long-term studies.
Finally, we compared manual and automated scoring. Manual and automated elaboration scores were strongly correlated, but originality scores showed only a weak link between human and AI ratings, possibly because AI evaluates each idea individually, while humans assess the overall response.
What do the results mean for creativity research?
The findings from this study highlight the complexity of measuring creativity. While a simple self-assessment like asking “How creative do you consider yourself?” can offer some insight into a person’s creative self-belief, it doesn’t fully capture how people perform on more complex, objective creativity tasks like the AUT. Creative self-belief is somewhat related to how many ideas people can generate and how original those ideas are, but this connection is not particularly strong. This suggests that while self-perception is important, it may not always reflect a person’s creative output.
Moreover, personality traits, particularly openness to new experiences, play a significant role in shaping how creative people believe themselves to be. This self-belief remains stable over time, making it a reliable factor for long-term studies. However, relying solely on self-reports may not provide a full picture of someone’s creative potential, as these measures can miss key elements like flexibility and elaboration in creative thinking.
Finally, the study’s comparison of manual and AI-based scoring methods for the AUT shows promise for future research. While automated methods align well with human scoring in some areas, like elaboration, they are less accurate in others, such as originality. As AI technology continues to improve, it may become a useful tool in large-scale creativity research, offering quicker and more consistent scoring methods. However, for now, a combination of self-reports and creativity tasks remains the best approach to better understanding creativity.
Creativity research is important for fostering innovation and solving problems, so improving how we measure and understand creativity is crucial. This study is a step in that direction.
Read more about this study and the results in the paper “Creative self-belief responses versus manual and automated alternate use task scoring: A cross-sectional study” published in Journal of Creativity.