LLMs meet Bloom’s Taxonomy: A Cognitive View on Large Language Model Evaluations
Type
conference paper
Date Issued
2025-01
Author(s)
Abstract
Current evaluation approaches for Large Language Models (LLMs) lack a structured framework that reflects the underlying cognitive abilities required for solving the tasks. This hinders a thorough understanding of the current level of LLM capabilities. For instance, it is widely accepted that LLMs perform well in terms of grammar, but it is unclear in which specific cognitive areas they excel or struggle. This paper introduces a novel perspective on the evaluation of LLMs that leverages a hierarchical classification of tasks. Specifically, we explore the most widely used benchmarks for LLMs to systematically identify how well these existing evaluation methods cover the levels of Bloom's Taxonomy, a hierarchical framework for categorizing cognitive skills. This comprehensive analysis allows us to identify strengths and weaknesses of current LLM assessment strategies in terms of cognitive abilities, suggest directions for future benchmark development, and highlight potential avenues for LLM research. Our findings reveal that LLMs generally perform better on the lower end of Bloom's Taxonomy. Additionally, we find significant gaps in the coverage of cognitive skills in the most commonly used benchmarks.
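As a rough, hypothetical sketch of the kind of coverage analysis the abstract describes (the benchmark names and level assignments below are invented for illustration and are not the paper's data or method), one could tally how many benchmarks exercise each of the six levels of Bloom's (revised) Taxonomy:

from collections import Counter

# Bloom's (revised) Taxonomy: six cognitive levels, lowest to highest.
BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

# Hypothetical benchmark-to-level assignments, purely for illustration;
# the paper derives such assignments from its own systematic review.
benchmark_levels = {
    "benchmark_a": ["Remember", "Understand"],
    "benchmark_b": ["Understand", "Apply"],
    "benchmark_c": ["Remember"],
}

def coverage(mapping: dict[str, list[str]]) -> dict[str, int]:
    """Count how many benchmarks touch each Bloom level."""
    counts = Counter(level for levels in mapping.values() for level in levels)
    return {level: counts.get(level, 0) for level in BLOOM_LEVELS}

print(coverage(benchmark_levels))
# e.g. {'Remember': 2, 'Understand': 2, 'Apply': 1, 'Analyze': 0, 'Evaluate': 0, 'Create': 0}

A tally of this shape would make the gap the abstract reports visible directly: zero or near-zero counts at the higher levels (Analyze, Evaluate, Create) indicate cognitive skills the common benchmarks leave uncovered.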
Language
English
Keywords
nlp
llm
evaluations
Publisher
Association for Computational Linguistics
Event Title
COLING 2025
Event Location
Abu Dhabi
Event Date
19-24 January 2025
Official URL
Subject(s)
Contact Email Address
thomas.huber@unisg.ch
File(s)
Open Access
Name
2025.coling-main.350.pdf
Size
716.57 KB
Format
Adobe PDF
Checksum (MD5)
81928e49ada9d88895561dd499106d84