A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT
Grazer Philosophische Studien
In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state-of-the-art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question of whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock’s concept of intelligence, Taylor’s conception of intrinsic rightness, and Wittgenstein’s rule-following considerations. On the empirical side, it is argued that current transformer-based neural NLP models, such as BERT and GPT-3, come close to fulfilling these criteria.