Can Large Language Models Understand Non-Literal Numbers?

2024 Competition

School: School of Computer and Information Sciences
Category: Research (Primary)

Project Overview

One Liner: While large language models (LLMs) match or even surpass humans on many tasks, they still lack a basic human-level understanding of non-literal numbers.

Abstract

As Large Language Models (LLMs) become increasingly capable across a variety of language understanding and reasoning tasks, hype has emerged around claims that LLMs can, to some extent, understand language and exhibit preliminary cognitive capabilities. But do they really? In real-world social contexts, we do not always fulfill our communicative intentions through the literal meaning of what we say; the meanings we actually convey often require interlocutors to understand the non-literal aspects of language. One of the most important aspects in this regard is pragmatics, the understanding of non-literal language. In this study, we examine the non-literal language understanding capabilities of LLMs and, by comparing them against human data, aim to get a better sense of LLMs' language understanding capabilities and the gaps between LLM and human language abilities. Specifically, we focus on non-literal number understanding, looking for hyperbole and pragmatic halo effects (interpreting sharp numbers precisely and round numbers imprecisely), since numbers are among the easiest and most intuitive phenomena to analyze, and specifically chosen numbers help avoid potential data contamination from the training stage.


Team Members

Haoran Zhao
Lead

Advisors

Jake Williams
Shadi Rezapour