
Princeton Journal of Interdisciplinary Research, Volume 1, Issue 3 — Bridging Horizons (March 2026), ISSN 3069-8200
A Comparative Study on Cognitive Bias in Large Language Models
Authors: Siddharth Sreekanth, Ali Mahmoodi
Affiliation: Nikola Tesla STEM High School
Abstract:
Large Language Models (LLMs) have transformed natural language processing and artificial intelligence by enabling machines to understand and generate human-like text. These models, however, are not immune to cognitive biases, which can be inadvertently acquired during training. This paper explores the sources and implications of cognitive bias in LLMs, discusses the ethical concerns raised by biased models in applications such as chatbots and automated decision-making, and examines techniques and best practices for mitigating such bias. Through data analysis and prompt testing, it identifies the cognitive biases present across a range of LLMs and provides a comparative overview of their impact. It also reviews the effectiveness of different mitigation strategies and suggests future directions for developing less biased language models. By addressing bias, this study aims to enhance the fairness, accuracy, and equity of LLMs across applications.
Keywords: Machine Learning, Large Language Models, Cognitive Biases