Can LLMs Understand Business Research: A Comprehensive Analysis

Researcher(s)

  • Yihong Chen, Management Information Systems, University of Delaware

Faculty Mentor(s)

  • Harry Wang, Department of Accounting and Management Information Systems, University of Delaware

Abstract

Large language models (LLMs) have sparked a wave of innovation across many fields, with organizations increasingly attempting to replace traditional human work with these models. Although research has historically been considered an exclusively human domain, LLMs now pose a potential challenge to human experts, so we must ask whether AI can truly understand business research. At the same time, human experts sharpen their thinking and improve the quality of their work through extensive reading of the literature, which raises a second question: do LLMs improve in the same way? To investigate these questions, we designed two tasks: paraphrasing business research abstracts and generating abstracts from keywords, journal titles, and paper titles. We compared the performance of state-of-the-art (SOTA) models (the GPT series) with recent open-source models (the Meta-Llama-3.1 series) on both tasks. We also fine-tuned a Meta-Llama-3 model to narrow the performance gap between smaller open-source models and SOTA models. We evaluated output quality along three dimensions: theory, method, and results. Our findings indicate that although all models performed well on both tasks, the GPT series outperformed the Meta-Llama series. The models demonstrated a reasonable understanding of business research but struggled with the more creative task of ideation. The fine-tuned model improved at paraphrasing, approaching SOTA levels, yet surprisingly performed worse than the base model at ideation. The models' behavior on the ideation task therefore deviates markedly from that of human researchers: unlike humans, they do not become better at generating research ideas simply by reading more business research, which suggests that LLMs still fall into the classic machine learning trap of overfitting.
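
For illustration, the sketch below shows one way the abstract-generation (ideation) task could be posed to both model families, assuming the OpenAI Python SDK and Hugging Face transformers. The prompt wording, example paper metadata, decoding settings, and model identifiers are illustrative assumptions, not the study's exact configuration.

```python
# Illustrative sketch only: prompt text, example metadata, and model names are
# assumptions and do not reproduce the study's exact setup.
from openai import OpenAI            # pip install openai
from transformers import pipeline    # pip install transformers accelerate


def build_prompt(title: str, journal: str, keywords: list[str]) -> str:
    """Compose the abstract-generation prompt from paper metadata."""
    return (
        "Write a plausible abstract for a business research paper.\n"
        f"Title: {title}\n"
        f"Journal: {journal}\n"
        f"Keywords: {', '.join(keywords)}\n"
        "The abstract should describe the theory, the method, and the results."
    )


# Hypothetical example inputs for demonstration purposes.
prompt = build_prompt(
    title="Pricing Strategies in Two-Sided Digital Platforms",
    journal="Information Systems Research",
    keywords=["platform economics", "pricing", "network effects"],
)

# Closed-source baseline via the OpenAI API (requires OPENAI_API_KEY).
client = OpenAI()
gpt_abstract = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whichever GPT-series model is compared
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Open-source baseline via Hugging Face transformers (gated model; requires access).
llama = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device_map="auto",
)
llama_abstract = llama(
    [{"role": "user", "content": prompt}],
    max_new_tokens=400,
)[0]["generated_text"][-1]["content"]

print(gpt_abstract)
print(llama_abstract)
```

The generated abstracts from each model would then be rated along the three evaluation dimensions (theory, method, and results) and compared against those produced by the fine-tuned model.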