Title: An Analysis of Large Language Models and Langchain in Mathematics Education
Authors: Soygazi, F.; Oğuz, Damla
Date Issued: 2023
Date Available: 2024-06-19
ISBN: 979-840070898-5
DOI: https://doi.org/10.1145/3633598.3633614
Handle: https://hdl.handle.net/11147/14548
Type: Conference Object
Language: en
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: ChatGPT; LangChain; Large Language Models (LLMs); Mathematics Education
Scopus ID: 2-s2.0-85184128982

Abstract: The development of large language models (LLMs) has prompted the consideration of new approaches, particularly in education. Word problems, especially in subjects like mathematics, must be solved through a sequence of reasoning steps, which raises the question of whether LLMs can be successful in this area as well. In our study, we conducted analyses by asking mathematics questions, especially word problems, to ChatGPT, which is based on recent language models such as the Generative Pretrained Transformer (GPT). Additionally, we posed the same questions to LLMMathChain, a mathematics-specific chain built on the LangChain framework, and compared the correct and incorrect answers. The answers obtained with ChatGPT (GPT-3.5) were more successful, particularly in the field of mathematics. However, both approaches fell below expectations, particularly on word problems, and suggestions for improvement are provided. © 2023 ACM.