Leveraging Large Language Models (LLMs) to Conduct Mental Health Assessments in a Conversational Manner

Ajit, Naik Atharv (2025) Leveraging Large Language Models (LLMs) to Conduct Mental Health Assessments in a Conversational Manner. Masters thesis, Indian Institute of Science Education and Research Kolkata.

Text (MS Dissertation of Naik Atharv Ajit (20MS195))
20MS195_Thesis_file.pdf - Submitted Version
Restricted to Repository staff only

Official URL: https://www.iiserkol.ac.in

Abstract

Mental health conditions such as depression, anxiety, and addiction are rising rapidly, yet individuals' access to the fast, consistent diagnosis and support they need from credible mental health professionals remains bleak: access to professionals is limited, and stigma against seeking mental health help remains prevalent. Conventional evaluation instruments, such as the PHQ-9 and GAD-7, are typically administered manually and are thus time-consuming, making them difficult to use in practical applications. Although rule-based chatbots have offered partial remedies, they lack the contextual understanding required for psychiatric evaluations. Large Language Models (LLMs) have the potential to close this gap by providing natural, empathetic conversations that emulate human-like evaluations. Yet using LLMs in structured psychiatric assessments raises issues of reliability, control, and interpretability. In this thesis, off-the-shelf LLMs are used to build a model-agnostic framework that can administer standard tests such as the PHQ-9 and GAD-7, or any custom questionnaire. The architecture integrates LLMs into distinct logic chains: an Eval Chain for parsing user replies, a Decision Chain for managing conversational context flow, and a Score Chain for scoring asynchronously with the questions. Internally, the flow is driven by state-tracked variables (chat state, assessment phase, question node, and repeat attempts) to handle ambiguity, looping, and phase transitions. Multiple tests can be performed in a single session, and the results are shown in a dashboard that doctors can access. Two versions of the system were developed: one for general mental health evaluations and another tailored for students, each with custom assessments and suicide-risk alerting.
By combining the text-understanding power of LLMs with the precision of a deterministic conversational framework, this work demonstrates a scalable approach to automating mental health assessments, bridging the gap between patients who need help and the doctors who can provide it.
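The abstract does not include implementation details, but the chain-and-state design it describes can be sketched roughly as follows. This is a purely illustrative sketch, not the thesis code: the names (`eval_chain`, `decision_chain`, `score_chain`, `SessionState`) simply mirror the components the abstract mentions, the LLM-based reply parsing is replaced with a trivial keyword-lookup stub, and only two sample PHQ-9 items are shown.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Phase(Enum):
    ASKING = auto()
    COMPLETE = auto()

@dataclass
class SessionState:
    """State-tracked variables driving the conversation flow."""
    phase: Phase = Phase.ASKING
    question_idx: int = 0       # current question node
    repeat_attempts: int = 0    # re-asks after ambiguous replies
    scores: list = field(default_factory=list)

# Two sample PHQ-9 items (the real questionnaire has nine).
PHQ9_SAMPLE = [
    "Little interest or pleasure in doing things?",
    "Feeling down, depressed, or hopeless?",
]

def eval_chain(reply: str):
    """Parse a free-text reply into a 0-3 PHQ-9 option.
    Stub for an LLM call; returns None when the reply is ambiguous."""
    mapping = {"not at all": 0, "several days": 1,
               "more than half the days": 2, "nearly every day": 3}
    return mapping.get(reply.strip().lower())

def decision_chain(state: SessionState, parsed):
    """Decide the next conversational move from the parsed reply."""
    if parsed is None:
        state.repeat_attempts += 1
        return "repeat"          # re-ask the same question node
    state.repeat_attempts = 0
    return "advance"

def score_chain(state: SessionState, parsed: int):
    """Record the score for the answered item (could run asynchronously)."""
    state.scores.append(parsed)

def step(state: SessionState, reply: str) -> str:
    """Run one turn: parse, decide, score, and update the phase."""
    parsed = eval_chain(reply)
    action = decision_chain(state, parsed)
    if action == "advance":
        score_chain(state, parsed)
        state.question_idx += 1
        if state.question_idx >= len(PHQ9_SAMPLE):
            state.phase = Phase.COMPLETE
    return action
```

Separating parsing, flow control, and scoring into distinct functions is what makes the loop deterministic even when the parsing step is backed by a non-deterministic LLM: the model only maps a reply onto a fixed option set, while the state machine owns looping and phase transitions.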

Item Type: Thesis (Masters)
Additional Information: Supervisor: Dr. Dwaipayan Roy
Uncontrolled Keywords: Leveraging Large Language Models, Mental Health Assessments, Conversational Manner, Model-agnostic framework
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Department of Computational and Data Sciences
Depositing User: IISER Kolkata Librarian
Date Deposited: 28 Apr 2026 10:41
Last Modified: 28 Apr 2026 10:41
URI: http://eprints.iiserkol.ac.in/id/eprint/2143
