
Browsing by Author "Shahrokhian, B."

Now showing 1 - 2 of 2
    Conference Object
    Automating Thematic Analysis With Multi-Agent LLM Systems
    (CEUR-WS, 2025) Sankaranarayanan, S.; Borchers, C.; Simon, S.; Tajik, E.; Atas, A.H.; Celik, B.; Shahrokhian, B.
    Thematic analysis (TA) is a method used to identify, examine, and present themes within data. TA is often a manual, multistep, and time-intensive process requiring collaboration among multiple researchers. TA’s iterative subtasks, including coding data, identifying themes, and resolving inter-coder disagreements, are especially laborious for large data sets. Given recent advances in natural language processing, Large Language Models (LLMs) offer the potential for automation at scale. Recent literature has explored the automation of isolated steps of the TA process, tightly coupled with researcher involvement at each step. Research using such hybrid approaches has reported issues in LLM generations, such as hallucination, inconsistent output, and technical limitations (e.g., token limits). This paper proposes a multi-agent system that differs from previous systems: an orchestrator LLM agent spins off multiple LLM sub-agents for each step of the TA process, mirroring all the steps previously done manually. In addition to more accurate analysis results, this agent-based iterative coding process is also expected to increase the transparency of the analysis, as analytical stages are documented step by step. We study the extent to which such a system can perform a full TA without human supervision. Preliminary results indicate human-quality codes and themes based on alignment with human-derived codes. Nevertheless, we still observe differences in coding complexity and thematic depth. Despite these differences, the system provides critical insights on the path toward TA automation that maintains consistency, efficiency, and transparency in future qualitative data analysis, enabled by our open-source datasets, coding results, and analysis. © 2025 for this paper by its authors.
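
The abstract above describes an orchestrator LLM agent that spins off sub-agents for each TA step (coding, theme identification, disagreement resolution). The minimal Python sketch below illustrates that pipeline shape only; the call_llm helper, agent roles, and prompts are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an orchestrator/sub-agent pipeline for thematic analysis (TA).
# NOTE: call_llm is a hypothetical stand-in for an LLM backend; agent roles and
# prompts are illustrative, not the system described in the paper.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to a real provider in practice."""
    return f"[LLM output for: {prompt[:60]}...]"


@dataclass
class Agent:
    role: str           # e.g., "coder", "adjudicator", "theme builder"
    instructions: str   # role-specific system prompt

    def run(self, task: str) -> str:
        return call_llm(f"{self.instructions}\n\nTask:\n{task}")


class Orchestrator:
    """Spins off one sub-agent per TA step and chains their outputs."""

    def analyze(self, excerpts: list[str]) -> dict:
        coder_a = Agent("coder A", "Assign initial codes to each excerpt.")
        coder_b = Agent("coder B", "Independently assign codes to each excerpt.")
        adjudicator = Agent("adjudicator", "Resolve disagreements between two codebooks.")
        theme_builder = Agent("theme builder", "Group the agreed codes into candidate themes.")

        codes_a = [coder_a.run(e) for e in excerpts]          # step 1: independent coding
        codes_b = [coder_b.run(e) for e in excerpts]
        reconciled = adjudicator.run(                          # step 2: resolve disagreements
            f"Codebook A: {codes_a}\nCodebook B: {codes_b}")
        themes = theme_builder.run(f"Agreed codes: {reconciled}")  # step 3: build themes
        return {"codes": reconciled, "themes": themes}


if __name__ == "__main__":
    result = Orchestrator().analyze(["I felt supported by my peers.", "Deadlines were stressful."])
    print(result["themes"])
```
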
    Conference Object
    Comparing a Human’s and a Multi-Agent System’s Thematic Analysis: Assessing Qualitative Coding Consistency
    (Springer Science and Business Media Deutschland GmbH, 2025) Simon, S.; Sankaranarayanan, S.; Tajik, E.; Borchers, C.; Shahrokhian, B.; Balzan, F.; Celik, B.
    Large Language Models (LLMs) have demonstrated fluency in text generation and reasoning tasks. Consequently, the field has probed the ability of LLMs to automate qualitative analysis, including inductive thematic analysis (iTA), previously achieved through human reasoning only. Studies using LLMs for iTA have yielded mixed results so far. LLMs have successfully been used for isolated steps of iTA in hybrid setups. With recent advances in multi-agent systems (MAS) enabling complex reasoning and task execution through multiple, collaborating LLM agents, the first results point towards the possibility of automating sequences of the iTA process. However, previous work especially lacks methodological standards for assessing the reliability and validity of LLM-derived iTA. Thus, in this paper, we propose a method for assessing the quality of iTA systems based on consistency with human coding on a benchmark dataset. We present criteria for benchmark datasets and an expert blind review with this method on two iTA outputs: one iTA conducted by domain experts, and another fully automated with a MAS built on the Claude 3.5 Sonnet LLM. Results indicate a high level of consistency and contribute evidence that complex qualitative analysis methods common in AIED research can be carried out by MAS. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
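
The second abstract assesses consistency between a human's and the MAS's coding. As a rough illustration of one standard consistency measure (not the paper's expert blind-review method), the sketch below computes Cohen's kappa over invented example labels.

```python
# Illustrative human-vs-MAS coding consistency check using Cohen's kappa.
# NOTE: the paper's assessment relies on expert blind review against benchmark
# criteria; kappa is shown only as one common agreement metric, and the labels
# below are invented for demonstration.

from collections import Counter


def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two coders on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0


human = ["support", "stress", "support", "workload", "stress"]
mas   = ["support", "stress", "support", "stress",   "stress"]
print(f"kappa = {cohens_kappa(human, mas):.2f}")  # prints kappa = 0.67
```
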