Title: Comparing a Human’s and a Multi-Agent System’s Thematic Analysis: Assessing Qualitative Coding Consistency
Authors: Simon, S.; Sankaranarayanan, S.; Tajik, E.; Borchers, C.; Shahrokhian, B.; Balzan, F.; Celik, B.
Date Issued: 2025 (deposited/available: 2025-09-03)
Type: Conference Object
Series: Lecture Notes in Artificial Intelligence (LNAI), vol. 15879
ISBN: 978-3-031-98419-8
ISSN: 0302-9743
DOI: 10.1007/978-3-031-98420-4_5 (https://doi.org/10.1007/978-3-031-98420-4_5)
Scopus ID: 2-s2.0-105011948858
Handle: https://hdl.handle.net/20.500.14720/28362
Sponsors: Google, Gates Foundation, Hewlett Packard Enterprise, Eedi, VitalSource, Duolingo English Test, Springer
Language: English (en)
Access: Closed access (info:eu-repo/semantics/closedAccess)
Keywords: Agentic LLMs; Claude 3.5 Sonnet; Inductive Analysis; Large Language Models; Multi-Agent Systems; Qualitative Coding; Thematic Analysis
Page Range: N/A
Quartile: Q3
Record No.: 6073
Rights: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025

Abstract: Large Language Models (LLMs) have demonstrated fluency in text generation and reasoning tasks. Consequently, the field has probed the ability of LLMs to automate qualitative analysis, including inductive thematic analysis (iTA), previously achievable through human reasoning alone. Studies using LLMs for iTA have so far yielded mixed results: LLMs have been used successfully for isolated steps of iTA in hybrid setups. With recent advances in multi-agent systems (MAS) enabling complex reasoning and task execution through multiple collaborating LLM agents, first results point toward the possibility of automating entire sequences of the iTA process. However, previous work lacks methodological standards for assessing the reliability and validity of LLM-derived iTA. In this paper, we therefore propose a method for assessing the quality of iTA systems based on their consistency with human coding on a benchmark dataset. We present criteria for benchmark datasets and report an expert blind review applying this method to two iTA outputs: one iTA conducted by domain experts, and another fully automated by a MAS built on the Claude 3.5 Sonnet LLM. Results indicate a high level of consistency and contribute evidence that complex qualitative analysis methods common in AIED research can be carried out by MAS.
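
Note: The abstract describes assessing a MAS's coding against human coding for consistency, but this record does not state the metric used. Below is a minimal, hypothetical sketch, assuming one code label per excerpt and using Cohen's kappa (via scikit-learn) as one common chance-corrected agreement measure; the labels and data are invented for illustration.

```python
# Hypothetical sketch: the paper's actual consistency metric and data format
# are not given in this record. This assumes each excerpt receives exactly
# one code from each coder and measures agreement with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Invented example: code assignments for the same five excerpts.
human_codes = ["motivation", "feedback", "motivation", "workload", "feedback"]
mas_codes   = ["motivation", "feedback", "workload",   "workload", "feedback"]

# Cohen's kappa corrects raw percent agreement for chance agreement.
kappa = cohen_kappa_score(human_codes, mas_codes)
print(f"Cohen's kappa (human vs. MAS): {kappa:.2f}")
```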