Abstract
Public Narratives (PNs) are central to leadership development and civic mobilization, yet their systematic analysis remains challenging due to the subjective nature of narrative elements and the high cost of expert annotation. In this work, we propose a novel computational framework that leverages large language models (LLMs) to automate the qualitative annotation of public narratives. Building on a codebook rigorously co-developed with subject-matter experts, we evaluate LLM performance against expert annotations. Our experiments, which compare multiple prompting techniques, show that while LLMs achieve near-human performance on explicit, personal narrative elements, they struggle with more nuanced, collective, and forward-looking components. This study demonstrates the potential of LLM-assisted annotation for scalable narrative analysis and highlights key limitations and directions for future research in computational civic storytelling.