Document Type
Presentation
Conference Name
Core Forum
Publication Date
11-13-2025
Keywords
Artificial intelligence, Large language models, Summaries, Metadata, Evaluation
Abstract
Since the introduction of ChatGPT in 2022, the potential impact of artificial intelligence on library workflows has been a topic of immense interest—and sometimes anxiety—in the library profession. In the area of cataloging and metadata, one frequently cited potential use for AI, particularly large language models (LLMs), is constructing summaries and abstracts of information resources. However, very little systematic assessment has been done of the characteristics, quality, and utility of AI-generated abstracts for library materials, leaving information professionals with little evidence on which to base decisions about whether, and how, to implement AI-assisted workflows for this aspect of metadata work.
Using electronic theses and dissertations (ETDs) as a test case, this study compared AI-generated summaries with human-generated summaries, assessing them for relevance, completeness, clarity, and potential bias. The presenters will share their findings, identifying strengths and limitations of AI-generated summaries for ETDs from a variety of academic disciplines, and offer recommendations and a rubric that information professionals can use to assess the utility of LLM tools for summary and abstract creation.
Funding Source
University Research Grant
Creative Commons License

This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 License.
Recommended Citation
Baldoni, Emily and Yon, Angela, "Accuracy Matters: Evaluating the Value of AI Tools for Summaries in Metadata" (2025). Faculty and Staff Publications – Milner Library. 293.
https://ir.library.illinoisstate.edu/fpml/293
Comments
The presentation was given at Core Forum 2025. Core Forum is the annual conference for the American Library Association (ALA) division, Core: Leadership, Infrastructures, Futures.