"Mapping the Invisible: Evaluating AI and Human Descriptions for Blind " by Loiy Qasrawi

This thesis is accessible only to the Illinois State University community.

Graduation Term

Spring 2025

Degree Name

Master of Science (MS)

Department

College of Fine Arts: Arts Technology

Committee Chair

Greg Corness

Committee Member

Dan Cox

Committee Member

Kristin Carlson

Abstract

Blind and low‑vision (BLV) people often rely on spoken scene descriptions to build mental maps. This thesis examines two tools: Microsoft Seeing AI, which uses computer vision to generate automatic captions, and Aira, which connects users with trained remote agents. Five BLV participants listened to descriptions of four everyday scenes (a kitchen, a university lab sign, a city intersection, and a railroad crossing) from both tools and then completed semi‑structured interviews about clarity, trust, and usefulness. Thematic analysis revealed three priorities for descriptions: first, naming hazards such as trash cans, crossing gates, or doorbells; second, presenting objects in a steady left‑to‑right or near‑to‑far order; and third, using confident wording for critical elements. Aesthetic or brand details mattered only afterward. Participants usually favored Seeing AI because its captions more consistently followed orientation‑and‑mobility ordering; Aira's descriptions helped only when agents applied the same structure. Both tools faltered when they skipped hazards, jumped between locations, or hedged about essentials.

Access Type

Thesis-ISU Access Only
