Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/183769
Title: Who to blame: algorithmic awareness and users’ perception of AI gender bias
Authors: Ge, Jiayi
Keywords: Social Sciences
Issue Date: 2025
Source: Ge, J. (2025). Who to blame: algorithmic awareness and users’ perception of AI gender bias. 75th Annual International Communication Association Conference (ICA 2025).
Conference: 75th Annual International Communication Association Conference (ICA 2025)
Abstract: In an increasingly algorithm-driven digital environment, evaluations of algorithmic performance increasingly attend to users’ subjective perceptions. Previous research highlights that AI systems, from job-application algorithms to language models, often elicit perceptions of gender bias among users, partly because societal biases are encoded in AI training data and partly because of users’ own biases. But in users’ minds, to what extent is AI gender bias attributable to the AI itself, and to what extent do humans play a role? Algorithmic awareness, a progressive process in which users first recognize, then critically understand, and finally rhetorically interact with algorithms, is expected to reduce the perception and attribution of bias in AI. Through a 3 (algorithmic awareness: basic vs. critical vs. rhetorical) × 2 (AI output: high gender bias vs. low gender bias) between-subject experiment, we explore how participants attribute biased outcomes and to what extent they invoke the machine heuristic, which reflects the belief in algorithmic objectivity. We further examine the mediating role of the machine heuristic between algorithmic awareness and perceived bias, and whether this relationship is moderated by the actual presence of gender bias in the content. In addition, the study investigates how algorithmic awareness and bias perceptions jointly influence participants’ attitudes toward the AI system and their future behavioral intentions. The findings may offer insights into how deeper interaction with AI can have complex effects on bias perception and suggest pathways for enhancing digital literacy in human-AI interactions.
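As a rough illustration of the analysis the abstract describes, the sketch below shows one way the moderated mediation (the machine heuristic linking algorithmic awareness to perceived bias, conditioned on the actual level of bias in the AI output) could be estimated. The variable names, simulated data, and use of OLS regressions here are assumptions for illustration only, not the authors’ actual procedure.

```python
# Hypothetical sketch (not from the paper): estimating the moderated mediation
# described in the abstract with ordinary least squares regressions.
# Variable names (awareness, actual_bias, machine_heuristic, perceived_bias)
# are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated placeholder data standing in for the 3 x 2 between-subject design:
# awareness condition (basic / critical / rhetorical) x AI output bias (low / high).
df = pd.DataFrame({
    "awareness": rng.choice(["basic", "critical", "rhetorical"], size=n),
    "actual_bias": rng.choice(["low", "high"], size=n),
    "machine_heuristic": rng.normal(4, 1, size=n),
    "perceived_bias": rng.normal(4, 1, size=n),
})

# Path a: does algorithmic awareness predict reliance on the machine heuristic?
path_a = smf.ols("machine_heuristic ~ C(awareness)", data=df).fit()

# Paths b and c': does the machine heuristic carry the effect of awareness onto
# perceived bias, and is that link conditioned on the actual bias in the output?
path_b = smf.ols(
    "perceived_bias ~ C(awareness) + machine_heuristic * C(actual_bias)",
    data=df,
).fit()

print(path_a.summary())
print(path_b.summary())
```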
URI: https://hdl.handle.net/10356/183769
URL: https://www.icahdq.org/mpage/ICA25
https://ica2025.abstractcentral.com/s1agxt/com.scholarone.s1agxt.s1agxt/S1A.html?&a=5544&b=1593989&c=61601&d=17&e=58765975&f=17&g=null&h=BROWSE_THE_PROGRAM&i=N&j=N&k=N&l=Y&m=tNMuvbomL8MmpsKaM6Nww7cGVDE&r=DvSqj&r2=DvSqj7qQK3QyZmiTU_qbVWliPY_qrQCOO25EpGPW&n=0&o=1744880380641&q=Y&p=https://ica2025.abstractcentral.com&x=Y&z=
Research Centres: IGP-Global Asia
Rights: © 2025 The Author(s). All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.
Fulltext Permission: embargo_20250617
Fulltext Availability: With Fulltext
Appears in Collections:IGS Conference Papers

Files in This Item:
File: AIBias_Paper_ICA (1).pdf
Description: Who To Blame: Algorithmic Awareness And Users’ Perception of AI Gender Bias
Size: 236.53 kB
Format: Adobe PDF
Availability: Under embargo until Jun 17, 2025
