In their new article, “Automated Detection of Media Bias Using Artificial Intelligence and Natural Language Processing: A Systematic Review”, Mar Castillo-Campos, David Becerra-Alonso, and Hajo Boomgaarden examine how advances in NLP can be harnessed to detect media bias. Drawing on 28 peer-reviewed studies published between 2019 and 2023, the review maps definitions of bias, the NLP tasks applied, technological approaches, and their outcomes.
Their review highlights recurring difficulties: models struggle with context, metaphor, satire, and omission; bias is often oversimplified into binary categories such as “liberal” versus “conservative”; and large-scale, multilingual benchmark datasets are lacking. While BERT and its variants remain the most widely used tools, the authors show that combining methods often yields more robust results, and they caution that pre-trained models may themselves reproduce unwanted biases.
This work not only consolidates the state of the art in computational media bias detection but also calls for clearer definitions, standardized evaluation metrics, and greater attention to the subtler, less obvious forms of bias shaping news discourse.
Read the full article here: https://doi.org/10.1177/08944393251331510
Cite the article:
Castillo-Campos, M., Becerra-Alonso, D., & Boomgaarden, H. G. (2025). Automated Detection of Media Bias Using Artificial Intelligence and Natural Language Processing: A Systematic Review. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393251331510
