
JOBEM Special Issue Call: AI and the Platformization of Journalism: Fake News, Mis(dis)information, and Algorithmic Bias


Submission Deadline: May 1, 2022

Special Issue Editors: 
Don Shin, Professor, College of Communication and Media Sciences, Zayed University

Kerk F. Kee, Associate Professor, College of Media and Communication, Texas Tech University

Artificial intelligence (AI) is affecting the lives of media users (Wölker & Powell, 2021). Search engine platforms (Google), social media platforms (Twitter and Instagram), and other over-the-top service platforms (Hulu and Netflix) are fueled by data and are automated and organized through AI and algorithms, shaping users and markets. Similarly, the platformization of news and journalism is a growing trend (Dijck et al., 2018). Platformization refers to the increasing penetration of the economic, organizational, and social extensions of digital platforms into online and media ecosystems, fundamentally affecting the operations of media industries and journalistic practices (e.g., multiplatform journalism).

Recently, platformization has accelerated with drastic breakthroughs in machine learning. Algorithms enable sets of automated processes that transform input data into desired output (Dijck et al., 2018), and they play a key role in curating what information is considered most relevant to users. Algorithms are popular and effective in practice, but their effectiveness comes at the expense of systematic discrimination, limited transparency, and vague accountability (Moller et al., 2018). Algorithmic filtering procedures may lead to more impartial, and thus possibly fairer, processes than those performed by humans. However, algorithmic recommendation processes have been criticized for their tendency to intensify and reproduce bias, distort facts, and create information asymmetry and process opacity (Ananny & Crawford, 2018). Algorithmic bias may further aggravate the algorithmic injustice that machine learning automates and perpetuates (Hoffman, 2019).

AI-powered platforms have greatly contributed to the rapid dissemination of fake news, mis(dis)information, and deepfakes, undesirable by-products of platformization (Dan et al., 2021). Misinformation spreads more rapidly and more broadly than reliable information does, jeopardizing the credibility of algorithmic journalism. Questions about how to safeguard the goals, values, and automated processes of platformization, how to counter fake news, how to discern misinformation, and how to regain media trust in AI remain controversial (Crain, 2018). Underlying these questions are concerns about how to mitigate bias and discrimination in data, as well as the urgent task of designing algorithmic systems that are transparent and fair (Hoffman et al., 2019). As ethical concerns have peaked with the rise of platformized algorithms, the opacity of black-box algorithmic processes has led to calls for studies on fairness and transparency (Dörr & Hollnbuchner, 2017; Meijer, 2014).

Recent research (e.g., Park, 2019; Sandvig et al., 2016) has highlighted the normative implications and problems of these algorithms, summarized as fairness, accountability, and transparency (FAT). Transparency and fairness in particular emerge as key attributes of trustworthy algorithmic systems (Helberger et al., 2018). This topic will become even more critical as platforms rely more and more on algorithms and as people rely on algorithms more than on social influence when making judgments. AI is becoming pervasive across all media industries and service functions. A key question is how to govern these platform algorithms effectively and legitimately while ensuring that they are trustworthy and socially responsible. These normative concerns have given rise to calls for better explanatory frameworks to address them effectively (Thurman et al., 2019). A number of studies have examined these concerns from various perspectives, such as a user consumption perspective (how individuals make sense of fake news and how people share and circulate misinformation), an ethical perspective (how journalistic practices confront and deal with issues of FAT), a managerial perspective (fake news detection and channels of dissemination), and a regulatory perspective (how to govern fake news and misinformation).

In this special issue, we approach platformization through broad and thematically relevant topics of algorithmic bias and AI-facilitated harm. We invite submissions that engage with one or more of the topics below, and beyond:

  • The issues of fairness, accountability, and transparency (FAT) in platformized journalism
  • The negative side-effects of platformization: Fake news, mis(dis)information, and deepfakes
  • Technological (e.g., blockchain), legal, and/or social measures to fight fake news and misinformation
  • Online misinformation diffusion models and message trustworthiness, trust, and acceptance models
  • User acceptance and diffusion of platformization in existing approaches to media services
  • The impact of platformization on newsrooms, journalistic practices, and news values
  • The effects of platformization on the users’ news consumption and use
  • Theoretical discussion of platformization and multiplatform journalism
  • Ethical and cultural issues of the transformation of platformization
  • Platformized journalism compared cross-culturally and cross-nationally: variables and modeling
  • Mapping the field of platformized journalism: Different typologies, approaches, and models
  • Comparative work on the role perceptions, epistemological orientations, and ethical views in algorithmic journalism
  • Democratic and wider societal roles of digital journalism in both democratic and non-democratic contexts
  • Platformization and traditional journalism: competition and strategies for claiming niches
  • Algorithmic journalism and politics in comparative perspective: Socio-technical challenges for AI and algorithmic journalism in various parts of the world
  • Social, political, and regulatory implications of platformization
  • Cross-contextual comparison of platformized news and journalism: Contextual dependence of the development trajectories of platformized journalism practices around the world

Our objective is to contribute to theorizing and operationalizing algorithmic platforms that are fairer and more transparent. To this end, this special issue aims to contribute to understandings of fairness and transparency, leading to operational, user-centric definitions for different areas of media platforms, with implications for both design/development and sociological/ethical models. Beyond welcoming research on algorithmic media in established and mainstream contexts, this issue especially invites contributions from non-Western cultural and marginalized contexts, including non-democratic and newly democratic societies. The special issue is open to regular submissions; decisions about inclusion will be quality-based, relying on thorough peer review.

Abstracts should be submitted to the special issue editors by May 1, 2022. Abstract submissions should be 400–700 words excluding references, indicating central questions, theoretical framework, and methodology. Full papers based on accepted abstracts should be submitted to the Journal of Broadcasting & Electronic Media online submission system (https://mc.manuscriptcentral.com/hbem) by July 30, 2022; however, we encourage early submissions. Full papers are expected to be between 7,000 and 8,500 words, including references, tables, figures, and supplementary materials. All queries about the special issue should be addressed to the guest editors.

Timeline

  • Abstract submission by email to the guest editors: May 1, 2022
  • Authors notified of the results of abstract selection: May 20, 2022
  • Deadline for full paper submission: July 30, 2022
  • Decision: December 30, 2022

