THE INTERNET DEMOCRACY INITIATIVE

AI-Media Strategies Lab

The AI-Media Strategies Lab (AIMS Lab) studies the use of AI technologies in media industries, provides evidence-based recommendations to organizations, and produces research using methods such as surveys, focus groups, experiments, and data analysis.

The Lab’s Director is Dr. John Wihbey, associate professor of media innovation and technology at Northeastern University and a consultant and analyst. His graduate-level course “AI in Media Industries” will run in Spring 2025. 

AIMS seeks to partner with and advise organizations on using AI tools to accomplish their media-related goals in areas such as news, communications, social media, public relations, and advocacy. The Lab aims to learn from external organizations and consolidate knowledge about the state of the art (applied research), as well as to produce basic research on AI technologies and how they are used and perceived in media and communications.

AIMS also seeks to serve as an educational bridge as the media and communications workforce adapts to a new technological paradigm.

The research goals of the AIMS Lab are to:

  • Identify workflows and key use cases within media and communications organizations that can benefit from the careful, structured use of generative AI technologies.
  • Explore the uses of generative AI on social platforms to anticipate emerging platform governance and ethics questions.
  • Examine how news media may deploy AI in workflows and establish corresponding governance practices.
  • Leverage ongoing surveys to assess how people understand AI in media and what value preferences they hold.
  • Educate media and communications workers in new areas of competency, such as prompt engineering and AI-powered workflows.

Selected research:

“The Algorithmic Knowledge Gap within and between Countries: Implications for Combatting Misinformation”

“The Origin of Public Concerns over AI-supercharging Misinformation in the 2024 US Presidential Election” (forthcoming), Harvard Kennedy School Misinformation Review

“AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?” Trinity College Conference on Democracy’s Mega Challenges, 2024

“Social Media’s New Referees?: Public Attitudes Toward AI Content Moderation Bots Across Three Countries”

“How Americans See AI: Caution, Skepticism, and Hope,” Northeastern University AI Literacy Lab, 2023

“Marketplace of Ideas 3.0?: A Framework for the Era of Algorithms”

“Social Media Regulation, Third-person Effect, and Public Views: A Comparative Study of the United States, the United Kingdom, South Korea, and Mexico”

“The Emerging Science of Content Labeling: Contextualizing Social Media Content Moderation,” Journal of the Association for Information Science and Technology, 2022

“Informational Quality Labeling on Social Media: In Defense of a Social Epistemology Strategy”

Contact Us

Are you interested in joining the IDI team, or do you have a story to tell? Reach out to us at j.wihbey@northeastern.edu.