THE INTERNET DEMOCRACY INITIATIVE

Research Seed Projects

The following research projects have been awarded seed grants from the IDI.

AI as a Team Member: Human-AI Group Discussion Improves Misinformation Detection

Principal Investigator: Chenyan Jia, College of Arts, Media and Design and Khoury College of Computer Sciences

As misinformation proliferates online, supporting the public in discerning between true and false news is imperative to maintaining an informed citizenry. Past work has mainly focused on improving individuals’ ability to distinguish true information from false information; much less is known about how to increase a group of social media users’ ability to identify misinformation through collaboration and discussion. Can our strength in collaboration ameliorate individuals’ weaknesses in detecting misinformation? Furthermore, with the emergence of generative language models such as ChatGPT, can artificial intelligence (AI) serve as a valuable team member that shares knowledge and interacts with humans when detecting misinformation?

In this work, we propose a 2 (AI as a member: yes vs. no) × 2 (individual vs. group) between-subjects experiment. We aim to examine 1) whether group discussion can improve individuals’ ability to separate false information from truth, and 2) whether group dynamics change after adding AI as a new actor. We will build on the ChatGPT API’s function-calling feature, which will enable our chatbot to search Wikipedia and other verified sources of information (e.g., a third-party fact-checking dataset). Our system will also include a memory module consisting of both short-term and long-term memory. The short-term memory system relies on retrieving the agents’ own historical choices; its purpose is to simulate a human’s intuitive mind. The long-term memory system is a collection of agent profiles containing all of the agents’ social attributes (e.g., gender, ideology, education); its data structure can be designed as a list of key-value pairs. A minimal sketch of these components appears below.

One of the strengths of humankind lies in our ability to collaborate: people can take on bigger, more challenging problems when they work together than they can on their own. Our proposed study investigates whether collaboration can help us identify misinformation. Our work has important implications, reorienting research and policy from a focus on the individual toward a more collaborative and social approach to the problem of misinformation in the era of generative AI.
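To make the memory design concrete, here is a minimal Python sketch of the two memory stores and an OpenAI-style function-calling tool definition. The class names, the `search_fact_checks` tool, and the attribute fields are illustrative assumptions, not the project’s actual implementation.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Long-term memory entry: an agent's social attributes as key-value pairs."""
    agent_id: str
    attributes: dict = field(default_factory=dict)  # e.g. {"gender": ..., "ideology": ...}

class MemoryModule:
    """Combines short-term (recent choices) and long-term (profile) stores."""
    def __init__(self, short_term_size: int = 10):
        # Short-term memory: a bounded window of each agent's recent choices,
        # meant to approximate a human's fast, intuitive responses.
        self.short_term = deque(maxlen=short_term_size)
        # Long-term memory: a list of key-value agent profiles.
        self.long_term: list[AgentProfile] = []

    def remember_choice(self, agent_id: str, claim: str, verdict: str) -> None:
        self.short_term.append({"agent": agent_id, "claim": claim, "verdict": verdict})

    def add_profile(self, profile: AgentProfile) -> None:
        self.long_term.append(profile)

# An OpenAI-style function-calling schema that would let the chatbot query
# verified sources; the tool name and parameters are hypothetical.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_fact_checks",
        "description": "Search Wikipedia and a third-party fact-checking dataset for a claim.",
        "parameters": {
            "type": "object",
            "properties": {"claim": {"type": "string", "description": "The claim to verify."}},
            "required": ["claim"],
        },
    },
}
```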

Vision Beyond Sight: Designing Human-Centered AI Systems for Social Media Accessibility for the Visually Impaired, through the National Internet Observatory

Principal Investigators: Saiph Savage, Khoury College of Computer Sciences; Yakov Bart, D’Amore-McKim School of Business

“Vision Beyond Sight” is an innovative project focused on leveraging human-centered AI systems to enhance social media accessibility for visually impaired individuals. Operating under the auspices of the National Internet Observatory (in particular, by using visual data from the observatory), the initiative aims to bridge the gap between the rapidly evolving digital landscape and the needs of those with visual impairments. At the core of the project is the development of AI-driven tools tailored to interpret, translate, and present social media content in formats accessible to visually impaired users. These tools will use advanced techniques such as GPT-4V, natural language processing, and audio description technologies to convert textual, image, and video content into tactile, auditory, and other accessible formats. This conversion will include not only the literal translation of content but also the subtleties of social cues, emotional tones, and cultural context, conveyed by connecting with human annotators who will supply elements often missed by traditional screen readers.

The project also emphasizes user experience, ensuring that the AI systems are intuitive, customizable, and responsive to the diverse needs and preferences of visually impaired users. By conducting ongoing interviews, structured surveys, user feedback sessions, and large-scale usability testing, the project will iterate on and refine the AI tools to align closely with real-world user requirements and to allow the customization needed to reflect heterogeneous preferences in information consumption. Through interdisciplinary collaboration among faculty experts in computer science, journalism, and consumer behavior in digital environments, “Vision Beyond Sight” seeks to democratize access to social media, enabling visually impaired individuals to participate more fully in digital social interactions. The initiative not only holds the promise of technological innovation but also represents a step toward a more inclusive digital society in which everyone, regardless of visual ability, enjoys equal access to information and communication.
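To illustrate the kind of conversion step involved, here is a minimal Python sketch that asks a vision-capable OpenAI model for an accessibility-oriented image description. The model name, prompt wording, and `describe_image` helper are assumptions for illustration; the project’s actual pipeline (including human annotation and audio rendering) is not shown.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_image(image_url: str) -> str:
    """Request an accessibility-oriented description of a social media image."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model; the abstract names GPT-4V
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Describe this social media image for a visually "
                          "impaired user. Include social cues and emotional "
                          "tone, not just literal content.")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```

In a full pipeline, the returned text would then be routed to a screen reader, audio renderer, or human annotator for refinement.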

Publics and Their Opinions: Measures on Realistic Simulations

Principal Investigator: Brian Ball, Philosophy, New College of Humanities London 

The objective of this six-month project is to understand the (collective) opinions of various social groups, or ‘publics’, under realistic informational conditions, and to assess various measures of those aggregate attitudes. Building on the techniques and findings of the PolyGraphs project, the envisioned research will use computational methods to simulate communities of agents in their quest for informed opinions under adverse circumstances: confronted with mis- and disinformation, for example, agents may be uncertain which of their network neighbors are trustworthy or have opinions or expertise that should be deferred to. More specifically, the project will deploy the open-source Python code developed under PolyGraphs (available on GitHub) to run experiments on the Discovery cluster using large graph datasets obtained from real-world sources (e.g., SNAP, or Academia.edu, which will provide CSV feeds), so as to yield socially valid network topologies. It will then deploy community detection algorithms to identify the groups whose collective outlook is under investigation, and use statistical methods to examine and assess the outputs of a range of aggregative measures of these publics’ opinions; a minimal sketch of this detect-then-aggregate step appears below. The funds will be used for doctoral research support in computer/data science to generate, gather, wrangle, and analyze the simulation data, contributing to a number of outputs and publications. The findings will be disseminated in the UK and internationally, and they will underpin a subsequent large-scale study that further develops the PolyGraphs modeling tool (implementing Bayes nets and Pearl’s rule for individual agents) to achieve still greater realism.
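The following Python sketch shows the detect-then-aggregate step using NetworkX. The synthetic graph, the random per-node credences, and the two aggregation measures (mean credence and majority belief) are illustrative assumptions; the PolyGraphs codebase itself is not shown.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Stand-in for a real-world topology (e.g., a SNAP edge list loaded with
# nx.read_edgelist); here a small built-in graph keeps the sketch self-contained.
G = nx.karate_club_graph()

# Hypothetical per-node credences: each agent's degree of belief in some claim.
random.seed(0)
opinions = {node: random.random() for node in G.nodes}

# Identify publics via community detection.
communities = greedy_modularity_communities(G)

# Assess two simple aggregative measures of each public's opinion.
for i, community in enumerate(communities):
    vals = [opinions[n] for n in community]
    mean_credence = sum(vals) / len(vals)
    majority_believes = sum(v > 0.5 for v in vals) > len(vals) / 2
    print(f"public {i}: size={len(community)}, "
          f"mean credence={mean_credence:.2f}, majority believes={majority_believes}")
```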

Assessing ChatGPT’s Ability to Mimic Humans on Social Media

Principal Investigator: Kaicheng Yang, Network Science Institute

Large language models (LLMs) can generate human-like text and excel in various natural language processing tasks, catalyzing a range of potential applications. Recent studies indicate that LLMs can role-play effectively, exhibit diverse characteristics when interacting with humans, and capture political views when fine-tuned on social media data. These findings suggest that LLMs can potentially impersonate humans on social media, but a systematic analysis is lacking. This proposal aims to close that gap by testing the ability of ChatGPT, a prominent LLM, to mimic individual Twitter users. The study could help Northeastern better prepare for the 2024 elections. Because of restrictions on data accessibility from platforms such as Twitter and Reddit, Northeastern researchers have lost the ability to study online behavior; demonstrating the viability of using LLMs to simulate humans could therefore enable cost-effective and ethically responsible experiments, and LLMs might also serve as an alternative source of social media data. From a trust and security perspective, the risk of adversarial actors employing LLMs to fabricate fake personas on social media is imminent, and the proposed study could yield valuable insights into how LLMs facilitate such abuse.

The proposal involves three steps. First, we will sample Twitter users with diverse backgrounds and characteristics, leveraging the Twitter panel dataset in the Lazer Lab. Second, we will ask ChatGPT to emulate the styles and traits of these users to produce additional content, which we will then compare with the users’ real content; a minimal sketch of this emulation step appears below. Finally, we will conduct an online experiment to test whether humans can identify the AI-generated posts.
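As a sketch of the emulation step, the following Python snippet prompts an OpenAI chat model with a user’s sample tweets and asks for a new post in the same style. The model name, prompt wording, and `emulate_user` helper are illustrative assumptions, not the study’s actual protocol.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def emulate_user(sample_tweets: list[str], topic: str) -> str:
    """Ask the model to write one new tweet in the style of the samples."""
    examples = "\n".join(f"- {t}" for t in sample_tweets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the proposal names ChatGPT
        messages=[
            {"role": "system",
             "content": ("You imitate a specific Twitter user. Match their tone, "
                         "vocabulary, and quirks. Reply with exactly one tweet.")},
            {"role": "user",
             "content": (f"Sample tweets by this user:\n{examples}\n\n"
                         f"Write one new tweet by this user about: {topic}")},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage; the generated text would then be compared with the
# user's real posts and shown to participants in the online experiment.
print(emulate_user(["coffee first, takes later", "ok this weather is a scam"],
                   "monday mornings"))
```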

Contact Us

Are you interested in joining the IDI team, or do you have a story to tell? Reach out to us at j.wihbey@northeastern.edu.