AFFILIATE RESEARCH
Collaboration, crowdsourcing, and misinformation

By Chenyan Jia | September 2024
One of humanity’s greatest strengths lies in our ability to collaborate to achieve more than we can alone. Yet just as collaboration is among our greatest strengths, our inability to detect deception is one of our greatest weaknesses. Our struggles with deception detection have recently drawn scholarly and public attention as misinformation spreads online, threatening public health and civil society. Fortunately, prior work indicates that going beyond the individual can ameliorate weaknesses in deception detection, whether by promoting active discussion or by harnessing the “wisdom of crowds.” Can group collaboration similarly enhance our ability to recognize online misinformation? We conducted a lab experiment in which participants assessed the veracity of credible news and misinformation on social media, either as an actively collaborating group or working alone. Collaborative groups were more accurate than individuals at detecting false posts, but no more accurate than a majority-based simulated group, suggesting that the “wisdom of crowds” is the more efficient method for identifying misinformation. Our findings point research and policy away from a focus on the individual and toward approaches that rely on crowdsourcing, and potentially collaboration, to address the problem of misinformation.
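The abstract does not detail how the majority-based simulated group was constructed, but the general idea, aggregating independent individual judgments by majority vote, can be sketched in a few lines of Python. Everything here (group size, tie-breaking, the sampling loop) is illustrative, not the authors’ analysis code:

```python
import random

def majority_verdict(judgments):
    """Majority vote over individual true/false calls.
    True = 'this post is misinformation'. Ties (possible with an even
    group) are broken at random here, a hypothetical convention, since
    the summary does not say how the paper handles them."""
    votes = sum(judgments)
    if votes * 2 == len(judgments):
        return random.choice([True, False])
    return votes * 2 > len(judgments)

def simulated_group_accuracy(all_judgments, truths, group_size=3, n_draws=1000):
    """For each post, repeatedly draw `group_size` individual judgments
    and score the majority verdict against ground truth."""
    correct = total = 0
    for judgments, truth in zip(all_judgments, truths):
        for _ in range(n_draws):
            sample = random.sample(judgments, group_size)
            correct += majority_verdict(sample) == truth
            total += 1
    return correct / total

# Example: 10 individuals judged one false post; 6 of 10 called it false.
print(simulated_group_accuracy([[True] * 6 + [False] * 4], [True]))
```

The appeal of this approach is that it needs no live deliberation: individual judgments can be collected independently and aggregated after the fact.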
Other Affiliate Research

A Case Study in an A.I.-Assisted Content Audit
This paper presents an experimental case study utilizing machine learning and generative AI to audit content diversity in a hyperlocal news outlet, The Scope, based at a university and focused on underrepresented communities in Boston. Through computational text analysis, including entity extraction, topic labeling, and quote extraction and attribution, we evaluate the extent to which The Scope’s coverage aligns with its mission to amplify diverse voices.
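To make the entity-extraction step concrete, here is a minimal sketch assuming spaCy’s off-the-shelf English model; the paper’s actual pipeline, models, and category scheme are not specified in this summary:

```python
from collections import Counter
import spacy

# Assumes spaCy's small English pipeline; the tools actually used in
# the audit are not specified in this summary.
nlp = spacy.load("en_core_web_sm")

def count_named_sources(article_text):
    """Tally person and organization mentions as a rough proxy for
    whose voices appear in a story."""
    doc = nlp(article_text)
    return Counter(ent.text for ent in doc.ents
                   if ent.label_ in {"PERSON", "ORG"})

# Aggregating these counts across a corpus gives one simple measure of
# how coverage is distributed across people and institutions.
print(count_named_sources("Mayor Michelle Wu spoke at Northeastern University."))
```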

AI Regulation: Competition, Arbitrage & Regulatory Capture
The commercial launch of ChatGPT in November 2022 and the rapid development of Large Language Models catapulted the regulation of Artificial Intelligence to the forefront of policy debates. One overlooked area is the political economy of these regulatory initiatives: how countries and companies can behave strategically, using different regulatory levers to protect their interests in the international competition over how to regulate AI.
This Article helps fill this gap by shedding light on the tradeoffs involved in the design of AI regulatory regimes in a world where: (i) governments compete with other governments to use AI regulation, privacy, and intellectual property regimes to promote their national interests; and (ii) companies behave strategically in this competition, sometimes trying to capture the regulatory framework.

Multimodal Drivers of Attention Interruption to Baby Product Video Ads
Ad designers often use sequences of shots in video ads, where frames are similar within a shot but vary across shots. These visual variations, along with changes in auditory and narrative cues, can interrupt viewers’ attention. In this paper, we address the underexplored task of applying multimodal feature extraction techniques to marketing problems.
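The shot structure the authors describe, similar frames within a shot and sharp changes across shots, is what classic cut-detection heuristics exploit. Below is a minimal sketch using OpenCV histogram comparison; the threshold and method are illustrative assumptions, not the paper’s feature-extraction pipeline:

```python
import cv2

def detect_shot_boundaries(video_path, threshold=0.6):
    """Flag frames whose color histogram differs sharply from the
    previous frame's: a standard cut-detection heuristic. The 0.6
    correlation threshold is a hypothetical value, not from the paper."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                            [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:  # big visual jump: likely a new shot
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```

Segmenting an ad into shots this way is a natural first step before aligning visual changes with the auditory and narrative cues the paper studies.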