AFFILIATE RESEARCH
Misinformation and higher-order evidence

By Brian Ball | September 2024
This paper uses computational methods to investigate the epistemological effects of misinformation on communities of rational agents, while also contributing to the philosophical debate on ‘higher-order’ evidence (i.e. evidence that bears on the quality and/or import of one’s evidence). Modelling communities as networks of individuals, each with a degree of belief in a given target proposition, it simulates the introduction of unreliable mis- and disinformants and records the epistemological consequences for these communities. First, using small artificial networks, it compares the effects when agents who are aware of the prevalence of mis- or disinformation in their communities either deny the import of this higher-order evidence or attempt to accommodate it by distrusting the information in their environment. Second, deploying simulations on a larger real-world network, it observes the impact of increasing levels of misinformation on trusting agents, as well as of more minimal but structurally targeted unreliability. Comparing the two information-processing strategies in the artificial context, it finds a familiar trade-off between accuracy (arriving at a correct consensus) and efficiency (doing so in a timely manner). And in the more realistic setting, community confidence in the truth is depressed by even minimal levels of misinformation.
Other Affiliate Research

A Case Study in an A.I.-Assisted Content Audit
This paper presents an experimental case study using machine learning and generative AI to audit content diversity in a hyperlocal news outlet, The Scope, which is based at a university and focused on underrepresented communities in Boston. Through computational text analysis, including entity extraction, topic labeling, and quote extraction and attribution, we evaluate the extent to which The Scope’s coverage aligns with its mission to amplify diverse voices.
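One of the audit steps named above, quote extraction and attribution, can be illustrated with a deliberately naive sketch. The regex and the “said X” heuristic below are assumptions for illustration only, not The Scope’s actual pipeline (which would miss unattributed or multi-sentence quotes that a real system must handle):

```python
# Hypothetical sketch of quote extraction and attribution via regex.
# The pattern and sample text are illustrative, not the paper's pipeline.
import re

QUOTE_RE = re.compile(
    r'"([^"]+)"'                               # the quoted span
    r'(?:,?\s+(?:said|according to)\s+'        # optional attribution cue
    r'([A-Z][\w. -]+?))?'                      # speaker name (lazy)
    r'(?=[.,;]|$)'                             # stop at sentence punctuation
)

def extract_quotes(text):
    """Return (quote, speaker-or-None) pairs found in `text`."""
    return [(m.group(1), m.group(2)) for m in QUOTE_RE.finditer(text)]

sample = '"We want every neighborhood heard," said Ana Rivera.'
print(extract_quotes(sample))
```

In a real audit the extracted speakers would then be linked to the entity-extraction output, so that attributed voices can be counted across the corpus.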

AI Regulation: Competition, Arbitrage & Regulatory Capture
The commercial launch of ChatGPT in November 2022 and the rapid development of Large Language Models catapulted the regulation of Artificial Intelligence to the forefront of policy debates. One overlooked area is the political economy of these regulatory initiatives: how countries and companies can behave strategically, using different regulatory levers to protect their interests in the international competition over how to regulate AI.
This Article helps fill this gap by shedding light on the tradeoffs involved in the design of AI regulatory regimes in a world where: (i) governments compete with other governments to use AI regulation, privacy, and intellectual property regimes to promote their national interests; and (ii) companies behave strategically in this competition, sometimes trying to capture the regulatory framework.

Multimodal Drivers of Attention Interruption to Baby Product Video Ads
Ad designers often use sequences of shots in video ads, where frames are similar within a shot but vary across shots. These visual variations, along with changes in auditory and narrative cues, can interrupt viewers’ attention. In this paper, we address the underexplored task of applying multimodal feature extraction techniques to marketing problems.
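The shot structure described above (frames similar within a shot, different across shots) is typically recovered by thresholding the change between consecutive frame features. The sketch below is an assumed, simplified illustration, not the paper's method: frames are stood in for by toy 4-bin grayscale histograms, and a cut is declared wherever the L1 distance between consecutive histograms exceeds a threshold.

```python
# Illustrative sketch (assumed, not the paper's method): shot-boundary
# detection by thresholding L1 distance between consecutive frame histograms.
def shot_cuts(frame_hists, threshold=0.5):
    """Return indices i where frame i starts a new shot."""
    cuts = []
    for i in range(1, len(frame_hists)):
        dist = sum(abs(a - b) for a, b in zip(frame_hists[i - 1], frame_hists[i]))
        if dist > threshold:
            cuts.append(i)
    return cuts

# Two "shots": three dark frames, then three bright frames (normalized histograms).
frames = [[0.7, 0.2, 0.1, 0.0]] * 3 + [[0.0, 0.1, 0.2, 0.7]] * 3
print(shot_cuts(frames))  # → [3]
```

Cut timestamps found this way can then be aligned with auditory and narrative features to study where viewer attention is interrupted.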