Coordinating Humans to Nudge AI Behavior

How can we pro-socially influence machine behavior without access to code or training data? In a recent study, a community with over 15 million subscribers tested the effect of encouraging fact-checking on the algorithmic spread of unreliable news, discovering that adjustments in the wording of this "AI nudge" could reduce the ranking scores that drive that spread by a factor of two. We found that we can persuade algorithms to behave differently by nudging people to behave differently.

How should we think about the politics and ethics of systematically influencing black-box systems from the outside? This AI nudge was conducted using CivilServant, novel software that enables communities to conduct their own policy experiments on human and machine behavior, independently of online platforms. In this talk, hear the results of our experiment on reducing the spread of unreliable news, alongside reflections on the history and future of democratic policy experimentation.