Google’s ‘Democratic AI’ is Better At Redistributing Wealth Than America

It’s no secret that the overwhelming majority of wealth in the United States is concentrated at the very top, creating staggering levels of poverty and racial inequality that vastly outpace other supposedly “wealthy” nations. But while the current political system ensures that this upward extraction of wealth continues, AI researchers have begun playing with a fascinating question: is machine learning better equipped than humans to create a society that divides resources more equitably?

The answer, according to a recent paper published in Nature from researchers at Google’s DeepMind, seems to be yes—at least, as far as the study’s participants are concerned.

The paper describes a series of experiments in which a deep neural network was tasked with dividing resources in a way human players would prefer. The humans participated in an online economic game, known in economics as a “public goods game,” in which each round they chose how much of a monetary endowment to keep and how many coins to contribute to a collective fund. The fund was then returned to the players under three redistribution schemes modeled on existing human economic systems, plus one additional scheme created entirely by the AI, called the Human Centered Redistribution Mechanism (HCRM). The humans then voted on which system they preferred.
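The round structure described above can be sketched in a few lines. The endowment values, the fund multiplier, and the equal-split payout used here are illustrative assumptions for the sketch, not the paper's actual parameters.

```python
# A minimal sketch of one round of a public goods game. The endowments,
# fund multiplier, and equal-split rule are illustrative assumptions,
# not the parameters used in the study.

def play_round(endowments, contributions, multiplier=2.0):
    """Each player keeps what they don't contribute; pooled contributions
    are multiplied and paid back out under some redistribution scheme
    (here, a strict egalitarian equal split)."""
    fund = sum(contributions) * multiplier
    kept = [e - c for e, c in zip(endowments, contributions)]
    share = fund / len(endowments)  # equal split of the grown fund
    return [k + share for k in kept]

# Two richer players and one poorer player each contribute half their endowment.
payoffs = play_round([10, 10, 4], [5, 5, 2])
print(payoffs)  # [13.0, 13.0, 10.0]
```

Swapping out the payout line is what distinguishes the competing schemes: the game itself stays fixed while the redistribution rule varies.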

It turns out the distribution scheme created by the AI was the one most participants preferred. While the strict libertarian and egalitarian baselines split the returns based on factors like how much each player contributed in absolute terms, the AI’s system redistributed wealth in a way that specifically corrected for the advantages and disadvantages players had at the start of the game, and it beat the alternatives in a majoritarian vote.

“Pursuing a broadly liberal egalitarian policy, [HCRM] sought to reduce pre-existing income disparities by compensating players in proportion to their contribution relative to endowment,” the paper’s authors wrote. “In other words, rather than simply maximizing efficiency, the mechanism was progressive: it promoted enfranchisement of those who began the game at a wealth disadvantage, at the expense of those with higher initial endowment.”
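The quoted principle, compensating players in proportion to contribution relative to endowment, can be illustrated with a toy function. To be clear, this is a hand-written stand-in: the actual HCRM was a policy learned by a neural network, and the exact formula below is an assumption made for illustration.

```python
# An assumed illustration of the progressive principle the authors
# describe: shares scale with contribution *relative to* endowment,
# not absolute contribution. Not the learned HCRM policy itself.

def relative_shares(endowments, contributions, multiplier=2.0):
    fund = sum(contributions) * multiplier
    # A player who gives all of a small endowment outranks a player
    # who gives the same coins out of a large one.
    ratios = [c / e for c, e in zip(contributions, endowments)]
    total = sum(ratios)
    if total == 0:  # nobody contributed anything
        return [0.0 for _ in ratios]
    return [fund * r / total for r in ratios]

# A rich player and a poor player both contribute 4 coins; the poor
# player gave their entire 4-coin endowment and gets the larger share.
shares = relative_shares([10, 4], [4, 4])
```

Under a strict libertarian split, both players above would receive identical returns; weighting by contribution-to-endowment ratio is what makes the rule progressive in the sense the authors describe.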

Figures from Google’s research paper illustrating an economics game where players contribute coins into a public fund.

The method differs from many AI projects, which focus on establishing an authoritative “ground truth” model of reality to drive decisions, and in doing so firmly embed the biases of their creators.

“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers wrote. “Instead of imbuing our agents with purportedly human values a priori, and thus potentially biasing systems towards the preferences of AI researchers, we train them to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in a majoritarian election.”

Of course, we don’t need an AI to show us that more sustainable ways of living are possible. Mutual aid and community organizations that redistribute resources on a smaller scale have existed forever. And scientific evidence has long shown that, contrary to the dogma of hyper-competitive capitalism, human beings are naturally predisposed toward cooperation, sharing, and collective prosperity.

While the AI’s system was preferred by human participants, that doesn’t necessarily mean it would equitably satisfy the needs of humans on a larger scale. The researchers are also quick to point out that the experiments are not a radical proposal for AI-based wealth redistribution, but a framework for future research on how AI could intervene in public policy.

“Our results do not imply support for a form of ‘AI government’, whereby autonomous agents make policy decisions without human intervention,” the researchers wrote in the paper. “We see Democratic AI as a research methodology for designing potentially beneficial mechanisms, not a recipe for deploying AI in the public sphere.”
