
Much of the current discussion of AI fixates on its ethical implications: whether AI may eventually “outsmart” or harm us, and how we can ensure that it acts in our best interests. Against that backdrop, it’s worth considering a new approach to AI that keeps humans in the loop: swarm intelligence.
UNU, a software platform run by Unanimous A.I., brings groups of people together online to arrive at all kinds of real-time decisions and predictions, ranging from who will win March Madness to the top four horses at the Kentucky Derby. The system has proven remarkably effective at coming up with accurate answers. In fact, it has outperformed experts in a variety of contests: in the 2015 Oscar predictions, for instance, the swarm was more than 70% accurate, while New York Times critics, it should be noted, were right only 55% of the time.
SEE: How ‘artificial swarm intelligence’ uses people to make better predictions than experts
But beyond accuracy, there is another advantage to using the swarm: according to new research, it makes more ethical decisions.
This may run counter to common assumptions. “As individuals, people generally make moral decisions with respect to the good of society as a whole,” said Louis Rosenberg, CEO of Unanimous A.I. “But in groups, collectively, we often make bad decisions, leading to problems like inequality, pollution, and armed conflict. This is why it’s so interesting that by forming a Swarm Intelligence instead of taking a standard vote, groups seem able to overcome this fundamental human dilemma and make decisions that are more selfless and moral.”
The new research, presented at Collective Intelligence 2016 at NYU last week, involved a series of tests using UNU. Participants were randomly selected and paid $1.00 for taking part. The tests were based on the Tragedy of the Commons (TOC), a classic economic problem in which individuals, each trying to claim the largest share of a common resource, end up harming the group as a whole.
In the first round of experiments, 18 subjects made decisions under two conditions: first acting as individuals through a survey, then deciding together as part of a swarm. In the survey, each participant selected whether they would like an extra $0.30 or $0.90.
When acting as a swarm, the group was charged with moving the “magnet” towards one of six spots: three worth $0.30 and three worth $0.90.
The catch? If more than 30% of the group, in either condition, selected the higher amount, everyone would leave empty-handed.
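To make the incentive structure concrete, here is a minimal sketch of that payout rule in Python. The function name and exact mechanics are illustrative assumptions based on the description above, not Unanimous A.I.’s implementation.

```python
# Minimal sketch of the payout rule described above (an illustrative
# assumption, not Unanimous A.I.'s code). If more than 30% of participants
# pick the higher amount, the commons collapses and everyone leaves
# empty-handed; otherwise each participant receives the amount they asked for.

def payouts(choices, high=0.90, threshold=0.30):
    """choices: the dollar amount each participant asked for."""
    high_share = sum(1 for c in choices if c == high) / len(choices)
    if high_share > threshold:
        return [0.0] * len(choices)  # nobody gets a bonus
    return list(choices)

# The survey condition from the first experiment: 67% asked for $0.90,
# well over the 30% threshold, so the payout is zero for everyone.
survey = [0.90] * 67 + [0.30] * 33
print(sum(payouts(survey)))  # 0.0
```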
SEE: AI gone wrong: Cybersecurity director warns of ‘malevolent AI’ (TechRepublic)
Also, it should be noted: “users [could] only see their own magnet during the decision, and not the magnets of other users. Thus, although they can view the puck’s motion in real time, which represents the emerging will of the full swarm, they are not influenced by the specific breakdown of support across the options. This limits social biasing.”
The results? When asked individually, 67% of participants asked for the $0.90 award, and no one received a cash bonus; the researchers say this is “typical of TOC dilemmas.” In the swarm, however, 24% of participants pulled towards $0.90, 70% pulled towards $0.30, and 6% abstained from the decision.

The swarm, the researchers said, came up with a “solution that optimized the payout for the full group,” beating the classic TOC problem.
In the second experiment, 70 subjects again made decisions in two conditions: first as teams deciding by majority vote in a standard online poll, then as real-time swarms.
There were three teams: orange, yellow, and purple. Each team was told that all of its members would receive an additional bonus of either $0.25 or $0.75; the team simply had to ask for the bonus it wanted. But if more than a third of the teams asked for the higher amount, no one would get a bonus.
The results: 47 of the 70 individuals asked for the $0.75 bonus. “Thus, when viewed as a pool of disconnected individuals, the subjects once again failed the TOC dilemma,” said the researchers. And when grouped into teams voting by poll, all three asked for the larger bonus.
When the teams decided as real-time swarms, however, two of the three asked for the smaller bonus.
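The same illustrative rule can be applied at the team level: with three teams, “more than a third” means that two or more high requests void every bonus, while a single high request does not. As before, this function is a hypothetical rendering of the rule as described, not the study’s code.

```python
# Illustrative team-level version of the rule (assumed mechanics, as above).
def team_bonus(requests, high=0.75, threshold=1/3):
    """requests: the bonus amount each team asked for."""
    high_share = sum(1 for r in requests if r == high) / len(requests)
    if high_share > threshold:
        return [0.0] * len(requests)  # too many teams asked high: nobody is paid
    return list(requests)

print(team_bonus([0.75, 0.75, 0.75]))  # poll condition: [0.0, 0.0, 0.0]
print(team_bonus([0.25, 0.25, 0.75]))  # swarm condition: [0.25, 0.25, 0.75], bonuses paid
```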
According to the researchers, “human swarming may be a viable technique for reaching decisions that are better aligned with the common interests of a group, as compared to poll-based methods for tapping collective intelligence … intelligence that arises from human swarms may produce decisions that are more supportive of the common good than would come from the individual participants who comprise the swarm.”
The experiments, which used “soft” AI to reach decisions through group intelligence, may offer a new way to think about how to create ethical AI.