
We’re going to live in a world of bots. They need to be polite.

November 21, 2018
A robot server carrying dishes to fulfill diners’ orders arrives at a table in Haidilao’s new artificial intelligence hot pot restaurant in Beijing, Nov. 14, 2018. REUTERS/Jason Lee

Many people — including roboticists, psychologists and philosophers — are trying to understand how the world will work when it is hard to distinguish artificial intelligence from human beings. Ordinary people often are creeped out when they discover that the “person” they thought they were interacting with is actually an AI. Google learned this recently when its Duplex assistant, which could phone up restaurants and hair salons to book reservations and appointments — sounding just like a human, “umms” and “ahhs” and all — caused public consternation.

My research, building on ideas that I have developed with Barry Weingast and that I am working on now with two graduate students in artificial intelligence, Dylan Hadfield-Menell and McKane Andrus, points to some more fundamental questions. We all live in normative social orders — sets of rules that say what is okay, and what is not okay to do, making complex interactions more predictable. If these rules are to work, they need to be enforced. But what happens when some of the human beings we think we are interacting with are bots?

Even trivial-seeming rules can be important

Rules help provide order to the massively complex societies we live in, which otherwise would be completely unpredictable. Sometimes these rules are enforced by officials such as police and judges, but most of the time they are enforced by ordinary people, who disapprove of rule-breakers and end up refusing to do business with them. Rule compliance is costly, but when rules go unenforced, society can get into trouble, because cheating takes hold. Even worse, cheating can spread: if I think that no one is enforcing the rules and everyone else is cheating, then I don’t want to be a sucker. Everyone, then, is always observing: Who is following the rules, and who is enforcing them?

This creates what game theorists call a “signaling” problem, where people are trying to read signals from those around them as to whether they are cheating or enforcing the rules, and then deciding whether to comply or not. Here, my research with Hadfield-Menell and Andrus argues that “silly rules” — rules that don’t seem to matter — can be crucially important.

For example, when an executive assistant uses friendly language in an email and signs his or her name with “best regards” or “cordially,” it might seem like meaningless noise that doesn’t contribute information about, for example, when a phone call needs to be scheduled. But many of us work in communities where using that language and style amounts to a rule — if we break the rule, we may be criticized or shunned. People in those communities don’t see the rule as silly at all and get upset about perceived rudeness.

We argue that these silly rules reinforce more important ones — for instance, about keeping promises and respecting property — by providing information about whether rules in general are being obeyed. When there are lots of silly rules, provided they are cheap to enforce and follow, members of the group have lots of opportunities to observe what people do when someone in the group breaks the rules. That means people can figure out in low-stakes interactions whether to stick around for the high-stakes interactions.

Bots may mess these rules up

As businesses try to build congenial artificial intelligence assistants, they get those systems to comply with silly rules. X.ai, for example, is a product that uses machine learning to learn how to act as a user’s personal assistant over email. When you purchase this product, you get an AI assistant named either “Andrew Ingram” or “Amy Ingram.” If you want to schedule coffee with a new business contact over email, you CC “Andrew” or “Amy” and “they” take over scheduling, emailing the contact — “Hi, I’m happy to help schedule coffee” — and signing off with a “name” and “title.” An email exchange with “Andrew” or “Amy” appears in your contact’s conversation threads.

But, of course, the AI system doesn’t have a name and title; it can’t feel happy to help; it isn’t a person sitting alongside all the other people in someone’s email inbox.

This leads to problems. Should your contact be polite to “Andrew Ingram” when they respond, even if Andrew Ingram is not a real person? Or do they just stick to the purely functional content: I’m available at 9:30 on Wednesday? If, further into the exchange, “Andrew” makes a mistake about what they said, do they chastise “him” for not reading carefully? Or for treating them in a manner they consider rude? It’s confusing — especially if people think at first that Andrew is human and only later discover that he is a bot. Andrew’s politeness may not tell us much about whether his organization can be trusted on other rules.

If enough of us start interacting with bot schedulers who don’t care if we are polite or not, we might all start to wonder whether the rules we really do care about — like whether people are timely in keeping their appointments or respect the confidentiality of what we tell them when we speak — will be enforced and followed.

This may damage social order

Our work uses computational simulations of groups like these and finds that a group with a lot of (cheap) silly rules is more likely to endure moments of doubt about whether the rules are being enforced. Groups face those kinds of moments all the time, as innovation and globalization continually shake up our understandings of which rules work when.
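
To make the intuition concrete, here is a deliberately simple toy simulation. It is a sketch written to illustrate the logic, not the model used in the research described above, and all of the agents, parameters and thresholds in it are illustrative assumptions. Agents estimate whether rule-breakers get sanctioned from the violations they happen to witness; silly rules add cheap extra interactions, and therefore extra chances to observe enforcement. After a shock to everyone's confidence, the group with more silly rules gathers evidence faster and returns to cooperating sooner.

```python
# Toy sketch (not the authors' model): agents estimate whether rules are being
# enforced from the violations they happen to observe. "Silly rules" are cheap
# extra interactions, so they create extra observations of enforcement.
import random

def rounds_to_recover(silly_rules, n_agents=100, n_rounds=300, shock_round=100,
                      enforce_prob=0.8, break_prob=0.1, threshold=0.5, seed=0):
    """Average post-shock rounds an agent's confidence stays below the
    cooperation threshold (i.e., rounds in which it would refuse to cooperate)."""
    rng = random.Random(seed)
    confidence = [1.0] * n_agents   # each agent's belief that rule-breakers get sanctioned
    lost_rounds = 0

    for t in range(n_rounds):
        if t == shock_round:
            # a moment of doubt: everyone's confidence drops below the threshold
            confidence = [threshold - 0.1 for _ in confidence]
        for i in range(n_agents):
            observed = punished = 0
            # one important rule plus `silly_rules` cheap ones = chances to watch enforcement
            for _ in range(1 + silly_rules):
                if rng.random() < break_prob:        # a violation happens in view of agent i
                    observed += 1
                    if rng.random() < enforce_prob:  # ...and gets sanctioned
                        punished += 1
            if observed:
                # nudge the belief toward the enforcement rate just observed
                confidence[i] = 0.9 * confidence[i] + 0.1 * (punished / observed)
            if t >= shock_round and confidence[i] < threshold:
                lost_rounds += 1
    return lost_rounds / n_agents

print("few silly rules: ", rounds_to_recover(silly_rules=1))   # slow recovery
print("many silly rules:", rounds_to_recover(silly_rules=20))  # fast recovery
```

In this toy setup, an agent in the group with few silly rules may wait many rounds between observations, so doubt lingers; in the group with many silly rules, evidence of enforcement arrives almost every round and confidence recovers within a handful of rounds.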

The risk with robots that impersonate humans is that silly rules will become less effective. The more bots there are in our email chains and social interactions, the fewer opportunities we have to see whether people in our communities are following the important rules. And by having robots follow our silly rules, we introduce noise into the signals that real humans are sending. “Politeness” from the “assistants” in an organization becomes a less reliable indicator of whether you can trust the humans in that organization if more of the “assistants” are robots.

Making robots that seem more human by following silly rules will indeed make them easier to work with. But we shouldn’t make the mistake of thinking that these practices are just window-dressing. Silly rules play important roles in maintaining human social orders.

Gillian Hadfield is a professor of law and strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and senior policy adviser at OpenAI.  She is the author of “Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy” (Oxford University Press, 2016).

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content. Other posts can be found here.