Computers Are People Too, And Must Be Punished For Their Indiscretions

For several years, researchers have been contrasting human-human and human-computer interactions in order to gain more insight into theory of mind. The assumption is that people don't treat computers like, well, people. It's not a totally unfounded assumption, either. In several studies in which people have competed with computers in games like the prisoner's dilemma or the ultimatum game, their behavior has differed from when they played the same games with other humans. In the ultimatum game, for example, one player is given a sum of money and told to offer some of it to a second player. The first player chooses the amount to offer, and the second player can accept or reject it. If the second player accepts, then he or she gets the amount offered, and the first player gets what's left. If the second player rejects the offer, then neither player gets any money. When the first player is human, the second player often rejects the offer if it's too low, even though that means getting nothing rather than something. It's as though the second player is punishing the first1. When the first player is a computer, however, the second player is much less likely to reject small offers. Furthermore, imaging studies have shown that activation levels in brain regions associated with theory of mind differ depending on whether we're playing such games with humans or computers, with human-human interactions yielding greater activation in those regions2.
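
To make that payoff structure concrete, here's a minimal sketch of the ultimatum game's rule (my own Python illustration, not anything from the studies cited): rejecting a low offer means giving up a small gain just to deny the proposer a larger one.

```python
# Hypothetical illustration of ultimatum-game payoffs (not from the cited studies).
def ultimatum_payoffs(pot, offer, accepted):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if accepted:
        return pot - offer, offer   # responder takes the offer, proposer keeps the rest
    return 0, 0                     # rejection wipes out both payoffs

print(ultimatum_payoffs(10, 1, accepted=True))   # (9, 1)
print(ultimatum_payoffs(10, 1, accepted=False))  # (0, 0) -- costly "punishment"
```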

In a paper posted on the Experimental Philosophy blog, Lévan Sardjevéladzé and Edouard Machery argue that the reason people treat computers differently in these studies may have to do with how the computers are described to participants. Specifically, participants tend to be told that the computers will behave randomly or based on a predetermined program. They write:

It is unclear why people would need to use their mindreading capacities in interacting with a computer, if they have been told that it behaves randomly or according to fixed probabilities. If participants do not represent their computer partner in intentional terms, they are unlikely to feel any moral pressure to cooperate with it. This would explain why they cooperate less. Moreover, they are unlikely to feel any negative emotion toward non-cooperative computer partners and to feel any satisfaction in anticipating punishing them. This would explain why they punish less when they know that they are interacting with a computer than when they believe that they are interacting with a human being. (p. 7)

To test this view, Sardjevéladzé and Machery had participants play a game similar to the ultimatum game (a version of the trust game). Here's their description of the game:

Our version of the trust game involved two players, player 1 and player 2. Participants always played player 1. Player 1, but not player 2, was endowed with some initial monetary endowment (10 écus). In the first stage of the game, participants were invited to give a fraction of their endowment to player 2. The monetary amount given by the participant ("the offer") was tripled and was given to the second player. For instance, if a participant gave 2 écus, player 2 received 6 écus and the participant ended up with 8 écus (her initial endowment of 10 écus minus 2 écus). Moreover, participants were invited to ask player 2 to give back a fraction of the amount of money player 2 received ("the request").

In the second stage of a typical trust game, player 2 decides whether and how much she wants to give back to player 1. The monetary amount that is given back is added to player 1's remaining monetary endowment (without being tripled). In the two conditions of our version of the trust game, player 2 gave back nothing.

In the last stage of the game, participants were given the opportunity to punish their partner at their own cost. The cost of punishment was set at half an écu for each écu taken from player 2. (p. 8)
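
To keep the arithmetic straight, here's a quick sketch of those payoffs (my own Python illustration of the rules as quoted above, not the authors' materials):

```python
# Hypothetical sketch of the trust-game payoffs described in the quoted passage.
ENDOWMENT = 10      # player 1's initial écus
MULTIPLIER = 3      # the offer is tripled on its way to player 2
PUNISH_COST = 0.5   # each écu taken from player 2 costs player 1 half an écu

def trust_game_payoffs(offer, returned, punished):
    """Return (player1, player2) final payoffs for one round."""
    p1 = ENDOWMENT - offer         # player 1 keeps whatever wasn't offered
    p2 = offer * MULTIPLIER        # player 2 receives the tripled offer
    p1 += returned                 # money sent back is not tripled
    p2 -= returned
    p1 -= punished * PUNISH_COST   # punishment is costly for player 1...
    p2 -= punished                 # ...and removes écus from player 2
    return p1, p2

# The paper's own example: offer 2 écus, player 2 receives 6, player 1 keeps 8.
print(trust_game_payoffs(offer=2, returned=0, punished=0))   # (8, 6)
# In the experiment player 2 returned nothing; taking 4 écus from player 2 costs player 1 two.
print(trust_game_payoffs(offer=2, returned=0, punished=4))   # (6.0, 2)
```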

Participants played the game on a computer screen and could not see the other player. They were told either that the other player was another human or that it was "a computer endowed with an artificial intelligence program." The prediction was that, if the instructions led people to treat the computer as an "intelligent agent," they would give as much to computers as they give to humans, request the same amount back from computers as from humans, and be as likely to punish a computer that gives nothing back as a human player who gives nothing back. They found that participants initially gave computers and humans almost identical amounts, on average, and that they requested only slightly more from computers (though this difference was not statistically significant). Perhaps most interestingly, participants punished humans and computers at the same rate, at a cost of a little over 30% of the amount they had left after making their initial offer to the human/computer.

So, people can treat computers like people when they believe that the computer is acting intelligently. Specifically, they're perfectly willing to punish a poor, helpless machine when it violates their trust (when they initially gave the computer some money, they knew that the computer would have the opportunity to give some back). I don't think this finding really spells trouble for theory of mind research that contrasts human-human and human-computer interactions, but it does show that how you describe the computer is important. If people perceive the computer to be an intelligent agent, they're going to treat it like they would treat any intelligent agent, but when they see it as a mindless machine, they're going to treat it as such. It would be interesting, though, to study people's theory of artificial minds further, by using the "artificial intelligence" description in other theory of mind experimental paradigms. For instance, do people believe that computers can have false beliefs, or are they more like God? I also wonder how children interact with computers. Are they more or less likely than adults to adopt an "intentional stance" toward a computer, regardless of how it's described?

1Rilling, J. K., Sanfey, A. G., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2004). The neural correlates of theory of mind within interpersonal interactions. NeuroImage, 22, 1694-1703.
2Ibid.; and McCabe, K., Houser, D., Ryan, L., Smith, V., & Trouard, T. (2001). A functional imaging study of cooperation in two-person reciprocal exchange. Proceedings of the National Academy of Sciences, 98, 11832-11835.

Chris, you write that "...people can treat computers like people, when they believe that the computer is acting intelligently..."

In general, a person tends to treat a computer as they would another person when the person feels a rise in one or more socially normative emotions. My sense is that people are moved to action in response to emotion. Emotions are products of specific constituents and it is the relative proportions of such constituents (along with at least one other element) that serve to index specific types of action. Such types extend across contexts, and vary as to properties outside the scope that defines the type.

More fundamentally, people have emotional responses to ALL media. Clifford Nass and Byron Reeves have researched this subject and you might find it instructive in this case. For example, people physiologically respond to a video of a hotdog rolling in their direction. They fear it!

A person needn't believe the hotdog has some intelligence in order to fear it, or feel anger towards it. In fact, because anger emerges naturally from fear, it's more likely that the human affective system plays the starring role in human behavior. Intellect and ethic serve to shape one's affective response thereby mediating action taking. And in a cooperative fashion, emotion operates in part as an environmentally grounded heuristic, without which one's actions would appear neither intelligent, nor ethical.

Feel free to check out my blog at Plexav.com. I think you'd appreciate my piece on the relations between semantics and character.

Summum Bonum.