Yesterday, Matt demonstrated a scene from Splinter Cell: Double Agent involving an interesting moral exercise.
The situation: The protagonist Sam Fisher, an NSA operative, is undercover in a terrorist group, the JBA. To effectively serve the NSA, he must maintain his cover within the group. If he does not make himself useful to the terrorists, his cover will be blown; if he does not make himself useful to the NSA, they will assume he’s gone rogue and treat him as a terrorist. This is represented by two “trust bars,” as we call them, that effectively measure how useful Sam is to the two groups, and–since trust grants greater freedom in gameplay–how useful they can be to him.
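As a loose sketch of how such a dual trust mechanic might work under the hood (the class name, thresholds, and deltas below are my own invention for illustration, not anything from the actual game):

```python
# Hypothetical sketch of a dual "trust bar" mechanic. The starting values,
# the blown-cover threshold, and the deltas are all invented placeholders.

class TrustState:
    def __init__(self, nsa=50, jba=50, blown_at=0):
        self.nsa = nsa            # 0-100 trust with the NSA
        self.jba = jba            # 0-100 trust with the JBA
        self.blown_at = blown_at  # cover is blown at or below this level

    def apply_choice(self, nsa_delta, jba_delta):
        """Most choices help one side at the other's expense."""
        self.nsa = max(0, min(100, self.nsa + nsa_delta))
        self.jba = max(0, min(100, self.jba + jba_delta))

    def cover_blown(self):
        return self.nsa <= self.blown_at or self.jba <= self.blown_at

# Shooting the pilot: JBA trust rises, NSA trust falls.
state = TrustState()
state.apply_choice(nsa_delta=-20, jba_delta=+15)
print(state.nsa, state.jba)  # 30 65
```

The interesting design property is that the two bars are coupled through the same choices, so the player is always trading one kind of usefulness against the other.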
In the scenario we viewed in class, Sam is given a handgun and ordered to execute an innocent civilian, a news pilot called to the scene by a third party. Although the game generally takes place from a third-person perspective, this scene plays out from a first-person view, helping to conceal the distinction between the player and the protagonist. Since there’s no obvious “don’t shoot” button, the player might be led to assume that he has no choice but to shoot the pilot; a look at the HUD, however, reveals that the gun (which appears to be a WWII-era Luger 9mm, for some reason) contains only one bullet, and putting it into the wall counts as sparing the man’s life. (In the demonstration we saw, the player took too long to decide, and an NPC shot the man anyway, taking the decision out of Sam’s hands.)
The first thing this scene does is to remind us that videogames are very good at encouraging people to do things, but rather less good at encouraging people not to do things. This varies by player and genre, of course, and the stealth genre is arguably all about training the player not to do things (don’t step into that hallway without checking for cameras, don’t attack that guard if you can avoid him, etc.). Still, player action is generally affirmative rather than abstinent in nature.
The second thing this scene does is to remind us that Sam Fisher and the player are not the same person. The decision can be seen as a purely tactical one. If there is any guilt involved–and rational people can disagree on whether or not there should be–it’s extremely unclear whether Sam or the player ought to be feeling guilty. If Sam does not seem to be shaken by the experience, is it because he honestly doesn’t care? Is it because he conceals his emotions, as he’s no doubt been trained to do? Or is it because Sam is conditionally sharing an identity with the player, and the player is the one who’s supposed to be “feeling” for Sam?
The third thing this scene does is to suggest the importance of clearly defined consequences in (fictional) decision-making. While the player is deciding whether or not to shoot, the trust bars demonstrate, in a fairly straightforward way, the consequences of either choice. While the player might not know exactly how those consequences will affect later gameplay, (s)he can guess with some accuracy how much they will.
It was suggested, in discussion, that making the consequences more or less obvious might change people’s reactions to the scene. So let’s go into that a bit. If we start from the assumption that moral actions are actions that produce moral consequences, we’ll soon find ourselves in a utilitarian framework. As consequences go, pleasure and pain are relatively easy to measure, especially when placed against metaphysical ideas of “the good,” the will of supernatural beings, and so on. So what are the consequences of Sam’s/your decision to shoot or not shoot the pilot? We already know that Sam’s standing with either the NSA or the JBA will be enhanced or degraded, but that is hardly the kind of consequence people usually weigh morally. Let’s think of some others.
1. If Sam does not kill the pilot, his cover will be blown immediately. In this case, killing the pilot could be construed (dubiously) as an act of self-defense, since the JBA will not look kindly on a double agent. This argument is weakened somewhat by the fact that Sam is partially responsible for being in that situation in the first place. (Very few games make any allowance for martyrdom, traditionally seen as one of the highest demonstrations of morality there is, but I digress.)
2. If Sam does not kill the pilot, the pilot will be let go. At first glance, this appears to be the ideal scenario: assuming it doesn’t make Sam’s mission completely impossible, letting the man go would seem to be the right call. Except that, by utilitarian standards, letting the man go is only good insofar as it produces positive consequences. So…
2b. The pilot is let go, and Sam accomplishes his mission anyway. A year later, laid off from his job, the pilot walks into his old office with a submachine gun and kills twenty people. Does knowing this in advance change the decision to be made? What if there’s only a 50/50 chance the surviving pilot will go on a killing spree? What if the player is told there’s a “significant” chance, but not told the actual odds?
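The 50/50 question in 2b is, at bottom, an expected-utility calculation. Here is a toy version; all the probabilities and utility numbers are invented placeholders, not values from the game:

```python
# Toy expected-utility comparison for the pilot scenario. Utilities are
# crude stand-ins: +1 for the pilot surviving, -20 if he later kills
# twenty people, -1 for shooting one innocent now.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def spare(q):
    """Spare the pilot; with probability q he goes on the killing spree."""
    return expected_utility([(1 - q, +1), (q, +1 - 20)])

shoot = -1  # one innocent dies now

# Sparing stops looking better somewhere between q = 0.05 and q = 0.5.
for q in (0.0, 0.05, 0.5):
    print(q, spare(q), spare(q) > shoot)
```

The point of the toy is the crossover: under this kind of calculus, the "right" answer flips once the probability of the bad outcome passes a threshold, which is exactly why withholding the actual odds from the player changes the moral texture of the choice.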
One of the major criticisms of consequentialist ethics, after all, is that consequences are difficult to predict accurately in practice. A deontological (rule-based) approach would presumably refer to a rule such as “don’t kill innocent people,” which is fundamentally hard to argue with until you’re presented with extremely unlikely scenarios like the one detailed above. When such moral rules seem to require martyrdom, pure ideas of moral duty are basically all that can constrain human action, at least in real life. Deontological ethics might be more intuitive to human beings if we could refer to status screens that displayed the sum morality of our actions in an objective fashion. All kidding aside, this seems like it could be an interesting thing for games to tackle.
But back to our consequentialist game. We have thus far only briefly mentioned the problem of guilt. While the consequences we’ve discussed so far are external, guilt is an internal consequence that presents some difficulty from a design perspective. Some work is being done in the area of modeling protagonist psyches; as Eternal Darkness notably suggested, the protagonist does not need to be rational just because the player is. Alternatively, one could just focus the players’ attention on imagining, in detail, what it would be like to kill an innocent. Terror management theory gets some interesting results by asking people to ponder their own deaths, but how would it affect players’ perception of this scene if they were asked, before they picked up a controller, to spend several minutes thinking about both dying and killing?
There are, of course, a few other ways of doing this. One could model a kinship system and work it into the game’s engine, i.e., it “hurts” the player more to do bad things to the terrorists or the NSA than to the unfortunate strangers caught in the middle. There’s also the virtue ethics approach: attempting to parse out what virtues are demonstrated by either shooting the innocent and focusing on the big picture, or refusing to be complicit in cold-blooded murder. We could probably trot out a hundred versions of the scene we watched yesterday, and I’d be curious to see whether tweaking it would produce notably different feelings in players.
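A kinship system of the kind described could be as simple as a lookup of relational weights; the group names and numbers below are invented placeholders, chosen only to match the post's claim that harming strangers "hurts" less than harming the groups Sam has relationships with:

```python
# Hypothetical kinship weighting: harming groups the player has built
# relationships with costs more than harming strangers caught in the
# middle. Weights are invented for illustration.

KINSHIP = {"nsa": 9, "jba": 7, "stranger": 3}

def guilt(victim_group, severity):
    """Internal cost of a harmful act, scaled by relational closeness."""
    return KINSHIP[victim_group] * severity

print(guilt("stranger", 10))  # killing the pilot: 30
print(guilt("jba", 10))       # killing a JBA member Sam knows: 70
```

Whether players would actually feel the difference the numbers encode is, of course, exactly the kind of thing the hundred tweaked versions of the scene would be testing.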