New Perspectives on Splinter Cell: Double Agent

Yesterday, Matt demonstrated a scene from Splinter Cell: Double Agent involving an interesting moral exercise.

The situation: The protagonist Sam Fisher, an NSA operative, is undercover in a terrorist group, the JBA. To effectively serve the NSA, he must maintain his cover within the group. If he does not make himself useful to the terrorists, his cover will be blown; if he does not make himself useful to the NSA, they will assume he’s gone rogue and treat him as a terrorist. This is represented by two “trust bars,” as we call them, that effectively measure how useful Sam is to the two groups, and–since trust grants greater freedom in gameplay–how useful they can be to him.
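
To make the mechanic concrete, here's a minimal sketch in Python of how such a system might work; the names, starting values, and thresholds are all my own invention, not the game's actual implementation.

    # Minimal sketch of a two-faction trust mechanic. All names,
    # values, and thresholds here are hypothetical illustrations,
    # not taken from Double Agent's actual code.

    class TrustMeters:
        """Tracks Sam's standing with the NSA and the JBA."""

        def __init__(self, nsa=0.5, jba=0.5):
            # Each meter runs from 0.0 (cover blown / presumed
            # rogue) to 1.0 (full confidence).
            self.nsa = nsa
            self.jba = jba

        def apply(self, nsa_delta, jba_delta):
            """Adjust both meters, clamping each to [0, 1]."""
            self.nsa = min(1.0, max(0.0, self.nsa + nsa_delta))
            self.jba = min(1.0, max(0.0, self.jba + jba_delta))

        def unlocked_freedoms(self):
            """Higher trust grants greater freedom in gameplay."""
            freedoms = []
            if self.nsa > 0.7:
                freedoms.append("extra NSA support")
            if self.jba > 0.7:
                freedoms.append("unescorted access to JBA areas")
            return freedoms

Shooting the pilot might then be encoded as a single call, e.g. meters.apply(nsa_delta=-0.2, jba_delta=+0.2), with sparing him inverting the trade-off.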

In the scenario we viewed in class, Sam is given a handgun and ordered to execute an innocent civilian, a news pilot called to the scene by a third party. Although the game generally takes place from a third-person perspective, this scene plays out from a first-person view, blurring the distinction between the player and the protagonist. Since there’s no obvious “don’t shoot” button, the player might assume there is no choice but to shoot the pilot; a look at the HUD, however, reveals that the gun (which appears to be a WWII-era 9mm Luger, for some reason) contains only one bullet, and firing it into the wall counts as sparing the man’s life. (In the demonstration we saw, the player took too long to decide, and an NPC shot the man anyway, taking the decision out of Sam’s hands.)

The first thing this scene does is to remind us that videogames are very good at encouraging people to do things, but rather less good at encouraging people not to do things. This varies by player and genre, of course, and the stealth genre is arguably all about training the player not to do things (don’t step into that hallway without checking for cameras, don’t attack that guard if you can avoid him, etc.). Still, player action is generally affirmative rather than abstinent in nature.

The second thing this scene does is to remind us that Sam Fisher and the player are not the same person. The decision can be seen as a purely tactical one. If there is any guilt involved–and rational people can disagree on whether or not there should be–it’s extremely unclear whether Sam or the player ought to be feeling guilty. If Sam does not seem to be shaken by the experience, is it because he honestly doesn’t care? Is it because he conceals his emotions, as he’s no doubt been trained to do? Or is it because Sam is conditionally sharing an identity with the player, and the player is the one who’s supposed to be “feeling” for Sam?

The third thing this scene does is to suggest the importance of clearly defined consequences in (fictional) decision-making. While the player is deciding whether or not to shoot, the trust bars demonstrate, in a fairly straightforward way, the consequences of either choice. While the player might not know exactly how those consequences will affect later gameplay, (s)he can guess with some accuracy how strongly they will.
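
If a designer wanted to make that preview explicit, the HUD could compute and display where each option would leave both meters before the player commits. A toy version, with current levels and deltas invented purely for illustration:

    # Hypothetical HUD preview: before committing, show where each
    # choice would leave both trust meters. Current levels and
    # deltas are invented for illustration.

    nsa, jba = 0.6, 0.5  # current trust levels

    choices = {
        "shoot the pilot": (-0.2, +0.2),  # (NSA delta, JBA delta)
        "spare the pilot": (+0.2, -0.2),
    }

    def clamp(x):
        return min(1.0, max(0.0, x))

    for label, (d_nsa, d_jba) in choices.items():
        print(f"{label}: NSA {nsa:.2f} -> {clamp(nsa + d_nsa):.2f}, "
              f"JBA {jba:.2f} -> {clamp(jba + d_jba):.2f}")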

It was suggested, in discussion, that making the consequences more or less obvious might change people’s reactions to the scene. So let’s go into that a bit. If we start from the assumption that moral actions are actions that produce moral consequences, we’ll likely soon find ourselves in a utilitarian framework. As consequences go, pleasure and pain are relatively easy to measure, especially compared with metaphysical ideas of “the good,” the will of supernatural beings, and so on. So what are the consequences of Sam’s/your decision to shoot or not shoot the pilot? We already know that Sam’s standing with the NSA and the JBA will be enhanced or degraded, but that’s hardly the kind of thing people usually weigh in moral terms. Let’s consider some other consequences.

1. If Sam does not kill the pilot, his cover will be blown immediately. In this case, killing the pilot could be construed (dubiously) as an act of self-defense, since the JBA will not look kindly on a double agent. This argument is weakened somewhat by the fact that Sam is partially responsible for being in that situation in the first place. (Very few games make any allowance for martyrdom, traditionally seen as one of the highest demonstrations of morality there is, but I digress.)

2. If Sam does not kill the pilot, the pilot will be let go. At first glance, this appears to be the ideal scenario: assuming it doesn’t make Sam’s mission completely impossible, letting the man go seems like the obviously right choice. Except, by utilitarian standards, letting the man go is only good insofar as it produces positive consequences. So…

2b. The pilot is let go, and Sam accomplishes his mission anyway. A year later, laid off from his job, the pilot walks into his old office with a submachinegun and kills twenty people. Does knowing this in advance change the decision to be made? What if there’s only a 50/50 chance the surviving pilot will go on a killing spree? What if the player is told there’s a “significant” chance, but not told the actual odds?
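
To see how much work those odds are doing in scenario 2b, a crude body-count utilitarianism can be made literal. On these (entirely hypothetical) numbers, sparing the pilot costs p × 20 expected lives against exactly one for shooting him, so the crossover sits at p = 1/20 = 0.05:

    # Toy expected-utility comparison for scenario 2b. Purely a
    # thought-experiment calculator; the payoffs come from the
    # hypothetical above, not from any real model.

    def expected_deaths(spare, p_spree):
        if spare:
            return p_spree * 20  # pilot lives; spree kills 20 with prob. p
        return 1.0               # pilot dies now; no spree

    for p in (0.0, 0.05, 0.5, 1.0):
        print(f"p={p:.2f}: spare -> {expected_deaths(True, p):.1f} "
              f"expected deaths, shoot -> {expected_deaths(False, p):.1f}")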

One of the major criticisms of consequentialist ethics, after all, is that consequences are difficult to predict accurately in practice. A deontological (rule-based) approach would presumably refer to a rule such as “don’t kill innocent people,” something that’s fundamentally hard to argue with until you’re presented with extremely unlikely scenarios like the one detailed above. When such moral rules seem to require martyrdom, pure ideas of moral duty are basically all that can constrain human action, at least in real life. Deontological ethics might be more intuitive to human beings if we could refer to status screens that displayed the sum morality of our actions in an objective fashion. All kidding aside, this seems like it could be an interesting thing for games to tackle.
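
Half-joking or not, such a status screen is at least sketchable as a design fiction. A minimal version, with rules and weights invented as placeholders rather than offered as a serious moral theory:

    # Sketch of a "morality status screen": an objective running
    # ledger of rule-governed acts. Rules and weights are invented
    # placeholders, not a serious moral theory.

    RULES = {
        "killed_innocent": -100,
        "lied_to_ally": -5,
        "kept_promise": +10,
        "spared_innocent": +20,
    }

    class MoralityLedger:
        def __init__(self):
            self.events = []

        def record(self, event):
            self.events.append(event)

        def sum_morality(self):
            # The fantasy: the game totals your duty-keeping for you.
            return sum(RULES.get(e, 0) for e in self.events)

    ledger = MoralityLedger()
    ledger.record("spared_innocent")
    ledger.record("lied_to_ally")
    print(ledger.sum_morality())  # 15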

But back to our consequentialist game. We have thus far only briefly mentioned the problem of guilt. While the consequences we’ve discussed so far are external, guilt is an internal consequence that presents some difficulty from a design perspective. Some work is being done in the area of modeling protagonist psyches; as Eternal Darkness notably suggested, the protagonist does not need to be rational just because the player is. Alternatively, one could just focus the players’ attention on imagining, in detail, what it would be like to kill an innocent. Terror management theory gets some interesting results by asking people to ponder their own deaths, but how would it affect players’ perception of this scene if they were asked, before they picked up a controller, to spend several minutes thinking about both dying and killing?

There are, of course, a few other ways of doing this. One could model a kinship system and work it into the game’s engine (sketched below), i.e., it “hurts” the player more to do bad things to the terrorists or the NSA than to the unfortunate strangers caught in the middle. There’s also the virtue ethics approach: attempting to parse out what virtues are demonstrated by either shooting the innocent and focusing on the big picture, or refusing to be complicit in cold-blooded murder. We could probably trot out a hundred versions of the scene we watched yesterday, and I’d be curious to see whether tweaking it would produce notably different feelings in players.
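
For the kinship idea, one might imagine a relationship-weighted guilt cost along these lines; the names and weights are invented for illustration, not a claim about any existing engine:

    # Sketch of a kinship-weighted "guilt" cost: harming characters
    # the protagonist knows well costs more than harming strangers.
    # All names and weights are invented for illustration.

    KINSHIP = {        # 0.0 = total stranger, 1.0 = closest ties
        "jba_member": 0.7,
        "nsa_handler": 0.9,
        "news_pilot": 0.1,
    }

    def guilt_cost(victim, base_severity=1.0):
        # Guilt scales with how well Sam knows the victim; even a
        # stranger carries a nonzero baseline cost.
        return base_severity * (0.5 + KINSHIP.get(victim, 0.0))

    for victim in KINSHIP:
        print(victim, guilt_cost(victim))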

Peter Rauch

2 thoughts on “New Perspectives on Splinter Cell: Double Agent”

  1. Thanks, Peter, for this writeup of our conversation yesterday. One of the interesting revelations to me of this demo was that the moral issue was played out against a different mechanic, that of “trust,” and that, as you point out, the “internal” feeling of guilt (or whatever you might feel if you were to execute an innocent civilian) doesn’t show up directly on the trust meter.

    We also brought up some interesting possible experiments to run to test the pressure level of moral choices, and specifically to get at how the player is interpreting the scene (as a “real” decision or as a strategic one to be gamed, as a person playing a role or as the role, etc.). One such experiment might be to mess with the trust bars a bit to see how that affects how quickly the player makes the decision.

  2. Towards the end of the session I also brought up the fact that, for whatever reason, many of the games we discuss in the context of morality seem to involve some element of war or violence. Maybe that’s because most commercial games sit in that space, or because an immoral/amoral context provides the sharpest relief for questions of morality. In line with those thoughts, I thought this recent Gamasutra post would be worth discussing in the near future:

    War Games & Morality – What Are We Fighting For?
