New Perspectives on Splinter Cell: Double Agent

Yesterday, Matt demonstrated a scene from Splinter Cell: Double Agent involving an interesting moral exercise.

The situation: The protagonist Sam Fisher, an NSA operative, is undercover in a terrorist group, the JBA. To effectively serve the NSA, he must maintain his cover within the group. If he does not make himself useful to the terrorists, his cover will be blown; if he does not make himself useful to the NSA, they will assume he’s gone rogue and treat him as a terrorist. This is represented by two “trust bars,” as we call them, that effectively measure how useful Sam is to the two groups, and–since trust grants greater freedom in gameplay–how useful they can be to him.
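
To make the mechanic concrete, here is a minimal sketch of how such a dual-trust system might be represented in code. Everything in it (class names, starting values, thresholds, deltas) is hypothetical and for illustration only; it shows the idea of two opposed meters, not Ubisoft's actual implementation.

```python
# Hypothetical sketch of a dual "trust bar" mechanic (not Ubisoft's code).
# Each faction tracks how useful Sam has been to it; trust near the floor
# means cover blown (JBA) or being treated as a rogue agent (NSA).

class TrustBar:
    def __init__(self, faction, value=50, lo=0, hi=100):
        self.faction = faction
        self.value = value
        self.lo, self.hi = lo, hi

    def adjust(self, delta):
        """Clamp trust to its bounds; returns False once trust hits the floor."""
        self.value = max(self.lo, min(self.hi, self.value + delta))
        return self.value > self.lo

nsa = TrustBar("NSA")
jba = TrustBar("JBA")

def resolve_pilot_scene(shoot: bool):
    """The execution scene as a pair of opposed adjustments:
    pleasing one faction costs standing with the other."""
    if shoot:
        jba.adjust(+20)   # illustrative magnitudes only
        nsa.adjust(-20)
    else:
        jba.adjust(-20)
        nsa.adjust(+20)

resolve_pilot_scene(shoot=False)
print(nsa.value, jba.value)  # 70 30
```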

In the scenario we viewed in class, Sam is given a handgun and ordered to execute an innocent civilian, a news pilot called to the scene by a third party. Although the game generally takes place from a third-person perspective, this scene plays out from a first-person view, blurring the distinction between the player and the protagonist. Since there’s no obvious “don’t shoot” button, the player might be led to assume that he has no choice but to shoot the pilot; a look at the HUD, however, reveals that the gun (which appears to be a WWII-era Luger 9mm, for some reason) contains only one bullet, and putting that bullet into the wall counts as sparing the man’s life. (In the demonstration we saw, the player took too long to decide, and an NPC shot the man anyway, taking the decision out of Sam’s hands.)

The first thing this scene does is to remind us that videogames are very good at encouraging people to do things, but a bit less so at encouraging people to not do things. This varies by player and genre, of course, and the stealth genre is arguably all about training the player to not do things (don’t step into that hallway without checking for cameras, don’t attack that guard if you can avoid him, etc.). Still, player action is generally affirmative rather than abstinent in nature.

The second thing this scene does is to remind us that Sam Fisher and the player are not the same person. The decision can be seen as a purely tactical one. If there is any guilt involved–and rational people can disagree on whether or not there should be–it’s extremely unclear whether Sam or the player ought to be feeling guilty. If Sam does not seem to be shaken by the experience, is it because he honestly doesn’t care? Is it because he conceals his emotions, as he’s no doubt been trained to do? Or is it because Sam is conditionally sharing an identity with the player, and the player is the one who’s supposed to be “feeling” for Sam?

The third thing this scene does is to suggest the importance of clearly defined consequences in (fictional) decision-making. While the player is deciding whether or not to shoot, the trust bars demonstrate, in a fairly straightforward way, the consequences of either choice. While the player might not know exactly how those consequences will affect later gameplay, (s)he can guess with some accuracy how much they will.
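
One way to read this design is that the game exposes its consequence function to the player before the choice is committed. Echoing the hypothetical sketch above, a HUD preview might look like this (all names and magnitudes invented):

```python
# Hypothetical consequence preview: the HUD renders the projected
# trust deltas for each available action before the player commits.

CONSEQUENCES = {
    "shoot": {"JBA": +20, "NSA": -20},
    "spare": {"JBA": -20, "NSA": +20},
}

def preview(action: str) -> str:
    """Format one action's projected trust changes for display."""
    deltas = CONSEQUENCES[action]
    return ", ".join(f"{faction} {delta:+d}" for faction, delta in deltas.items())

for action in CONSEQUENCES:
    print(f"{action}: {preview(action)}")
# shoot: JBA +20, NSA -20
# spare: JBA -20, NSA +20
```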

It was suggested, in discussion, that making the consequences more or less obvious might change people’s reactions to the scene. So let’s go into that a bit. If we start from the assumption that moral actions are actions that produce moral consequences, we’ll likely soon find ourselves in a utilitarian framework. As consequences go, pleasure and pain are relatively easy to measure, especially when placed against metaphysical ideas of “the good,” the will of supernatural beings, etc. So what are the consequences of Sam’s/your decision to shoot/not shoot the pilot? We already know that Sam’s status with either the NSA or the JBA will be enhanced or degraded, but that’s hardly the kind of consequence people ordinarily think of in moral terms. Let’s think of some other consequences.

1. If Sam does not kill the pilot, his cover will be blown immediately. In this case, killing the pilot could be construed (dubiously) as an act of self-defense, since the JBA will not look kindly on a double agent. This argument is weakened somewhat by the fact that Sam is partially responsible for being in that situation in the first place. (Very few games make any allowance for martyrdom, traditionally seen as one of the highest demonstrations of morality there is, but I digress.)

2. If Sam does not kill the pilot, the pilot will be let go. At first glance, this would appear to be the ideal scenario, assuming it doesn’t make Sam’s mission completely impossible. Except, by utilitarian standards, letting the man go is only good insofar as it produces positive consequences. So…

2b. The pilot is let go, and Sam accomplishes his mission anyway. A year later, laid off from his job, the pilot walks into his old office with a submachine gun and kills twenty people. Does knowing this in advance change the decision to be made? What if there’s only a 50/50 chance the surviving pilot will go on a killing spree? What if the player is told there’s a “significant” chance, but not told the actual odds?
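
If a designer wanted to experiment with exactly this question, the deferred consequence could be drawn from a probability distribution, with the designer controlling how much of that probability the player is shown. A sketch, with invented numbers and labels:

```python
import random

# Hypothetical: a deferred consequence occurring with probability p,
# where the designer controls how much of p the player actually sees.

def disclosure_text(p: float, mode: str) -> str:
    """Return the warning shown to the player before they choose."""
    if mode == "exact":
        return f"There is a {p:.0%} chance the survivor later kills again."
    if mode == "vague":
        return "There is a significant chance the survivor later kills again."
    return ""  # no disclosure at all

def resolve(p: float) -> bool:
    """After the fact: does the spared pilot go on a killing spree?"""
    return random.random() < p

print(disclosure_text(0.5, "exact"))  # the 50/50 case, stated plainly
print(disclosure_text(0.5, "vague"))  # the same odds, left murky
```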

One of the major criticisms of consequentialist ethics, after all, is that consequences are difficult to predict accurately in practice. A deontological (rule-based) approach would presumably refer to a rule such as “don’t kill innocent people,” something that’s fundamentally hard to argue with until you’re presented with extremely unlikely scenarios like the one detailed above. When such moral rules seem to require martyrdom, pure ideas of moral duty are basically all that can constrain human action, at least in real life. Deontological ethics might be more intuitive to human beings if we could refer to status screens that displayed the sum morality of our actions in an objective fashion. All kidding aside, this seems like it could be an interesting thing for games to tackle.
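
Half-joking or not, the status-screen idea is easy to prototype: score actions against explicit rules rather than predicted outcomes. A minimal sketch, with invented rules and penalty weights:

```python
# Hypothetical deontological "status screen": actions are judged against
# rules, not outcomes. Rules and penalties are invented for illustration.

RULES = {
    "kill_innocent": ("Do not kill innocent people", -100),
    "lie":           ("Do not lie",                   -10),
    "break_promise": ("Keep your promises",           -25),
}

class MoralLedger:
    def __init__(self):
        self.violations = []

    def record(self, action: str):
        """Log an action if it violates a known rule."""
        if action in RULES:
            self.violations.append(action)

    def status_screen(self) -> str:
        """Display every violated rule plus an objective 'sum morality'."""
        score = sum(RULES[v][1] for v in self.violations)
        lines = [f"VIOLATED: {RULES[v][0]}" for v in self.violations]
        return "\n".join(lines + [f"SUM MORALITY: {score}"])

ledger = MoralLedger()
ledger.record("kill_innocent")
print(ledger.status_screen())
# VIOLATED: Do not kill innocent people
# SUM MORALITY: -100
```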

But back to our consequentialist game. We have thus far only briefly mentioned the problem of guilt. While the consequences we’ve discussed so far are external, guilt is an internal consequence that presents some difficulty from a design perspective. Some work is being done in the area of modeling protagonist psyches; as Eternal Darkness notably suggested, the protagonist does not need to be rational just because the player is. Alternatively, one could just focus the players’ attention on imagining, in detail, what it would be like to kill an innocent. Terror management theory gets some interesting results by asking people to ponder their own deaths, but how would it affect players’ perception of this scene if they were asked, before they picked up a controller, to spend several minutes thinking about both dying and killing?

There are, of course, a few other ways of doing this. One could model a kinship system and work it into the game’s engine, i.e. it “hurts” the player more to do bad things to the terrorists or the NSA than to the unfortunate strangers caught in the middle. There’s also the virtue ethics approach, attempting to parse out what virtues are demonstrated by either shooting the innocent and focusing on the big picture or refusing to be complicit in cold-blooded murder. We could probably trot out a hundred versions of the scene we watched yesterday, and I’d be curious to see whether tweaking it would produce notably different feelings in players.
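
The kinship idea in particular could be prototyped as a simple weighting on harm: the same act costs the player more “guilt” the closer the victim stands to Sam’s in-game relationships. A sketch, with invented weights:

```python
# Hypothetical kinship weighting: guilt from a harmful act scales with
# the victim's relational distance from the protagonist. All weights invented.

KINSHIP_WEIGHT = {
    "nsa_colleague": 3.0,   # closest ties hurt most
    "jba_member":    2.0,
    "stranger":      1.0,   # the unfortunate strangers caught in the middle
}

def guilt(base_harm: float, victim_relation: str) -> float:
    """Scale the raw harm of an act by the victim's kinship weight."""
    return base_harm * KINSHIP_WEIGHT.get(victim_relation, 1.0)

print(guilt(10.0, "nsa_colleague"))  # 30.0
print(guilt(10.0, "stranger"))       # 10.0
```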

Peter Rauch

Towards a unified theory of meaningful games (rough draft)

From the many conversations we’ve been having over the past half-year, a set of consistent ideas keeps re-emerging. I’m hoping to pull those ideas together into a coherent statement about what we mean when we talk about games with moral depth. I’ll be pulling from Bioshock for examples.

  1. The game offers meaningful choice along a moral axis. All real games offer choice of some kind, but we seek choices that successfully integrate both narrative and gameplay imperatives and evoke human values in a realistic way. By way of counterexample, Bioshock offers the player a relatively shallow choice regarding what to do with Little Sisters by pitting an obvious good vs. an obvious evil (“Mother Theresa vs. baby-eating”). Contrast choices that are between two goods (Hegel’s interpretation of Antigone), or two evils (politics, anyone?).
  2. The game’s choices are consequential to both the narrative and the gameplay. To keep both story and code manageable, most games employ some variant of “magician’s choice,” but if real choices prove impractical, the games we seek at least maintain the illusion of choice well. In Bioshock, choosing to liberate Little Sisters generates fewer Adam points than harvesting them, but as the game progresses, the difference between these choices evens out as the Little Sisters compensate the player with loot. While the narrative consequences of these choices diverge, the gameplay outcomes do not (in any significant way). The personal sacrifice entailed in liberating Little Sisters might have been underlined more sharply if the contrast between choices were also sharper (see the payoff sketch after this list).
  3. The game offers an opportunity to reflect on the player’s choices and their consequences. Perhaps this aims at Aristotelian catharsis, or at Joycean epiphany. But at some point(s) in the game, we hope the player achieves a moment of awareness, connecting the game to some “truth” about the world or about herself. In Bioshock, this moment comes as a moment of near-perfect identification between the main character’s plight and the player’s own. Of course, in Bioshock the player awakens not to the consequences of her choices, but rather to her complete lack of choice within the game.
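
To make item 2 concrete, here is a payoff sketch of the harvest-versus-rescue economy. The figures (160 Adam per harvest, 80 per rescue, a 200-Adam gift after every third rescue) are the commonly cited approximations, not data ripped from the game:

```python
# Cumulative Adam from harvesting vs. rescuing Little Sisters, using
# commonly cited approximate figures. Treat these as illustrative.

def harvest_total(n: int) -> int:
    return 160 * n

def rescue_total(n: int) -> int:
    # 80 per rescue, plus a 200-Adam gift after every third rescue
    return 80 * n + 200 * (n // 3)

for n in (3, 9, 21):
    print(n, harvest_total(n), rescue_total(n))
# 3   480   440
# 9  1440  1320
# 21 3360  3080
```

On these figures harvesting stays modestly ahead in raw Adam, and the gifts (which also include unique tonics) keep the gap small enough that the “sacrifice” costs little, which is exactly why the choice reads as shallow.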

I’ve been using Bioshock as an example not just because I’m a shameless fanboy (though I am), but because that game so thoroughly deconstructed the world of games as they are — devoid of meaningful choice — that we’re left yearning all the more for new games that could be. Perhaps these three basic ideas can help point the way.

– Gene Koo

Ideology and persuasion

In any sufficiently convoluted discussion of videogames and narrative, fiction, or speech, the idea of videogames as a communicative medium inevitably comes up. The communication of facts is simple enough in any medium, although making them “stick,” i.e. making them sufficiently comprehensible and memorable, is rather more difficult, especially in a medium widely perceived to have “failed” should boredom set in. But facts, stubborn things though they are, are generally not what people are referring to when they speak of “free speech” or a “marketplace of ideas.” Ideas that are not easily empirically verifiable must not only inform but persuade a given audience. What videogames do so effectively, and where I believe much of the medium’s potential lies, is create worlds that in some aspect resemble our own, worlds whose rules encourage and discourage behaviors, determine the outcomes of actions, and so on. To the extent that such a gameworld resembles our own, what I find most intriguing is the possibility of worlds inherently biased toward certain viewpoints.
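
As a toy example of a world “inherently biased toward certain viewpoints”: the bias need not be stated in any dialogue or text; it can live entirely in the reward rules. All actions and numbers below are invented:

```python
# Toy example: ideological bias encoded purely in a world's reward rules.
# Nothing is argued in dialogue; the payoff table itself is the viewpoint.

REWARDS = {
    "negotiate": +10,   # this world "believes" diplomacy works...
    "attack":     -5,   # ...and that violence backfires
    "trade":      +7,
}

def world_response(action: str) -> int:
    """The world answers every action with the bias built into its rules."""
    return REWARDS.get(action, 0)

score = sum(world_response(a) for a in ["attack", "negotiate", "trade"])
print(score)  # 12: a player who experiments will learn the world's values
```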

There are a few names out there for the general type of viewpoint to which I’m referring–James Paul Gee’s “cultural models” comes to mind, as does the general idea of “worldview”–one of which is the rather troubled term “ideology.” (Another is propaganda, which is a different post altogether.) In Ideology: An Introduction, literary critic Terry Eagleton lays out sixteen commonly accepted definitions for the term:

(a) the process of production of meaning, signs and values in social life;
(b) the body of ideas characteristic of a particular social group or class;
(c) ideas which help to legitimate a dominant political power;
(d) false ideas which help to legitimate a dominant political power;
(e) systematically distorted communication;
(f) that which offers a position for a subject;
(g) forms of thought motivated by social interests;
(h) identity thinking;
(i) socially necessary illusion;
(j) the conjuncture of discourse and power;
(k) the medium in which conscious social actors make sense of their world;
(l) action-oriented sets of beliefs;
(m) the confusion of linguistic and phenomenal reality;
(n) semiotic closure;
(o) the indispensable medium in which individuals live out their relations to a social structure;
(p) the process whereby social life is converted to a natural reality.

These definitions are frequently mutually contradictory, but many have obvious relevance to socially conscious videogame design. “Action-oriented sets of beliefs” certainly relates to this project, and one might argue that the conditional “confusion of linguistic and phenomenal reality” is more or less what happens when one plays a sufficiently immersive videogame. The current economic realities of videogame production have led some to suggest that definitions b, c, and d have great relevance to the videogame industry as it currently stands, but there’s no reason to assume that this is an inherent feature of the physical technology, as opposed to the economic basis of its production. (Then again, a Marxist might be hesitant to separate those two, and McLuhan might agree.) In parsing out just what ideology is or is not, Eagleton brings up a point of unequivocal importance to anyone interested in the persuasive potential of videogames:

[I]n order to be truly effective, ideologies must make at least some minimal sense of people’s experience, must conform to some degree with what they already know of social reality from their practical interaction with it. […] They must be “real” enough to provide the basis on which individuals can fashion a coherent identity, must furnish some solid motivations for effective action, and must make at least some feeble attempt to explain away their own inconsistencies. In short, successful ideologies must be more than imposed illusions, and for all their inconsistencies must communicate to their subjects a version of social reality which is real and recognizable enough not to be simply rejected out of hand.

This almost reads as a primer for how to involve players emotionally in the in-game decision-making process: make the players recognize the world on an intuitive level, regardless of the obvious differences; motivate them to do the things you want them to do; and have some explanations ready for the more obvious holes in the simulation. This last one can be especially tricky; as Matt noted at the last meeting, the more “free” a game is, the more obvious and glaring the walls will appear. While the connections may not be intuitive, the embattled notion of ideology, and literary/political theory in general, may provide some useful new ways to interpret videogame texts, helping to delineate what they are, what they do, and when/why they fail.

Peter Rauch

Meeting notes: 2008 February 27

Sam Gilbert presented his take on Assassin’s Creed, to be posted separately. From there, the discussion blossomed (as always) in some very interesting and exciting directions. Here are some of the main points raised, although unfortunately without attribution (I can only type so fast!).

Killing citizens in Assassin’s Creed carries some penalty, but the gameplay almost encourages you to kill innocent people who are really, really annoying. Perhaps an intentional design decision to push reflection on why we kill?

Putting the player in a murky moral area, making it up to the player to decide what it “means,” lets the developers absolve themselves of moral responsibility. Or maybe it’s a good strategy for not inculcating values in a heavy-handed way. But “phony” murkiness is not really a choice (see Bioshock).

What incentives does the game offer — narrative, points, “style points” (Xbox achievements)?

Compare full-blown stealth games, e.g. Hitman, Thief. Hitman actually penalizes you for killing anyone other than your target. And it presents many game incentives to kill (annoying people). At the highest difficulty level, Thief ends the game (you lose) if you kill ANYONE. (Thief III moves away from that absolutism: the rule applies only to non-combatants.)

Had AC been rebalanced so that death was much more likely, it would have played much more as a stealth game. But the developers probably realized that stealth in this game was really boring.

Hitman: the fun lies not in having fun, but in being “professional.”

AC doesn’t allow you to reload — you can’t recreate your game and remake choices. Or maybe that makes the choices much more weighty (similar to Bioshock making it difficult to revert to an earlier point after learning about Little Sister rewards).

In good stealth games, violence is always a choice, and having that choice makes the stealth element much more valuable.

Games are running out of plot elements to explain why the player has no choice. Video games seem better at the illusion of choice than at actually providing choice. This gives games a sense of tragedy: the feeling that you should have choices but don’t.

To what extent is the world different because of your actions? That is, does killing lead to outcomes?

Often games frame killing using one of two justifications: self-defense (kill or be killed) or utilitarianism (killing a monster for the greater good). But in the latter case you rarely see the outcomes. Why not have outcomes be the opposite of your overall intent?

How about making moral choices in the spotlight of other people watching? (The discomfort of making choices in Mass Effect in front of a roommate: will he read my choices onto me as a person?)

Guilt as a massive motivation in games — is it underused? Find examples?

What about a mission to kill terrorists, but avoid civilians? (See September 12 as a rhetorical statement).

TO-DOs:

  • Get in touch with developers of Tactical Iraqi.
  • See HIMR’s upcoming articles on military games.
  • See Serious Games’ military spinoff.
  • Ask Judith Donath about military simulators.
  • Compare games for PTSD therapy.

– Gene Koo

The science of morality: a layperson’s primer

The New York Times Magazine has published a basic overview of the science/psychology of morality: The Moral Instinct (13 Jan 2008). It’s interesting to note that this article topped the “Most emailed” charts for a while.

The article is by Steven Pinker, Professor of Psychology at Harvard. In it, Pinker draws heavily on Jonathan Haidt’s taxonomy of five major spheres of moral intuition: harm, fairness, community (or group loyalty), authority and purity. Pinker adds that two of these — fairness and community — form the building blocks of altruism. Perhaps they encircle what many of us mean colloquially when we talk about “morality,” especially as something that needs to be “inculcated.” Or perhaps each sphere can reach a level of refinement that requires social and not just genetic transmission: what we mean by harm, for example, changes quite a bit based on cultural and legal norms. (Pinker notes that Western liberals put a premium on the spheres of harm and fairness, while others put the emphasis elsewhere.)
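
A game trying to do better than a single good/evil meter could track Haidt’s five spheres separately, so that one action can register differently in each. A hedged sketch, with invented values:

```python
# Sketch: scoring a game action across Haidt's five spheres of moral
# intuition instead of one good/evil meter. All values are invented.

SPHERES = ("harm", "fairness", "community", "authority", "purity")

def score_action(deltas: dict) -> dict:
    """Return a full five-sphere profile, defaulting untouched spheres to 0."""
    return {s: deltas.get(s, 0) for s in SPHERES}

# Shooting the pilot might read very differently sphere by sphere:
print(score_action({"harm": -10, "community": +5, "authority": +5}))
# {'harm': -10, 'fairness': 0, 'community': 5, 'authority': 5, 'purity': 0}
```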

The article also touches on two potentially universal themes that we’ve hit upon as well in our discussions about morality in games (and the lack of sophistication therein): non-zero-sum games, and interchangeability of perspectives (the opposite of our natural tendency towards self-sanctification).

Most importantly to me, Pinker hits on my core concern about the intersection of morality and our postmodern, globalized condition: that our genetically endowed sense of morality may not be adequate to the task of, for example, global warming. He starts the article by posing Mother Teresa, Bill Gates, and Norman Borlaug as moral figures and noting our instinctive bias that Mother Teresa is, among these, the most saintly. Yet in practice the other two have (arguably) done more to change the actual world we live in for the better. (He’s addressing Bill Gates’ philanthropy here, not his business.)

So essentially the science of morality opens up not the problem of free will, but the question of how we can learn, identify, and ultimately overcome the limits that a biologically-based morality has set. And in that endeavor, I do believe games have a role to play in at least two different ways: (1) they may be able to teach us to deploy systems-thinking, to borrow Eric Zimmerman’s phraseology, in the service of moral advancement (I alluded to this idea elsewhere); or (2) they may teach us new ways of creating user interfaces to systems that align our bio-morality with a systems-morality.

– Gene Koo