Budget games largely lack human engagement

[Image: Budget Game, New York]
Nancy Scola of TechPresident recently excoriated a budget calculator put out by NY Governor Paterson, primarily on the ground that it’s “more a dull-edged hatchet than a scalpel” and ignores revenue options. Strangely, though, she overlooks the glaring fact that the tool is painfully meaningless to any normal taxpayer. Never mind how ugly it is (though that matters); its numbers are not only grossly general but also inhumanly abstract.

Scola also mentions the Obama-Biden tax calculator, which presents an interesting contrast. It, too, is a calculator — raw numbers stacked up — but it has the distinct engagement advantage of being about your money. Its designers don’t need to provide context or background; presumably, you know exactly what another $1,000 in your pocket would mean.

Such lame attempts at public education (or, as Scola argues, “pretend participation”) ignore the basic problem that for most taxpayers, issues of government taxes and spending are emotional, not rational, and not because we are innumerate but because such systems are too big and too remote for most of us to comprehend. This is a point that Prof. Henry Jenkins makes in his essay, “Complete Freedom of Movement,” which contrasts the play spaces of boys and girls. Whereas a game like SimCity allows players to mold physical territory, in girls’ games and stories like Harriet the Spy “the mapping of the space was only the first step in preparing the ground for a rich saga of life and death, joy and sorrow – the very elements that are totally lacking in most simulation games.”

Stated differently: cutting $10M from the state’s Department of Mental Health means something real for real human beings. The essence of a true public policy debate is to capture human reality in the discussion, not abstract it into numbers. (To those who argue that this would merely lead to an exploding debt: it’s up to deficit hawks to describe the issue as compelling drama, not formal logic.)

[Image: Budget Game - MA]
A different contrast can be made with the Massachusetts Budget Calculator Game, Question 1 edition. As in the original version of this spreadsheet game, each top-level line item is explained with ample text — which requires players to be both numerate and literate. This “game” is no better than Paterson’s effort — except that the point isn’t really to balance the budget. The point is to show just how absurd the proposed repeal is. It turns out that it’s pretty much impossible to eliminate the income tax without destroying practically all of the Massachusetts government, which an overwhelming majority of voters ultimately agreed was reckless. Rhetorically, then, the Globe’s budget game was less a simulation and more an exercise in futility, much like the message embedded in Ian Bogost’s “editorial games” for the New York Times.

[Image: Budget Hero]
But what about a game that actually helps the player understand a budget and make difficult tradeoffs? Possibly the best example out there is Budget Hero from American Public Media. (Read Ben Medler’s review.) Among its stronger features is the ability to choose particular values that your budget should maximize (e.g. “national security” or “energy independence”). As your budget fulfills those values, the corresponding “badge” fills up. It’s a relatively elegant way to convey the idea that budgets aren’t just abstract numbers but expressions of our collective social values — moral and meaningful choices writ large. It also doesn’t hurt that the design is colorful, noisy, and generally attractive.
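
For the code-minded, here is a minimal sketch of how a badge mechanic of this kind might be wired up; the policy names, weights, and scoring are my own invented assumptions for illustration, not Budget Hero’s actual data or implementation.

    # Hypothetical sketch of a Budget Hero-style "badge" mechanic (Python).
    # Policy names and weights are invented for illustration, not the game's data.
    BADGES = {
        "national security": {"missile defense": 0.6, "veterans care": 0.4},
        "energy independence": {"wind subsidies": 0.7, "transit funding": 0.3},
    }

    def badge_fill(badge, enacted):
        """Fraction (0.0-1.0) of a badge earned by the set of enacted policies."""
        return sum(w for policy, w in BADGES[badge].items() if policy in enacted)

    choices = {"missile defense", "wind subsidies"}
    for badge in BADGES:
        print(badge, "{:.0%}".format(badge_fill(badge, choices)))

The point of the exercise is simply that a badge is a value-laden aggregation of otherwise dry line items: the same budget choices read differently depending on which values you told the game you care about.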

Most intriguingly, Budget Hero also compares your results with peers (assuming, as Medler points out, that the players are truthful). It’s a step in the right direction towards an engaged and informed public dialog.

What might a pro-social rating system look like?

This was the mostly-serious question I put to our games group last night at our monthly meeting. The question emerged from previous discussions we’d had about how the meta-game-industry – critics, player feedback – influences game development. While the ESRB ratings are about as fuzzy as MPAA film ratings – and equally subject to manipulation – there’s no doubt that they influence actual design decisions. One former developer talked about how his team worked to keep a shooter at a “Teen” rating, which meant, for example, that players should not be able to manipulate dead bodies. (Shooting them while alive, of course, is perfectly fine!).
We hit on a range of possibilities: an ESRB-like rating system, better search categories in game databases, better game criticism, and of course self-critical game design. Although it opens the door to even more subjectivity, we were all interested in shifting the focus from a checklist of features (blood? gore? bad language?) to an evaluation of the gameplay experience. Whether the graphics feature blood or not, does the game encourage cooperation and mutual sacrifice?

Performative Play as Nudge: It’s fun to do right?

[Image: The Prius Game Scoring System]
When evaluating “games for change” – whether we mean games that aim at education, social impact, or behavioral modification – the problem of transferability looms large. Sure, maybe we can teach someone to cognitively understand usury or compound interest, but does that really lead the person to walk away from the payday loan store next Thursday? The answer seems to be as murky for “good” behaviors as for violent ones.

Ian Bogost’s piece on “performative play” offers one avenue of response: play the change you wish to see. In other words, gameplay can involve real-world actions that have immediate impact:

Performativity in video games couples gameplay to real-world action. Performative gameplay describes mechanics that change the state of the world through play actions themselves, rather than by inspiring possible future actions through coercion or reflection.

A rudimentary precursor to performative gameplay might be the Prius MPG gauge. No, the gauge does not create gameplay any more than a 20-sided die, but Prius owners can and do make up their own games, challenging themselves to ever-higher mileage achievements. There’s even, you might say, a guild for the MPG-conscious. So before there’s a game, there needs to be a mechanism for gameplay, whether that be a Wii balance board, a GPS chip in your shoe, or – who knows – a full-body ARG suit.

Bogost’s piece asserts a basic need for reflective performance: “the player’s conscious understanding of the purpose, effect, and implications of her actions, such that they bear meaning as cultural conditions, not just instrumental contrivances.” But if our goal is to curb energy consumption or encourage saving, I’m not convinced that conscious understanding is necessary.

As Thaler and Sunstein point out in Nudge, sometimes the inputs of our behaviors are unconscious – which is not to say irrational or stupid. Take the oft-repeated example cited in the book of electricity bills that put smiley faces next to below-average usage and frowny faces next to above-average usage. What’s interesting about this from a game design perspective is that it translates a numerate and rational score into an emotional and social one. We’re not obligated to do anything about that score, as Thaler and Sunstein go to great pains to point out, but those of us whose values correlate with those implied in the scoring system are now more likely to change our behaviors as a result – even though, without that additional nudge, we were already literally paying a price for failing to optimize our energy usage.
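
To make that translation concrete, here is a toy sketch of the smiley-face mechanic (my own illustration, not anything from the book or from an actual utility); the 10% cutoff is an arbitrary assumption.

    # Toy sketch of the smiley-face nudge: map a numeric score (your usage vs.
    # the neighborhood average) onto social/emotional feedback.
    # The 10% cutoff is an invented assumption, not from Nudge or any utility.
    def usage_feedback(my_kwh, neighborhood_avg_kwh):
        if my_kwh <= 0.9 * neighborhood_avg_kwh:
            return ":-)"   # comfortably below average
        if my_kwh <= neighborhood_avg_kwh:
            return ":-|"   # roughly average
        return ":-("       # above average

    print(usage_feedback(620, 700))  # :-)
    print(usage_feedback(750, 700))  # :-(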

Given the temporal and technological disconnect between an electricity bill and the means of changing electricity usage, Bogost would be correct that, in the example I just gave, the nudge requires conscious understanding of the effect that turning off the air conditioning would have on the “score,” as well as of the cultural desirability of that score. But another example from Nudge doesn’t require conscious awareness: painting lines across the road that come closer and closer together as the road enters a dangerous curve does far more to cause people to slow down than putting up “Slow or Die” signs. Reflection, in the case of someone about to drive off the road, would just get in the way.

Whether conscious awareness is necessary or not is really just a side point here. I’m fascinated by the possible combination of Nudge with Performative Play and would love to think more about possible avenues for experimentation and implementation.

– Gene Koo

Wii Fit and Games of Guilt

Most games play on a narrow range of human emotion, rarely straying from excitement, anxiety, or awe. So it’s worth noting when a game comes along that relies on a rather unusual feeling for an entertainment title: guilt.

(In using the term “guilt,” I am primarily drawing on our colloquial understanding of the term, the feeling of conflict between what one has done and what one believes one should have done, rather than any specific psychological or philosophical definition. I suspect much of our understanding of the word “guilt,” outside of the law, comes from marketing for diet products).

If Wii Fit succeeds in whipping American butts into shape, it will partially be through imparting a feeling of obligation to do some exercise every day. But it also courts danger in this regard: a nagging game can turn off a would-be exerciser as easily as its non-interactive predecessors. (How many treadmills became bulky clothes racks after the heat of zeal congealed into lethargic shame?). Serious commitments require both a carrot and a stick, but too much stick kills the fun.

Wii Fit employs a smörgåsbord of characters to engage players: there’s your Mii avatar, the diagram-y yoga instructors, and the anthropomorphized Wii Fit balance board. While the Mii gives some basic feedback (its shape changes as you gain/lose weight) and the yoga instructors provide tips and positive feedback, it’s the balance board that helps you set and keep your goals and chides you when you go astray.

The balance board character, a strangely expressive white rectangle, is no match for the average mom, but skip a day or two and it does serve up a “You don’t call, you don’t write” routine.


There’s no reasoning with the board on this matter. Go on a week-long business trip? Too bad – that smug little rectangle doesn’t offer excuse options. On the other hand, neither does it dwell, moving on with perfect cheer and letting bygones be bygones. Unlike a true nag, it never brings up your transgression again — the prick of guilt is instant and ephemeral. But it is there.

So Wii Fit, via the balance board character, “cares” whether you play with it or not, and whether you do so regularly. (Once you start, the game tracks but doesn’t mind which exercises you choose). A game that makes you feel guilty for ignoring it isn’t novel; pet simulators like Nintendogs also mark your absence, during which time your virtual puppy gets increasingly hungry, thirsty, and disheveled. The possibility of neglect, and the guilt that accompanies it, seems to stimulate some sense of care and responsibility.

Wii Fit doesn’t merely concern itself with your decision to play; as an interactive title that attempts to change the user, it also attempts to address your other, probably more important choices. Consider this sequence, triggered when you gain too much weight vis-à-vis your stated goal:

[Screenshots: Overweight 1–4]

We’ve often discussed reflection as a vital element of moral choice-making in games. On the scale of moral choices, staying healthy isn’t high up there (except for the ancient Greeks), but this device of asking the player to reflect on out-of-game, real-life decisions is worth considering for application in other games for change. Particularly notable is that it’s the player, not the software, who sets the goals in the first place. The Wii Fit is there to help keep you on the path that you’ve laid down for yourself.

[Screenshots: “Set a goal” and “Reaching your Fit goal”]

Is this method of reflection effective as a mechanism for personal change? Or does it, together with the goal-setting and the nagging, only drive away those who have trouble staying on the bandwagon? We should start seeing some answers in the next few months.

– Gene Koo

G4C2008: Jim Gee vs. Eric Zimmerman

Gee: “World of complex systems that is biting us, and biting us bad.” e.g. peak oil => biofuel => no water / no food => failed states => end of global economy

Zimmerman: industry (19th century), information (20th), the Ludic Century (21st century systems)

Gee: Games not terribly good at delivering information, but at novel experiences: seeing the world in new ways.

G4C2008: values at play

Mary Flanagan (Tiltfactor, Values @ Play) — “a humanistic approach to game design.” How to think about / change existing gameplay to incorporate human values? How to embed human values/principles into design processes such as game design? Some of the values include privacy, creative expression, diversity, cooperation, commons, community/collective decision-making, altruism/sharing, and inclusivity.

V@P recreating iterative design process to examine human values.

Studies to test impact of V@P curriculum on designers. “Grow a game” brainstorming cards. (Verbs, Challenges, Games, Values). Stages of Concern Instrument to measure changes in attitudes about values-conscious design.

Findings:

  1. The big issue with making activist games is a perceived conflict between fun and the seriousness of the social issue (don’t want to make light of that issue). Going too serious leads to strange unintended consequences, e.g. Jena 6 game ends up seeming racist — therefore need to maintain the values.
  2. Students’ three strategies: (1) the unwinnable game; (2) appropriate mainstream games for activist purposes; (3) most difficult to accomplish — invent new mechanics

See V@P public contest — deadline July 1.

G4C2008: some genre terminology

On a panel on “Journalism, Games, and Civic Engagement,” Asi Burak of Impact Games (Peacemaker) suggests the following tags for interactive media, which he distinguishes from “games”:

  1. Editorial short-form — Ian Bogost’s “Persuasive Games” (I’m curious what Ian thinks of this tag)
  2. Advocacy short-form — Darfur is Dying, Starbucks’ environment game
  3. Long-form advocacy — Peacemaker, A Force More Powerful — goal is to come out with the realization, “It’s more complex than I thought”
  4. Community interaction — World without Oil

Other possible terms: “Experiential storytelling,” “Interactive infographic”? One audience member points out that games usually have meaningful choice, a magic circle, and a win state, features that some of these examples lack.

I’m not sure I would put A Force More Powerful in the “Advocacy” camp since its main focus is to teach strategy (not just demonstrate complexity), but as Asi points out, both that title and Peacemaker have a “bias for peace” built into the design. (In AFMP, demonstrations that turn violent are a Bad Thing.)

Another journalism game: Joellen Easton of American Public Media demonstrated Budget Hero, which allows players to set their own goals through selecting a “badge” (e.g. national security, universal health care). It’s particularly interesting to me that these goals (and thus, the underlying values) cannot all be met, which for me is a criterion for a “meaningful choice.”

APM is also finding that players of Budget Hero are significantly younger than consumers of other public media: 53% are 18-35.

Why a game: Player experiences tension between own assumptions and the facts built into the game (assuming vetted facts are correct) — Joellen. Limitations of traditional media that lack context, cause-effect — Asi.

GTA4: reintegrating the divided self

[Image: 2 faces of Niko]
By the close of our discussion about GTA4 on Wednesday, some of us expressed pessimism that computer games possessed any capacity to invigorate moral reasoning or reflection. Matthew remained hopeful, but expressed his dismay that the critical reception of GTA4 seems to set a ceiling, not a floor, for morally-deep games:

…The series cheered (and criticized) for glorifying violence has taken an unexpected turn: it’s gone legit. Oh sure, you’ll still blow up cop cars, run down innocent civilians, bang hookers, assist drug dealers and lowlifes and do many, many other bad deeds, but at a cost to main character Niko Bellic’s very soul. GTA IV gives us characters and a world with a level of depth previously unseen in gaming and elevates its story from a mere shoot-em-up to an Oscar-caliber drama. Every facet of Rockstar’s new masterpiece is worthy of applause…
IGN review by Hilary Goldstein

Maybe Niko loses his soul, and maybe you, the player, care. Or at least try to care. And so maybe through its long reach, however flawed, GTA4 also opens new frontiers to explore, and it becomes our duty to turn that perceived ceiling of possibility into a challenge.

Andrea Flores, responding to the recurring theme of “schizophrenia” throughout the discussion, brought in the idea of ritual, especially as described by anthropologists like Arnold Van Gennep and Victor Turner, to understand the interplay between real (the player) and game (the character). Like the “liminal space” of ritual, perhaps the “magic circle” of games offers a passage from one state to the next. If so, the tension among player, avatar, and character might well be something to exploit rather than bemoan; indeed, I find quite compelling the idea of the avatar as a “symbol” that the player manipulates to conduct the game-as-ritual.

From a positivist perspective, there is certainly much to learn from real players’ experience of the moral dimensions of a game like Grand Theft Auto. (Grand Theft Childhood is one place to start; the GoodPlay Project, where Andrea and Sam do their research, is another.) From a normative and developer’s standpoint, there’s also so much to imagine, to build, and to test.

(gk)

Soul of the Machine: Awakening the moral conscience of impersonal systems

Ever since Ultima IV showed us how computer games might embrace virtue, I’ve longed for similar titles with moral depth. Over a year ago, Kent Quirk awoke me to the power that computer games offer and why they are so important right now. At a local Games for Change meetup, Kent showed off Melting Point, a game about climate change. What impressed me about Melting Point was that Kent wasn’t proselytizing for a particular policy or worldview but rather hoping players would understand the interplay of complex systems (climate and economy) and make up their own minds about what, if anything, we should do about it.

This made me realize that computer games can merge two important features — player choice and systems-modeling — to achieve something even more powerful: nurturing morally aware systems-thinking. In other words, I began to see games as a tool to enable people to see that the complex systems around us — whether global trade or ocean ecosystems — have moral consequences, and that we aren’t just idle observers but actors both within and over those systems.

And it’s at this very moment in human history that we, as a species, must learn to see ourselves as moral agents within systems.

Never before has humanity had the power to destroy itself and the world as we know it, whether in clouds of radiation or of carbon dioxide. Never before has so much of humanity been at the mercy not of human tyrants and local lords but of machine code and faraway tribunals. The world, as Max Weber predicted, is becoming an iron cage of systems and bureaucracies beyond human ken.

It’s beyond our common understanding because Homo sapiens didn’t evolve to naturally grasp large, complex systems but rather small networks of people. As psychologists are steadily learning, scruples aren’t merely nice but actually hard-wired into our brains. Ask someone whether it’s right to push a big man in front of a runaway train to save the lives of five bystanders, and parts of our brains begin firing to tell us, “no.” But ask whether it’s OK to throw a switch that decides between the fate of a man on one track versus that of five on the other, and those same neurons stay quiet.

So our genetic code instructs us to treat our face-to-face relationships as potentially moral, but our innate moral sense may not extend into our systemic or mediated relationships. Bringing chicken soup to our sick neighbor strikes us as self-evidently virtuous, but shaping our nation’s health care policy — not so much, at least not until it begins affecting us personally. Viewing policy as a structure that embodies collective morality is learned, not instinctual.

Computer games offer at least two possible responses to our collective human predicament. First, they can open players’ eyes to the moral implications of systems by letting them experiment with those systems and witness the results. Games might offer moments of reflection and of epiphany, connecting personal morality with systemic awareness. A player might see how tweaking health care policies affects a family’s lives, or how environmental regulation could shape the destiny of a polar bear. Games might lead people to begin to see a soul within the machine.

And perhaps systems might begin to learn lessons from game design. Why must the computer systems that exercise more and more control over our daily lives be morally inert? If computer games — mere software — can lead players to weep, perhaps the mechanization of our world needn’t be soulless. If a global society demands that our interpersonal relations become abstracted into an iron cage of systems, can’t we re-envision such systems as a purposeful tool for realizing our collective moral vision?

Computer games won’t solve the problems that face humanity and our planet. But media, from cuneiform to newspapers to film, have always helped humanity reach new levels of moral self-realization and galvanize moral action. How fortuitous it may prove that computer games, with their unique capacity for choice and systems-modeling, should arise at this critical juncture of our evolution.

– Gene Koo

New Perspectives on Splinter Cell: Double Agent

Yesterday, Matt demonstrated a scene from Splinter Cell: Double Agent involving an interesting moral exercise.

The situation: The protagonist Sam Fisher, an NSA operative, is undercover in a terrorist group, the JBA. To effectively serve the NSA, he must maintain his cover within the group. If he does not make himself useful to the terrorists, his cover will be blown; if he does not make himself useful to the NSA, they will assume he’s gone rogue and treat him as a terrorist. This is represented by two “trust bars,” as we call them, that effectively measure how useful Sam is to the two groups and, since trust grants greater freedom in gameplay, how useful they can be to him.
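
For readers who like the mechanics spelled out, here is a bare-bones sketch of how such a dual trust system could be modeled; the faction names come from the game, but the numbers, bounds, and behavior are my assumptions, not Ubisoft’s code.

    # Minimal sketch of a dual "trust bar" mechanic like the one described above.
    # Starting values, bounds, and deltas are invented for illustration.
    class TrustBars:
        def __init__(self):
            self.trust = {"NSA": 50, "JBA": 50}  # 0 means that faction turns on Sam

        def adjust(self, faction, delta):
            self.trust[faction] = max(0, min(100, self.trust[faction] + delta))

        def cover_blown(self):
            return any(v == 0 for v in self.trust.values())

    bars = TrustBars()
    bars.adjust("JBA", -30)  # e.g., defying an order from the terrorist cell
    bars.adjust("NSA", +10)  # e.g., sparing the civilian pleases the NSA
    print(bars.trust, bars.cover_blown())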

In the scenario we viewed in class, Sam is given a handgun and ordered to execute an innocent civilian, a news pilot called to the scene by a third party. Although the game generally takes place from a third-person perspective, this scene plays out from a first-person view, helping to conceal the distinction between the player and the protagonist. Since there’s no obvious “don’t shoot” button, the player might be led to assume that he has no choice but to shoot the pilot; a look at the HUD, however, reveals that the gun (which appears to be a WWII-era Luger 9mm, for some reason) contains only one bullet, and putting it into the wall counts as sparing the man’s life. (In the demonstration we saw, the player took too long to decide, and an NPC shot the man anyway, taking the decision out of Sam’s hands.)

The first thing this scene does is to remind us that videogames are very good at encouraging people to do things, but a bit less so at encouraging people to not do things. This varies by player and genre, of course, and the stealth genre is arguably all about training the player to not do things (don’t step into that hallway without checking for cameras, don’t attack that guard if you can avoid him, etc.) Still, player action is generally affirmative rather than abstinent in nature.

The second thing this scene does is to remind us that Sam Fisher and the player are not the same person. The decision can be seen as a purely tactical one. If there is any guilt involved–and rational people can disagree on whether or not there should be–it’s extremely unclear whether Sam or the player ought to be feeling guilty. If Sam does not seem to be shaken by the experience, is it because he honestly doesn’t care? Is it because he conceals his emotions, as he’s no doubt been trained to do? Or is it because Sam is conditionally sharing an identity with the player, and the player is the one who’s supposed to be “feeling” for Sam?

The third thing this scene does is to suggest the importance of clearly defined consequences in (fictional) decision-making. While the player is deciding whether or not to shoot, the trust bars demonstrate, in a fairly straightforward way, the consequences of either choice. While the player might not know exactly how those consequences will affect later gameplay, (s)he can guess with some accuracy how much they will.

It was suggested, in discussion, that making the consequences more or less obvious might change people’s reactions to the scene. So let’s go into that a bit. If we start from the assumption that moral actions are actions that produce moral consequences, we’ll likely soon find ourselves in a utilitarian framework. As consequences go, pleasure and pain are relatively easy to measure, especially when placed against metaphysical ideas of “the good,” the will of supernatural beings, etc. So what are the consequences of Sam’s/your decision to shoot/not shoot the pilot? We already know that Sam’s status with either the NSA or the JBA will be enhanced or degraded, but that’s hardly the kind of thing people think of morally. Let’s think of some other consequences.

1. If Sam does not kill the pilot, his cover will be blown immediately. In this case, killing the pilot could be construed (dubiously) as an act of self-defense, since the JBA will not look kindly on a double agent. This argument is weakened somewhat by the fact that Sam is partially responsible for being in that situation in the first place. (Very few games make any allowance for martyrdom, traditionally seen as one of the highest demonstrations of morality there is, but I digress.)

2. If Sam does not kill the pilot, the pilot will be let go. At first glance, this would appear to be the ideal scenario: assuming it doesn’t make Sam’s mission completely impossible, sparing the man seems like the obvious right choice. Except, by utilitarian standards, letting the man go is only good insofar as it produces positive consequences. So…

2b. The pilot is let go, and Sam accomplishes his mission anyway. A year later, laid off from his job, the pilot walks into his old office with a submachinegun and kills twenty people. Does knowing this in advance change the decision to be made? What if there’s only a 50/50 chance the surviving pilot will go on a killing spree? What if the player is told there’s a “significant” chance, but not told the actual odds?

One of the major criticisms of consequentialist ethics, after all, is that consequences are difficult to accurately predict in practice. A deontological (rule-based) approach would presumably refer to a rule such as “don’t kill innocent people,” something that’s fundamentally hard to argue with until you’re presented with extremely unlikely scenarios like the one detailed above. When such moral rules seem to require martyrdom, pure ideas of moral duty are basically all that can constrain human action, at least in real life–deontological ethics might be more intuitive to human beings if we could refer to status screens that would display to us the sum morality of our actions in an objective fashion. All kidding aside, this seems like it could be an interesting thing for games to tackle.

But back to our consequentialist game. We have thus far only briefly mentioned the problem of guilt. While the consequences we’ve discussed so far are external, guilt is an internal consequence that presents some difficulty from a design perspective. Some work is being done in the area of modeling protagonist psyches; as Eternal Darkness notably suggested, the protagonist does not need to be rational just because the player is. Alternatively, one could just focus the players’ attention on imagining, in detail, what it would be like to kill an innocent. Terror management theory gets some interesting results by asking people to ponder their own deaths, but how would it affect players’ perception of this scene if they were asked, before they picked up a controller, to spend several minutes thinking about both dying and killing?

There are, of course, a few other ways of doing this. One could model a kinship system and work that into the game’s engine, i.e. it “hurts” the player more to do bad things to the terrorists or the NSA than to the unfortunate strangers caught in the middle. There’s also the virtue ethics approach, attempting to parse out what virtues are demonstrated by either shooting the innocent and focusing on the big picture or refusing to be complicit in cold-blooded murder. We could probably trot out a hundred versions of the scene we watched yesterday, and I’d be curious to see whether tweaking it would produce notably different feelings in players.
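
To make the kinship idea above slightly more concrete, here is a rough sketch of a relationship-weighted moral cost; the categories and weights are invented assumptions, not anything from Double Agent’s engine.

    # Rough sketch of a kinship-weighted moral cost: harming characters the
    # protagonist has a relationship with "hurts" more than harming strangers.
    # Relationship categories and weights are invented for illustration.
    KINSHIP_WEIGHT = {"nsa handler": 3.0, "jba member": 2.5, "stranger": 1.0}

    def moral_cost(base_harm, relationship):
        return base_harm * KINSHIP_WEIGHT.get(relationship, 1.0)

    print(moral_cost(10, "stranger"))     # 10.0
    print(moral_cost(10, "nsa handler"))  # 30.0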

Peter Rauch