These days, we find ourselves in a world where the word “cinematic” is frequently thrown about in relation to videogames. While classic titles like Pac-Man and Donkey Kong were the fun, unrealistic products of a new art form’s infancy, today the notion seems to be that videogames should, for one reason or another, strive to be more like films.
In a way, this philosophy is not wholly unwarranted: the medium of film has had more time to hone and perfect its storytelling and visual craft, so why wouldn’t videogames borrow some of its mechanics? After all, some of the most intense, interesting scripted events in gameplay have been inspired by/directly ripped off from films (whoever directed Enemy at the Gates should have sued Infinity Ward long ago).
However, the problem comes when a videogame borrows one too many film mechanics, and ends up compromising the very things that make videogames a special and separate medium in the first place: when, confused, the game designer assumes that “film” and “videogaming” are synonyms, and treats one like the other. With that in mind, I present the following: a short (and by no means comprehensive) list of cinematic conventions that have absolutely no place in videogaming.
The John Williams score
Perhaps it’d be more appropriate to blame the abundance of shallow, spectacle-laden videogame stories that involve no real plot beyond “save the world and kill all the bad guys,” but it has to be said that I’m getting very, very sick of games whose soundtracks consist of half-assed John Williams rip-offs. Not because I dislike John Williams, but because his style – big, orchestral, and epic – is so insanely difficult to emulate effectively. In all my years of playing big action titles, Advent Rising is the only game I’ve played that has effectively reproduced the epic feel of a John Williams soundtrack…and we’re not exactly going to see another one of those games anytime soon.
When a game (Halo, for instance) attempts to shoehorn the epic John Williams feel into the context of a videogame, the result usually isn’t outright bad so much as it is boring. Yeah, the soundtrack may be appropriate to the story and it might provide decent background music for a few of the fights, but rarely are these scores particularly memorable, or even that effective in eliciting emotion – at least, not in the same way great film soundtracks are.
When it comes to solving this problem, a bit of baseball advice from my childhood comes to mind: if you can’t hit the ball hard, hit it where they ain’t. If videogame composers can’t create a compelling, epic, and emotional score, there’s no harm in going for the unusual. Jesper Kyd’s scores for the Hitman games have been anything but typical, but they’re entertaining as hell and can occasionally make for some very surreal, satisfying gameplay moments. And since I’m going to mention the game roughly a half-dozen times in this article, it’s worth pointing out that Garry Schyman’s score for BioShock utilizes the strings section of the orchestra in a way that is creepy, tragic, and wondrous all at the same time. And don’t get me started on Kō Ōtani’s work for Shadow of the Colossus.
Or, failing that, why not use more public domain music? Two of the most satisfying moments I have ever experienced in action gaming involved pre-existing music (the Waltz of the Flowers in BioShock, and Ave Maria in Hitman: Blood Money). Using songs your audience is familiar with can often be quite helpful to a scene, as Kubrick knew when he scored 2001. The music becomes more obvious (assuming that’s your intent), and the audience’s preconceptions about said piece of music can either be exploited for the sake of quickly effecting a particular emotion (playing The Blue Danube when you want the player to feel at ease, for instance), or can be ironically juxtaposed with the gameplay to develop a theme (the aforementioned use of Waltz of the Flowers in BioShock comes to mind). Obviously, developers must be careful not to overuse public domain songs if videogaming is ever to develop its own unique musical style, but using recognizable music certainly solves a few short-term problems.
“When the monster is dead, the monster movie is over”
When I read this quote many years ago, it was attributed to B-movie legend Roger Corman. I’ve never been able to track down the precise quote to verify it, but the philosophy remains more or less sound for most mainstream films: if you make a flick about terror or evil or action, the film should end as soon as possible after the death of whoever instigated said terror/evil/action.
For film, a medium in which the audience must remain enraptured throughout the two hours they sit in a darkened theater, this advice makes sense: they came to be entertained, so entertain them and then get the hell out. For videogames, however, this attitude is severely flawed.
In videogames, the player connects with his character and essentially shares his identity with him: whatever happens to the character, in other words, also happens to the player. In film, a passive medium, it’s perfectly fine that we don’t find out exactly what happens to Alan Grant and Ian Malcolm at the end of Jurassic Park, or that we don’t follow Laurie Strode after she “kills” Michael Myers at the end of the first Halloween. There are smaller reasons for this – the creators want to leave room for a sequel, the budget doesn’t permit more scenes, et cetera – but generally, we accept the story ending right then and there because, at the end of the day, we’re watching other people make their way through a story. We may care about them and be mildly curious to see what happens to them, but we’re content to leave them right where they are once their main goal has been accomplished.
In videogames, the player is the character, and therefore a simple “the monsters died and they got off the island, The End” ending just doesn’t cut it. Because we are the ones who work our way through the story, who work hard to accomplish its goals, we want to know what happens to us after the main conflict is over. This is our life we’re talking about, after all: the gamer works for hours and hours to accomplish a difficult goal, and wishes not only to be rewarded with an interesting ending (not to get into that discussion again), but to hear exactly what happens to him as a result of the story. Do I keep fighting? Do I disappear into the countryside?
Beyond simply finding out what happens, the player also needs to be told in a significant amount of detail. Warren Spector famously said that videogames are work, and the player needs to be rewarded, in some way or another, for completing said work. This “reward” might very well be an intentionally bleak and unhappy ending, but so long as there is a good reason for whatever ending comes about, and so long as it is explained to the player’s satisfaction, the ending is sufficiently rewarding. Working through thirty hours of nail-biting gameplay tension only to reach a one-sentence ending where the writers essentially say, “And then you lived really happily and nobody died ever again” isn’t really much of a reward.
Take BioShock, for example (don’t worry, I’m not going to spoil anything important). Perhaps the one small fault everyone can agree on is the game’s ending: even ignoring the rather typical and anticlimactic boss fight, the story and character epilogue lasts – and I don’t believe I’m exaggerating – less than two minutes. Most of the main questions are answered, of course, but the whole thing just feels too brief: after twelve to fifteen hours of work, the player’s only reward is a coda whose running time is shorter than an average music video? It’s difficult to explain, but even though BioShock’s story is adequately wrapped up in its FMV conclusion, it just doesn’t feel like enough. The player cares about all of the characters, and the world, very intimately – why rush us out just for the sake of keeping things short? Games not only cause the player to connect to the protagonist in a way films never will; they can occasionally evoke the same connection to the game world itself. Quickly rushing through the last phases of a narrative and kicking the player out of that world feels somewhat like having sex with someone you fancy, but not having a chance to lie around and cuddle for a few extra minutes.
Complete linearity
Not linearity, period, of course – every game ever made is linear in some way or another – but linearity that infringes upon the gameplay. Games that, rather than allowing the player some degree of agency over his own actions, essentially grab him by the nose and lead him through each level. As one can imagine, these types of games are few and far between, but they still (unfortunately) get produced from time to time.
Batman Begins, for example, constantly tells the player where to go, when to go, and how to go throughout almost every single one of its levels: ostentatious, brightly colored icons constantly point the way, and (perhaps most irritating) the player is only allowed to use certain abilities at certain times. You have to wait for the game to tell you when to use a goddamned batarang, for Chrissake. When games get so linear that they’re literally telling you when and what to do at every turn, they cease to be games at all: I’m tempted to demote them to the rank of “interactive movie,” but even “interactive” would be a misnomer.
Bruce Willis
Not Bruce Willis himself, mind you – I may be one of the only people on the planet who actually enjoyed Apocalypse quite a bit – but the archetypal badass, one-liner-spewing, invulnerable protagonist that Bruce Willis represents so well in the cinematic medium.
If a game doesn’t allow the player to create the main character’s personality (whether through giving the player multiple story and dialogue choices, or by giving the character no personality and allowing the player to fill in the blanks a la Gordon Freeman), then it’s pretty damn important that the character be, if not relatable, at least interesting. Starting with Duke Nukem and working up to Marcus Fenix and Cole Train, the ready-for-anything, hard-as-nails videogame protagonist has never really been conducive to a truly interesting or narratively rewarding gameplay experience, thanks mainly to what I like to call the Call of Cthulhu effect.
In Call of Cthulhu: Dark Corners of the Earth, the player is kept in a constant state of tension and fear thanks to the frightening villains and the steadily draining sanity meter. The entire gameplay experience is taut and suspenseful, which makes it that much worse when the protagonist, for no reason whatsoever, makes wisecracks about his situation and generally acts like a cocky, ain’t-no-Lovecraftian-monster-gonna-make-me-lose-my-cool videogame hero. If I, the player, am scared, but my cyber-proxy isn’t, this presents a serious problem: the fourth wall has just been unintentionally broken, and I am now separated from my character. From a purely narrative point of view, this is one of the worst things that can happen during a videogame.
Generally speaking (ironic, self-referential characters like Serious Sam notwithstanding), gamers cannot relate to the sort of “been there, done that” hero typified by Bruce Willis in the Die Hard series and emulated by so many videogame heroes in the years since. If you make a game to scare me, or excite me, or surprise me, then why can’t the videogame character I’m controlling experience those same emotions? Creating badass, fearless characters when you want the player to feel neither badass nor fearless is self-defeating.
Don’t show us what other characters are doing if the protagonist isn’t anywhere near them
In, say, a James Bond movie, the writers will occasionally throw in one or two scenes near the beginning of the film showing the token villain in his natural habitat. These scenes — like, say, the one in The Man With the Golden Gun where Christopher Lee kills a chick for betraying him — typically serve two main purposes: they usually set up some narrative event that will spur the hero to action (“Has evil plan XYZ been set into motion yet?”), and they develop the villain character, usually by making the audience afraid of him. In films, scenes like this which don’t involve the hero save time and effort, as the audience is quickly told what they need to know about the villain and how they should feel about him.
But in a medium where the average title lasts roughly eight hours or more, why worry about saving time? In films, these scenes are usually clunky and interrupt the flow of the narrative; in videogaming, they’re a whole lot worse. One can look to God Hand’s numerous “meanwhile, at the evil lair” scenes as an example: not only do these cut scenes commit the cardinal sins of noninteractivity and separating player from protagonist, but they aren’t even particularly effective in developing the villains. Videogame villains frighten and intrigue us through one method, and one method only: how they interact with the player.
Reverend Ray from Call of Juarez is frightening and badass because the player has played through the game with him and knows the horror he is capable of; SHODAN is terrifying because of the way she mocks the player and uses her security connections to wreak havoc on him; Sephiroth is loathsome because he permanently removes one member from the player’s party.
These characters don’t need to be developed by long, uninvolving cut scenes that take place at the Enemy’s Super Secret Lair while the hero is still miles away fighting goombas; their effect on the player is defined by their direct influence on the gameplay, rather than their ability to deliver arrogant-sounding monologues.
Noninteractive cut scenes
Unless you have a really, really, really good reason for denying the player control of his or her character, don’t do it. Don’t even consider it a possibility for delivering exposition.
Ever.
To quote Ken Levine from his ShackNews interview:
“Honestly, any writer could write a 20-minute cutscene. I hate those as a gamer. I skip them. Those games, I don’t know what the hell is going on. I’m not going to sit through those. But in Half-Life, I know everything that’s going on. That was a big inspiration. I know more about City 17 than I know about any Final Fantasy world.
“Even a great game like Okami, it has 20 minutes of ‘blah blah blah’ and I just want to kill myself. It’s not fair to our medium, it’s so self-indulgent. I think we have to work harder. Trust me, it’s a lot harder to do what we did in BioShock than to do a 20-minute cutscene. I could write that stuff all day long… Cutscenes are a coward’s way out.”
If you’re making a videogame, then make a goddamned videogame. Don’t make a movie with interactive bits. To include a noninteractive cut scene in a videogame is to essentially give up and admit that film is an inherently superior storytelling medium, when this simply isn’t the case. To use a cut scene in a videogame is to rob oneself of the very thing that makes videogames so damn special in the first place: interactivity. Of course, the player must occasionally be reined in for the purposes of delivering information, but the important thing is to make sure the player always feels like he or she is in control of his or her character.
The beginning of Half-Life 2: Episode One, for example, includes an extremely long dialogue scene between Alyx Vance and her father, wherein some exposition is laid out and the player has his goal explained to him. Now, technically, this scene isn’t totally interactive – you can’t decide to stop listening and simply run away – but the entire exchange plays out in the first-person perspective, with the player still in control of Gordon Freeman’s actions. In other words, the player has the exact same amount of movement and combat control during a dialogue scene as he or she does during a major action setpiece. Cutting control from the player for the purposes of delivering dialogue is not only a spectacularly lazy storytelling technique; it also causes a rift between gamer and character.
Whenever I watch the demo playthroughs for Assassin’s Creed, I always cringe at the moment where Altaïr runs into a darkened room, finds himself cornered on all sides, and – once control is taken away for the duration of a cut scene – is literally forced to sit there and listen to a villain’s monologue. Why? If I’m playing a focused, badass assassin, why on earth would I stop and let my target jabber on at me for a few minutes? Or, even if I did want to hear what he had to say, why can’t that be my choice? Logically, there’s no reason on the planet Altaïr shouldn’t just leap up to his target’s level and stab the living daylights out of him, but with control taken away, the player is not allowed to do this.
Or, if the game truly wants to force me to listen to the entire monologue, why not let me keep control but force the surrounding soldiers to react negatively – say, get closer and draw their weapons – if I move too much? That way, control would remain where it belongs, and, instead of feeling bored or distant or irritated that the game is making me take a time out while it delivers story (I’ve seen many people literally put their controllers down during cut scenes, which is the absolute last thing anyone should ever want to do during a game), I would feel threatened and frightened that my character was being ambushed. So long as the illusion of interactivity and freedom is kept intact, the player is still very much involved in the game.
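To make the idea concrete, here is a minimal sketch of the sort of logic I have in mind (written in Python, with entirely hypothetical names that belong to no real engine’s API): the monologue plays on frame by frame, the player never loses control, and the guards simply tighten the noose if he strays too far.

    # A sketch of the ambush-monologue idea described above. Every name
    # here is hypothetical: this is no real engine's API, just the shape
    # of the design. The player keeps control for the entire speech; the
    # world reacts to him instead of freezing.

    from dataclasses import dataclass

    @dataclass
    class Guard:
        distance: float           # meters from the player
        weapon_drawn: bool = False

        def escalate(self):
            # Close in and draw steel rather than pausing the game.
            self.distance = max(1.0, self.distance - 2.0)
            self.weapon_drawn = True

    def monologue_tick(player_drift, guards, drift_threshold=3.0):
        # Called once per frame while the villain is talking.
        # player_drift is how far the player has strayed from the spot
        # where the ambush was sprung. No control is ever taken away;
        # wandering simply has consequences.
        if player_drift > drift_threshold:
            for guard in guards:
                guard.escalate()

    # The player inches toward the exit mid-speech...
    guards = [Guard(distance=8.0), Guard(distance=6.5)]
    monologue_tick(player_drift=4.2, guards=guards)
    print(guards)   # ...and the noose tightens: closer guards, drawn weapons

The particulars don’t matter; what matters is that the player’s inputs stay live from the first line of the monologue to the last, so the tension comes from the game world itself rather than from a locked camera.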
To take control from the player without sufficient reason is, to put it bluntly, goddamned moronic.