Terminal Station


Trunks in a storage room

About four years ago I was working in the game industry. At the time I was getting ready to start on a PSP game (this was before the PSP had shipped, but game development was already in full swing), and I wanted to get up to speed with my company’s 3D graphics engine. So I decided to make a couple of simple game demos to improve my understanding of the tech. First I made a 3D knock-off of an old NES puzzle game called Lot Lot, which worked out well but wasn’t very sexy. At the same time I was trying to pitch the idea that my company embark on a 3D adventure game (horror or not), so to back up that proposal I decided to build a 3D horror game demo using the company tech. My good friend (and fellow hardcore horror game fan) Casey Richardson agreed to make the art, and our goal was to spend a month or two and make something that was playable on PS2 and showed that this style of game could be accomplished without huge changes to our existing codebase.

Since we were only working in our free time, the total duration of the project ended up being closer to three months (though we could have easily pulled it off in a month if we worked full time on it), but the result was pretty cool. What we ended up with was an engine that supported fixed and tracking cameras, the Devil May Cry control scheme, dynamic lights and shadows, Silent Hill-esque film grain effects, and a pretty neat system for dynamically blending movement and animation to produce believable analog motion. We had a single test character named Trunks, which Casey hilariously made to look like a pair of disconnected legs with a little bit of spine coming out of the severed hips, and a map containing a bunch of rooms that you could walk around and explore. Though we were just using test art (which turned out to be The Way To Go with this kind of game–see the next section), the game ran at 60fps and the game mechanics were immediately obvious to anybody who picked up the controller. Casey came up with the name: Terminal Station, taken from a 1954 film by Vittorio De Sica.

This exercise taught me tons of stuff about how survival horror games are made. One of the first things we learned was that placing regions in space that cause certain cameras to activate is way harder than it looks. You know how in Resident Evil you can run around and eventually see every corner of a given room because the various cameras in that room are set up to show different angles without overlapping? Yeah, setting that up is really hard. Casey did it by hand in the 3D modeling tool, but we quickly realized that a real game in this style would require a special tool to make camera regions. Otherwise it was too easy to make a room in which the player could walk off the screen, or simply be unable to explore a section of the space. Moving cameras make this a little bit easier, but it’s a much harder problem than I expected it would be.
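To make the problem concrete, here is a minimal sketch of what those camera regions boil down to (all names and numbers are invented for illustration, not from our actual demo): each camera owns a region on the floor plane, and the active camera is whichever region contains the player. The hard part is authoring the regions so they tile the room with no gaps and no angles where the player can walk off screen.

```cpp
#include <cassert>
#include <vector>

// Hypothetical camera region: an axis-aligned rectangle on the floor
// plane, linked to the camera that should activate inside it.
struct Region {
    float minX, minZ, maxX, maxZ; // floor-plane bounds
    int   cameraId;               // camera to activate inside this region
    bool contains(float x, float z) const {
        return x >= minX && x <= maxX && z >= minZ && z <= maxZ;
    }
};

// Returns the camera for the player's position. If no region matches,
// we fall back to the previous camera -- that gap in coverage is exactly
// the authoring bug a dedicated camera-region tool should catch.
int pickCamera(const std::vector<Region>& regions,
               float x, float z, int previousCamera) {
    for (const Region& r : regions)
        if (r.contains(x, z))
            return r.cameraId;
    return previousCamera;
}
```

The logic itself is trivial; the difficulty is entirely in laying out the regions so that every point in the room is covered and every camera shows the player while they stand in its region.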

Another thing we learned was that fixed camera games make so many real time 3D graphics problems easier. For example, our PS2 engine only allowed the character to be lit by 4 dynamic light sources at any given time, but we wanted to have rooms with a lot of localized lights (see the shot of the open refrigerator for an example). We realized that you can secretly turn lights on and off when the camera cuts and the player will never notice. This is a super simple solution and it worked great–we were able to make rooms with tons of lights and just link sets of four to specific camera angles. When playing the game, the player would appear to walk through the environment and be lit by all the lights in the room. We did the same thing with the shadow: the shadow can only be cast from one light (we only supported a single shadow), but we let the light responsible for casting it change depending on the angle of the camera. That made it pretty easy to set up really dramatic shots without compromising the design of each room. (As an aside for the graphics programmers out there, this method also let us separate “shadow-receivable” geometry from “shadow-immune” geometry so our projective texture shadow only had to render a subset of the level art twice).
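A minimal sketch of the light-swapping trick, assuming a made-up data layout (our real engine’s structures were different): each camera angle carries its four light indices plus the index of the light that casts the single shadow, and the swap happens on the cut where the player can’t see it.

```cpp
#include <array>
#include <cassert>
#include <map>

// Hypothetical per-camera lighting record: up to four light indices
// into the room's full light list, plus the one shadow-casting light.
struct CameraLighting {
    std::array<int, 4> lights;   // -1 marks an unused slot
    int shadowLight;             // which light casts the single shadow
};

struct Room {
    std::map<int, CameraLighting> byCamera; // camera id -> light set
};

// Called on a camera cut; returns the lights to enable for this angle.
// The swap is invisible because it coincides with the cut itself.
CameraLighting onCameraCut(const Room& room, int cameraId) {
    auto it = room.byCamera.find(cameraId);
    if (it != room.byCamera.end())
        return it->second;
    return CameraLighting{{0, -1, -1, -1}, 0}; // fallback: first light only
}
```

The room can hold as many authored lights as the artist wants; only the four linked to the current camera ever reach the hardware lighting path.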

Though my company didn’t end up pursuing this style of game, I am extremely glad to have done this project because I learned tons about how many of the games listed on this site work. A lot of the code (or, more often, the general approach rather than the actual code) got reused in other projects (the blending motion and animation system survived for another year, only to be killed when the real game it was in got cancelled), and Casey and I learned a boatload. I’ll post some screenshots from our demo below. This isn’t a real game, and will never be a real game; it was just a learning exercise. But it was a lot of fun and it played pretty well!

Screenshots:

In the kitchen

Dorm hallway

Dorm room

A secret passage

The storage closet

15 thoughts on “Terminal Station”

  1. Nice post, Chris. I don’t think people realize how much goes into designing a video game, let alone a survival horror game. All we do is sit on a couch and go “This game sucks” or “This game rocks” without a second’s thought about how many man-hours were dumped into a project. Look for the redeeming value in any video game, people. If the gameplay is poor, listen to the sound effects. If the graphics blow, are you at least into the storyline? Even the crappiest games out there usually have some shining light to them. What a bunch of lazy, spoiled, ungrateful bastards we are! LOL! Thanks for the post, Chris.

  2. Did you guys have any problems with the limited graphics memory on the PS2? Did you have to DMA a lot of assets from the main memory or did you just load it all up on a per room basis?

  3. > Sylphglitch

    DMA is the method by which memory is copied from one spot to another–in this case, from main memory to vram. It also has something to do with when that data is copied, but I think what you’re asking is if we loaded rooms before the player got to the door or not; whether or not we streamed room data. In that case, we’re talking about moving memory from the disc to main memory, not to vram. The PS2 bus to the GS is very fast, so the engine tech automatically streams texture data as necessary every frame.

    But, to answer your question, we did not stream room data. It could be pretty easily done, but I didn’t see much point: the load time between rooms was very short and it’s pretty standard for this type of game to load from room to room, so I just did that. That’s another advantage to this genre; almost any other type of game would require a more complex loading system.

    Even God of War, which is a fixed camera game, used a dynamic loading system to avoid load pauses between rooms, but the side-effect of that approach is that the rooms must fit in a smaller memory budget (you need to be able to store two rooms in memory at any time) and the rooms themselves have to be designed with turns and curves in them so that you can never see more than two rooms (including the room you are in) at once. That game is an action title and is very fast-paced, so it was the correct decision, I think. But for a horror game where the pace is much slower, small loads are hardly even noticed by the player and greatly simplify the engine code.
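    The memory trade-off between the two loading schemes can be made concrete with some toy numbers (illustrative only, not real PS2 budgets):

```cpp
#include <cassert>

// Illustrative numbers only; real PS2 budgets were far tighter and
// split between main RAM and VRAM.
constexpr int kRoomDataBudget = 16 * 1024 * 1024; // bytes available for rooms

// Per-room loading (what our demo did): one room resident at a time,
// so a single room may use the entire budget.
constexpr int perRoomLimit = kRoomDataBudget;

// God of War-style streaming: the current room and the next one must
// both be resident, so each room gets at most half the budget -- and
// the level design has to guarantee you never see more than two rooms.
constexpr int streamingRoomLimit = kRoomDataBudget / 2;
```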

  4. Yeah that was my question.
    Thanks for answering.

    Did you guys have door animations like Resident Evil?

    I remember reading an interview about the N64 version of RE2 and how there wasn’t a need for door animations in that version, but they were kept anyway.

    About streaming, wasn’t Soul Reaver one of the first console games to stream the world as you moved around?

  5. The movie in Italian is “Stazione Termini”, as Termini is the name of a place in Rome, where there is the station. So the correct English title is “Termini Station”.

  6. twitter.com/gamedreamer
    Great article! Really interesting for a horror/game design buff like myself. Any chance of releasing that code as an open-source project or stand-alone executable any time down the road? I’d love to play around with it and get a hands-on insight into how you overcame certain obstacles as far as camera angles and lighting go. Also, did you use mostly a fixed camera throughout or did you utilize cameras attached to a spline that followed a specific path/the player?

  7. Also, did you use mostly a fixed camera throughout or did you utilize cameras attached to a spline that followed a specific path/the player?

    Sounds like you read my Game Developer article. We used fixed and tracking cameras for this, only because we were short on time. Spline cameras would be the next logical step, but to do them right you really need to have multiple splines: one for movement and one for tracking. And it gets complicated when you allow the camera to turn around; see Silent Hill 3’s awkward front-facing follow cam.

    Any chance of releasing that code as an open-source project or stand-alone executable any time down the road? I’d love to play around with it and get a hands-on insight into how you overcame certain obstacles as far as camera angles and lighting go.

    I can’t release the source as I don’t have it any more (it is property of my old employer anyway). It wouldn’t do you much good because it’s all based on the proprietary engine that we used. Most of the tricks are detailed here: use cuts between cameras to hide changes in lighting and shadows for the characters (lighting for the environment was burned in). Laying out collision volumes to trigger camera cuts is pretty hard, but the rest of the stuff just falls out. The other thing I spent some time on was getting the Devil May Cry control stick recalibration to feel right.
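    For the curious, here is roughly what that Devil May Cry-style stick recalibration amounts to (a sketch with invented names, not our actual code): the stick is interpreted relative to the camera’s yaw, but the yaw is latched at the moment the push begins, so a camera cut mid-run doesn’t change the character’s heading until the stick returns to neutral.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stick mapper. The reference yaw used to interpret the
// stick is latched when a push starts and only recalibrated while the
// stick is at rest, so camera cuts don't make the character veer.
struct StickMapper {
    float referenceYaw = 0.0f; // camera yaw used to interpret the stick
    bool  stickHeld    = false;

    // Call every frame; returns the world-space heading in radians.
    float update(float cameraYaw, float stickX, float stickY) {
        bool neutral = (stickX * stickX + stickY * stickY) < 0.01f;
        if (neutral) {
            stickHeld    = false;
            referenceYaw = cameraYaw;  // recalibrate while at rest
            return referenceYaw;
        }
        if (!stickHeld) {
            stickHeld    = true;
            referenceYaw = cameraYaw;  // latch yaw when the push begins
        }
        return referenceYaw + std::atan2(stickX, stickY);
    }
};
```

    The effect is that holding “up” keeps the character running in the same world direction across a cut, and the controls only re-orient to the new angle once the player lets the stick settle.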

    Did you guys have door animations like Resident Evil?

    No, both because it would have been lame to copy that series so directly and because our load times were very quick (we were using placeholder art, after all). In a real game I probably would stream textures off the disc as you approach the door and then load the geometry when the door is actually opened; textures are more likely to be shared across rooms and can be loaded in blocks easily.

    About streaming, wasn’t Soul Reaver one of the first console games to stream the world as you moved around..

    Soul Reaver came out in 1999 in the UK and in 2000 in the US. It’s predated by a lot of games that did this, including Driver and Silent Hill. They all hit within a year or two of each other, though; it was tech that a lot of people were working on. Of course, I am sure that it was done much earlier on the PC.

  8. Thanks for answering my questions, Chris.

    It’s funny, I didn’t realize you were the one that wrote that article in GDMag, haha. It was a good read too.

    One more little inquiry for you though if you don’t mind: When you say that the lighting was “burned in”, do you mean that you used a GIS to calculate the lightmaps (and falloff, volume, etc.) and then compiled the levels each time you wanted to see the results?

  9. > GameDreamer

    I don’t know what GIS is, but when I say “burned in” I mean that the artist set up all the lights in the 3D tool (3DSMax) and then used it to calculate vertex colors for all the geometry. So the background appears to be lit but in fact there’s no runtime lighting calculation that is necessary for it. Those same lights still get exported but I only apply them to the character, and then only four at a time (selected by the current camera). The drawback to this approach is that the background can’t be lit in real time (well, it can, but the lighting won’t look correct), which might have been a problem down the road if we decided to have a flashlight or something.
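    A toy version of that offline bake (simplified to a scalar intensity with inverse-square falloff; normals, color, and occlusion are omitted, and all names are invented for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical "burned in" vertex lighting: an offline pass evaluates
// every authored light at every background vertex and stores the result
// as a vertex color, so the background needs no runtime lighting math.
struct Vec3 { float x, y, z; };

struct PointLight {
    Vec3  pos;
    float intensity; // scalar for simplicity; a real tool stores RGB
};

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Offline bake: sum simple inverse-square falloff over all lights,
// then clamp like a vertex color channel.
std::vector<float> bakeVertexColors(const std::vector<Vec3>& verts,
                                    const std::vector<PointLight>& lights) {
    std::vector<float> colors;
    for (const Vec3& v : verts) {
        float c = 0.0f;
        for (const PointLight& l : lights) {
            float d = dist(v, l.pos);
            c += l.intensity / (1.0f + d * d);
        }
        colors.push_back(c > 1.0f ? 1.0f : c);
    }
    return colors;
}
```

    The same authored lights are exported for the character, but only four of them (chosen per camera) are ever applied at runtime, which is exactly why a flashlight would have broken the scheme: the baked background can’t react to a new light.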

  10. Hi,
    Love the article here, good information! I’m curious, how did you do the film grain look? I’m trying to figure out how to do these kinds of effects in OpenGL, and it looks great from those screenshots. I know this was from the PS2, but maybe you can point to an online article on how to do that? I suspect it’ll involve shaders, but I want to do stuff like this for OpenGL ES (on the iphone for a 2D game) which has no programmable shaders. But then, I’m not sure if the PS2 had shaders, at least as they are known now anyway. Maybe a particle system or something (though I guess that might be wasteful, hmm)? Thanks for any info.

    Also, I did not know that it would be that hard to pull off RE1 style camera switches. Honestly I never liked them (preferred SH1 style myself), but I wouldn’t have RE1-3 any other way now 😛 It is also interesting about the light sources too, I hadn’t thought about that, but it does make a lot of sense if you want each angle to have nice lighting. Not very experienced in 3D programming but I definitely enjoyed reading about this! 🙂
