GDC: Real Time Movies in Resident Evil 4

Here are my notes from the GDC talk titled Real Time 3D Movies in Resident Evil 4. The talk was a technical discussion of how the artists at Capcom created the movie content for Resident Evil 4. Consequently, its target audience is not really RE fans, which is why I haven’t posted it until now.

The speaker, Yoshiaki Hirabayashi, is a lead artist at Capcom. He began his talk by discussing why Capcom decided to employ realtime movies for Resident Evil 4. Pre-rendered FMV sequences are not time-efficient, he explained, because the rendering process takes so long that even minor tweaks cannot be made easily. Realtime movies were also preferable because they provided more flexibility and integrated seamlessly with the rest of the game.

Hirabayashi explained that the team wanted to fuse the action of the game with the cutscenes, and thus decided to make some cutscenes interactive via the action button. Most games simply interrupt the game experience when a cutscene comes along, but the Resident Evil 4 team wanted to keep people engaged. Making cutscenes interactive forced the player to pay attention, which was part of their goal.

Hirabayashi then shifted gears and began talking about the elements of a good realtime cutscene. He listed the following elements:

  • Smart use of time
  • Believability, including using secondary motion to make animations realistic
  • Appealing characters, even at the expense of realism
  • Intelligent use of CPU and GPU resources: swapping textures and models during cuts, et cetera.

Hirabayashi also described the work environment that the team employed. Game artists typically rely on programmers to put their work into the game, but this approach is slow and cumbersome. For Resident Evil 4, the team built a web server that could manage game assets and automatically convert animated cutscenes from Softimage to the game format. This allowed the artists to iterate quickly on their work without involving a programmer, and it moved a lot of work that programmers normally do onto the graphic artists, which saved time. The system allowed the graphic designers to solve problems like memory constraints themselves, and resulted in higher-quality work overall.

Hirabayashi noted that teams on other game projects typically spend 30% of their time creating scenes, 27% of their time tweaking scenes, and 43% of their time converting scenes to the game format. Under the web server system, the Resident Evil 4 team was able to put much more time into tweaking: 25% of their time went to creation, 50% to tweaking, and 25% to conversion.
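To make the pipeline idea concrete, here is a minimal sketch of the kind of automated conversion loop such a server might run. Everything here is my own guess at the shape of the system: the `convert_scene` tool, the folder layout, and the `.scn` extension check are all assumptions, not anything Capcom showed.

```cpp
// Hypothetical sketch of an automated Softimage-to-game conversion loop.
// Capcom's actual tooling was not shown; names and paths are invented.
#include <chrono>
#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <string>
#include <thread>
#include <unordered_map>

namespace fs = std::filesystem;

int main() {
    const fs::path dropFolder = "cutscenes/softimage";  // artists export here
    const fs::path outFolder  = "cutscenes/game";       // game loads from here
    std::unordered_map<std::string, fs::file_time_type> seen;

    for (;;) {
        for (const auto& entry : fs::directory_iterator(dropFolder)) {
            if (entry.path().extension() != ".scn") continue;  // Softimage scene
            auto stamp = fs::last_write_time(entry);
            auto key   = entry.path().filename().string();
            if (seen.count(key) && seen[key] == stamp) continue;  // unchanged
            seen[key] = stamp;

            // "convert_scene" stands in for whatever exporter the team used.
            std::string cmd = "convert_scene " + entry.path().string() + " " +
                              (outFolder / entry.path().stem()).string() + ".bin";
            std::cout << "Converting " << key << "...\n";
            std::system(cmd.c_str());
        }
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}
```

The point of a loop like this is that an artist re-exports from Softimage and sees the result in-game minutes later, with no programmer in between.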

Changing gears, Hirabayashi then went on to talk about facial animation in RE4. Ashley’s face had 3,500 polygons, which was about average for each character. They created 36 expressions for each character (implemented via morph targets), 1.5 times more than in any game they had done before. To manage these expressions efficiently, they created a system that allowed them to package different groups of expressions depending on the scene. Given 30 slots for expressions and 25 basic expressions shared by every scene, they were able to select 5 scene-specific expressions for each scene, which meant they only loaded the data they needed. Interestingly, they animated all of the facial expressions by hand after being disappointed with the results of motion capture and phoneme-based animation generation.
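As a rough illustration of the packaging idea, here is a minimal sketch using the numbers from the talk: 30 slots, 25 basic expressions, and 5 scene-specific ones. Only the counts come from the talk; the structure and names are my own.

```cpp
// Illustrative per-scene expression packaging with morph targets.
// kSlots/kBasicCount are from the talk; everything else is invented.
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

struct MorphTarget {
    // Per-vertex offsets from the neutral face, applied as weighted deltas.
    std::vector<float> deltas;  // 3 floats per vertex
};

constexpr int kSlots      = 30;  // expression slots available at runtime
constexpr int kBasicCount = 25;  // expressions shared by every scene

struct ExpressionPackage {
    std::array<const MorphTarget*, kSlots> slots{};

    void build(const std::vector<MorphTarget>& basic,
               const std::vector<MorphTarget>& sceneSpecific) {
        assert(basic.size() == kBasicCount);
        assert(sceneSpecific.size() <= kSlots - kBasicCount);  // at most 5
        int i = 0;
        for (const auto& m : basic)         slots[i++] = &m;
        for (const auto& m : sceneSpecific) slots[i++] = &m;  // scene-only data
    }
};

// Apply one weighted expression on top of the neutral mesh.
void applyExpression(std::vector<float>& vertices,
                     const MorphTarget& target, float weight) {
    for (std::size_t v = 0; v < vertices.size(); ++v)
        vertices[v] += weight * target.deltas[v];
}
```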

The “package of relevant data” concept was extended beyond facial animation. For each scene, the artists were able to choose among low, medium, and high quality models and textures. Using both a high quality model and a high quality texture cost around 400k per character. However, the ability to mix and match these assets allowed them to customize the level of detail for each scene. If a scene had a lot of lighting but did not focus on the characters up close, they could use a high poly model (good for lighting calculations) with a middle-quality texture. Or, if there was an extreme close-up with little animation, a low poly model with a high resolution texture would produce good results. Managing these packages of characters let them adjust the relative complexity of each scene, and thus choose between a few highly detailed characters or several simpler ones. Interestingly, they also modified textures depending on the situation: they found, for example, that six different eye textures were necessary to make the characters’ eyes look correct in every scene on a TV.
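Here is a small sketch of how such a mix-and-match package might be represented. The cost table is invented for illustration (only the roughly 400k high/high figure comes from the talk), and the structure is my own guess, not Capcom’s.

```cpp
// Illustrative "resource package": each cutscene independently picks a model
// quality and a texture quality per character. Costs are made up, except that
// high model + high texture lands near the ~400k figure from the talk.
#include <cstdio>

enum class Quality { Low, Mid, High };

struct CharacterPackage {
    Quality model;
    Quality texture;
};

// Hypothetical memory cost table, in kilobytes.
int costKB(Quality q, bool isModel) {
    switch (q) {
        case Quality::Low:  return isModel ? 60  : 40;
        case Quality::Mid:  return isModel ? 120 : 80;
        case Quality::High: return isModel ? 240 : 160;  // high+high ~= 400k
    }
    return 0;
}

int packageCostKB(const CharacterPackage& p) {
    return costKB(p.model, true) + costKB(p.texture, false);
}

int main() {
    // A lighting-heavy wide shot: high-poly model, mid-quality texture.
    CharacterPackage wideShot{Quality::High, Quality::Mid};
    // A static extreme close-up: low-poly model, high-resolution texture.
    CharacterPackage closeUp{Quality::Low, Quality::High};
    std::printf("wide shot: %dKB, close-up: %dKB\n",
                packageCostKB(wideShot), packageCostKB(closeUp));
}
```

The budget freed by a cheaper package in one slot can go toward more characters on screen, which is exactly the trade-off described above.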

Hirabayashi also discussed a few of the lighting and visual effects techniques used in RE4. Projection lighting is a form of projective texturing where a 32×32, 64×64, or 128×128 texture is mapped over the light frustum, making it look like there is geometry between the light and the character. A good example of this is the knife fight with Krauser, where the characters appear to be under a grid-shaped ceiling with a light behind it. They also used realtime-generated textures for reflections, and were able to animate depth of field by precomputing a blurry image and then shifting it slightly as the scene progressed. This approach worked well when most of the scene was not moving, such as during dialog scenes.
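For those curious, here is a bare-bones sketch of the math behind projective texturing: transform a world-space point into the light’s clip space, then remap to [0,1] texture coordinates to sample the small projected texture. The talk showed no code, so the types and function here are minimal stand-ins of my own.

```cpp
// Projective-texturing math behind "projection lighting": project a world
// point through the light's view-projection matrix to get UVs into the
// small (e.g. 64x64) projected texture. Types are stand-ins, not engine API.
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;  // row-major

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Returns true and writes (u, v) if the point lies inside the light frustum.
bool projectIntoLight(const Mat4& lightViewProj, const Vec4& worldPos,
                      float& u, float& v) {
    Vec4 clip = mul(lightViewProj, worldPos);
    if (clip[3] <= 0.0f) return false;           // behind the light
    float x = clip[0] / clip[3];                 // perspective divide -> [-1,1]
    float y = clip[1] / clip[3];
    if (x < -1 || x > 1 || y < -1 || y > 1) return false;  // outside frustum
    u = 0.5f * x + 0.5f;                         // remap to texture space
    v = 0.5f * y + 0.5f;
    return true;
}
```

Anything the projected texture darkens looks like shadow cast by geometry that is never actually modeled, which is why a single small texture can sell the grid-ceiling effect in the Krauser fight.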

Overall, it was a pretty interesting lecture for game developers. I am not sure how much regular gamers care about this stuff, though.

6 thoughts on “GDC: Real Time Movies in Resident Evil 4”

  1. Just wanted to thank you one more time for posting this!
Being a developer myself, and after playing through the game, I was looking for extra bits of info like this.

  2. Glad you found it useful. I am sure I missed a bit of detail with my notes, but the talk was pretty interesting. The best part was definitely the “resource package” description.

  3. http://greggman.com
It’s funny to me, though not unexpected, that Hirabayashi emphasized not needing programmers to view their cutscenes.

    >Game artists typically rely on programmers to put their work into the game

    should be changed to

    Game artists typically rely on programmers to put their work into the game *IN BACKWARDS JAPANESE GAME COMPANIES*

Western companies have had automated systems for 10+ years now to give the artists complete freedom, but most Japanese companies still do things by hand, where a programmer is required for the artist to see his art. At Sega there was one programmer assigned to that task for each team. Once a day, he would take all the art the artists gave him and plug it into the game so they could see it the next day.

  4. I wish that most Western companies had their data pipeline down to a science, but I think the reality is that very few do. I’d guess that the top 10% of companies have this problem really nailed, while the rest of them are still exporting data by hand and using programmers to implement it in the game.

    That said, I can easily see how Japanese companies might be even further behind the curve, especially considering that the cost of labor alone provides less impetus to automate.

  5. http://greggman.com
    It’s probably worse than you imagine in Japan. For example, in the U.S. we’ve had products like Doom with level editors since 1993. (and of course many products before that had non-public level editors). Those editors, while not for cutscenes, let artists and game designers make complete levels without programmers.

You can show WorldCraft or another level editor to a typical Japanese developer here in Japan and they will be shocked. It never occurred to them not to build levels by typing 3D coords by hand into a C file. Some companies are one step above that: some programmer made a macro for SI or Maya to spit those positions out into a C file directly, but it’s still a C file that has to be compiled into the game.

That’s finally changing quickly. XSI has been touring the country showing off the tools used for Half-Life 2, and Renderware has been pushing Renderware Studio for about a year and a half, but it’s still pretty sad in general.

  6. Holy crap, I didn’t realize it was that bad! How can they get anything done!?

    I’m amazed. How do they get any level of quality out of such a system?? Was it similar back in the SNES days?

    Chris
