Here are my notes from the GDC talk titled Real Time 3D Movies in Resident Evil 4. The talk was a technical discussion of how the artists at Capcom went about creating the movie content for Resident Evil 4. Consequently, its target audience is not really RE fans, which is why I haven’t posted it until now.
The speaker, Yoshiaki Hirabayashi, is a lead artist at Capcom. He began his talk with a discussion of why Capcom decided to employ realtime movies for Resident Evil 4. Pre-rendered FMV sequences are not time-efficient, he explained, because the rendering process takes so long that minor tweaks cannot be made easily. Realtime movies were also preferable because they provided more flexibility and integrated seamlessly with the rest of the game.
Hirabayashi explained that they wanted to fuse the action of the game with the cutscenes, and thus decided to make some cutscenes interactive with the action button. Most games simply interrupt the game experience when a cutscene comes along, but the Resident Evil 4 team wanted to keep people engaged. Using interactive cutscenes forced the player to pay attention, which was part of their goal.
Hirabayashi then shifted gears and began talking about the elements of a good realtime cutscene. He listed the following as elements:
- Smart use of time
- Believability, including using secondary motion to make animations realistic
- Appealing characters, even at the expense of realism
- Intelligent use of CPU and GPU resources: swapping textures and models during cuts, et cetera.
Hirabayashi also described the work environment that the team employed. Game artists typically rely on programmers to put their work into the game, but this approach is slow. For Resident Evil 4, the team built a web server that could manage game assets and automatically convert animated cutscenes from Softimage to the game format. This allowed the artists to iterate quickly on their work without involving a programmer, and it moved a lot of work that programmers normally do over to the graphic artists, which saved time. The system also allowed the graphic designers to solve problems like memory constraints themselves, and it resulted in higher-quality work overall. Hirabayashi noted that artists on other game projects typically spend 30% of their time creating scenes, 27% tweaking scenes, and 43% converting scenes to the game format. Under the web server system, the Resident Evil 4 team was able to put much more time into tweaking: 25% of their time went to creation, 50% to tweaking, and 25% to conversion.
Changing gears, Hirabayashi then went on to talk about facial animation in RE4. Ashley’s face had 3500 polygons, which was about average for the characters. They created 36 expressions for each character (implemented via morph targets), which was 1.5 times as many as in any other game they had done. To manage these expressions efficiently, they created a system that allowed them to package different groups of expressions depending on the scene. Given 30 slots for expressions and 25 basic expressions, they were able to select 5 unique expressions for each scene. This allowed them to load only the data they needed. Interestingly, they animated all of the facial expressions by hand after being disappointed with the results of motion capture and phoneme-based animation generation.
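The slot arithmetic above can be sketched in a few lines. This is a minimal illustration of the idea, not Capcom's actual system; all names here are hypothetical.

```python
# Hypothetical sketch of per-scene expression packaging: 25 basic
# expressions are always available, and each scene adds up to 5
# scene-specific picks, filling the 30 slots. Names are illustrative.

BASIC_EXPRESSIONS = [f"basic_{i:02d}" for i in range(25)]  # shared set
SLOT_COUNT = 30

def build_expression_package(scene_specific):
    """Combine the shared basics with up to 5 scene-unique expressions."""
    free_slots = SLOT_COUNT - len(BASIC_EXPRESSIONS)
    if len(scene_specific) > free_slots:
        raise ValueError("scene-specific expressions exceed free slots")
    return BASIC_EXPRESSIONS + list(scene_specific)

package = build_expression_package(["smirk", "wince", "gasp", "glare", "sob"])
print(len(package))  # 30 slots filled for this scene
```

The payoff is that only the 30 expressions in the active package need to be resident in memory for a given scene, rather than every expression for every character.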
The “package of relevant data” concept was extended beyond facial animation. For each scene, the artists were able to choose between low, medium, and high quality models and textures. If they used both a high quality model and a high quality texture, each character cost around 400k. Having the ability to mix and match these assets, however, allowed them to customize the level of detail for each scene. If a scene had a lot of lighting but did not focus on the characters up close, they could use a high poly model (good for lighting calculations) with a middle-quality texture. Or, if there was an extreme close-up with little animation, a low poly model with a high resolution texture would produce good results. Managing these packages of characters allowed them to adjust the relative complexity of each scene, and thus choose between a few highly detailed characters or several simpler characters. Interestingly, they also modified textures depending on the situation. They found, for example, that six different eye textures were necessary to make the characters’ eyes look correct in all scenes on a TV.
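The mix-and-match budgeting might look something like the following. The ~400k figure for a high model plus a high texture comes from the talk (my notes don't record the unit, so I've treated it as an abstract cost); the other per-tier costs are placeholders I've invented for illustration, not Capcom's numbers.

```python
# Hypothetical sketch of per-scene character asset selection.
# Only the high/high total (~400k) is from the talk; the individual
# tier costs below are assumed for illustration.

MODEL_COST = {"low": 50_000, "medium": 150_000, "high": 300_000}   # assumed
TEXTURE_COST = {"low": 20_000, "medium": 50_000, "high": 100_000}  # assumed

def character_cost(model_quality, texture_quality):
    """Cost of one character given its chosen model and texture tiers."""
    return MODEL_COST[model_quality] + TEXTURE_COST[texture_quality]

# High model + high texture: the ~400k case mentioned in the talk.
print(character_cost("high", "high"))    # 400000
# Lighting-heavy wide shot: high poly model, middle-quality texture.
print(character_cost("high", "medium"))  # 350000
# Static extreme close-up: low poly model, high resolution texture.
print(character_cost("low", "high"))     # 150000
```

Summing these per-character costs against a scene budget is what lets the artists trade a few detailed characters for several simpler ones.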
Hirabayashi also discussed a few of the lighting and visual effects techniques used in RE4. Projection lighting is a form of projective texturing where a 32×32, 64×64, or 128×128 texture is mapped over the light frustum, making it look like there is geometry between the light and the character. A good example of this is the knife fight with Krauser, where the characters appear to be under a grid-shaped ceiling with a light behind it. They also used textures generated in real time for reflections, and were able to animate depth of field by precomputing a blurry image and then shifting it slightly as the scene progressed. This approach worked well when most of the scene was not moving, such as during dialog scenes.
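The core of projection lighting can be sketched with a toy example. The talk didn't go into the math, so this assumes a simple orthographic light for clarity, and fakes the grid texture procedurally so the sketch stays self-contained; a real implementation would project through the light's perspective frustum and sample an actual texture.

```python
# Minimal sketch of projection lighting (a projective "gobo" texture).
# Assumes an orthographic light looking along -Z; only the point's XY
# matters. All parameters here are illustrative, not from the talk.

def gobo_sample(u, v, size=32):
    """Sample a procedural 32x32 grid 'texture': dark grid lines
    every 8 texels, bright cells in between."""
    texel_u, texel_v = int(u * size) % size, int(v * size) % size
    return 0.1 if texel_u % 8 == 0 or texel_v % 8 == 0 else 1.0

def projected_light(point, light_min=(-2.0, -2.0), light_max=(2.0, 2.0)):
    """Map a world-space XY point into the light's [0,1] UV space and
    attenuate the light by the gobo texture at that UV, so geometry
    appears to sit between the light and the character."""
    u = (point[0] - light_min[0]) / (light_max[0] - light_min[0])
    v = (point[1] - light_min[1]) / (light_max[1] - light_min[1])
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return 0.0  # outside the light frustum entirely
    return gobo_sample(u, v)

# A point under a grid line gets shadowed; a point between lines is lit.
print(projected_light((0.0, 0.0)))  # 0.1 (on a grid line)
print(projected_light((0.2, 0.2)))  # 1.0 (between grid lines)
```

The appeal of the technique is that a tiny texture stands in for real occluding geometry, which is much cheaper than modeling and shadowing an actual grid ceiling.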
Overall, it was a pretty interesting lecture for game developers. I am not sure how much regular gamers care about this stuff though.