Stephen Kozak

Video

Base

Full Name

Stephen Kozak

Academic Profile

Summary

Creating hyper-realistic CG characters: replacing actors with a CG likeness convincing enough that viewers believe they are watching a real person, and applying the actor's performance in dangerous situations, stunt work, or shots that can't be captured in real shoots.

Long description

Stephen has been a VFX supervisor, technical director, CG artist, animator, compositor and designer in the commercial and VFX industry. He has over 15 years of award-winning experience as well as 5 years of postsecondary teaching under his belt. He has worked with major clients such as Alliance Atlantis, IMAX, Corus Entertainment, Coca-Cola, Disney, Paramount and many others, and has extensive knowledge of many software platforms including Maya, 3ds Max, ZBrush, Substance Painter, Marvelous Designer, Unreal and 3D scanning, to name just a few. Currently, he works with industry partners, Sheridan faculty and students to create high-quality content for tech markets and develop new strategies for current technologies. He is also doing extensive research and development work on virtual humans and digital-double creation for virtual reality, gaming and broadcast.

Type of institution

College

Address

Pinewood Toronto Studios, Commissioners Street, Toronto, ON, Canada

Institution

Screen Industries Research and Training Centre

I have a knowledge mobilization grant.

Yes

Website

http://www.sirtcentre.com

Industry

Information and cultural industries

Publishing industries (except Internet), Motion picture and sound recording industries, Broadcasting (except Internet), Data processing, hosting, and related services, Other information services

Video Transcript

Transcript (English)

Introduce your team

Hi, my name is Stephen Kozak and I’m the visual effects/CG lead at the SIRT Centre at Pinewood Toronto Studios.
 
Describe your research

My job here is to take care of any CG elements used in the various scenarios we work on. My research is into virtual humans, and I can take you through the steps it takes to create a character: from the initial conception, through import into the program, to the animation and delivery of the performance, whether in cinema or in virtual reality.

This is the capture stage, where our actors have been brought onto the motion capture stage and we have attached facial rigs to them and dotted up their faces with markers. This helps define the range of motion the capture is going to target. Our actors act out the scenes while we capture their facial performance, so later on we’ll be able to apply it to the animations.

This is the pre-visualization stage, where we’ve imported the motion capture performance, imported the characters, attached a bone system to them and defined the facial blend shapes, and now we’re attaching the performance to the characters and acting out the scenes.
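The blend-shape step described here boils down to a weighted sum of sculpted offsets over a neutral face mesh, with the captured performance driving the weights frame by frame. A minimal sketch of that idea, using NumPy; the function and array names are illustrative, not from any specific tool:

```python
import numpy as np

def apply_blend_shapes(neutral, deltas, weights):
    """Deform a neutral face mesh by a weighted sum of blend-shape deltas.

    neutral: (V, 3) array of vertex positions for the neutral pose.
    deltas:  (S, V, 3) array; deltas[i] is shape i's offset from neutral.
    weights: (S,) per-shape weights, typically in [0, 1], driven
             per-frame by the captured facial performance.
    """
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Sum over the shape axis: result is the deformed (V, 3) mesh.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: one vertex, two shapes ("smile", "jaw open").
neutral = np.array([[0.0, 0.0, 0.0]])
deltas = np.array([[[1.0, 0.0, 0.0]],    # "smile" pushes the vertex +x
                   [[0.0, -1.0, 0.0]]])  # "jaw open" pushes it -y
weights = np.array([0.5, 0.25])
print(apply_blend_shapes(neutral, deltas, weights))
# vertex ends up at (0.5, -0.25, 0.0)
```

In production, a solver maps the tracked facial markers to these weight curves; the deformation itself stays this simple linear combination.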

This is the compiled project, where we have a real scene, our camera moves, our lighting information and our different levels of detail: a high-res character, a medium-res character and a low-res character, all with real-time performances. As you notice, there are different levels of detail in the skin shaders and hair shaders, and limitations in the actual models themselves.

On the high-res model we have extra shaders such as subsurface scattering, specular shaders and different normal-map shaders. On our secondary character we only have a few shaders; we’re missing a lot of the extras that help the performance and the way the character looks. Our third model is basically a game model that can be generated quickly, and we apply the same HumanIK rig system to drive them all, which creates a similar range of motion.
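The three-tier setup described here is a standard level-of-detail (LOD) scheme: each tier pairs a mesh resolution with the shader features it still carries. A small sketch of how an engine might pick a tier; the tier names, distance thresholds and feature labels are hypothetical, not from any particular engine:

```python
# LOD tiers, nearest first: (max distance in metres, tier name, shader features).
# Thresholds and feature names are illustrative assumptions.
LOD_TIERS = [
    (5.0, "high", {"subsurface_scattering", "specular", "normal_map"}),
    (20.0, "medium", {"specular", "normal_map"}),
    (float("inf"), "low", {"normal_map"}),
]

def select_lod(distance):
    """Return (tier name, shader features) for a character at `distance`."""
    for max_dist, tier, features in LOD_TIERS:
        if distance <= max_dist:
            return tier, features
    raise ValueError("unreachable: the last tier covers all distances")

print(select_lod(3.0)[0])   # high: full shader set, e.g. close-ups
print(select_lod(50.0)[0])  # low: game-style model for distant shots
```

The point is that all three tiers are driven by the same rig and performance data; only the mesh density and shader budget change with distance.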

Explain its significance

In film this would be a character double: being able to replace an actor with a CG likeness that convinces the viewer it’s a real person, and to apply their performance in dangerous situations, stunt work, or things we can’t capture in real shoots. We can apply their CG likeness onto the characters afterwards.

Again, it’s about creating believable shaders and believable people, and getting a performance that creates some sort of emotional response so the viewer connects with these characters. I think, in the end, that’s what we’re trying to do with our virtual human stream: create an emotional response in the person who interacts with these experiences.