
Sunday, March 25, 2012

Stereoscopic Filmmaking

Autodesk Whitepaper
The Business and Technology of Stereoscopic Filmmaking

This whitepaper examines the S3D business case, the current state of the industry, and the technical and creative considerations faced by those looking to make compelling stereoscopic movies. The reader is also given background information on stereopsis and perception, which should strengthen his or her understanding of the science underlying stereoscopy. It is hoped that increased knowledge of the science and technology of S3D will empower the reader to create effective and compelling movie entertainment.
The Business Case
It is clear that S3D productions have a strong potential to generate revenue and invigorate the box office. Both attendance rates and average ticket prices tend to be higher for 3D screenings. On average, animated movies such as Meet the Robinsons earn two to three times the box-office receipts – per screen and per theatre – if shown in S3D.
There are additional benefits to production companies and distributors. With HDTV and surround sound in an increasing number of homes (25% in the US), S3D is a means of drawing audiences away from home and back into the movie theatre. A recent report by Screen Digest indicates that, based on past releases, an exhibitor can expect to make additional revenue of about US$10,000 per S3D movie release and could expect to generate a profit by the third release.
While as yet there are not enough S3D films being released to solidify the S3D business and not enough theaters to support wide or simultaneous releases, this scenario is rapidly changing. Not only is S3D production piggybacking the adoption of digital-cinema projection technologies, S3D’s momentum is helping those same display technologies to proliferate across an increasing number of theaters.
Aiding this phenomenon is the fact that stereoscopic projection technology is only marginally more expensive than standard digital projection systems. Likewise, the fact that more film studios are planning stereo versions of their upcoming releases is encouraging theatres to adopt the new stereoscopic projection technology.
New, affordably priced stereo camera rigs and other tools are also being developed that will make S3D production accessible to independent producers on limited budgets. At the present time, however, most S3D productions are simply stereo versions of computer graphics (CG) animated movies because, of course, the medium lends itself naturally to the creation of compelling stereo effects. However, the number of planned live-action stereo projects is significant and shows the support studios have for the format. Some of the live-action S3D features in production include Avatar, Horrorween, and a remake of The Stewardess (Stewardesses 4D).
Studios like Disney are showing a particularly strong interest in the S3D format. Not only has Disney made the largest number of S3D animated projects to date, the company is investigating the possibility of producing films in S3D natively. Recently, Disney shot, produced, and released Hannah Montana/Miley Cyrus: Best of Both Worlds Concert Tour exclusively in S3D. The Hannah Montana film was released on Superbowl weekend and, despite the curious timing, averaged a per-screen gross of over US$45,000.
However another S3D film released that same weekend, U2 3D, did not fare as well. The limited number of available S3D-capable theatres, and competition with Hannah Montana for those theatres, meant that the film failed to yield expected results.
There were an estimated 20 S3D projects in the works at the time this whitepaper was written, including Avatar (James Cameron), Battle Angel (James Cameron), Monsters vs Aliens (DreamWorks), and Tintin (three films by Peter Jackson and Steven Spielberg). Filmmakers have obviously discovered that using depth as part of the storytelling process—from design to development through production—gives them the opportunity to develop an entirely new creative experience.
The stereoscopic cinema renaissance is upon us.
Fuelled by a convergence of economic need and technical possibility, more and more studios are releasing animated and live-action feature films in stereoscopic 3D (S3D) format.
Stereoscopy, or stereoscopic imagery, uses the characteristics of human binocular vision to create the illusion of depth, making objects appear to be in front of or behind the cinema screen. The technique relies on presenting the right and left eyes with two slightly different images which the brain automatically blends into a single view. Subtle right-left dissimilarities in the images create the perception of depth and can be manipulated to creative advantage. Therein lies the art of stereoscopic filmmaking.
Background: Stereopsis, Perception, and the Real World
Stereopsis is the ability of the brain to perceive depth and relief from binocular vision. Reproducing it convincingly is not easy, however, and the task is made all the more difficult by people’s lifelong familiarity with 2D images and with deriving depth perception from monocular cues on traditional, planar displays. Those cues include light and shade, relative size, aerial perspective, motion parallax (a visual cue created by movement whereby nearby objects move farther across the field of view than more distant objects), and, most importantly, occlusion or interposition (objects partially covering other objects) and perspective. All these effects also play key roles in stereopsis.
S3D cinema presents the viewer’s eyes with two separate images to create the perception of depth. When looking at an object in a stereo image pair, the viewer’s eyes will move to converge in front of, at, or behind the screen plane depending on the degree of horizontal disparity between the two images. However, the eyes will always focus (accommodate) on the screen plane itself. This creates an incoherence that is fundamental to how we perceive objects in S3D as opposed to how we perceive the real world: in the real world, our eyes focus and converge coherently. Looking at S3D imagery breaks this learned, habitual response and is the source of much S3D-induced discomfort. While most people can adapt to discrepancies between focus and convergence and see the illusion of depth, others are more sensitive to it and become disoriented. And if S3D material is poorly edited, viewers will be presented with sudden, unexpected onscreen changes in perceived depth inconsistent with real-world experience, and the discrepancy becomes noticeable to all.
It is therefore crucial to understand the fundamental differences between real-world and S3D perception in order to create compelling stereo experiences and avoid unpleasant results. The following concepts form the basic understanding of stereoscopy essential to any professional stereoscopic cinema production:
Figure 1. The Pyramid is an object with negative parallax and appears in viewer space (in front of the screen). The cube is an object with zero parallax and appears on the plane of the screen. The cylinder has positive parallax and appears behind the display screen. Below: The three objects as seen by the left and right eyes.
Parallax
Stereoscopic images rely on binocular vision; that is, S3D images must be seen with both eyes to appear three-dimensional. British scientist Charles Wheatstone discovered in 1838 that the mechanism responsible for human depth perception is the slight difference between the images received by our two horizontally separated eyes¹. Viewers looking at stereoscopic displays without the appropriate eyewear (passive polarizers, tinted lenses, or active shutter glasses) will see two superimposed images that appear to be out of alignment, such that objects appear to be displaced horizontally to a greater or lesser degree. This displacement between the left- and right-eye images is known as parallax. When the two images are shown simultaneously, one to each eye, parallax produces a retinal disparity, causing stereopsis, or the perception of depth. The distinction between parallax and retinal disparity is important: parallax is a measure of the (horizontal) displacement of an object at the source (e.g. on a display device), whereas retinal disparity is its effect at the destination (i.e. the eye).
¹ Contributions to the Physiology of Vision. Part the First: On some remarkable, and hitherto unobserved, Phenomena of Binocular Vision. By Charles Wheatstone, F.R.S., Professor of Experimental Philosophy in King’s College, London. June 21, 1838.
Because human eyes are separated horizontally, we favor horizontal parallax when it comes to the perception of depth. In filmed material, however, lens distortions, misalignments, or processing can introduce vertical parallax. Vertical parallax causes eyestrain and should be avoided or corrected. In the following paragraphs, parallax and disparity refer to horizontal parallax and disparity unless explicitly stated otherwise.
The position in depth (relative to the screen plane) of an object in the scene determines the amount and the kind of parallax it will have in the stereo pair.
An object is said to have zero parallax when it is placed at the same depth as the screen; its two images lie directly on top of each other and the viewer’s eyes converge at the screen plane. Objects with zero parallax appear to be at the same distance as the screen.
In S3D an object is said to have positive parallax when its parallax is greater than zero parallax—the image of the object presented to the right eye is further to the right than the image presented to the left eye—but less than or equal to the interocular distance.
Positive parallax causes the object to appear behind the screen plane; the apparent distance can extend all the way to infinity. In this limiting case, called infinite parallax, the parallax is equal to the distance between the eyes (the interocular distance), so the eyes’ axes remain parallel and the object appears to be placed at infinity.
Past the point of infinite parallax we have divergent parallax, which occurs when the parallax value is greater than the interocular distance, causing the eyes’ axes to diverge (the right eye tries to look right while the left eye looks left). This condition never occurs in real-world vision and forces unnatural eye-muscle movements. While some viewers claim to be able to adapt to the sensation, it is recommended that strongly divergent parallax be avoided in S3D entertainment productions.
Negative parallax occurs when the axes of the viewer’s eyes converge in front of the screen, since the image presented to the left eye is further right than the image presented to the right eye, causing the object to appear to be placed between the screen and the viewer. Objects with negative parallax are said to be “in viewer space.” (See figure 2)
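To make these categories concrete, the short Python sketch below classifies a conjugate point pair by its parallax. It is illustrative only: parallax is assumed to be measured in millimetres on the display surface as the right-eye position minus the left-eye position, and the 65 mm interocular distance is a commonly assumed adult average rather than a figure from this whitepaper.

INTEROCULAR_MM = 65.0  # assumed average adult eye separation

def classify_parallax(x_left_mm, x_right_mm, interocular_mm=INTEROCULAR_MM):
    """Classify the parallax of a conjugate point pair on the screen."""
    parallax = x_right_mm - x_left_mm
    if parallax == 0:
        return "zero parallax: object appears on the screen plane"
    if parallax < 0:
        return "negative parallax: object appears in viewer space"
    if parallax < interocular_mm:
        return "positive parallax: object appears behind the screen"
    if parallax == interocular_mm:
        return "infinite parallax: object appears at infinity"
    return "divergent parallax: avoid, the eyes are forced to diverge"

print(classify_parallax(100.0, 130.0))  # positive parallax
print(classify_parallax(130.0, 100.0))  # negative parallax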
Interaxial separation
Producing stereo images requires two real or virtual cameras. The distance between the optical centers of the two lenses is called the interaxial separation (or baseline). Perceived depth is directly proportional to the interaxial separation: as the lenses get farther apart, parallax and the corresponding sense of depth increase. When the interaxial distance is larger than the interocular distance, the effect is called hyperstereoscopy and results in depth exaggeration in the scene.
The opposite effect, where the interaxial separation is smaller than the interocular distance, produces a flattening effect on the objects in the scene, and is called hypostereoscopy or cardboarding. These effects can be used creatively to adjust the S3D layout and appearance of the various parts of the scene.
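The relationship between interaxial separation and on-screen parallax can be sketched with a commonly used thin-lens approximation for a parallel, image-shifted rig. The formula, the variable names, and the example values below are illustrative assumptions, not production figures.

def screen_parallax_mm(interaxial_mm, focal_mm, convergence_m, object_m, magnification):
    """Approximate on-screen parallax of a point for a parallel, image-shifted rig.
    interaxial_mm  -- separation between the two lens axes
    focal_mm       -- lens focal length
    convergence_m  -- distance placed at zero parallax (ZPS)
    object_m       -- distance of the object from the rig
    magnification  -- sensor-to-screen enlargement factor
    """
    return (magnification * focal_mm * interaxial_mm
            * (1.0 / (convergence_m * 1000.0) - 1.0 / (object_m * 1000.0)))

# Doubling the interaxial roughly doubles the parallax (hyperstereoscopy):
print(screen_parallax_mm(20.0, 35.0, 3.0, 4.0, 300.0))  # ~17.5 mm
print(screen_parallax_mm(40.0, 35.0, 3.0, 4.0, 300.0))  # ~35.0 mm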
Screen Surround
When watching an S3D movie, whether in a theater or on a computer or television screen, the stereo window normally coincides with the screen. The vertical left and right sides and the horizontal top and bottom edges of the window are called the screen surround. When objects with negative parallax touch or are occluded by the left or right edges of the window, there is a perceptual conflict that the brain cannot resolve: the eyes see an object with negative parallax as being in front of the screen, but the stereo window, coinciding with the screen, also appears to be in front of the object (since it is obstructing it). As a result, the brain has to try to resolve two conflicting visual cues, one telling it that the object is in front of the screen and the other that it is behind it. This situation should be avoided and requires diligent production practices to ensure correct composition. If it does arise, the composition should be fixed, if possible, by moving objects backwards or away from the borders. If this is not possible, the stereo window can be moved into the theater space by blanking a side portion of each image, creating a virtual stereo window, or floating window, that is placed closer to the viewer and resolves the perceptual conflict (see figure 3 and the sketch below). This has the same effect as building a physical mask in front of the screen itself.
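Below is a minimal sketch of the blanking step that creates a floating window, assuming the frames are NumPy arrays and that a fixed strip width (chosen here arbitrarily) is enough to clear the offending object; real floating windows are often animated and angled.

import numpy as np

def apply_floating_window(left, right, strip_px):
    """Blank asymmetric strips so both window edges gain negative parallax.
    Masking the left side of the left-eye frame and the right side of the
    right-eye frame pulls the virtual stereo window into theater space."""
    left_out, right_out = left.copy(), right.copy()
    left_out[:, :strip_px] = 0    # mask the left edge of the left-eye frame
    right_out[:, -strip_px:] = 0  # mask the right edge of the right-eye frame
    return left_out, right_out

# Example: float the window forward on a pair of 1080p frames.
left = np.random.rand(1080, 1920, 3)
right = np.random.rand(1080, 1920, 3)
left_fw, right_fw = apply_floating_window(left, right, 40)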
Figure 2. When the left and right images in a pair lie directly atop one another (A), the object has zero parallax and appears on the plane of the screen. When the parallax is greater than zero but smaller than the interocular separation, the object has positive parallax and appears to be behind the screen. A particular case occurs when the parallax is equal to the interocular separation; this is called infinite parallax, and objects appear to be at infinite distance (B). When the eyes’ axes cross in front of the screen (C), negative parallax occurs. When parallax is greater than the interocular distance (D), the lines of sight diverge. This is called divergent parallax and never occurs in the real world.
ZPS and HIT
Zero Parallax Setting, or ZPS, is strongly tied to Horizontal Image Translation, or HIT. As previously described, objects with zero parallax appear to reside on the plane of the screen. HIT refers to changing the horizontal distance between the two images in a stereo pair, thus changing their parallax values to put a specified object (or series of objects) at ZPS. Because HIT affects the entire image at once, this can very easily result in divergent parallax for some other parts of the image.
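A minimal sketch of HIT on digital frames is shown below, assuming NumPy arrays and a simple wrap-around shift (a real tool would crop or pad instead). A positive shift as defined here reduces every parallax value, pulling the scene toward the viewer; a negative shift pushes it back and can drive distant objects into divergent parallax.

import numpy as np

def horizontal_image_translation(left, right, shift_px):
    """Apply HIT: shift the left frame right and the right frame left.
    Every object's parallax (x_right - x_left) is reduced by about shift_px,
    which is how a chosen object is brought to the Zero Parallax Setting."""
    half = shift_px // 2
    left_out = np.roll(left, half, axis=1)     # left-eye frame slides right
    right_out = np.roll(right, -half, axis=1)  # right-eye frame slides left
    return left_out, right_out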
Viewer Space Effects
Effects created with negative parallax reside in viewer space and are referred to as viewer space effects. Extreme negative parallax effects, such as the spear in your face from Andy Warhol’s Frankenstein (1973), are the culprits responsible for many S3D-induced headaches in early movies. Mainstream audiences, however, still react most strongly to effects that take place in viewer space. Robust perspective cues, expert stereographers suggest, are more effective and better attained with wide horizontal views than with large parallax values. There are certainly examples of beautifully executed extreme negative parallax settings, such as those used in Space Station 3D, but given that technology has solved most of the outstanding technical hurdles, extreme depth effects can now be reserved for use where appropriate – either as a narrative tool or to create specific audience experiences.
State of the Industry: Types of Stereo Production
There are various kinds of S3D production, each with its own unique challenges and benefits. Understanding them can help simplify production, improve quality and increase the popularity of the format.
The following is not intended to be a comprehensive description. S3D filmmaking is evolving at a rapid pace, and we expect the creative process to continue to evolve as the format becomes more widely adopted.
CG Movies Created Natively in S3D
CG animated features are the natural place for S3D production to develop, given the total control over the camera, environments, acting, and so on. Many recent S3D projects have been full-CG movies: Chicken Little, Meet the Robinsons, Shrek 4D, Monsters vs Aliens, and Fly Me to the Moon, to name a few. Because of these successes, many associate S3D with animated features; however, recent advancements in live-action S3D have enabled other kinds of production.
Live Action Movies Shot in Stereo
Previously, live-action movies were shot with a single camera and the S3D segments added using post-production techniques, e.g. Superman Returns, Harry Potter and the Order of the Phoenix, The Polar Express, and Monster House. However, with the advent of portable and flexible S3D cameras manufactured by Vince Pace and others, there have been live-action features shot and released in stereo (most notably Journey to the Center of the Earth 3D), with others coming soon. Additionally, there have been some very interesting developments around using S3D to shoot live events, as evidenced by Hannah Montana and U2 3D.
Sports broadcasting to theatres seems to be an area where S3D is gaining wide acceptance, judging by the BBC’s live rugby broadcast in S3D and the numerous trials across the globe for almost every sport. The 2007 NBA All-Star Game was shot and broadcast live over a closed-circuit system to rave reviews, and the BBC has also unveiled plans to cover both the 2008 Summer Olympics in Beijing and the upcoming soccer World Cup in South Africa in S3D, and then display them in theaters.
Stereoscopic Transformation of Flat Movies
This process has been applied extensively and successfully to many movies and accounts for the largest number of projects released thus far. The Polar Express, Robert Zemeckis’s feature-length CG film, was completely re-rendered for stereo display a few months before its 2D opening. The reworking included some stereoscopic layout to accommodate the wider field of view of IMAX® screens. Other 2D conversions to S3D include Monster House and Beowulf (although director Robert Zemeckis knew from the outset he was going to produce a stereo version). Other early high-profile live-action 2D feature films released with segments converted to S3D include Superman Returns and Harry Potter and the Order of the Phoenix, though these films are considered to have been less technically successful, as they exhibited problems such as ghosting and parallax values that caused viewers physical discomfort.
Currently, many techniques, mostly derived from visual effects (VFX) work, are used to convert 2D to 3D. Since the goal is to derive depth information where none was captured, a variety of tricks must be used. Among the most common are re-projecting the original image onto 3D models and rendering a second-eye view; using depth maps; time-shifting camera pans; and recreating environments. A simplified depth-map example is sketched below.
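The depth-map approach can be illustrated with a deliberately naive sketch: a second-eye view is synthesised by sliding each pixel horizontally in proportion to an assumed depth map, with crude hole filling. The linear depth-to-disparity mapping, the 12-pixel maximum shift, and the hole handling are all simplifying assumptions; production conversion tools are far more sophisticated and handle occlusion ordering properly.

import numpy as np

def render_second_eye(image, depth, max_disparity_px=12):
    """Forward-warp a 2D frame into a synthetic right-eye view.
    image -- H x W x 3 float array, treated as the left-eye view
    depth -- H x W array in [0, 1], where 1.0 is nearest to the camera
    """
    h, w = depth.shape
    right = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (depth * max_disparity_px).astype(int)  # nearer = larger shift
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]  # near pixels slide left in the right eye
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
                filled[y, xr] = True
    # Crude hole filling: repeat the nearest filled pixel from the left.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right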
State of the Industry: S3D Camera Models
There are three basic stereoscopic camera configurations: parallel, toe-in and image-shifted. The three differ in the way they determine where the Zero Parallax Setting (ZPS) is.
Parallel and Toe-In
When the images of a specific element in the scene are aligned in the left- and right-projected images, they will have zero parallax and appear to be exactly at the screen plane. Parallel configurations will normally generate images where the zero parallax point is at infinity, giving every object negative parallax (the entire imaged scene will appear in theatre space). This is less than ideal in most situations, so in practical productions stereographers often rotate the camera heads inward so that their axes converge on a specific element in the scene, achieving the desired ZPS. This is called toe-in.
Toe-in is useful when the objects in the scene are very close to the camera and a flattening effect must be avoided. For example, if parallel cameras cannot get close enough to shoot the object in question, toe-in is the easiest solution.
A moderate amount of toe-in is acceptable; however, toe-in causes trapezoidal distortion (keystoning) and depth-plane curvature, effects that create vertical disparities between the images. Some toe-in artifacts can be corrected in post-production (see the sketch below). For live S3D events, however, excessive toe-in can be problematic because of the keystoning and the difficulty of correcting it live.
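One post-production approach to keystone correction is to warp each eye with a homography that maps the distorted image of a known rectangle (for example, a calibration chart) back to a true rectangle. The sketch below uses OpenCV for the warp; the corner coordinates are made-up placeholders, and real pipelines would derive them from calibration or feature matching.

import cv2
import numpy as np

def correct_keystone(frame, observed_corners, rectified_corners):
    """Warp a frame so the observed quadrilateral becomes the target rectangle."""
    h, w = frame.shape[:2]
    H = cv2.getPerspectiveTransform(np.float32(observed_corners),
                                    np.float32(rectified_corners))
    return cv2.warpPerspective(frame, H, (w, h))

# Placeholder corner positions (pixels): the chart appears as a slight
# trapezoid in the toed-in right-eye frame.
observed = [(102, 95), (1818, 110), (1816, 985), (104, 970)]
rectified = [(100, 100), (1820, 100), (1820, 980), (100, 980)]
right_frame = np.zeros((1080, 1920, 3), np.uint8)  # stand-in for a real frame
right_fixed = correct_keystone(right_frame, observed, rectified)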
Figure 3. Above: The cylinder is occluded by the left edge of the stereo window, a condition that creates perceptual conflict the brain cannot resolve. Below: A floating window created by blanking a side portion of each frame moves the virtual window into theater space and solves the perceptual conflict.
Image-Shifted
One way to achieve toe-in results without the distortion is to keep the cameras’ axes parallel while shifting the lenses or the cameras’ image sensors horizontally to achieve the desired zero parallax. While this is quite easy with CG cameras, it is far more complicated with real ones; digital tools, however, can achieve the same effect by shifting the captured images to produce the desired Zero Parallax Setting with no keystoning or depth-plane curvature artifacts.
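Under assumed thin-lens geometry, a point at the desired convergence distance images with a disparity of roughly focal length times interaxial separation divided by convergence distance; shifting each eye's image by half that amount, in opposite directions, places the point at zero parallax without toe-in. The sketch below, including the sensor width and resolution, is illustrative only.

def image_shift_per_eye_mm(focal_mm, interaxial_mm, convergence_m):
    """Horizontal shift for each eye's image, in sensor millimetres."""
    return 0.5 * focal_mm * interaxial_mm / (convergence_m * 1000.0)

def shift_in_pixels(shift_mm, sensor_width_mm, image_width_px):
    """Convert a sensor-plane shift into pixels for a digital frame."""
    return shift_mm * image_width_px / sensor_width_mm

per_eye = image_shift_per_eye_mm(focal_mm=35.0, interaxial_mm=65.0, convergence_m=4.0)
print(per_eye)                               # ~0.28 mm on each sensor
print(shift_in_pixels(per_eye, 24.0, 2048))  # ~24 pixels per eye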
In CG S3D projects and in planar-to-S3D film conversion, multiple virtual cameras with variable interaxial separation and/or camera configuration are often used to render the left and right images. This is a very powerful and sophisticated creative tool that can, of course, only be used in animation or VFX movies. In the production of Beowulf, for example, reportedly up to eight cameras per rig were employed.
State of the Industry: Standards
Today there are no published standards for S3D, though there are some recommendations. The Digital Cinema Initiatives (DCI), a joint venture between the major studios Disney, Fox, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios, has developed technical recommendations for both digital cinema and S3D cinema. In 2007 they published a short document defining high-level technical requirements for the mastering, distribution, and theatrical playback of stereoscopic digital cinema content. The DCI is working on integrating their S3D recommendation into their Digital Cinema System Specification. The Society of Motion Picture and Television Engineers (SMPTE), an independent industry standards organization, also created the DC28-40 working group to establish published standards for S3D in conjunction with the studios (DCI), exhibitors, and technology providers. The expectation is that they will arrive at a recommendation by the end of 2008. SMPTE also recently established a task force to define the parameters of an S3D mastering standard for home display. This activity will lay the groundwork for future standardization efforts and provide a solid foundation for the efforts of several companies that have already developed a range of products designed for home consumption of S3D content.
The Technical Challenges
For S3D cinema to be successful the viewer experience must be compelling. Technical problems can cause fatigue and eyestrain, and reduce the overall 3D experience to the point where the viewer prefers to see the planar version. The human eye has very little tolerance for discrepancies in color, geometry, and brightness between the left and right eye images and it is essential that they be identical in every way except for the horizontal parallax differences that create the 3D effect.
Today, most of the technical challenges of displaying S3D have been solved by digital cinema technologies and single-projector systems. Digital cinema has eliminated the film projection glitches – such as out of sync projection, and scratched and damaged frames – that plagued 3D cinema viewers in the past; and single projector systems have eliminated problems due to differences in lamp intensity and image alignment. Display challenges that remain revolve around the following:
Viewer Distance
An important consideration to keep in mind when laying out a stereo scene is that stereo effects are a function of the viewer’s distance and appear more pronounced when viewers are farther away from the screen. Distant objects seem farther away; objects in viewer space seem to protrude farther. This is because any given parallax value produces a lower retinal disparity value when viewed from farther away, but parallax and disparity values remain proportionate as viewer distance to the screen changes. That is, 0.33 inches (8.4 mm) of parallax will produce the same disparity at three feet (~1 m) as 0.66 inches (16.7 mm) of parallax at six feet (~2 m). See figure 5.
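The proportionality can be checked with a small calculation: the angular (retinal) disparity subtended by a given screen parallax is approximately 2 * atan(parallax / (2 * viewing distance)), so doubling both quantities leaves it unchanged. Units and values below are assumptions for illustration.

import math

def angular_disparity_deg(parallax_mm, viewing_distance_mm):
    """Approximate angular disparity subtended by a screen parallax."""
    return math.degrees(2.0 * math.atan(parallax_mm / (2.0 * viewing_distance_mm)))

print(angular_disparity_deg(8.4, 1000.0))   # ~0.48 degrees at ~1 m
print(angular_disparity_deg(16.7, 2000.0))  # ~0.48 degrees at ~2 m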
Ghosting and Crosstalk
Since there is no perfect separation in the real world, all stereo displays that rely on active or passive eyewear leak, allowing one eye to see a remnant of the image intended for the other eye. This condition is called crosstalk; visible crosstalk is called ghosting. Ghosting is most noticeable in high-contrast images. Two valuable strategies for reducing ghosting are to use the lowest parallax values that achieve the desired effect and to avoid high-contrast image elements as much as possible (especially for elements with very high or very low parallax).
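One widely described mitigation is subtractive crosstalk compensation, sketched below under a simple linear leakage model in which each eye perceives its own image plus a fraction c of the other eye's image. The leakage coefficient is display-specific and assumed here; clipping in dark, high-contrast regions limits how much ghosting this can remove.

import numpy as np

def compensate_crosstalk(left, right, c=0.05):
    """Pre-subtract the expected leakage so the perceived images match the
    intended ones (left_d + c*right_d == left, and symmetrically), assuming
    normalized [0, 1] images and a symmetric leakage fraction c."""
    denom = 1.0 - c * c
    left_d = np.clip((left - c * right) / denom, 0.0, 1.0)
    right_d = np.clip((right - c * left) / denom, 0.0, 1.0)
    return left_d, right_d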
Display Challenges: Screen Size
Screen size is another issue that needs to be considered. The larger the screen, the greater the stereoscopic effect, but also the greater the risk of divergent parallax. Films formatted for theaters will look shallow on a TV set. Until multi-view, auto-stereoscopic screens are commonplace, the wide range of possible screen sizes introduces an additional layer of complexity into the stereo grading process.
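A simple sanity check illustrates the problem: converting a pixel disparity into physical millimetres on the target screen shows how a parallax that is comfortable on a television can exceed the interocular distance, and force divergence, on a cinema screen. The 65 mm eye separation and the example screen widths are assumptions.

INTEROCULAR_MM = 65.0  # assumed average eye separation

def physical_parallax_mm(parallax_px, image_width_px, screen_width_m):
    """Physical on-screen parallax produced by a given pixel disparity."""
    return parallax_px / image_width_px * screen_width_m * 1000.0

for name, width_m in [("50-inch TV", 1.1), ("multiplex screen", 12.0)]:
    p = physical_parallax_mm(30, 2048, width_m)
    verdict = "divergent" if p > INTEROCULAR_MM else "comfortable"
    print(f"{name}: {p:.1f} mm parallax -> {verdict}")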
Further complicating the issue of screen size is that IMAX production is fundamentally different from Real D or single-camera, normal-sized theatrical screen stereo production. IMAX relies on cameras whose lenses never toe-in, and a display that does not appear as a window through which the audience looks to watch a film. Rather, the assumption is that the screen fills the audiences’ entire field of view.
Pipeline Challenges
In the past few years, digital technologies have blurred the lines between many parts of digital filmmaking, but nothing has challenged the notion of a linear production pipeline more than S3D production. Producing stereoscopic content differs from traditional planar, 2D production in numerous ways. Stereo cameras, whether physical cameras used on set or virtual cameras used in CG production, pose a variety of challenges to most traditional cinematographers and post-production specialists; the challenge goes well beyond the need to ensure that any action is properly applied to both media streams (i.e. both cameras/eyes).
Figure 4. In this illustration, the parallel cameras in the rig below give an undistorted pair of stereo images, but without image shifting no positive parallax is produced (all objects appear in front of the screen). The camera heads in the rig above have been toed-in, allowing the stereographer to choose the point of zero parallax, but producing the trapezoidal distortion (keystoning) shown.
In 2D production, revision cycles occur at each step, but rarely require returning to previous steps in the process. In stereo workflows, everything is so interdependent that some revisions ripple across the entire workflow and may require modifications in processes as far back as camera layout. Thus, from layout to editing through compositing and shot finalizing, S3D pipelines must account for stereo processes.
The following is a list of other common issues that any person involved in S3D content creation should be aware of.
Pre-Production
Traditional planar film pre-production starts with storyboards. Stereoscopic pre-production requires a depth script, created by a stereographer, to accompany the storyboards. The depth script is an invaluable tool for visualizing and making decisions about each scene’s stereo depth, and for communicating the creative intent. Using the depth script to set the stage and frame the scenes is very helpful in generating comfortable and readable stereo images, shots, and sequences.
Production: CG
Artists working on a stereo production require workstations that are stereo-enabled in order to accurately visualize their work. That said, artists might not wish to always see stereo. In the same way audio mixing over long stretches of time results in aural fatigue, looking at stereoscopic images day-in, day-out can cause eyestrain and fatigue. Making critical visual decisions is best handled when the eyes are rested. This is especially true given that decisions made in any one phase of production have an impact on every subsequent phase. Effective stereo pipelines take this into consideration across the entire workflow.
3D VFX techniques will no doubt evolve quickly now that a new generation of digital tools is emerging to facilitate the creation of S3D CG content. Digital stereographer Bernard Mendiburu envisions a new vocabulary of visual effects based on depth, consisting of, for example, Z-axis wipes, depth compression, intentional retinal rivalry, and depth ghosting.
Production: Live Action
Since physical-world filming is bound by physical limitations, live-action projects have to deal with issues stemming from the fact that no two cameras are exactly alike. In live-action S3D productions, the slightest inconsistencies in alignment, distortions and aberrations from lenses, zoom breathing, lens flare, and spherical reflections can produce discomfort or break the stereo illusion of depth.
Modern specialized S3D camera rigs are designed to maintain the best possible alignment, virtually eliminating pitch, yaw, and roll differences between cameras, and to ensure that lens length, focus, zoom, and iris are linked as closely as possible. Timecode references are genlocked and computer control is established over zoom, interocular distance, and lens length, while each camera’s respective metadata is saved. Each of these parameters, however, can still carry small inconsistencies due, for example, to chromatic and spherical aberrations and zoom breathing. To achieve the best results, these need to be addressed in a post-production environment that allows tracking of metadata throughout the entire pipeline. Furthermore, natural differences between frames, such as flare and specular reflections in the scene, will often be present and require correction. Continuous access to the available metadata is crucial and can greatly optimize this process.
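As one illustration of the kind of correction involved, the sketch below estimates residual vertical parallax by brute-force search for the vertical offset that best aligns the two eyes (minimum mean squared difference). Real pipelines rely on feature matching and per-lens calibration; this is purely a toy example operating on grayscale NumPy frames.

import numpy as np

def estimate_vertical_offset(left, right, max_offset_px=8):
    """Return the vertical shift (in pixels) that best aligns right to left."""
    best_offset, best_cost = 0, np.inf
    for dy in range(-max_offset_px, max_offset_px + 1):
        shifted = np.roll(right, dy, axis=0)
        cost = np.mean((left - shifted) ** 2)
        if cost < best_cost:
            best_offset, best_cost = dy, cost
    return best_offset

# The offending eye can then be re-framed by the opposite amount before
# grading, removing the vertical disparity that causes eyestrain.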
While most corrections can be applied in post-production environments and in the context of the linear narrative, many decisions will be made on set. Although there is still debate over whether to leave convergence decisions for post-production, it is clear that since most S3D productions will go through digital processing, preserving information about the choices, technical parameters, and image layers is critical to a good result.
Post-Production
Compositing and effects work in general require artists to rethink what they know, because techniques that work well in 2D do not work at all in stereo. This is because even the smallest nuance of difference between left-right images is a cue for the brain to sense depth. If those subtle differences are lost, so is the illusion of depth.
Standard 2D particle effects and layered effects can’t simply be set up for the left eye and copied to the right, because the two images would be exact copies of each other and would therefore appear flat at the screen plane. Lens flares are best applied as a post process, since even small differences in the optical characteristics and positions of the two lenses will cause mismatches between the flares, rendering them inconsistent with each other and breaking the stereo illusion. Stereo grading is essential for achieving smooth transitions between shots and maintaining stereoscopic continuity, which reduces eye fatigue.
The aforementioned issues with screen size also pose a challenge at the post-production stage. While an experienced stereographer can make judgments inferring the effect on the big screen from what he or she sees on the workstation’s small display, it is always advisable to have a continuous revision process on screens comparable to the target screen size.
Other Issues
There are a number of other challenges inherent in the S3D pipeline: controlling light levels consistently between stereo image pairs, designing for different eye-separation technologies (such as active shutter glasses or polarizing lens glasses), and managing left-right assets and assembling them before editing, to name but a few. Some industry professionals also argue that different digital masters should be created for different screen sizes such as TV, IMAX, Real D, and other theatrical exhibition formats. These issues must be addressed systematically for stereoscopic production to transition from the experimental to the routine, freeing filmmakers to focus on storytelling and creative intent.
Figure 5. Stereo effects appear more pronounced the farther viewers are from the screen. Parallax and disparity values, as shown here, remain proportionate as viewer distance to the screen changes.
Creative Challenges
Since the inception of cinema, filmmakers have strived to tell stories in the most compelling manner possible. Pioneering masters of the medium sought the best technology their budgets would allow as a means to push past the limits and achieve their creative vision. Sound, color, lighting, lenses, cameras, projection, and other specialized equipment have enjoyed relatively rapid development and adoption, propelled by groundbreaking films that used revolutionary technologies to tell great stories or recount cultural or historic events.
Stereoscopic cinema has taken longer than any other technology to come into its own as a storytelling tool. To date, stereoscopy has been used to recreate three-dimensional reality on a 2D screen. There is plenty of opportunity, however, to experiment with hyper-depth images to evoke new cinematic experiences and emotions. Artistically speaking, most 2D rules need not apply.
In fact, experience has shown that some monoscopic conventions simply do not work when carried into the stereo realm. For instance, cues that create the impression of depth in 2D counteract the illusion of depth in stereo. Using depth of field to throw a background out of focus is a common 2D technique used to draw an audience’s attention to anything in-focus placed against that background. Stereopsis, however, is based on binocular pattern matching. A blurred wall behind a character will appear flat in stereo. If characters are set behind the screen plane, the audience will see them as being behind the wall when projected in stereo.
Filmmaking is a collaborative craft that requires powerful and intuitive tools that help creative people tell compelling stories. The convergence of digital display technologies, emerging digital tools and standards, as well as economic considerations has enabled filmmakers to concentrate their efforts on the creative side. Knowledge of the new medium, well-crafted tools and confidence with a new language is what will drive the success of S3D.
The Case for a Unified Stereoscopy Pipeline
The fact that each step in the stereoscopic workflow is so interdependent presents quite a challenge for those planning an S3D production. To improve the quality, efficiency and success of S3D projects, control over the stereoscopic parameters must be maintained over the entire process and allowances made for very wide revision cycles, and for a workflow made of many parallel, iterative steps.
While there is no “one right way” of creating an S3D film, production should be stereoscopic from the start. A depth script should accompany the actual script. Storyboards should be produced in a CG stereoscopic-enabled previs toolset. Depth grading should be checked at each step of the workflow—previs, digital dailies shot with camera rigs or rendered CG, edits, and effects—on actual theatrical stereoscopic screens in whatever stereoscopic format and master size the target screen will be. Effects should be designed on the same nonlinear platform in which the rest of the production is being handled.
To facilitate such an integrated approach to the stereoscopic creation process, tools should be used that maintain and track the new classes of metadata specific to S3D images and their corresponding layers, allowing S3D information to be visualized in context and all relevant parameters to be modified throughout the entire production process. A standard file format to support transporting this data across such a pipeline will be required. Such tools could automate most of the processes that require no creative input, leaving creative individuals free to focus on getting the most out of an exciting new, yet century-old, medium.
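What such metadata might look like can be suggested with a small sketch; the field names below are hypothetical and do not correspond to any existing file format or Autodesk product specification.

from dataclasses import dataclass, field

@dataclass
class StereoShotMetadata:
    """Hypothetical per-shot stereo metadata carried through the pipeline."""
    shot_id: str
    interaxial_mm: float           # rig or virtual-camera lens separation
    convergence_m: float           # distance placed at zero parallax
    focal_length_mm: float
    hit_px: int = 0                # horizontal image translation applied so far
    floating_window_px: int = 0    # per-edge blanking, if any
    notes: list = field(default_factory=list)  # depth-script and grading notes

shot = StereoShotMetadata("SC012_0040", interaxial_mm=38.0,
                          convergence_m=3.5, focal_length_mm=27.0)
shot.notes.append("pull ZPS forward for the cut into the close-up")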
Summary
Developments in digital-projection technologies, combined with declining attendance driven by a number of alternate forms of entertainment technology, are at the heart of a renaissance in stereoscopic cinema.
But is S3D here to stay? Judging by the number of projects currently in the works and today’s technical and economic conditions, S3D is clearly not just a fad. Just how pervasive S3D will become, however, hinges as much on creative considerations as it does on technical or economic ones. S3D can enhance the movie-going experience, and if filmmakers are willing to use S3D as an immersive storytelling medium with a creative vocabulary all its own, S3D projects will likely enjoy continued box-office success.
Filmmakers willing to embrace S3D must, however, bear in mind that many S3D processes do not have an equivalent in traditional planar, 2D production. Indeed, the very interconnectedness of these processes poses unique challenges; changes made at any stage of the S3D production process may ripple across the entire process. Moreover, even without the depth variable, today’s post-production landscape is complex. It is therefore imperative that a unified production pipeline be used to minimize potential problems.
Well-crafted, integrated tools give creative people command over technology, making it one of many colors in their creative palette. Stereoscopic 3D imagery will succeed when the production tools harness its power, giving filmmakers command over depth as a means to enhance their stories. As history has shown, if the tool chain is limiting, it takes control of the creative process.
Autodesk is investing in building top-tier tools that address S3D content creation from early pre-production stages, through post-production to mastering, in a holistic manner. These tools are being designed for interoperability and ease of integration so that filmmakers can have at their disposal a pipeline that gives them creative command of stereo imaging as a storytelling device.
References:
StereoGraphics® Developers’ Handbook, Background on Creating Images for CrystalEyes® and SimulEyes
The Business Case for Digital 3D Cinema Exhibition, June 2007, Screen Digest
The Stereoscopic Digital Cinema in 2007: Dimensions and Future of the “Digital 3-D Revolution”, Bernard Mendiburu, Digital Stereographer, Los Angeles, USA (©2007 Autodesk, Inc.)
A Second Chance for 3-D, Wired magazine, Nov. 2007 issue, by Frank Rose
A Systematized Visual Pipeline Brief Final LEFT SMPTE Version reduced.pps (LightspeeD Design, Chris Ward, Robert Mueller)
Prospects for 3D Digital Cinema, Matthew Brennesholtz, Insight Media
Authoring in Stereo: Rewriting the Rules of Visual Storytelling, September 18, 2007, Jim Mainard, DreamWorks Animation
Lenny Lipton’s blog: http://community.reald.com/blogs/real_d_blog/
www.wikipedia.org
www.boxofficemojo.com
Autodesk is a registered trademark or trademark of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. All other brand names, product names, or trademarks belong to their respective holders. Autodesk reserves the right to alter product offerings and specifications at any time without notice, and is not responsible for typographical or graphical errors that may appear in this document.
© 2008 Autodesk, Inc. All rights reserved.
