3D: Understanding Focus & Convergence

I’m currently in Singapore leading a series of 3D camera workshops with the new Panasonic 3DA1. Several of my students continue to confuse focus with convergence, so it’s worth reiterating here: the convergence point, i.e. where the left and right camera lens axes intersect, defines the screen plane in a scene. As a matter of practice we always set the convergence point FIRST, then focus. When the convergence point is placed BEHIND an object, that object appears in FRONT of the screen. When the convergence point is placed AHEAD of an object, that object appears BEHIND the screen.
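To make the geometry concrete, here is a minimal sketch using the standard small-angle screen-parallax approximation for a converged (or HIT-shifted) stereo pair. The interaxial, focal length and distances below are hypothetical illustration values, not the 3DA1’s actual specifications; the sign convention is the usual one (negative parallax = in front of the screen, positive = behind).

```python
# Rough illustrative model: screen parallax of a point at distance Z
# when the stereo pair is converged at distance C.
# Negative parallax -> object appears in FRONT of the screen,
# positive parallax -> object appears BEHIND the screen.

def screen_parallax_mm(interaxial_mm, focal_mm, convergence_m, subject_m):
    """Approximate horizontal parallax at the sensor, in millimetres."""
    return interaxial_mm * focal_mm * (1.0 / (convergence_m * 1000.0)
                                       - 1.0 / (subject_m * 1000.0))

# Hypothetical numbers: ~60 mm interaxial, 30 mm focal length,
# convergence point set at 4 m.
for subject in (2.0, 4.0, 10.0):
    p = screen_parallax_mm(60, 30, 4.0, subject)
    where = "on the screen plane" if abs(p) < 1e-9 else (
        "in front of the screen" if p < 0 else "behind the screen")
    print(f"subject at {subject} m -> parallax {p:+.3f} mm ({where})")
```

Running it, the subject at 2 m (convergence placed behind it) lands in front of the screen, the subject at 4 m sits on the screen plane, and the subject at 10 m (convergence placed ahead of it) falls behind the screen, which is exactly the rule stated above.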

Audiences viewing a 3D program are not immediately aware of the focal plane and can actually keep their eyes focused on the physical screen while converging elsewhere inside the 3D volume. This decoupling is a fundamental conceit of 3D, and is the reason audiences are able to explore the 3D space around them.

3D Workshop Singapore • 12 June 2010

The Shooter Needs Another Set of Eyes

It never ceases to amaze me. I’m a shooter born and raised and I love what I do. Then why am I spending so many hours these days inside Apple Final Cut Pro? Isn’t it enough that we capture remarkable images and tell compelling visual stories? Do we have to edit, score and sound-design our own movies too? It seems to be the way of the industry these days, as I now put the finishing touches on The Darjeeling Limited behind-the-scenes show for Criterion. Maybe it’s for the better that we shooters are increasingly allowed this much control. On the other hand, I am not really an editor by trade. I get married to too many shots. I find myself way too close to my material. I NEED the distance that only a great editor and another set of eyes can provide. Too bad the current economic reality increasingly precludes bringing in a bona fide post-production specialist.

Amazing Student Video from Egypt Workshop

My recent month-long workshop in Cairo yielded many surprises, to say the least, but student filmmaker Karim Soliman’s video SKETCH, shot in four hours in the city’s Garbage City district, was particularly impressive. The skillful use of close-ups, rigorous control of the frame boundary, and strict adherence to the exclude, exclude, exclude principle produced a truly amazing piece of work. Congratulations to Karim Soliman for creating a lasting work of art!

View ‘SKETCH’ here:

3D Not EZ: A New Way of Seeing

NAB 2010. I’m in Las Vegas now, strolling the Strip with Panasonic’s new one-piece 3D camera. It’s my second night out, and while my 3D skills are improving, I’m finding the whole experience rather humbling, with only a few usable shots mixed in with a lot of rubbish.

In the first place, the Panasonic AG-3DA1 camera is not a run-and-gun type of affair. Rigorous control of the frame boundary, the convergence point and the placement of objects is imperative, as is the use of a steady tripod, to support a compelling, i.e. non-sickening, 3D experience. Thousands of drunken strollers cutting into and out of frame at all angles do not a pleasant 3D experience make!

Unlike more sophisticated rigs that use twin-mounted cameras, a beam splitter and precise control of the distance between the left and right “eyes”, the A1 uses an integrated twin-lens system with a fixed interocular setting. The advantage is greatly simplified operation, with key parameters such as image distortion and rotation automatically addressed inside the front-mounted binocular housing. The downside of the fixed distance between the left and right eyes is the inability to converge on objects closer than about 8 feet (2.4 meters). Indeed, a shooting distance of 10–100 feet (approximately 3–30 meters) is ideal for producing the most compelling 3D images with the Panasonic A1.
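For a rough sense of why a fixed interocular punishes close subjects, here is a back-of-the-envelope sketch. The ~60 mm interaxial, the field of view and the 1–2 % “comfort budget” for negative parallax are assumptions and common rules of thumb of my own choosing, not Panasonic’s published figures.

```python
import math

# Illustrative sketch: with a fixed interaxial, how much negative
# (in-front-of-screen) parallax does a near subject produce once the
# convergence point sits behind it? Parallax is expressed as a fraction
# of image width; roughly 1-2 % is a commonly cited comfort ceiling.

def parallax_fraction(interaxial_m, hfov_deg, convergence_m, subject_m):
    """Horizontal parallax as a fraction of image width (negative = in front)."""
    half_fov = math.radians(hfov_deg) / 2.0
    return (interaxial_m / (2.0 * math.tan(half_fov))) * (
        1.0 / convergence_m - 1.0 / subject_m)

# Assumed values: ~60 mm interaxial, ~50 degree horizontal field of view,
# convergence parked at 5 m.
for subject in (1.5, 2.4, 5.0, 15.0):
    frac = parallax_fraction(0.060, 50.0, 5.0, subject)
    print(f"subject {subject:5.1f} m: parallax {frac*100:+.2f}% of image width")
```

With these placeholder numbers a subject at 1.5 m blows past the comfort budget (around −3 % of image width), a subject near 2.4 m sits close to the limit, and subjects from the convergence point outward stay comfortably small, which is one way to see why a generous working distance suits a fixed-interocular camera.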

Objects floating ahead of the convergence point in negative space can produce a feeling of nausea in the viewer, as can objects cutting through the frame boundary that appear in one eye and not the other. The threat of “window violations” is a constant menace for 3D shooters, and is a major reason why documentaries shot with a vérité-style 3D camera are inherently impractical.

The close-focus prohibition can be highly disconcerting to accomplished shooters long accustomed to working optimally at a range of six to eight feet (1.8–2.4 meters), invariably capturing close-ups at this distance to gain proper perspective with minimal distortion.

Accomplished 2D shooters are also long accustomed to anchoring the frame by placing objects at or near the frame edges. We trim head room as objects approach the camera, and use copious foreground action to enhance the three-dimensional illusion.

These techniques are neither applicable nor advisable with an actual 3D camera, especially one with a fixed interocular distance like the A1. Indeed, the veteran 2D shooter can forget about many of these hard-learned third-dimension-inducing techniques. 3D requires far more rigorous control of the frame and frame boundary than any of our 2D work until now.

One area requiring specific expertise is the setting and re-setting of the 3D camera’s focus and convergence points, often simultaneously, mid-scene; the convergence point is typically set first and represents the screen plane. The 3D shooter must thus always be cognizant of objects crossing behind and in front of the screen plane/convergence point; this awareness helps minimize, to the extent possible, the viewer headaches that can result from objects floating in negative space ahead of the convergence point.

Shooting proper 3D will require training and close attention to imaging fundamentals. Even for the most accomplished among us, the new 3D models, including the A1, will prove humbling at first as we discover how much (or how little) of our previous knowledge and experience can be safely applied.

Jump starting tape-based cameras in Egypt

As many of you might be aware, I’ve been in Egypt this month working with local filmmakers and conducting a cinematography workshop at night. Adopting a tapeless workflow is a virtual imperative here, since the old tape-based systems currently in use are decaying and failing with age. The tape-based cameras I’ve encountered, including a range of vintage Betacam and newer Z1 HDV models, haven’t been cleaned or otherwise maintained in years; it’s simply too expensive in the modest Egyptian market, which can ill afford even the most basic maintenance tools like cleaning tapes. For the many HDV tape cameras here, the use of cheap consumer tape only exacerbates the maintenance issue, as the shedding and poor slitting of this bottom-quality tape lead to frequent head clogs and drop-outs. On many HDV and DV cameras I’ve seen, including some on substantial television productions, the on-board battery connectors are broken or missing, leaving the power unit to fall off; one or more XLR audio inputs do not work; and, most distressingly perhaps, the tape transports need a light tap on the side of the camera to get rolling. Jump-starting cars in the streets here seems to be a common means of getting things moving, and so it would seem the same logic applies to jump-starting ailing tape-based cameras.

Of course there is a considerable price for investing in tapeless equipment in the first place, but there is also a price for continuing down the current tape-based path, with all its issues of poor reliability and inconvenience. For many filmmakers here, capturing tape-based material into the NLE requires renting a deck or camera from across town, which in Cairo is easily an all-day affair given the dense, choking traffic. The advantages of forgoing tape in favor of recording to flash memory are obvious to me, and they are especially apparent in cash-starved markets that stand to benefit handsomely from the workflow advantages and cost savings of forgoing videotape, if they can afford, that is, the initial investment.

Shooting Sony EX1 or EX3 @ 24 FPS: Do you know your editing time base?

Confusion abounds in the land of 24p! If you’re shooting 720p24 with a Sony EX camera, you are recording native frames to the SxS card at a rate of 23.98 FPS. Importing this footage into the Final Cut Pro timeline will produce a 23.98 editing time base, consistent with an all-24 FPS workflow ideal for output to DVD and Blu-ray. The 24p frame rate in the Sony EX is also ideal for mixing with native 24p footage from other cameras like the Panasonic HPX170 and HPX200 P2 models.

On the other hand, if you intend to shoot 24p for broadcast or web applications at 1080i60, you will likely prefer an editorial time base of 59.94 fields or frames per second. Recording the HD-SDI output of the EX1 or EX3 to an AJA Ki Pro or Kona capture card will also give you the option of adding “pull-down” to create a 59.94 FPS time base in the NLE. While the 23.98 workflow is efficient, especially for output primarily to disc, the 59.94 setting provides the best compatibility with the web and certain tape-based cameras like the original Panasonic VariCam and the HDV variants on the market from Canon, JVC and others.
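For those who like to see the arithmetic, here is a quick, generic sketch of the 1000/1001 rates and the classic 2:3 pull-down cadence. It is not specific to the EX cameras or to what the AJA hardware does internally; it simply shows how 23.976p maps onto 59.94 fields per second.

```python
from itertools import cycle, islice

# The NTSC-family rates behind the shorthand labels:
print(24000 / 1001)   # ~23.976 fps, what "23.98" really means
print(60000 / 1001)   # ~59.94 fields per second, what "59.94" really means

# Classic 2:3 pull-down: every 4 progressive frames (A B C D) are spread
# over 10 video fields, i.e. 5 interlaced frames, mapping 23.976p to 59.94i.
def pulldown_fields(frames, cadence=(2, 3)):
    reps = cycle(cadence)
    for frame in frames:
        for _ in range(next(reps)):
            yield frame

fields = list(islice(pulldown_fields(cycle("ABCD")), 10))
print("".join(fields))   # -> AABBBCCDDD : 4 frames become 10 fields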

To avoid unnecessary rendering on the NLE timeline and a potentially severe loss of quality, make sure you’ve chosen the right time base for your intended output!

3D: The Latest Fad – Again?

The fate of today’s accelerating 3D trend is an open question in my mind, and I am really hoping for guidance from you on this one. On the one hand, Panasonic (and others) will almost certainly introduce simple and affordable 3D camcorders in the coming 18 months. The plastic mockup of Panasonic’s 12-bit AVC-Ultra camcorder based on the HPX170 model garnered many long, covetous stares from NAB showgoers in 2009. The advent of 3D consumer plasma displays this year is surely another indicator that 3D is for real this time, and not just the province of the privileged few à la James Cameron’s Avatar. On the other hand, there are many technical and craft hurdles to overcome when shooting 3D, and the complexity of processing the faux-3D images inside the brain lends itself to many unknown and unwelcome physiological effects beyond the well-recognized ‘3D headache’ in audiences subjected to unfused 3D imagery. Where are we going with this? Please tell me.

Best Wishes to Shooters in 2010!

First day of the first month of the new decade. Time to think a little bit more, perhaps, about our priorities in life, our relationships with the important people in our lives, and, to a lesser extent, about our craft and careers.

Regardless of the current state of the economy, which is dismal for many of us, the changes in camera and image-acquisition technology are rattling more than a few of our collective cages. Last Monday night, December 28, I presented my heretical thoughts to a ribald group of 110 shooters and presumably interested attendees at Birns & Sawyer in Hollywood, CA. Sure, we talked about the new gear and lenses as Doug Leighton from Panasonic and Joe Patton from Canon described their respective wares.

The real issue that seemed to dominate the collective discussion, however, had little to do with the physical tools of the trade and much more to do with adapting to a transforming workplace. Many comments from the group expressed support for all things 3D, a notion apparently shaped by the studios’ current increased demand for 3D content. The crowd suggested that 3D capture could become the norm for many shooters in the coming year.

For general live-action titles I’m not sure 3D will ever become more than a minor niche player. One reason is the current requirement that audiences don specialized headgear, which interferes with the immersive quality of the cinema. In the future, holographic technology may very well obviate the need for intrusive glasses. But we’re not there yet.

Some members of the Hollywood group took exception to what they perceived as my anti-super-high-resolution bias, a feeling no doubt fueled by my less-than-flattering comments about higher- and higher-resolution capture devices like the RED One and Epic. My contention was, and continues to be, that the current Resolution Religion permeating our ranks is fundamentally flawed; what shooters ought to be demanding is Performance, Performance, Performance, not Resolution, Resolution, Resolution, which is after all but one measure of a camera’s performance.

Low-light sensitivity, dynamic range, ergonomics, ruggedness of hardware, optics, and appropriateness for the story at hand should be at the forefront of every shooter’s consciousness, not merely the number of pixels that can be jammed into a poorly performing albeit cheap CMOS imager the size of a battleship.

Let’s continue in the new year to seek gear with superior performance that can help us garner greater employment and plum assignments. I don’t deny that very high resolution image capture can be advantageous in some applications. Just let us not make it the only point of discussion because it doesn’t serve us well either economically or in terms of our craft, which after all requires maximum versatility and performance in the current highly constrained business environment.

Your Competitive Advantage

In these challenging times we need to take a close look at ourselves as shooters and digital storytellers. Just as some countries are particularly adept at manufacturing certain things like automobiles, TVs and precision watches, so must you, the resourceful multi-talented shooter, consider your competitive advantage over other shooters vying for the same assignments. Maybe you are a Final Cut Pro maven or a skilled After Effects artist. Maybe you design websites, or you’re an accomplished linguist able to speak multiple languages. For a shooter these are compelling ancillary attributes that can be exploited to the max to attract potential clients.

I happen to be known as something of a fireman in the business: when things go wrong on a shoot somewhere in the world, I will often get a call to round up my commando-style team and fly off to shoot an overlooked background scene or establishing shot, or to take over shooting an entire project. The trick here is to understand and embrace your clients’ fears: if they’re worried about shooting tapeless to flash memory, become adept at handling this new style of workflow. Understand archiving protocols and backups, and how to apply best handling practices even in the heat of battle on a professional set. Exude confidence at all times and perform with absolute reliability and good cheer. You’ll get the work you’re seeking, even in these tough economic times.

Resolution Without Performance

One should never base a camera purchase decision on imager native resolution alone. The resolution perceived by an audience is influenced by many factors beyond a camera’s imager size and native pixel count. High compression, the choice of codec, the quality of optics, dynamic range, even the use or non-use of a matte box can seriously impact contrast, which in turn can negatively affect the perceived resolution of an image. A more relevant measure when considering a new camera is performance, that is, how well the camera behaves under the most difficult conditions you’re likely to encounter in the course of your shooting day. A camera able to perform well in a wide range of conditions is going to be far more useful to most users than a camera system with merely more pixels jammed into the face of an imager. In fact, cameras with very high-resolution imagers should in general be avoided in order to preserve the best low-light sensitivity and dynamic range.
