Tuesday, May 28, 2013

Multiple Background Plates to fit a Camera Rotation in Nuke



While working on Gaiaspora, I used the CameraTracker in Nuke to put background matte paintings into several of the shots. For some of them I needed to stitch multiple versions of the paintings together to create the entire background for the scene. The biggest example of this was the second shot of the film: a large exterior shot involving a camera rotation of over 100 degrees, followed by a camera pull-out over a vast expanse of 3D environment.

Scene with the background cards deactivated
Here is the scene without the background cards visible. Below, you can see that the camera is already tracked to the scene.



Now, the normal process I would go through for adding a background painting into a scene is to create a Card node, set its source image to the matte painting, and attach it to the Scene node. I did that when starting out with this shot, before I realized the matte painting wasn't big enough to cover all of the camera moves in the shot.
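For anyone who prefers scripting, here's a minimal Python sketch of that basic setup (the file path is hypothetical; I actually built this in the node graph):

```python
import nuke

# Read in the matte painting (hypothetical path).
matte = nuke.nodes.Read(file='/renders/matte_painting.exr')

# Put the painting on a Card in 3D space.
card = nuke.nodes.Card()
card.setInput(0, matte)

# Attach the Card to the Scene node so it renders with the rest of the shot.
scene = nuke.nodes.Scene()
scene.setInput(0, card)
```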
Turning on the original Matte Painting in the scene.
The first matte painting card in 3D space. As shown, it doesn't cover the full sweep of the camera's view cone.
In the actual shot, this is the first matte painting card, covering the first portion of the camera rotation.
The end of the camera's rotation doesn't have any background in it with only one card in the scene.
And the camera pull-out doesn't have any background in it either.
After seeing this, I duplicated the first card and translated it in 3D space. The painting was never meant to be a wrap-around image, but I needed it to seem seamless in this expansive scene. Lining the two cards up along an edge while making them wrap around the scene was fun. Here are the fruits of my labor.
The two cards in 3D space. More of the cone angle is covered, meaning the camera's pan now has background.
The camera pan is now complete, however...
The original shot still had a hole in it, and the pull-out wasn't fully filled either.
These holes meant another card was needed. True, it would have also been possible to scale the existing cards and re-translate them to fit the edges together again, but to save time I decided to just create a third card and use it to fill in the space still open in the background.
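Scripted out, the three-card version looks roughly like this. The translate and rotate values below are placeholders; the real ones were eyeballed in the 3D viewer until the card edges lined up:

```python
import nuke

matte = nuke.nodes.Read(file='/renders/matte_painting.exr')  # hypothetical path
scene = nuke.nodes.Scene()

# Placeholder transforms -- the real values were tweaked by eye in the
# 3D viewer until the card edges met seamlessly.
placements = [
    {'translate': [0, 0, -10], 'rotate': [0, 0, 0]},     # first card
    {'translate': [9, 0, -4],  'rotate': [0, -60, 0]},   # duplicate, swung around
    {'translate': [12, 0, 4],  'rotate': [0, -120, 0]},  # third card fills the hole
]

for i, p in enumerate(placements):
    card = nuke.nodes.Card()
    card.setInput(0, matte)
    card['translate'].setValue(p['translate'])
    card['rotate'].setValue(p['rotate'])
    scene.setInput(i, card)
```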

The third card in the 3D system in Nuke. 
And the final composite of the background paintings for this shot.
The full and active node tree for the background setup of this shot.
And here we have a fabricated, fully seamless background painting!


Monday, May 27, 2013

Camera Tracking in Nuke and NukeX

Well, it's a bit overdue, but it's time for my first actual blog entry. Alright! Today's, as you can tell by the title, will be a compositing piece. I never thought I'd have a chance to understand compositing, but working on Gaiaspora gave me that opportunity! So now, let me share with you some of my self-taught knowledge!

My first foray into compositing was with the program Composite. I'd never said I hated a program until I used that one; camera tracking in it was such a pain to figure out! But then I got to use Nuke. And holy cow, the tracking in that program is so much better. Nuke's camera tracker uses not only its 3D capabilities but also a controlled number of tracking points, each with its own values that can be used in a number of Nuke's other nodes as well. So this post will serve as an instructional piece more than a personal-techniques piece, but I can use it for a couple of things, and will just be able to reference it in later posts if need be.

So: camera tracking in Nuke using its 3D capabilities and the CameraTracker node!

Simple Nuke File, no nodes added to the original EXR.
Here we have the original Nuke file. All that has been done is importing the original .exr file for the shot rendered from Maya. Its color is off, as it needs to be converted into a linear color space. This can be done using an attribute on the .exr's Read node, or with a Colorspace node. I chose the latter method only because it lets me use the CameraTracker node on the original, unchanged .exr. I've found that the node's tracking points work better when there's more contrast in the image being tracked, making it easier to recognize the changes in space.
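Here's roughly what that looks like scripted in Python; the file path and colorspace names are just example values:

```python
import nuke

# Read the shot rendered out of Maya (hypothetical path).
read = nuke.nodes.Read(file='/renders/shot_02.exr')

# Convert with a separate Colorspace node instead of the Read node's own
# colorspace attribute, so other branches (like the CameraTracker) can
# still pull from the untouched .exr.
convert = nuke.nodes.Colorspace(colorspace_in='sRGB', colorspace_out='linear')
convert.setInput(0, read)
```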

The issue for the shot was then adding a background plate to the scene using a matte painting that wasn't part of the Maya scene. This would be interesting, because the .exr file has a very large camera move. The original Maya scene includes roughly a 130 degree turn, followed by a large pull-out to reveal the entire expanse of the landscape. Trying to manually recreate the camera in 3D Nuke (as I call Nuke's 3D mode) would have been an amazing hassle. So I learned about the CameraTracker node and how it is used to recreate 3D cameras from a 2D shot. This means the 3D camera used in Maya can be rebuilt in 3D in Nuke from the 2D .exr file.

The first step is attaching the CameraTracker node to the .exr's Read node. If the Read node is already selected when you Tab-search for the CameraTracker node, the new node will be created with the .exr already attached as its source. If not, simply connect the CameraTracker's source input to the Read node whose camera information you want to recreate.
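In Python (the CameraTracker requires NukeX), that connection is just:

```python
import nuke

read = nuke.toNode('Read1')            # the .exr Read node (default name)
tracker = nuke.nodes.CameraTracker()   # NukeX-only node
tracker.setInput(0, read)              # attach the .exr as the source
```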

CameraTracker node attached to the original .exr through the Read node.
Now that the tracker is set up, points need to be established in the scene to actually be tracked. The third tab of the CameraTracker node is the Tracking tab, and it has all the information you will need to set up the original tracking points that will be mapped.

CameraTracker's Tracking tab
Several attributes here can improve the quality of the tracking information. It's really a personal preference, but I always turn on the preview features so that I can actually see all of the points Nuke is considering using, and then I can better refine the attributes to the scene I'm tracking.

Turning on tracking point previews
If you didn't already know, Nuke is exceedingly user friendly: to help you understand each of a node's attributes, it shows a tooltip for every attribute when you hover over it.

Tooltips are exceedingly helpful!
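The same information is available from the Script Editor, which is handy if you'd rather not guess at attribute names; something like this should print every knob on the tracker along with its tooltip text:

```python
import nuke

tracker = nuke.toNode('CameraTracker1')  # default node name

# Print every attribute (knob) on the node and its tooltip.
for name, knob in tracker.knobs().items():
    print('%s - %s' % (name, knob.tooltip()))
```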
Here I've gone ahead and created the tracking points that I'd like to use for the scene. 

Camera tracking for this scene.

Note: The Track Validation attribute will almost always be kept at Free Camera. The other options are None and Rotating Camera. As the name suggests, if the camera used in the original scene ONLY rotated, you would change the validation to Rotating Camera.

The last thing that needs to be done is inside the fourth tab of the CameraTracker, the Solver tab. This holds the data about the actual camera used in the original shot. If this information is known, it can be used to create a more accurate camera track.

The Solver tab
If you know the focal length of the camera that was used to shoot the scene, set the Focal Length attribute to Known, then enter the actual value used by the camera. Camera Motion is similar to the Track Validation: most of the time it can be left on Free Camera, depending on the shot. Smoothness will help the tracker, but will also cause it to take a little more time to calculate.

Custom Solver tab settings for the scene.
Now we are ready to track the scene! To do so, go back to the main CameraTracker tab and choose the Track Features option. And now you wait! If the tracker settings are okay, it will immediately begin. Another awesome feature of Nuke is that it tells you what frame it's on and gives you a progress bar. So user friendly!

The most excruciating part, waiting for the camera track to finish!

*Stretching limbs* Well, the track is complete! Now, if you are viewing the CameraTracker node, you should see that all the tracking points have some form of line attached to them. This shows that they actually hold information, and it shows the path each point follows during the track.

The completed camera track

There's still one more step to complete the actual camera track. Underneath where you started the track, there is a button named Solve Camera. Click this button. Wait a moment, and Nuke will calculate whether the tracking points were successful or not.

The solved camera
Here you can see that most of the tracking points have changed colors. The red points didn't solve as well as the green points. And now you have your tracked scene! This is where we get to the interesting things: if you hold the cursor over the viewer and hit the "V" key, you will probably see something you weren't expecting.

The point cloud when you first look at it after the tracker is complete!
The viewer is now looking into 3D space, and a cloud of points should be visible. Using the "Alt" key and the mouse, you can now maneuver in the 3D coordinate system. The point cloud you see is a spatial representation of those tracking points. When looking at the tracking points in 2D view, all you see is a bunch of "Xs" and lines; now you can see the actual depth of the shot.

The tracking points with the depth of the shot showing
Almost done! Now you need to set up the scene's axes. The points generated have a default orientation; to correct it, we will start with the X-axis. The X-axis is, obviously, the horizontal axis of a Cartesian coordinate system, so we need to select points in the 2D image (because there we can actually see which points are horizontal to each other). When looking at the points you've tracked, you will be able to tell which ones are located on your scene's ground plane. With the cursor over the viewer, hit the "Tab" key to go back to 2D view.

Selecting two points on the ground plane to define the X-axis
Select a point on the ground plane, then "Shift"-select a second point. The second point should be as close as possible to a straight horizontal line from the first. With both points selected, right-click, go down to ground plane, and then select the axis you are creating, in this case the X-axis. With the X-axis created, I'll repeat the process for the Z-axis. This time I'll choose two points that show the most depth in the scene: a point here on the ground plane at the bottom of the screen, and a point in the back where the ground is foggy. With both axes created, the flat ground plane for the scene is established, and you should notice that the point cloud in 3D space has shifted to fit the new axes.

The point cloud with the corrected axis
Now there's only one step left. The CameraTracker has tracked the scene; now we need to create a 3D camera and scene inside of Nuke. The scene is how matte paintings and other pieces can be added into the shot. I know that sounds like a lot, but it's the last step because Nuke can do all of this with the click of a button. Literally. Back in the main CameraTracker tab of the CameraTracker node, underneath the Solve Camera option, is a third button that says Create Scene. Next to it is a checkbox that links the button's output to the CameraTracker node. Keep this checked. Now click the button and you're done!

The new 3D scene inside of Nuke
If you scrub through your shot's timeline, you will see that the newly created camera moves. This camera is a replica of the 3D camera used in the Maya scene that created the original shot. Any 3D objects added to the shot get attached to the Scene node here. I also detach the PointCloud from the Scene node so that the tracking points don't show up in the shot. Then, to composite the additions into the shot, you run the Scene node through the ScanlineRender node. And there you have it! Using the CameraTracker in Nuke to create a replica of a camera used in a shot created outside of Nuke.
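For reference, the wiring that Create Scene produces can also be assembled by hand in Python. This is a rough sketch using default node names; the solved camera itself still has to come from the tracker:

```python
import nuke

# Nodes produced by the CameraTracker's Create Scene button (default names).
camera = nuke.toNode('Camera1')   # the solved, linked camera
scene = nuke.toNode('Scene1')
bg_read = nuke.toNode('Read1')    # the original .exr

# Attach a matte painting card in place of the PointCloud, which I detach.
card = nuke.toNode('Card1')
scene.setInput(0, card)

# Render the 3D scene back into the 2D shot.
render = nuke.nodes.ScanlineRender()
render.setInput(0, bg_read)   # bg input
render.setInput(1, scene)     # obj/scn input
render.setInput(2, camera)    # cam input
```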