Stickit in Nuke 10?

September 9, 2015

(At the 05:00 minute mark)

I have gotten a lot of emails regarding the "inclusion of StickIT in Nuke 10" as demoed at SIGGRAPH.
But no, it is a coincidental parallel development; I have no involvement in it, nor do I take any credit for their innovation.
Shortly after releasing the first demo of StickIT, I got a little note from "someone" at "some big company" that The Foundry had privately demoed something that looks a lot like StickIT, but built on Ocula tech.
Intrigued by this, I downloaded the Ocula demo and made a tool to generate a temporal offset vector from the Ocula vector generator. The result was nearly identical to what StickIT could deliver (because it is mostly the same approach).

I'm really looking forward to seeing their implementation, and to finding out whether they managed to get around some of the shortcomings that I encountered.

 

StickIT – Digital Makeup Gizmo for Nuke

May 8, 2014

StickIT – A Digital Makeup Gizmo for Nuke. from Hagbarth on Vimeo.

In summer 2013, I was tasked with finding a way to easily add digital makeup to actors' faces across multiple scenes (with a lot of twitchy motion and super shallow-focus closeups), quickly and with as little effort as possible. That is where I came up with StickIT, a 2D optical-flow(ish) solution for "warp" matchmoving one image onto another.

 

Most digital makeup solutions involve either a 2D planar track or a 3D matchmove, each with its own respective strengths and weaknesses. When doing face makeup with an actor talking or making other rapid motions, both 3D and planar track solutions can easily take hours before a solid track is in place, and this is where the combination of the two comes in.

StickIT uses the Nuke CameraTracker to generate a 2D point cloud on the desired area. StickIT then generates a mesh of pins based on the density of points in the point cloud. By triangulating the nearest points, taking both movement and distance into account, StickIT calculates the most suitable motion for each pin. It all becomes one big mesh that doesn't care about edges, regions, planes or perspective, but rather just the "optical flow" of the pixels underneath. This obviously has its disadvantages in certain situations, but it makes for a simple one-click, set-and-forget approach.
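As a rough illustration of the pin-motion idea (a minimal sketch, not the actual StickIT source), each pin can blend the motion of its nearest tracked features, weighted by distance. The function name and the choice of the 3 nearest features are my own assumptions:

import math

def pin_motion(pin, features, k=3):
    '''pin: (x, y). features: a list of ((x, y), (dx, dy)) tracked points
    and their frame-to-frame motion. Returns a blended (dx, dy) for the pin.'''
    #Sort the features by distance to the pin and keep the k nearest.
    nearest = sorted(features,
                     key=lambda f: math.hypot(f[0][0] - pin[0],
                                              f[0][1] - pin[1]))[:k]
    #Inverse-distance weighting: closer features dominate the pin's motion.
    total_w = dx = dy = 0.0
    for (fx, fy), (mx, my) in nearest:
        w = 1.0 / (math.hypot(fx - pin[0], fy - pin[1]) + 1e-6)
        total_w += w
        dx += mx * w
        dy += my * w
    return (dx / total_w, dy / total_w)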

By pulling an ST map through the warp you can generate a difference map, and from that an ST map and a motion vector map. These can be used not only for creating motion blur and re-applying the effect multiple times in the same comp, but they also give you the option to export the two and replicate the exact same results inside Fusion or After Effects, for example.
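A minimal sketch of the math behind that, assuming the warped ST map can be read out as a NumPy array (this illustrates the idea, not the gizmo's actual node setup): subtracting the identity ST map from the warped one gives the difference map, and scaling by the resolution gives pixel-space motion vectors.

import numpy as np

def st_to_motion_vectors(warped_st, width, height):
    '''warped_st: float array of shape (height, width, 2) holding the ST map
    after it has been pushed through the warp (normalized 0..1 values).'''
    #The identity ST map: each pixel stores its own normalized coordinate.
    ys, xs = np.mgrid[0:height, 0:width]
    identity = np.dstack([(xs + 0.5) / width, (ys + 0.5) / height])
    #The difference map holds the per-pixel displacement in normalized space;
    #scaling by the resolution turns it into pixel-space motion vectors.
    return (warped_st - identity) * np.array([width, height])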

With all that being said, StickIT is made to do things fast and dirty and won't replace any of the other solutions if there is time for a proper matchmove. But when you are on a budget and have 100 more of these shots waiting in the queue, you might as well just "do the clicks and see if it sticks".

 


 

The Python source code took quite a few rounds of cleanup (the yellow parts) to bring the solve time down to a few seconds.


Dissecting the Nuke CameraTracker node.

October 20, 2013

Update: Nuke 8 fixed / added some of this functionality.

CameraTracker to RotoShape

The 3D CameraTracker node in Nuke is quite nice, but it does have its limitations. One of the cool things is that you can export individual tracking features as "usertracks" and use those as 2D screen-space tracks. However, you can only select tracks from one frame at a time, and you can only export a maximum of 100 tracks in one single node. You cannot track single features manually, and you cannot do object solving.

Well…. unless you use Python =)

 

Extracting All FeatureTracks

I have created a script that returns a full list of tracking points from a CameraTracker node. This can, for example, be fed into a RotoPaint node to do something like this:

Nuke CameraTracker to Rotoshapes from Hagbarth on Vimeo.

 

Here is some sample code that will let you export all FeatureTracks from a CameraTracker node:

'''================================================================================
; Function:             ExportCameraTrack(myNode):
; Description:          Extracts all 2D tracking features from a 3D CameraTracker node (not usertracks).
; Parameter(s):         myNode - A CameraTracker node containing tracking features
; Return:               Output - A list of points formatted [ [[Frame,X,Y],[...]], [[...],[...]] ]
;
; Note(s):              N/A
;=================================================================================='''
def ExportCameraTrack(myNode):
    myKnob = myNode.knob("serializeKnob")
    myLines = myKnob.toScript()
    DataItems = myLines.split('\n')
    Output = []
    LastFrame = 0 #Fallback in case a header shows up before the first frame marker
    for index, line in enumerate(DataItems):
        tempSplit = line.split(' ')
        if len(tempSplit) > 4 and tempSplit[-1] == "10": #Header
            #The first object always has 2 unknown ints, let's just fix it the easy way by offsetting by 2
            if len(tempSplit) > 6 and tempSplit[6] == "10":
                offsetKey = 2
                offsetItem = 0
            else:
                offsetKey = 0
                offsetItem = 0
            #For some weird reason the header is located at the first index after the first item. So we go one step down and look for the header data.
            itemHeader = DataItems[index + 1]
            itemHeadersplit = itemHeader.split(' ')
            itemHeader_UniqueID = itemHeadersplit[1]
            #So this one is rather weird, but after a certain amount of items the structure changes again.
            if len(itemHeadersplit) == 3:
                itemHeader = DataItems[index + 2]
                itemHeadersplit = itemHeader.split(' ')
                offsetKey = 2
                offsetItem = 2
            itemHeader_FirstItem = itemHeadersplit[3 + offsetItem]
            itemHeader_NumberOfKeys = itemHeadersplit[4 + offsetKey]
            #Here we extract the individual XY coordinates
            PositionList = []
            for x in range(2, int(itemHeader_NumberOfKeys) + 1):
                keySplit = DataItems[index + x].split(' ')
                PositionList.append([int(LastFrame) + (x - 2), keySplit[2], keySplit[3]])
            Output.append(PositionList)
        elif len(tempSplit) > 8 and tempSplit[1] == "0" and tempSplit[2] == "1": #Frame marker
            LastFrame = tempSplit[3]
        else: #Content
            pass
    return Output

#Example 01:
#This code will extract all tracks from the CameraTracker and display the first item.
Testnode = nuke.toNode("CameraTracker1") #change this to your tracker node!
Return = ExportCameraTrack(Testnode)
for item in Return[0]:
    print(item)

Remember: if you are dealing with 1000+ features, you need to bake keyframes rather than use expressions, as expressions will slow the Nuke script down immensely.
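As a minimal sketch of what baking can look like (the NoOp host node and the knob name are arbitrary examples on my part, not part of the script above), you can write one exported feature into animated keyframes on an XY knob:

import nuke

def bake_feature(feature):
    '''feature is one entry from ExportCameraTrack(): [[frame, x, y], ...]'''
    node = nuke.nodes.NoOp(name='BakedFeature') #arbitrary host node for the keyframes
    knob = nuke.XY_Knob('track_pos', 'track_pos')
    node.addKnob(knob)
    knob.setAnimated(0) #animate the X channel
    knob.setAnimated(1) #animate the Y channel
    for frame, x, y in feature:
        knob.setValueAt(float(x), frame, 0) #baked X keyframe
        knob.setValueAt(float(y), frame, 1) #baked Y keyframe
    return node

#Usage: bake the first exported feature.
#bake_feature(ExportCameraTrack(nuke.toNode("CameraTracker1"))[0])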

 

Manual Single Feature Track

I did some additional tests with this, for example building the reverse of this script, which gave me the option to add 2D tracks from a Tracker node to the 3D CameraTracker node.

 

Object Solver

Now, this is not related to the 2D tracking points, but it is still a simple thing that should be included in the tracker.

Nuke Object Tracking

Reading Lidar data into Nuke.

October 20, 2013

Nuke Lidar Reader

 

Did some testing with the pointclouder Python script I wrote for Nuke. Attaching it to a CSV reader, I loaded in some examples posted on the Nuke user forums: http://forums.thefoundry.co.uk/phpBB2/viewtopic.php?t=6982&postdays=0&postorder=asc&start=0 . Sadly the data only included luminance and not color data, but it still gives quite a good readout.

5 million points is a bit much to work with, but filtering off 80% of the points still gives a great result.
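A minimal sketch of that kind of decimation while reading (the file path and the x/y/z/luminance column layout are assumptions on my part, not the actual forum data format):

import csv

def read_lidar_points(path, keep_every=5):
    '''Keep roughly one point in five (i.e. drop ~80%) while reading the CSV.'''
    points = []
    with open(path) as f:
        for i, row in enumerate(csv.reader(f)):
            if i % keep_every: #skip 4 of every 5 rows
                continue
            x, y, z, lum = (float(v) for v in row[:4])
            points.append((x, y, z, lum))
    return points

#points = read_lidar_points('/path/to/scan.csv') #hypothetical file path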