Here is the first teaser for my Nuke BlinkScript called Weave.
Hey guys, thanks a lot for all the attention this blog has gained recently. I do feel a bit bad that I don’t have any updates, but it is not because I haven’t been doing anything interesting.
I have begun digging deep into the BlinkScript node, and have made some quite interesting things, such as an OpenVDB reader and renderer.
The thing is, recently I began making so much progress that it would not make sense to post anything here, as it would be outdated the very next day. I had the issue that I was jumping from research project to research project, gaining a lot of knowledge but without finishing them up for release. So I decided to go back and work through each of them one by one.
So instead of posting a ton of posts here, I have begun posting incremental updates on my Twitter https://twitter.com/xads and once the projects are rounded up I will post full updates here.
The rough plan is that I will start with some of the 2D processing scripts, then the 3D versions of those, and then the voxel renderer.
Since New Year I have gotten a new job and a new website theme, and have generally been as busy as always.
The new website theme has allowed me to do more with articles and generally makes it easier to navigate the site. However, there are still quite a few sections that I need to finish, so please don’t mind the many temporary sections. Here are some of the projects I have been working on over the past few weeks:
Silk for Nuke is a BlinkScript-powered toolset that generates silky strings from an image input.
I have also been working on a procedural lightning generator for Nuke. I have not yet found a name for it, but you will hear more about it in the near future.
Lastly, I’m working on a 3D renderer that is also written in the BlinkScript node.
The Foundry’s VR toolkit is still only available for select companies. So just out of curiosity, I have decided to make a little toolkit myself for the day-to-day needs. One of the more simple yet quite handy tools is a VR viewer for NukeStudio and Hiero.
It supports both mono and stereo VR plates.
Here is a little demo of the tool:
Creating cleanplates is a common task in the VFX world.
When dealing with crowded/busy plates, such as an intersection with pedestrians and cars, fireworks, snow, rain, etc., it’s nice to use some form of automation, at least to get a good base.
Rich Frazer wrote an excellent example of how this process can also be automated by using the motion-estimation plugins in NukeX to clean based on motion.
This method works well for larger moving objects, but it can be somewhat time-consuming and won’t work well on heavy plates such as this:
There is also the excellent plugin by keller.oi called Superpose, which seems to be the ultimate off-the-shelf solution to the problem.
But if we don’t have NukeX, and the boss won’t spend money on plugins, we can try the automation on our own.
The first thing that comes to mind is to stabilize the plate and use a FrameBlend, but that usually ends up causing a lot of streaking and changes in luminance and/or chrominance:
One of the better ways is to use the TemporalMedian node; however, it only works across 3 frames. You could make a custom gizmo that combines a ton of TemporalMedian nodes, but what you can get from a median of just 3 samples is rather limited.
This is where BlinkScript is super handy, as it comes with a built-in median function.
So with nothing more than a few lines of code (1 line of process code), you can create a tool that allows you to do fine automated cleanplates in regular Nuke.
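To make the idea concrete: conceptually, the kernel just takes one sample per frame at each pixel and keeps the median. Here is a minimal Python sketch of that per-pixel operation (the real tool does this inside a BlinkScript kernel with Blink's built-in median; the function name and values below are just for illustration):

```python
def temporal_median(samples):
    """Return the median of per-frame pixel samples for one channel."""
    ordered = sorted(samples)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    # Even sample count: average the two middle values.
    return 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])

# A pedestrian passes through this pixel on 2 of the 7 sampled frames;
# the median ignores the outliers and returns the clean background value.
samples = [0.18, 0.18, 0.95, 0.17, 0.19, 0.92, 0.18]
print(temporal_median(samples))  # → 0.18
```

Because the median picks the most "typical" value rather than averaging, transient foreground objects drop out cleanly instead of leaving the streaks you get from a FrameBlend.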
FrameMedian is a BlinkScript that allows you to do a TemporalMedian over up to 20 frames.
Using the “Frame Range” process method, you can specify a start and end frame, and how many samples you want.
Then FrameMedian distributes samples evenly across that range.
You can also use the “Specified Frames” method, which allows you to specify exactly which frames you want to sample.
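For the “Frame Range” method, distributing the samples evenly is a simple interpolation over the range. The exact rounding FrameMedian uses isn't documented here, so this is just one plausible scheme, sketched in Python:

```python
def frame_range_samples(start, end, count):
    """Distribute `count` sample frames evenly across [start, end], inclusive."""
    if count == 1:
        return [start]
    step = (end - start) / float(count - 1)
    return [int(round(start + i * step)) for i in range(count)]

# 5 samples spread across a 100-frame range:
print(frame_range_samples(1001, 1100, 5))
```

With “Specified Frames” you would skip this step entirely and pass your hand-picked frame list straight to the sampler.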
For the best result, the input plate must be stabilized and must not have too much chroma/luma variation. So in the case of day-to-night timelapses, pick frames or a frame range within the same lighting.
Due to the nature of the median function, more samples is not always better, so it’s good to try out different frame ranges and sample counts.
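One way to see why more samples can hurt: a median only removes a foreground object from a pixel if the object is absent in more than half of the samples. Extra frames help only if they are clean at that pixel. A quick check with made-up values (reusing the hand-rolled median from above):

```python
def temporal_median(samples):
    """Median of per-frame pixel samples for one channel."""
    ordered = sorted(samples)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    return 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])

background, foreground = 0.2, 0.9

# 5 samples, object visible in 2 of them: the background wins.
print(temporal_median([background] * 3 + [foreground] * 2))  # → 0.2

# 9 samples, but the 4 extra frames all contain the object, so it is
# now visible in 5 of 9: the majority flips and the result gets worse.
print(temporal_median([background] * 4 + [foreground] * 5))  # → 0.9
```

So when you bump up the sample count, make sure the added frames actually show the background in the areas you are trying to clean.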
The node in action: