This question is addressed to developers using C++ and the NDK of Nuke.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and DD::Image::Executable. The node iterates over a range of frames, extracting information at each frame, which is stored in a custom data structure. A custom knob, which is a member variable of the above Op (but invisible in the UI), handles the loading and saving (serialization) of the data structure.
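For concreteness, a stripped-down sketch of that setup might look like this. All names (UpstreamOp, YourCustomData, getData) are placeholders, the constructor and override signatures are assumed from the NDK headers, and the usual Op boilerplate is left out:
//! @file UpstreamOp.h -- minimal sketch; all names are placeholders
#pragma once

#include <vector>

#include "DDImage/NoIop.h"
#include "DDImage/Executable.h"

// Per-frame information collected during execution (layout is just an example).
struct YourCustomData
{
  struct FrameSample
  {
    int frame;
    std::vector< float > values;
  };
  std::vector< FrameSample > samples;
};

class UpstreamOp : public DD::Image::NoIop, public DD::Image::Executable
{
public:
  UpstreamOp( Node * node )
    : DD::Image::NoIop( node )
    , DD::Image::Executable( this )
  {}

  // Executable: called once per frame of the execution range; fills _data.
  void execute()
  {
    // ... extract information for the current frame and append it to _data ...
  }

  // Accessor used by downstream Ops after a dynamic_cast (see the answer below).
  YourCustomData * getData() { return &_data; }

  // Class(), node_help(), knobs() -- including the invisible custom knob that
  // serializes _data -- and the Op Description are omitted for brevity.

private:
  YourCustomData _data;
};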
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking. Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A node further down the processing tree could grab that and de-serialize again. Of course, the channel must not be altered between serialization and de-serialization or else ... this is a hack, I know, but, hey, any port in a storm!
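Reusing the hypothetical YourCustomData from the sketch above, the packing/unpacking half of that idea could be as simple as flattening the structure into floats; the Nuke-specific part of actually writing and reading the extra channel is omitted, and integer fields are assumed to fit into a float without loss:
// Hypothetical helpers: flatten YourCustomData into floats that could be
// written into a dedicated channel, and restore the structure downstream.
#include <cstddef>
#include <vector>

std::vector< float > packData( const YourCustomData & data )
{
  std::vector< float > packed;
  packed.push_back( static_cast< float >( data.samples.size() ) );
  for ( const YourCustomData::FrameSample & s : data.samples )
  {
    // Frame numbers and counts are assumed small enough to round-trip via float.
    packed.push_back( static_cast< float >( s.frame ) );
    packed.push_back( static_cast< float >( s.values.size() ) );
    packed.insert( packed.end(), s.values.begin(), s.values.end() );
  }
  return packed;
}

YourCustomData unpackData( const std::vector< float > & packed )
{
  YourCustomData data;
  std::size_t i = 0;
  const std::size_t numSamples = static_cast< std::size_t >( packed[ i++ ] );
  for ( std::size_t n = 0; n < numSamples; ++n )
  {
    YourCustomData::FrameSample s;
    s.frame = static_cast< int >( packed[ i++ ] );
    const std::size_t numValues = static_cast< std::size_t >( packed[ i++ ] );
    s.values.assign( packed.begin() + i, packed.begin() + i + numValues );
    i += numValues;
    data.samples.push_back( s );
  }
  return data;
}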
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately, it isn't in my case), I could turn the above node into a 3D node and pass point data to other 3D nodes. At some point a render node would be required to come back to 2D.
Am I going in the correct direction with this? If not, what is a sensible approach to make this data structure available to other nodes that rely on the information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the input to that class type and access it directly. A simple example could be this snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
  YourCustomData * data = upstreamOp->getData();
  // ...
}
// ...
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop plugin to another. But these two plugins are defined in different dso/dll files. How did you get this to work?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say UpstreamOp.cpp and DownstreamOp.cpp define the depending Ops. In a first attempt I compiled the first plugin using only UpstreamOp.cpp, as usual. For the second plugin I compiled both DownstreamOp.cpp and UpstreamOp.cpp into that plugin.
Strangely enough that worked (on Linux; didn't test Windows).
However, by overriding bool Op::test_input( int input, Op * op ) const; things will break. Creating and saving a Comp using the above plugins still works. But loading that same Comp again breaks the connection in the node graph between UpstreamOp and DownstreamOp, and it is no longer possible to connect them again.
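An override of that kind might look like the sketch below (DownstreamOp's base class and the input index are assumptions, not confirmed by the original code):
//! @file DownstreamOp.cpp -- sketch of the kind of test_input() override meant here
#include "DownstreamOp.h"  // assumed to declare DownstreamOp, derived from DD::Image::Iop
#include "UpstreamOp.h"    // declares UpstreamOp and YourCustomData

bool DownstreamOp::test_input( int input, DD::Image::Op * op ) const
{
  if ( input == 0 )
  {
    // Only accept an UpstreamOp on input 0. This is the dynamic_cast that
    // starts failing once a second copy of UpstreamOp's symbols has been
    // loaded from another plugin.
    return dynamic_cast< UpstreamOp * >( op ) != nullptr;
  }
  return DD::Image::Iop::test_input( input, op );
}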
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it depends on the load order of the plugins whether a node uses instances of UpstreamOp from the first or from the second plugin. So, if UpstreamOp from the first plugin is used, then any dynamic_cast in Op::test_input() will fail and the two Ops can no longer be connected.
It is still surprising that Nuke would even bother to start at all with the above configuration, since it can be rather picky about symbols from plugins, e.g. if they are missing.
Anyway, to get around this problem I did the following: compile both Ops into one plugin (e.g. myplugins, as loaded below) and add menu entries which create the two node types from it. An example for a TCL script can be found in the dev guide, and the instructions for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand( 'UpstreamOp', lambda: nuke.createNode( 'UpstreamOp' ) )
menu.addCommand( 'DownstreamOp', lambda: nuke.createNode( 'DownstreamOp' ) )
nuke.load( 'myplugins' )
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).