I've been using Vuforia for a while in a Framemarker-based augmented reality app on iOS. I've grown tired of using Kitware's VTK and would like to switch to SceneKit. How do I best draw into the Scene's background using Metal and Vuforia?
My current technique (Objective-C) has involved grabbing out each frame of video from Vuforia:
- (void)Vuforia_onUpdate:(Vuforia::State *)state
{
    Vuforia::Frame frame = state->getFrame();
    for (int i = 0; i < frame.getNumImages(); i++) {
        const Vuforia::Image *image = frame.getImage(i);
        if (image->getFormat() == kVuforiaFrameFormat) {
            [self.delegate updateCameraSceneWithBitmap:image->getPixels()
                                                  size:CGSizeMake(image->getWidth(), image->getHeight())
                                                format:OARCameraImageFormatRGB];
        }
    }
    // I also grab out the frame marker poses and convert the matrices to
    // SceneKit-compatible ones.
}
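For the pose conversion mentioned in the comment, the gist is to promote Vuforia's row-major 3×4 pose to a 4×4 and transpose it, since SCNMatrix4 is laid out column-wise. A rough sketch of what I do (assuming the twelve floats of a `Vuforia::Matrix34F` have been copied into a Swift array across the Objective-C++ boundary; that bridging is my own plumbing):

```swift
import SceneKit

// Sketch only: `pose` is assumed to hold the 12 floats of a Vuforia::Matrix34F
// (row-major, 3 rows x 4 columns) handed across the Objective-C++ boundary.
func sceneKitMatrix(fromVuforiaPose pose: [Float]) -> SCNMatrix4 {
    precondition(pose.count == 12)
    // Vuforia stores the pose row-major; SceneKit expects the transpose,
    // with a (0, 0, 0, 1) bottom row completing the 4x4.
    return SCNMatrix4(
        m11: pose[0], m12: pose[4], m13: pose[8],  m14: 0,
        m21: pose[1], m22: pose[5], m23: pose[9],  m24: 0,
        m31: pose[2], m32: pose[6], m33: pose[10], m34: 0,
        m41: pose[3], m42: pose[7], m43: pose[11], m44: 1
    )
}
```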
Then in Swift-land:
// Get image into UIImage and assign to background
scene.background.contents = backgroundImage
This obviously involves unnecessary copying, as well as converting each frame into a CGImage and wrapping that in a UIImage. The app currently uses about 50% of the CPU, has a high energy impact, and idles at around 180 MB of memory.
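For comparison, the kind of approach I'm hoping for would look roughly like this (a sketch, assuming `SCNMaterialProperty` accepts an `id<MTLTexture>` as `contents` and that the camera bytes arrive tightly packed RGBA; the class and method names are invented):

```swift
import SceneKit
import Metal

// Sketch: keep one MTLTexture alive, blit each Vuforia frame into it, and
// hand the texture straight to SceneKit's background, skipping UIImage.
final class CameraBackground {
    private let device: MTLDevice
    private var texture: MTLTexture?

    init?(device: MTLDevice? = MTLCreateSystemDefaultDevice()) {
        guard let device = device else { return nil }
        self.device = device
    }

    func update(scene: SCNScene, bitmap: UnsafeRawPointer, width: Int, height: Int) {
        if texture == nil || texture!.width != width || texture!.height != height {
            let desc = MTLTextureDescriptor.texture2DDescriptor(
                pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
            desc.usage = .shaderRead
            texture = device.makeTexture(descriptor: desc)
            scene.background.contents = texture
        }
        // Copy the frame's bytes into the texture; assumes tightly packed RGBA.
        texture?.replace(region: MTLRegionMake2D(0, 0, width, height),
                         mipmapLevel: 0,
                         withBytes: bitmap,
                         bytesPerRow: width * 4)
    }
}
```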
I've also found this project, which uses an older version of Vuforia with SceneKit, but it doesn't perform very well and it takes over the UIView. What I would really like is a way to add the Vuforia camera draw to an SCNRenderer or SCNView in a way that minimizes C++ bleed.
How do I use Metal to draw the camera data coming from Vuforia into the SCNScene's background, so that I can compartmentalize the C++ code in Objective-C++ and otherwise work in Swift?
One way to handle the problem is to use the same method I used with VTK: simply pass a UIImage into the scene's background contents. This isn't optimal from a performance perspective, but it works.
func didUpdateCameraImage(image: UIImage?)
{
    if let image = image {
        _scene?.background.contents = image
    }
}
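The UIImage itself is built from the raw bytes handed over by the Objective-C++ side. A sketch of that conversion (assuming tightly packed 24-bit RGB, matching `OARCameraImageFormatRGB` in the question; the function name is my own):

```swift
import UIKit

// Sketch of the bitmap -> UIImage step that feeds didUpdateCameraImage(image:).
// Assumes the Objective-C++ side hands over tightly packed 24-bit RGB data.
func cameraImage(fromRGB bytes: Data, width: Int, height: Int) -> UIImage? {
    guard let provider = CGDataProvider(data: bytes as CFData),
          let cgImage = CGImage(width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bitsPerPixel: 24,
                                bytesPerRow: width * 3,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                provider: provider,
                                decode: nil,
                                shouldInterpolate: false,
                                intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Every frame still allocates and copies, which is why the CPU and memory numbers in the question look the way they do.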