I am developing an application in the Unity engine for the Microsoft HoloLens that uses the camera to take pictures. In our code, photo mode is started and the camera is activated, a picture is taken, then the camera is disposed of and photo mode is ended. The user must take several pictures over the course of this app for its primary functionality. The pictures aren't stored anywhere; we only grab colors from them.
Here is the photo-taking code:
Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

// Create a PhotoCapture object
PhotoCapture.CreateAsync(false, delegate (PhotoCapture captureObject)
{
    photoCaptureObject = captureObject;

    CameraParameters cameraParameters = new CameraParameters();
    cameraParameters.hologramOpacity = 0.0f;
    cameraParameters.cameraResolutionWidth = cameraResolution.width;
    cameraParameters.cameraResolutionHeight = cameraResolution.height;
    cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

    // Activate the camera
    photoCaptureObject.StartPhotoModeAsync(cameraParameters, delegate (PhotoCapture.PhotoCaptureResult result)
    {
        // Take a picture
        try
        {
            Debug.Log("Trying to take photo");
            photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
        }
        catch (System.ArgumentException e)
        {
            Debug.LogError("System.ArgumentException:\n" + e.Message);
        }
    });
});
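For context, the OnCapturedPhotoToMemory callback boils down to the following (a simplified sketch; the real color-sampling logic depends on where the user tapped, and targetTexture / photoCaptureObject are the fields shown above):

void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    if (result.success)
    {
        // Copy the captured image data into our target texture
        photoCaptureFrame.UploadImageDataToTexture(targetTexture);

        // Grab a color from the texture (center pixel here as a placeholder;
        // the real coordinates come from the user's tap)
        Color sampled = targetTexture.GetPixel(targetTexture.width / 2, targetTexture.height / 2);
        // ... use the sampled color ...
    }

    // Stop photo mode so the capture object can be cleaned up
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}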
The capture object is then disposed of with:
void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown our photo capture resource
    Debug.Log("Disposing of camera");
    photoCaptureObject.Dispose();
    photoCaptureObject = null;
}
This code works perfectly fine for the purpose of our project - we take a picture and grab a color from it each time the user taps on an object.
Since this is for a senior design project in a CS course, we are expected to show a video or a live demo to the class.
However, recording is always stopped as soon as our application attempts to take a picture. We are unable to record a video with the webcam and take pictures using the code above at the same time. This makes sense: it seems our application has to preempt the webcam away from the recording process in order to use it. The same thing happens when streaming video through the device portal.
What this means is that we can never record a demo of our functioning project; the video recording ends as soon as our app accesses the camera.
I have found posts and threads from years ago asking about this, but none have ever been resolved. Is there a known way around this now? Any way for me to get a video of my project while still using it to take pictures inside the application?
I'm not saying it's impossible, but while your app has the camera active, the device portal's screenshot and video-capture features are disabled.
I used a voice command to release the camera (without altering the scene) so that I could take screenshots. Since all I was using the device's camera for was Vuforia object recognition, the screenshots were acceptable as long as I didn't move (too much).
As such I never looked around for another way. There probably is one (seeing as Microsoft has been able to present it), but, like a lot of HoloLens features, it might not be anything that we, as external developers, can access.
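If you want to try the same workaround, here is a rough sketch of the voice command (KeywordRecognizer is Unity's built-in recognizer; the phrase and ReleaseCamera() are placeholders for whatever your app uses to stop photo mode and dispose of the capture object):

using UnityEngine;
using UnityEngine.Windows.Speech;

public class CameraReleaseCommand : MonoBehaviour
{
    private KeywordRecognizer keywordRecognizer;

    void Start()
    {
        // Listen for a single phrase that frees up the camera for the portal's capture tools
        keywordRecognizer = new KeywordRecognizer(new[] { "release camera" });
        keywordRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        keywordRecognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Release the camera without otherwise changing the scene
        ReleaseCamera();
    }

    private void ReleaseCamera()
    {
        // Placeholder: in your case this would be the StopPhotoModeAsync/Dispose calls,
        // in mine it stopped Vuforia's use of the camera.
    }

    void OnDestroy()
    {
        if (keywordRecognizer != null)
        {
            if (keywordRecognizer.IsRunning)
            {
                keywordRecognizer.Stop();
            }
            keywordRecognizer.Dispose();
        }
    }
}

The recognizer keeps listening in the background, so you can free the camera at any point without having to interact with the scene.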