Tags: typescript, tensorflow, web-worker, tensorflow.js

How to run the handpose tfjs model in a web worker


I want to use the webcam to capture frames and run the TensorFlow.js "handpose" model to estimate hand visibility. As we know, the handpose model is a little slow, so I'm trying to move the estimation into a web worker.

The problem is that an HTMLVideoElement object cannot be cloned, so it can't be posted to the worker (and I need to pass the video to the estimateHands method).

Is it possible to do this another way?

web worker:

import { getModel } from "../utils/tf-model";

let model: any;

// Load the handpose model once when the worker starts.
const modelLoading = async () => {
    model = await getModel();
}

// Called from the main thread through the comlink proxy.
export const estimate = async (video: HTMLVideoElement) => {
    if (model) {
        const hands = await model.estimateHands(video);
        return hands;
    }
    return null;
}

modelLoading();

// comlink-loader expects a default export typed as a Worker constructor.
export default {} as typeof Worker & (new () => Worker);

main thread:

import MyWorker from 'comlink-loader!./worker';

const worker = new MyWorker();

...

func = async () => {
    // Fails here: "HTMLVideoElement object could not be cloned."
    const res = await worker.estimate(this.video);
    ...
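
For context, the video element itself can be fed from the webcam with the standard getUserMedia API. A minimal sketch, assuming this.video is the same HTMLVideoElement that is later passed to the worker:

// Webcam setup sketch (assumption: `video` is the element referenced as this.video above).
const startWebcam = async (video: HTMLVideoElement) => {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    video.srcObject = stream;
    await video.play();
};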

EDIT: I found a solution on my own. It's possible to get ImageData from a canvas context and pass that to the web worker instead:

...
const canvas = document.createElement('canvas');
// the canvas has to match the video dimensions
canvas.width = this.video.videoWidth;
canvas.height = this.video.videoHeight;

const ctx = canvas.getContext('2d')!;
ctx.drawImage(this.video, 0, 0, canvas.width, canvas.height);
const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
const res = await worker.estimate(img);

Solution

  • OK, I found the solution. It's true that we can't pass HTML elements as parameters to a web worker, but we can work with the canvas context as shown below:

    ...
    const canvas = document.createElement('canvas');
    // the canvas has to match the video dimensions
    canvas.width = this.video.videoWidth;
    canvas.height = this.video.videoHeight;

    const ctx = canvas.getContext('2d')!;
    ctx.drawImage(this.video, 0, 0, canvas.width, canvas.height);
    const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const res = await worker.estimate(img);
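
    On the worker side, only the parameter type of estimate needs to change, since handpose's estimateHands accepts ImageData as well as video and canvas elements. A minimal sketch of the updated worker, reusing the getModel helper from the question:

    // Worker sketch: estimate now receives ImageData instead of an HTMLVideoElement.
    import { getModel } from "../utils/tf-model";

    let model: any;

    const modelLoading = async () => {
        model = await getModel();
    }

    // estimateHands accepts ImageData directly, so no conversion is needed here.
    export const estimate = async (img: ImageData) => {
        if (model) {
            return await model.estimateHands(img);
        }
        return null;
    }

    modelLoading();

    export default {} as typeof Worker & (new () => Worker);

    The main-thread call stays the same apart from passing the ImageData instead of the video element.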