I have a canvas in my browser that displays a feed from my webcam. What I want to do is send the canvas data to my Node.js server, manipulate it, and send it back.
I can do this by sending the canvas data via socket.io like so:
socket.emit('canvas_data', canvas.toDataURL());
And then rebuilding it on the nodejs server:
const { createCanvas, Image } = require('canvas'); // node-canvas

const img = new Image();
img.src = data; // this is the canvas_data from the first step
const canvas = createCanvas(640, 480);
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, 640, 480);
However, this seems really wasteful: I'm taking an already-rendered canvas, converting it to base64, sending it, and then rebuilding it on the other side.
The whole point of this is to use tfjs on the server side:
let converted = tfjs.browser.fromPixels(canvas);
If I just send the canvas from the first step:
socket.emit('canvas_data', canvas);
And then run tfjs:
let converted = tfjs.browser.fromPixels(data);
I get the following error:
Error: pixels passed to tf.browser.fromPixels() must be either an HTMLVideoElement, HTMLImageElement, HTMLCanvasElement, ImageData in browser, or OffscreenCanvas, ImageData in webworker or {data: Uint32Array, width: number, height: number}, but was object
Is there a more efficient way to accomplish this?
Using toDataURL is always going to be slow, as the browser has to encode all the pixel data before sending it.
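For illustration, a minimal client-side sketch (assuming the same canvas and an existing socket.io connection): getImageData exposes the raw RGBA bytes, and socket.io transmits TypedArrays/ArrayBuffers as binary, so the encoding step disappears entirely.

const ctx = canvas.getContext('2d');
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// imageData.data is a Uint8ClampedArray of RGBA bytes; emit it as binary
socket.emit('canvas_data', {
  width: canvas.width,
  height: canvas.height,
  pixels: imageData.data.buffer,
});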
Your second approach is better; on the Node side you just need to create a tensor directly from the Buffer you receive on the socket (that is the fastest way), no need for higher-level functions such as fromPixels. Take a look at https://github.com/vladmandic/anime/blob/main/sockets/anime.ts for client-side code and https://github.com/vladmandic/anime/blob/main/sockets/server.ts for server-side code.
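A minimal server-side sketch of that idea, assuming @tensorflow/tfjs-node and the payload shape from the client sketch above (the event and field names are illustrative):

const tf = require('@tensorflow/tfjs-node');

socket.on('canvas_data', (data) => {
  // socket.io delivers the binary payload as a Buffer/ArrayBuffer;
  // build the tensor straight from the bytes, no Image/Canvas rebuild
  const pixels = new Uint8Array(data.pixels);
  const rgba = tf.tensor3d(pixels, [data.height, data.width, 4], 'int32');
  // ... run inference here, then dispose to avoid leaking tensor memory
  rgba.dispose();
});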
Note that you may also need to account for channel depth (does your model work with rgba or rgb) and/or any model-specific pre-processing normalization; that's handled in https://github.com/vladmandic/anime/blob/main/sockets/inference.ts
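A sketch of that kind of pre-processing, assuming the rgba tensor from the example above and a model that expects normalized rgb input:

// drop the alpha channel: keep all rows and columns, first 3 channels
const rgb = tf.slice(rgba, [0, 0, 0], [-1, -1, 3]);
// scale int32 pixel values to floats in [0, 1]
const normalized = tf.div(tf.cast(rgb, 'float32'), 255);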