How to create video srcObject from VideoFrame? - javascript

I'm learning WebCodecs now, and I wondered whether it's possible to play video on a video element from several pictures. I tried many times but it still doesn't work.
I create a VideoFrame from each picture, then use MediaStreamTrackGenerator to create a media track. But the video appears black when I call play().
Here is my code:
const test = async () => {
  const imgSrcList = [
    'https://gw.alicdn.com/imgextra/i4/O1CN01CeTlwJ1Pji9Pu6KW6_!!6000000001877-2-tps-62-66.png',
    'https://gw.alicdn.com/imgextra/i3/O1CN01h7tWZr1ZiTEk1K02I_!!6000000003228-2-tps-62-66.png',
    'https://gw.alicdn.com/imgextra/i4/O1CN01CSwWiA1xflg5TnI9b_!!6000000006471-2-tps-62-66.png',
  ];
  const imgEleList: HTMLImageElement[] = [];
  await Promise.all(
    imgSrcList.map((src, index) => {
      return new Promise((resolve) => {
        let img = new Image();
        img.src = src;
        img.crossOrigin = 'anonymous';
        img.onload = () => {
          imgEleList[index] = img;
          resolve(true);
        };
      });
    }),
  );
  const trackGenerator = new MediaStreamTrackGenerator({ kind: 'video' });
  const writer = trackGenerator.writable.getWriter();
  await writer.ready;
  for (let i = 0; i < imgEleList.length; i++) {
    const frame = new VideoFrame(imgEleList[i], {
      duration: 500,
      timestamp: i * 500,
      alpha: 'keep',
    });
    await writer.write(frame);
    frame.close();
  }
  // Call ready again to ensure that all chunks are written before closing the writer.
  await writer.ready.then(() => {
    writer.close();
  });
  const stream = new MediaStream();
  stream.addTrack(trackGenerator);
  const videoEle = document.getElementById('video') as HTMLVideoElement;
  videoEle.onloadedmetadata = () => {
    videoEle.play();
  };
  videoEle.srcObject = stream;
};
Thanks!

Disclaimer:
I am not an expert in this field and it's my first use of this API in this way. The specs and the current implementation don't seem to match, and it's very likely that things will change in the near future. So take this answer with all the salt you can, it is only backed by trial.
There are a few things that seem wrong in your implementation:
duration and timestamp are set in microseconds, i.e. 1/1,000,000 s. Your 500 duration is thus only half a millisecond; that would be something like 2000 FPS, and your three images would all be displayed within 1.5 ms. You will want to change that.
In current Chrome's implementation, you need to specify the displayWidth and displayHeight members of the VideoFrameInit dictionary (though if I read the specs correctly that should have defaulted to the source image's width and height).
Then there is something I'm less sure about, but it seems that you can't batch-write many frames. It seems that the timestamp field is kind of useless in this case (even though it's required, even with nonsensical values). Once again, the specs have changed, so it's hard to know if it's an implementation bug or if it's supposed to work like that, but anyway it is how it is (unless I too missed something).
So to work around that limitation, you'll need to write to the stream periodically, appending each frame when you want it to appear.
Here is one example of this, trying to keep it close to your own implementation by writing a new frame to the WritableStream when we want it to be presented.
const test = async () => {
  const imgSrcList = [
    'https://gw.alicdn.com/imgextra/i4/O1CN01CeTlwJ1Pji9Pu6KW6_!!6000000001877-2-tps-62-66.png',
    'https://gw.alicdn.com/imgextra/i3/O1CN01h7tWZr1ZiTEk1K02I_!!6000000003228-2-tps-62-66.png',
    'https://gw.alicdn.com/imgextra/i4/O1CN01CSwWiA1xflg5TnI9b_!!6000000006471-2-tps-62-66.png',
  ];
  // rewrote this part to use ImageBitmaps,
  // using HTMLImageElement works too
  // but it's less efficient
  const imgEleList = await Promise.all(
    imgSrcList.map((src) => fetch(src)
      .then(resp => resp.ok && resp.blob())
      .then(createImageBitmap)
    )
  );
  const trackGenerator = new MediaStreamTrackGenerator({
    kind: 'video'
  });
  const duration = 1000 * 1000; // in µs (1/1,000,000s)
  let i = 0;
  const presentFrame = async () => {
    i++;
    const writer = trackGenerator.writable.getWriter();
    const img = imgEleList[i % imgEleList.length];
    await writer.ready;
    const frame = new VideoFrame(img, {
      duration, // value doesn't mean much, but required
      timestamp: i * duration, // ditto
      alpha: 'keep',
      displayWidth: img.width * 2, // required
      displayHeight: img.height * 2, // required
    });
    await writer.write(frame);
    frame.close();
    await writer.ready;
    // unlock our Writable so we can write again at next frame
    writer.releaseLock();
    setTimeout(presentFrame, duration / 1000); // µs → ms
  };
  presentFrame();
  const stream = new MediaStream();
  stream.addTrack(trackGenerator);
  const videoEle = document.getElementById('video');
  videoEle.srcObject = stream;
};
test().catch(console.error);
<video id=video controls autoplay muted></video>

Related

Save ALL video frames to folder in Javascript [duplicate]

I'm working on a client-side project which lets a user supply a video file and apply basic manipulations to it. I'm trying to extract the frames from the video reliably. At the moment I have a <video> which I'm loading the selected video into, and then pulling out each frame as follows:
Seek to the beginning
Pause the video
Draw <video> to a <canvas>
Capture the frame from the canvas with .toDataURL()
Seek forward by 1 / 30 seconds (1 frame).
Rinse and repeat
This is a rather inefficient process, and more specifically, it's proving unreliable as I'm often getting stuck frames. This seems to stem from the actual <video> element not updating before it is drawn to the canvas.
I'd rather not have to upload the original video to the server just to split the frames, and then download them back to the client.
Any suggestions for a better way to do this are greatly appreciated. The only caveat is that I need it to work with any format the browser supports (decoding in JS isn't a great option).
[2021 update]: Since this question (and answer) was first posted, things have evolved in this area, and it is finally time for an update; the method that was exposed here went out of date, but luckily a few new or incoming APIs can help us better in extracting video frames:
The most promising and powerful one, but still under development, with a lot of restrictions: WebCodecs
This new API unleashes access to the media decoders and encoders, enabling us to access raw data from video frames (YUV planes), which may be a lot more useful for many applications than rendered frames. For those who need rendered frames, the VideoFrame interface that this API exposes can be drawn directly to a <canvas> element or converted to an ImageBitmap, avoiding the slow route through the MediaElement.
However there is a catch: apart from its currently low support, this API requires that the input has already been demuxed.
There are some demuxers online; for instance, for MP4 videos, GPAC's mp4box.js will help a lot.
A full example can be found on the proposal's repo.
The key part consists of
const decoder = new VideoDecoder({
  output: onFrame, // the callback to handle all the VideoFrame objects
  error: e => console.error(e),
});
decoder.configure(config); // depends on the input file, your demuxer should provide it
demuxer.start((chunk) => { // depends on the demuxer, but you need it to return chunks of video data
  decoder.decode(chunk); // will trigger our onFrame callback
});
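For instance, with mp4box.js the demuxing side could look roughly like the sketch below, reusing the decoder from above. This is only a sketch based on mp4box.js's onReady/onSamples callbacks: the codec description needed for H.264 (extracted from the avcC box) is omitted for brevity, and url is a placeholder for your own file.

// Sketch: feeding an MP4 file through mp4box.js into the VideoDecoder above.
// Assumes mp4box.js is loaded globally as MP4Box; H.264 streams also need a
// `description` (from the avcC box) in the decoder config, omitted here.
const file = MP4Box.createFile();
file.onReady = (info) => {
  const track = info.videoTracks[0];
  decoder.configure({
    codec: track.codec,
    codedWidth: track.video.width,
    codedHeight: track.video.height,
  });
  file.setExtractionOptions(track.id);
  file.start();
};
file.onSamples = (trackId, ref, samples) => {
  for (const sample of samples) {
    decoder.decode(new EncodedVideoChunk({
      type: sample.is_sync ? "key" : "delta",
      timestamp: (sample.cts * 1_000_000) / sample.timescale, // µs
      duration: (sample.duration * 1_000_000) / sample.timescale,
      data: sample.data,
    }));
  }
};
fetch(url) // `url` is a placeholder for your MP4 file
  .then((resp) => resp.arrayBuffer())
  .then((buffer) => {
    buffer.fileStart = 0; // mp4box.js expects the byte offset on each buffer
    file.appendBuffer(buffer);
    file.flush();
  });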
Note that we can even grab the frames of a MediaStream, thanks to MediaCapture Transform's MediaStreamTrackProcessor.
This means that we should be able to combine HTMLMediaElement.captureStream() and this API in order to get our VideoFrames, without the need for a demuxer. However, this is true only for a few codecs, and it means that we will extract frames at playback speed...
Anyway, here is an example working on latest Chromium based browsers, with chrome://flags/#enable-experimental-web-platform-features switched on:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

button.onclick = async (evt) => {
  if (window.MediaStreamTrackProcessor) {
    let stopped = false;
    const track = await getVideoTrack();
    const processor = new MediaStreamTrackProcessor(track);
    const reader = processor.readable.getReader();
    readChunk();

    function readChunk() {
      reader.read().then(async ({ done, value }) => {
        if (value) {
          const bitmap = await createImageBitmap(value);
          const index = frames.length;
          frames.push(bitmap);
          select.append(new Option("Frame #" + (index + 1), index));
          value.close();
        }
        if (!done && !stopped) {
          readChunk();
        } else {
          select.disabled = false;
        }
      });
    }

    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};

select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};

async function getVideoTrack() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  const [track] = video.captureStream().getVideoTracks();
  video.onended = (evt) => track.stop();
  return track;
}

video, canvas {
  max-width: 100%;
}

<button>start</button>
<select disabled></select>
<canvas></canvas>
The easiest to use, but still with relatively poor browser support, and subject to the browser dropping frames: HTMLVideoElement.requestVideoFrameCallback
This method allows us to schedule a callback to whenever a new frame will be painted on the HTMLVideoElement.
It is higher level than WebCodecs, and thus may have more latency; moreover, with it we can only extract frames at playback speed.
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

button.onclick = async (evt) => {
  if (HTMLVideoElement.prototype.requestVideoFrameCallback) {
    let stopped = false;
    const video = await getVideoElement();
    const drawingLoop = async (timestamp, frame) => {
      const bitmap = await createImageBitmap(video);
      const index = frames.length;
      frames.push(bitmap);
      select.append(new Option("Frame #" + (index + 1), index));
      if (!video.ended && !stopped) {
        video.requestVideoFrameCallback(drawingLoop);
      } else {
        select.disabled = false;
      }
    };
    // the last call to rVFC may happen before .ended is set but never resolve
    video.onended = (evt) => select.disabled = false;
    video.requestVideoFrameCallback(drawingLoop);
    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};

select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};

async function getVideoElement() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  return video;
}

video, canvas {
  max-width: 100%;
}

<button>start</button>
<select disabled></select>
<canvas></canvas>
For your Firefox users, Mozilla's non-standard HTMLMediaElement.seekToNextFrame()
As its name implies, this will make your <video> element seek to the next frame.
Combining this with the seeked event, we can build a loop that will grab every frame of our source, faster than playback speed (yeah!).
But this method is proprietary, available only in Gecko-based browsers, not on any standards track, and will probably be removed in the future once they implement the methods exposed above.
But for the time being, it is the best option for Firefox users:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

button.onclick = async (evt) => {
  if (HTMLMediaElement.prototype.seekToNextFrame) {
    let stopped = false;
    const video = await getVideoElement();
    const requestNextFrame = (callback) => {
      video.addEventListener("seeked", () => callback(video.currentTime), {
        once: true
      });
      video.seekToNextFrame();
    };
    const drawingLoop = async (timestamp, frame) => {
      if (video.ended) {
        select.disabled = false;
        return; // FF apparently doesn't like to create ImageBitmaps
                // from ended videos...
      }
      const bitmap = await createImageBitmap(video);
      const index = frames.length;
      frames.push(bitmap);
      select.append(new Option("Frame #" + (index + 1), index));
      if (!video.ended && !stopped) {
        requestNextFrame(drawingLoop);
      } else {
        select.disabled = false;
      }
    };
    requestNextFrame(drawingLoop);
    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};

select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};

async function getVideoElement() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  return video;
}

video, canvas {
  max-width: 100%;
}

<button>start</button>
<select disabled></select>
<canvas></canvas>
The least reliable, and one that did stop working over time: HTMLVideoElement.ontimeupdate
The strategy pause - draw - play - wait for timeupdate used to be (in 2015) a quite reliable way to know when a new frame got painted to the element, but since then browsers have put serious limitations on this event, which used to fire at a high rate; now there isn't much information we can grab from it...
I am not sure I can still advocate its use. I haven't checked how Safari (which is currently the only browser without a better solution) handles this event (its handling of media is quite peculiar to me), and there is a good chance that a simple setTimeout(fn, 1000 / 30) loop is actually more reliable in most cases.
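If you go the setTimeout/setInterval route, a minimal sketch could look like this (assuming the video is already playing; frame accuracy is not guaranteed, and pollFrames is a name of my own):

// Sketch of the naive polling fallback mentioned above: draw the playing
// <video> to a canvas roughly every 1/30 s, without relying on timeupdate.
function pollFrames(video, ctx, fps = 30) {
  const timer = setInterval(() => {
    if (video.ended) {
      clearInterval(timer);
      return;
    }
    ctx.drawImage(video, 0, 0);
    // grab the pixels here, e.g. with ctx.canvas.toDataURL()
  }, 1000 / fps);
}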
Here's a working function that was tweaked from this question:
async function extractFramesFromVideo(videoUrl, fps = 25) {
  return new Promise(async (resolve) => {
    // fully download it first (no buffering):
    let videoBlob = await fetch(videoUrl).then((r) => r.blob());
    let videoObjectUrl = URL.createObjectURL(videoBlob);
    let video = document.createElement("video");
    let seekResolve;
    video.addEventListener("seeked", async function () {
      if (seekResolve) seekResolve();
    });
    video.src = videoObjectUrl;
    // workaround chromium metadata bug (https://stackoverflow.com/q/38062864/993683)
    while (
      (video.duration === Infinity || isNaN(video.duration)) &&
      video.readyState < 2
    ) {
      await new Promise((r) => setTimeout(r, 1000));
      video.currentTime = 10000000 * Math.random();
    }
    let duration = video.duration;
    let canvas = document.createElement("canvas");
    let context = canvas.getContext("2d");
    let [w, h] = [video.videoWidth, video.videoHeight];
    canvas.width = w;
    canvas.height = h;
    let frames = [];
    let interval = 1 / fps;
    let currentTime = 0;
    while (currentTime < duration) {
      video.currentTime = currentTime;
      await new Promise((r) => (seekResolve = r));
      context.drawImage(video, 0, 0, w, h);
      let base64ImageData = canvas.toDataURL();
      frames.push(base64ImageData);
      currentTime += interval;
    }
    resolve(frames);
  });
}
Usage:
let frames = await extractFramesFromVideo("https://example.com/video.webm");
Note that there's currently no easy way to determine the actual/natural frame rate of a video, unless perhaps you use ffmpeg.js, but that's a 10+ megabyte JavaScript file (since it's an Emscripten port of the actual ffmpeg library, which is obviously huge).
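That said, in browsers that support requestVideoFrameCallback, you can approximate the frame rate from the mediaTime deltas between presented frames. A rough sketch (it assumes a constant frame rate, the video must be playing while sampling, and estimateFps is a name of my own):

// Rough sketch: estimate a video's natural frame rate from
// requestVideoFrameCallback metadata (Chromium-only at the time of writing).
function estimateFps(video, sampleCount = 30) {
  return new Promise((resolve) => {
    const times = [];
    const onFrame = (now, metadata) => {
      times.push(metadata.mediaTime);
      if (times.length < sampleCount) {
        video.requestVideoFrameCallback(onFrame);
      } else {
        const deltas = times.slice(1).map((t, i) => t - times[i]);
        const avgDelta = deltas.reduce((a, b) => a + b, 0) / deltas.length;
        resolve(1 / avgDelta); // frames per second
      }
    };
    video.requestVideoFrameCallback(onFrame);
  });
}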
2023 answer:
If you want to extract all frames reliably (i.e. no "seeking" and missing frames), and do so as fast as possible (i.e. not limited by playback speed or other factors) then you probably want to use the WebCodecs API. As of writing it's supported in Chrome and Edge. Other browsers will soon follow - hopefully by the end of 2023 there will be wide support.
I put together a simple library for this, but it currently only supports mp4 files. Here's an example:
<canvas id="canvasEl"></canvas>
<script type="module">
import getVideoFrames from "https://deno.land/x/get_video_frames#v0.0.8/mod.js"
let ctx = canvasEl.getContext("2d");
// `getVideoFrames` requires a video URL as input.
// If you have a file/blob instead of a videoUrl, turn it into a URL like this:
let videoUrl = URL.createObjectURL(fileOrBlob);
await getVideoFrames({
videoUrl,
onFrame(frame) { // `frame` is a VideoFrame object: https://developer.mozilla.org/en-US/docs/Web/API/VideoFrame
ctx.drawImage(frame, 0, 0, canvasEl.width, canvasEl.height);
frame.close();
},
onConfig(config) {
canvasEl.width = config.codedWidth;
canvasEl.height = config.codedHeight;
},
});
URL.revokeObjectURL(fileOrBlob); // revoke URL to prevent memory leak
</script>
Demo: https://jsbin.com/mugoguxiha/edit?html,output
Github: https://github.com/josephrocca/getVideoFrames.js
(Note that the WebCodecs API is mentioned in @Kaiido's excellent answer, but this API alone unfortunately doesn't solve the issue - the example above uses mp4box.js to handle the container side of things that WebCodecs doesn't. Perhaps WebCodecs will eventually support containers, and this answer will become mostly irrelevant, but until then I hope that this is useful.)

How do I create a video that has seek-able timestamps from an unknown number of incoming video blobs/chunks, using ts-ebml (on-the-fly)?

I am creating a live-stream component that utilizes the videojs-record component. Every x milliseconds, the component triggers an event that returns a blob. As seen, the blob contains data from the video recording. It's not the full recording but a piece, returned x seconds into the recording.
After saving it in the backend and playing it back, I find that I am unable to skip through the video; it's not seekable.
Because this is a task that I'm trying to keep in the frontend, I have to inject this metadata within the browser using ts-ebml. After injecting the metadata, the modified blob is sent to the backend.
The function that receives this blob looks as follows:
timestampHandler(player) {
  const { length: recordedDataLength } = player.recordedData;
  if (recordedDataLength != 0) {
    const { convertStream } = this.converter;
    convertStream(player.recordedData[recordedDataLength - 1]).then((blob) => {
      console.log(blob);
      blob.arrayBuffer().then(async (response) => {
        const bytes = new Uint8Array(response);
        let binary = '';
        let len = bytes.byteLength;
        for (let i = 0; i < len; i++) {
          binary += String.fromCharCode(bytes[i]);
        }
        this.$backend.videoDataSendToServer({ bytes: window.btoa(binary), id: this.videoId });
      })
      .catch(error => {
        console.log('Error Converting:\t', error);
      });
    });
  }
}
convertStream is a function located in a class called TsEBMLEngine. This class looks as follows:
import videojs from "video.js/dist/video";
import { Buffer } from "buffer";
window.Buffer = Buffer;
import { Decoder, tools, Reader } from "ts-ebml";

class TsEBMLEngine {
  //constructor() {
  //  this.chunkDecoder = new Decoder();
  //  this.chunkReader = new Reader();
  //}

  convertStream = (data) => {
    const chunkDecoder = new Decoder();
    const chunkReader = new Reader();
    chunkReader.logging = false;
    chunkReader.drop_default_duration = false;
    // save timestamp
    const timestamp = new Date();
    timestamp.setTime(data.lastModified);
    // load and convert blob
    return data.arrayBuffer().then((buffer) => {
      // decode
      const elms = chunkDecoder.decode(buffer);
      elms.forEach((elm) => {
        chunkReader.read(elm);
      });
      chunkReader.stop();
      // generate metadata
      let refinedMetadataBuf = tools.makeMetadataSeekable(
        chunkReader.metadatas,
        chunkReader.duration,
        chunkReader.cues
      );
      let body = buffer.slice(chunkReader.metadataSize);
      // create new blob
      let convertedData = new Blob([refinedMetadataBuf, body], { type: data.type });
      // store convertedData
      return convertedData;
    });
  }
}

// expose plugin
videojs.TsEBMLEngine = TsEBMLEngine;
export default TsEBMLEngine;
After recording for more than 10 seconds I stop the recording, go to the DB, and watch the retrieved video. The video is seekable for the first 3 seconds, before the playhead reaches the very end of the seek bar. When I'm watching the video in a live stream, the video freezes after the first 3 seconds.
When I look at the size of the file in the DB, it increases after x seconds, which means data is being appended to it, just not properly.
Any help would be greatly appreciated.
To be seekable, a video (at least for EBML-based formats) needs a SeekHead tag, Cues tags, and a defined duration in the Info tag.
To create new metadata for the video you can use ts-ebml's exporting function makeMetadataSeekable.
Then slice off the beginning of the video and replace it with the new metadata, as done in the example:
const decoder = new Decoder();
const reader = new Reader();
const webMBuf = await fetch("path/to/file").then(res => res.arrayBuffer());

const elms = decoder.decode(webMBuf);
elms.forEach((elm) => { reader.read(elm); });
reader.stop();

const refinedMetadataBuf = tools.makeMetadataSeekable(reader.metadatas, reader.duration, reader.cues);
const body = webMBuf.slice(reader.metadataSize);

const refinedWebM = new Blob([refinedMetadataBuf, body], { type: "video/webm" });
And voilà! The new video file becomes seekable.

How to get stream data from (web) MediaRecorder based on size instead of time?

I'm currently using MediaRecorder to get chunks of a video stream from the user's webcam / mic like so:
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    var mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start(5000);
    mediaRecorder.ondataavailable = (blobEvent) => {
      console.log('got 5 seconds of stream', blobEvent);
    };
  });
This emits a chunk of the stream for every 5 seconds of video; however, I want chunks of a particular size rather than length. There doesn't seem to be a clear way to do this, since I can't check the size of the current chunk without starting a new one, and I can't combine them after the fact. This is the workaround I've come up with, using two media recorders:
const MIN_BLOB_SIZE = 5000000;
var partSize = 0;
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    var mediaRecorder = new MediaRecorder(stream);
    var mediaRecorder2 = new MediaRecorder(stream);
    mediaRecorder.start();
    mediaRecorder2.start(5000);
    mediaRecorder2.ondataavailable = (blobEvent) => {
      console.log('got 5 seconds of data', blobEvent);
      const blob = blobEvent.data;
      partSize += blob.size;
      if (partSize >= MIN_BLOB_SIZE) {
        partSize = 0;
        mediaRecorder.requestData();
      }
    };
    mediaRecorder.ondataavailable = (blobEvent) => {
      console.log('got a chunk at least 5mb', blobEvent);
    };
  });
But this feels hacky: it gives imprecise chunks, requires me to keep the two recorders perfectly synced, and complicates error handling and edge cases, and I'm concerned about the performance implications of using two media recorders. I would accept an answer that gets this done with one media recorder, or, if that's just not possible, one that convinces me that two media recorders are fine performance-wise; I would then handle the complications it creates.
Combining Chunks
You can combine chunks after the fact with new Blob(blobArray). If you absolutely need to process blobs of a certain size, you can use Blob.slice. It could look something like this:
const MIN_BLOB_SIZE = 5000000;
var partSize = 0, parts = [];
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    var mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start(5000);
    mediaRecorder.ondataavailable = (blobEvent) => {
      console.log('got 5 seconds of data', blobEvent);
      const blob = blobEvent.data;
      partSize += blob.size;
      parts.push(blob);
      if (partSize >= MIN_BLOB_SIZE) {
        let bigBlob = new Blob(parts, { type: blob.type });
        // if the blob size doesn't need to be precise:
        partSize = 0;
        parts = [];
        // do something with bigBlob

        // or if the blob size needs to be precise:
        // let sizedBlob = bigBlob.slice(0, MIN_BLOB_SIZE, blob.type);
        // parts = [ bigBlob.slice(MIN_BLOB_SIZE, bigBlob.size, blob.type) ];
        // partSize = parts[0].size;
        // do something with sizedBlob
      }
    };
    mediaRecorder.onstop = () => {
      console.log('Recording Stopped');
      // the stop event itself carries no data; the final dataavailable event
      // fires before stop, so `parts` already holds the remaining data here
      if (parts.length) {
        let bigBlob = new Blob(parts, { type: parts[0].type });
        // do something with bigBlob, which may be smaller than MIN_BLOB_SIZE
      }
    };
  });
Tuning Blob Size with Bitrate
You also might be interested in the bitsPerSecond (and also the audioBitsPerSecond and videoBitsPerSecond) option in the MediaRecorder API, which might give you more predictable blob sizes for a given recorded amount of time. This may affect video and audio quality though.
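For instance, a sketch of what requesting target bitrates could look like, given a getUserMedia stream as above (the browser treats these values as hints rather than guarantees, and the mimeType here is an assumption about what the target browser supports):

// Sketch: requesting target bitrates so chunk sizes become more predictable.
// At ~2.5 Mb/s video + 128 kb/s audio, 5 seconds of data is roughly
// (2500000 + 128000) * 5 / 8 ≈ 1.6 MB. Encoders treat these as hints only.
const mediaRecorder = new MediaRecorder(stream, {
  mimeType: 'video/webm;codecs=vp8,opus', // assumption: supported by the target browser
  videoBitsPerSecond: 2_500_000,
  audioBitsPerSecond: 128_000,
});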

How do I record AND download / upload the webcam stream on server (javascript) WHILE using the webcam input for facial recognition (opencv4nodejs)?

I had to write a program for facial recognition in JavaScript, for which I used the opencv4nodejs API, since there are NOT many working examples. Now I somehow want to record and save the stream (for saving on the client side or uploading to the server) along with the audio. This is where I am stuck. Any help is appreciated.
In simple words, I need to use the webcam input for multiple purposes: one, for facial recognition, and two, to somehow save it; the latter is what I'm unable to do. Also, in the worst case, if it's not possible, instead of recording and saving the webcam video I could also save a complete screen recording. Please answer if there's a workaround for this.
Below is what I tried to do, but it doesn't work for obvious reasons.
$(document).ready(function () {
  run1();
});

let chunks = [];

// run1() for uploading model and for facecam
async function run1() {
  const MODELS = "/models";
  await faceapi.loadSsdMobilenetv1Model(MODELS);
  await faceapi.loadFaceLandmarkModel(MODELS);
  await faceapi.loadFaceRecognitionModel(MODELS);
  var _stream;
  // Accessing the user webcam
  const videoEl = document.getElementById('inputVideo');
  navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  }).then(
    (stream) => {
      _stream = stream;
      recorder = new MediaRecorder(_stream);
      recorder.ondataavailable = (e) => {
        chunks.push(e.data);
        console.log(chunks, i);
        if (i == 20) makeLink(); // Trying to make a link from the blob for some i == 20
      };
      videoEl.srcObject = stream;
    },
    (err) => {
      console.error(err);
    }
  );
}

// run2() main recognition code and training
async function run2() {
  // wait for the results of mtcnn,
  const input = document.getElementById('inputVideo');
  const mtcnnResults = await faceapi.ssdMobilenetv1(input);
  // Detect all the faces in the webcam
  const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors();
  // Training the algorithm with given data of the current student
  const labeledFaceDescriptors = await Promise.all(
    CurrentStudent.map(
      async function (label) {
        // Training the algorithm with the current students
        for (let i = 1; i <= 10; i++) {
          // console.log(label);
          const imgUrl = `http://localhost:5500/StudentData/${label}/${i}.jpg`;
          const img = await faceapi.fetchImage(imgUrl);
          // detect the face with the highest score in the image and compute its landmarks and face descriptor
          const fullFaceDescription = await faceapi.detectSingleFace(img).withFaceLandmarks().withFaceDescriptor();
          if (!fullFaceDescription) {
            throw new Error(`no faces detected for ${label}`);
          }
          const faceDescriptors = [fullFaceDescription.descriptor];
          return new faceapi.LabeledFaceDescriptors(label, faceDescriptors);
        }
      }
    )
  );
  const maxDescriptorDistance = 0.65;
  const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, maxDescriptorDistance);
  const results = fullFaceDescriptions.map(fd => faceMatcher.findBestMatch(fd.descriptor));
  i++;
}

// I somehow want this to work
function makeLink() {
  alert("ML");
  console.log("IN MAKE LINK");
  let blob = new Blob(chunks, {
      type: media.type
    }),
    url = URL.createObjectURL(blob),
    li = document.createElement('li'),
    mt = document.createElement(media.tag),
    hf = document.createElement('a');
  mt.controls = true;
  mt.src = url;
  hf.href = url;
  hf.download = `${counter++}${media.ext}`;
  hf.innerHTML = `download ${hf.download}`;
  li.appendChild(mt);
  li.appendChild(hf);
  ul.appendChild(li);
}

// onPlay(video) function
async function onPlay(videoEl) {
  run2();
  setTimeout(() => onPlay(videoEl), 50);
}
I'm not familiar with JavaScript, but in general only one program may communicate with the camera. You will probably need to write a server that reads the data from the camera and then sends it on to your facial recognition, recording, etc.

Struggling to playback a Float 32 Array (Web Audio API)

I'm building a simple looper to help me come to an understanding of the Web Audio API; however, I'm struggling to get a buffer source to play back the recorded audio.
The code has been simplified as much as possible, but with annotation it's still 70+ lines, omitting the CSS and HTML, so apologies for that. A version including the CSS and HTML can be found on JSFiddle:
https://jsfiddle.net/b5w9j4yk/10/
Any help would be much appreciated. Thank you :)
// Aim of the code is to record the input from the mic to a Float32Array, then pass that to a
// buffer which is linked to a buffer source, so the audio can be played back.

// Grab DOM elements
const playButton = document.getElementById('play');
const recordButton = document.getElementById('record');

// If allowed access to the microphone, run this code
const promise = navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then((stream) => {
    recordButton.addEventListener('click', () => {
      // when the record button is pressed, instantiate the record buffer
      if (!recordArmed) {
        recordArmed = true;
        recordButton.classList.add('on');
        console.log('recording armed');
        recordBuffer = new Float32Array(audioCtx.sampleRate * 10);
      }
      else {
        recordArmed = false;
        recordButton.classList.remove('on');
        // After the recording has stopped, pass the recordBuffer to the source's buffer
        myArrayBuffer.copyToChannel(recordBuffer, 0);
        // Looks like the buffer has been passed
        console.log(myArrayBuffer.getChannelData(0));
      }
    });

    // this should start the playback of the source, intended to be used after the audio
    // has been recorded; I can't get it to work in this given context
    playButton.addEventListener('click', () => {
      playButton.classList.add('on');
      source.start();
    });

    // Transport variables
    let recordArmed = false;
    let playing = false;

    // this buffer will later be assigned a Float32Array / I'd like to keep this intermediate
    // buffer so the audio can be sliced and manipulated with ease later
    let recordBuffer;

    // Declare context, input source and a block processor to pass the input source to the recordBuffer
    const audioCtx = new AudioContext();
    const audioIn = audioCtx.createMediaStreamSource(stream);
    const processor = audioCtx.createScriptProcessor(512, 1, 1);

    // Create a source and corresponding buffer for playback, then link them
    const myArrayBuffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 10, audioCtx.sampleRate);
    const source = audioCtx.createBufferSource();
    source.buffer = myArrayBuffer;

    // Audio routing
    audioIn.connect(processor);
    source.connect(audioCtx.destination);

    // When recording is armed, pass the samples of the block one at a time to the record buffer
    processor.onaudioprocess = ((audioProcessingEvent) => {
      let inputBuffer = audioProcessingEvent.inputBuffer;
      let i = 0;
      if (recordArmed) {
        for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
          let inputData = inputBuffer.getChannelData(channel);
          let avg = 0;
          inputData.forEach(sample => {
            recordBuffer.set([sample], i);
            i++;
          });
        }
      }
      else {
        i = 0;
      }
    });
  })
