I am working on a computer vision project using OpenCV.js and Spring Boot, with Thymeleaf as my HTML5 template engine. I have been following the OpenCV.js tutorial here. I am supposed to get two outputs: one displaying the video input, and a canvas (canvasOutput) where the face tracking takes place.
The video displays and works as expected. However, the canvas output does not show. When I inspect my code in the Chrome browser, I see an uncaught reference error saying that cv is not defined.
Can somebody tell me if there is anything I am doing wrong in my code?
Below is my code:
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org" xmlns:layout="http://www.ultraq.net.nz/web/thymeleaf/layout" layout:decorate="layout">
<head>
<script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
<title>Self-Service Portal</title>
</head>
<body>
<h2>Trying OpenCV Javascript Computer Vision</h2>
<p id="status">Loading with OpenCV.js...</p>
<video id="video" autoplay="true" play width="300" height="225"></video> <br/>
<canvas id="canvasOutput" autoplay="true" width="300" height="225"></canvas>
<!--div>
<div class="inputoutput">
<img id="imageSrc" alt="No Image" />
<div class="caption">ImageScr<input type="file" id="fileInput" name="file" /></div>
</div>
<div class="inputoutput">
<canvas id="canvasOutput" ></canvas>
<div class="caption">canvasOutput</div>
</div>
</div-->
<script type="text/javascript">
navigator.mediaDevices.getUserMedia({
video: true,
audio: false
})
.then(function(stream) {
video.srcObject = stream;
video.play();
})
.catch(function(err) {
console.log("An error occured While accessing media! " + err);
});
let video = document.getElementById('video');
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let gray = new cv.Mat();
let cap = new cv.VideoCapture(video);
let faces = new cv.RectVector();
let classifier = new cv.CascadeClassifier();
//load pre-trained classifiers
classifier.load('haarcascade_frontalface_default.xml');
const FPS = 30;
function processVideo() {
try {
if (!streaming) {
// clean and stop.
src.delete();
dst.delete();
gray.delete();
faces.delete();
classifier.delete();
return;
}
let begin = Date.now();
// start processing.
cap.read(src);
src.copyTo(dst);
cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
// detect faces.
classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
// draw faces.
for (let i = 0; i < faces.size(); ++i) {
let face = faces.get(i);
let point1 = new cv.Point(face.x, face.y);
let point2 = new cv.Point(face.x + face.width, face.y + face.height);
cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
}
cv.imshow('canvasOutput', dst);
// schedule the next one.
let delay = 1000 / FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
}
catch (err) {
utils.printError(err);
}
};
//schedule the first one.
setTimeout(processVideo, 0);
</script>
<!--script async src="/opencv/opencv.js" onload="onOpenCvReady;" type="text/javascript"></script-->
</body>
</html>
The cv is not defined error can be fixed by assigning cv from the window object, e.g. let cv = window.cv.
Turning off async would not be ideal, because the OpenCV.js library is large and loading it synchronously would hurt the time to initial load. Instead, set a state variable that changes when the library finishes loading, check that variable before running your OpenCV code, and update the UI accordingly.
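One way to sketch that idea (keeping async, and polling for the cv global instead of a separate flag; startProcessing is just a placeholder name for the question's setup code):
<script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
<script type="text/javascript">
  // Placeholder for the question's setup (cv.Mat, cv.VideoCapture, processVideo, ...).
  function startProcessing() {
    document.getElementById('status').innerHTML = 'OpenCV.js is ready.';
    // ... create the Mats and the VideoCapture, then start processVideo() here ...
  }
  // Check repeatedly whether opencv.js has finished loading, then start.
  (function waitForOpenCv() {
    if (typeof cv !== 'undefined' && cv.Mat) {
      startProcessing();
    } else {
      document.getElementById('status').innerHTML = 'Loading OpenCV.js...';
      setTimeout(waitForOpenCv, 50); // try again shortly
    }
  })();
</script>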
The code you find in the examples on the OpenCV page usually focuses on the functionality of the library, and sometimes it does not include all of the helper functions needed to run. The examples on the OpenCV page rely on a utils.js file that contains those additional helpers, and this is not at all obvious.
Most likely your problem will be solved by the answer in this forum.
Additionally, I have created a codesandbox that contains all the functions needed for the video examples to work. Normally they will work for you if you replace the code under the comment
/** Your example code here */
with your facial recognition code.
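For reference, this is roughly how the tutorial wires in utils.js: the Utils class and createFileFromUrl come from the official helper at docs.opencv.org, 'errorMessage' is the id of an element utils.js uses for error output, and the cascade XML is assumed to be served next to the page:
<script src="https://docs.opencv.org/master/utils.js" type="text/javascript"></script>
<script type="text/javascript">
  let utils = new Utils('errorMessage');
  // The cascade must be written into Emscripten's virtual file system
  // before classifier.load(...) can find it by name.
  let cascadeFile = 'haarcascade_frontalface_default.xml';
  utils.createFileFromUrl(cascadeFile, cascadeFile, () => {
    classifier.load(cascadeFile);
  });
</script>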
The async on
<script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
...means it won't hold up parsing and processing of the following markup while waiting for the file to load (details). So your inline script later in the file can run before that file has loaded and its JavaScript has run.
Remove async and move that script tag to the bottom of the document, just before the inline script that uses what it defines.
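In other words, roughly this, at the bottom of the body:
<script type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
<script type="text/javascript">
  // opencv.js above has fully loaded and executed by the time this runs.
  let video = document.getElementById('video');
  // ... the rest of the question's setup and processVideo() ...
</script>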
I am using Plyr as a wrapper around the HTML5 video tag and Hls.js to stream my .m3u8 video.
While working through a lot of issues with enabling a quality selector in Plyr, I came across multiple PRs on this question that were closed saying the implementation had been merged, until I came across this PR which says it is still open. However, there was a custom implementation in the comments that reportedly works. I tried that implementation locally to check whether we can add a quality selector, but it seems I am missing something, or the implementation doesn't work.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>HLS Demo</title>
<link rel="stylesheet" href="https://cdn.plyr.io/3.5.10/plyr.css" />
<style>
body {
max-width: 1024px;
}
</style>
</head>
<body>
<video preload="none" id="player" autoplay controls crossorigin></video>
<script src="https://cdn.plyr.io/3.5.10/plyr.js"></script>
<script src="https://cdn.jsdelivr.net/hls.js/latest/hls.js"></script>
<script>
(function () {
var video = document.querySelector('#player');
var playerOptions= {
quality: {
default: '720',
options: ['720']
}
};
var player;
player = new Plyr(video,playerOptions);
if (Hls.isSupported()) {
var hls = new Hls();
hls.loadSource('https://content.jwplatform.com/manifests/vM7nH0Kl.m3u8');
//hls.loadSource('https://test-streams.mux.dev/x36xhzz/x36xhzz.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED,function(event,data) {
// uncomment to see data here
// console.log('levels', hls.levels); we get data here but not able to see in settings .
playerOptions.quality = {
default: hls.levels[hls.levels.length - 1].height,
options: hls.levels.map((level) => level.height),
forced: true,
// Manage quality changes
onChange: (quality) => {
console.log('changes',quality);
hls.levels.forEach((level, levelIndex) => {
if (level.height === quality) {
hls.currentLevel = levelIndex;
}
});
}
};
});
}
// Start HLS load on play event
player.on('play', () => hls.startLoad());
// Handle HLS quality changes
player.on('qualitychange', () => {
console.log('changed');
if (player.currentTime !== 0) {
hls.startLoad();
}
});
})();
</script>
</body>
</html>
The above snippet works (please run it), but if you uncomment the line in the HLS MANIFEST_PARSED handler you will see that we do get data in levels and pass it to the player options, yet it doesn't show up in the settings. How can we add a quality selector to Plyr when using an HLS stream?
I made a lengthy comment about this on github [1].
Working example: https://codepen.io/datlife/pen/dyGoEXo
The main idea of the fix is:
Configure the Plyr options properly to allow the switching to happen.
Let HLS perform the quality switching, not Plyr. Hence, we only need a single source tag in the video tag (see the sketch after the snippet below).
<video>
  <!-- the manifest contains all the streams -->
  <source
    type="application/x-mpegURL"
    src="https://bitdash-a.akamaihd.net/content/sintel/hls/playlist.m3u8">
</video>
[1] https://github.com/sampotts/plyr/issues/1741#issuecomment-640293554
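A rough sketch of that idea using the question's element and stream: the key difference from the question's code is that the Plyr instance is created inside the MANIFEST_PARSED handler, after the levels are known, instead of mutating playerOptions afterwards (which Plyr ignores).
const video = document.querySelector('#player');
const hls = new Hls();
hls.loadSource('https://content.jwplatform.com/manifests/vM7nH0Kl.m3u8');
hls.attachMedia(video);

hls.on(Hls.Events.MANIFEST_PARSED, function () {
  // Build the quality list from the parsed levels, then create Plyr,
  // so the settings menu is populated from the start.
  const availableQualities = hls.levels.map((level) => level.height);
  const player = new Plyr(video, {
    quality: {
      default: availableQualities[availableQualities.length - 1],
      options: availableQualities,
      forced: true,
      onChange: (newQuality) => {
        // Let hls.js do the actual switching.
        hls.levels.forEach((level, levelIndex) => {
          if (level.height === newQuality) {
            hls.currentLevel = levelIndex;
          }
        });
      },
    },
  });
});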
I am getting this error: TypeError: cv.Mat is not a constructor
I have tried almost everything and can't find any solution on the internet.
Index.html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Hello OpenCV.js</title>
</head>
<body>
<h2>Hello OpenCV.js</h2>
<p id="status">OpenCV.js is loading...</p>
<div>
<img src="dd.jpg" style="display:none;" id ="img">
<canvas id = "videoInput" height="500" width="500"></canvas>
<canvas id = "canvasOutput" height="500" width="500"></canvas>
<script async type="text/javascript" src="index.js"></script>
<script async src="opencv.js" onload="onOpenCvReady();" type="text/javascript"></script>
</div>
</body>
</html>
index.js
document.getElementById('status').innerHTML = 'OpenCV.js is ready.';
let video = document.getElementById('videoInput');
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let gray = new cv.Mat();
let cap = new cv.VideoCapture(video);
let faces = new cv.RectVector();
let classifier = new cv.CascadeClassifier();
// load pre-trained classifiers
classifier.load('haarcascade_frontalface_default.xml');
const FPS = 30;
function processVideo() {
try {
if (!streaming) {
// clean and stop.
src.delete();
dst.delete();
gray.delete();
faces.delete();
classifier.delete();
return;
}
let begin = Date.now();
// start processing.
cap.read(src);
src.copyTo(dst);
cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
// detect faces.
classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
// draw faces.
for (let i = 0; i < faces.size(); ++i) {
let face = faces.get(i);
let point1 = new cv.Point(face.x, face.y);
let point2 = new cv.Point(face.x + face.width, face.y + face.height);
cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
}
cv.imshow('canvasOutput', dst);
// schedule the next one.
let delay = 1000/FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
} catch (err) {
utils.printError(err);
}
};
// schedule the first one.
setTimeout(processVideo, 0);
I am also importing opencv.js, as I have a downloaded version of it. I guess there's some initialization problem; please help me solve it.
opencv.js loads and fires the onload event before it is really initialized. To wait until opencv.js is really ready, it provides an onRuntimeInitialized hook. Use it something like this:
function onOpenCvReady() {
cv['onRuntimeInitialized']=()=>{
// do all your work here
};
}
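With the question's HTML (onload="onOpenCvReady();"), index.js would then look roughly like this; the body is just the question's existing code moved inside the hook:
function onOpenCvReady() {
  cv['onRuntimeInitialized'] = () => {
    document.getElementById('status').innerHTML = 'OpenCV.js is ready.';
    let video = document.getElementById('videoInput');
    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    // ... the rest of the question's setup, then setTimeout(processVideo, 0); ...
  };
}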
Make sure that the script is really loaded. If the error is "cv is not defined", then either remove the async in the script tag, use an onload attribute on the <script> tag, or listen for the load event yourself:
script.addEventListener('load', () => {
  // cv is defined from here on
});
In the WASM build (and only the WASM build), cv will not be immediately available, because the WASM is compiled at runtime. Assign the start function to cv.onRuntimeInitialized.
Note that the WASM version should be faster; however, it incurs some startup overhead (a few CPU seconds). The non-WASM build doesn't call onRuntimeInitialized at all.
To check both cases, it's possible to do this
if (cv.getBuildInformation)
{
console.log(cv.getBuildInformation());
onloadCallback();
}
else
{
// WASM
cv['onRuntimeInitialized']=()=>{
console.log(cv.getBuildInformation());
onloadCallback();
}
}
Source:
https://answers.opencv.org/question/207288/how-can-i-compile-opencvjs-so-that-it-does-not-require-defining-the-onruntimeinitialized-field/
https://docs.opencv.org/master/utils.js
I had the exact same issue here. My solution was different from the one suggested. It has more flexibility, since your operations will not be limited to a certain method called at the beginning of your code.
Step 1 (a very important one, since it actually affected my code): make sure there are no unrelated errors when loading the page; they disrupted the loading of opencv.js.
Step 2: make sure you load the script synchronously.
<script src="<%= BASE_URL %>static/opencv.js" id="opencvjs"></script>
That was it for me. It worked perfectly from that point.
I have a similar slideshow displayed a few times here! It works fine, but I don't get the right mapping on an a-sky. I am not a coder, but I guess drawImage is just made for rectangular surfaces rather than spherical ones? Is there an alternative to drawImage that works for a sphere?
Here are my codes:
AFRAME.registerComponent('draw-canvas', {
schema: {
type: 'selector'
},
init: function() {
var canvas = this.canvas = this.data;
var ctx = this.ctx = canvas.getContext('2d');
var i = 0; // Start Point
var images = []; // Images Array
var time = 3000; // Time Between Switch
// Image List
images[0] = "Tulips.jpg";
images[1] = "Tulips2.jpg";
images[2] = "Tulips3.jpg";
// Change Image
function changeImg() {
document.getElementById('pic01').src = images[i];
ctx.drawImage(document.getElementById('pic01'), 0, 0, 300, 300);
// Check If Index Is Under Max
if (i < images.length - 1) {
// Add 1 to Index
i++;
} else {
// Reset Back To 0
i = 0;
}
// Run function every x seconds
setTimeout(function() {
changeImg()
}, time);
}
// Run function when page loads
window.onload = changeImg;
console.log("Hello World!");
}
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Canvas Texture</title>
<meta name="description" content="Canvas Texture - A-Frame">
<script src="./components/aframe-v0.6.0.js"></script>
<script src="./components/slideshow.js"></script>
</head>
<body>
<a-scene>
<a-assets>
<img id="pic01" src="Tulips.jpg">
<img id="pic02" src="Tulips2.jpg">
<img id="pic03" src="Tulips3.jpg">
<canvas id="slide" name="slide" crossOrigin="anonymous"> </canvas>
</a-assets>
<a-sky material="shader: flat; src: #slide" draw-canvas="#slide"></a-sky>
</a-scene>
</body>
</html>
And if anybody knows how to nicely fade over the pictures, please feel free to share! I bet a lot of people would be happy about a nice A-Frame Slideshow.
I've got a solution, but I've altered quite a lot of your stuff.
I got rid of the canvas; you already have three image assets, so there is no need to redraw or buffer them onto each other.
Just store the asset ids and use setAttribute("material", "src", picID).
Furthermore, I've added the a-animation component, so your slideshow will have a nice smooth transition. You need to set the animation duration to half of the slideshow's interval, because the fade goes back and forth.
That said, check out my fiddle.
As for the drawImage part of the question, drawImage draws (writes) an image onto a canvas element. The mapping is fine; you only need to make sure you have a spherical (equirectangular) photo, otherwise it will get stretched all over the model.
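A rough sketch of that approach (the component name is mine, not from the fiddle): cycle through the asset ids with setAttribute and let an <a-animation> fade the sky back and forth, with its duration set to half of the switch interval.
AFRAME.registerComponent('slideshow', {
  init: function () {
    var el = this.el;
    var images = ['#pic01', '#pic02', '#pic03']; // asset ids from the question
    var i = 0;
    var time = 3000; // ms between switches
    setInterval(function () {
      i = (i + 1) % images.length;
      el.setAttribute('material', 'src', images[i]); // swap the texture source
    }, time);
  }
});

<a-sky slideshow material="shader: flat; src: #pic01">
  <!-- dur = time / 2, because the fade goes out and comes back -->
  <a-animation attribute="material.opacity" from="1" to="0"
               dur="1500" direction="alternate" repeat="indefinite"></a-animation>
</a-sky>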
I am working on a JS application which uses a canvas to manipulate a picture (e.g. converting it to PNG/base64 with .toBlob() and .toDataURL()).
I would like to use .transferControlToProxy() to let a worker do the job and keep the GUI smooth.
But it seems to be unsupported, as they say on Mozilla devs.
Does anyone have other information?
Maybe a workaround?
Whatwg.org has a JavaScript sample of using canvas.transferControlToProxy() at https://developers.whatwg.org/the-canvas-element.html#dom-canvas-transfercontroltoproxy, but it does not seem to work in any browser, not even in bleeding-edge versions (Chrome Canary or Opera Next).
Even turning on "Enable experimental canvas features" at chrome://flags has no effect in Chrome Canary.
Test live: http://jsbin.com/bocoti/5/edit?html,output
It says: "TypeError: canvas.transferControlToProxy is not a function".
This would be a very fine addition. Think of drawing everything on a canvas in a worker, then making a blob/arraybuffer/dataurl of the canvas and transferring it to the main thread using Transferable objects. Nowadays, if you want to draw something on a canvas using canvas functions (fill(), drawImage(), etc.), you have to do it in the main thread...
<!DOCTYPE html><html><head><meta charset="utf-8" /></head><body>
<div id="log"></div>
<canvas style="border:1px solid red"></canvas>
<script id="worker1" type="javascript/worker">
self.onmessage = function(e) {
var context = new CanvasRenderingContext2D();
e.data.setContext(context); // event.data is the CanvasProxy object
setInterval(function () {
context.clearRect(0, 0, context.width, context.height);
context.fillText(new Date(), 0, 100);
context.commit();
}, 1000);
}
</script>
<script>
var blob = new Blob([document.querySelector('#worker1').textContent]);
var worker = new Worker(window.URL.createObjectURL(blob));
worker.onmessage = function(e) {
//document.querySelector("#log").innerHTML = "Received: " + e.data;
}
var canvas = document.getElementsByTagName('canvas')[0];
try {
  var proxy = canvas.transferControlToProxy();
  worker.postMessage(proxy, [proxy]);
} catch (e) {
  document.querySelector("#log").innerHTML = e;
}
</script>
<br>
From: https://developers.whatwg.org/the-canvas-element.html#the-canvas-element
</body></html>
I am writing a page which contains only a canvas in its body and some simple JavaScript code.
In the JavaScript, I create an Image that is never appended to the page. Instead, it runs through a timeout loop loading a URL, which sometimes returns an actual image with MIME type image/jpeg and other times returns the text NOIMAGE with MIME type text/plain.
When the URL returns an image, the Image runs its onload function, which draws it to the canvas on the page and decreases the loop's timeout interval.
When the URL returns text, the Image runs its onerror function, which simply increases the loop's timeout interval, without drawing to the canvas.
This logic works very well, but the browser always prints the warning below to the console when it tries to interpret the text response as an image, and the accumulating warnings increasingly consume memory:
Resource interpreted as Image but transferred with MIME type text/plain: "http://localhost:6969/image.cgi".
How can I avoid this warning being printed over and over to the console?
EDIT: Added sample working code.
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<canvas id="myCanvas" width="640" height="480"></canvas>
</body>
<script defer type="application/javascript">
var canvas = document.getElementById("myCanvas");
var image = new Image();
var timeout = 100;
var timer = function() {
setTimeout(function() {
image.src = "http://localhost:6969/image.cgi?timestamp=" + new Date().getTime();
}, timeout);
};
image.onload = function() {
console.log("Success");
var ctx = canvas.getContext('2d');
ctx.save();
ctx.setTransform(1, 0, 0, 1, 0, 0);
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.restore();
ctx.beginPath();
ctx.drawImage(image, 0, 0, parseInt(this.width), parseInt(this.height));
if (timeout > 50) {
timeout = timeout - 9;
}
timer();
};
image.onerror = function() {
console.log("Error");
if (timeout < 5000) {
timeout = timeout + 14;
}
timer();
};
timer();
</script>
</html>
It might have something to do with your image having a .cgi extension. I've noticed a similar problem when trying to load JavaScript files with an extension other than .js. Take a look at this chart:
http://en.wikipedia.org/wiki/Comparison_of_web_browsers#Image_format_support
It lists the supported image types for various browsers; CGI is not even mentioned.
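The answer above stops at the MIME-type chart; as a separate workaround (not from that answer), one option is to fetch the URL yourself, check the Content-Type header, and only hand real images to the Image element, so the browser never tries to interpret the text response as an image. image.onload and image.onerror stay as in the question; only the timer changes:
var timer = function () {
  setTimeout(function () {
    fetch("http://localhost:6969/image.cgi?timestamp=" + new Date().getTime())
      .then(function (response) {
        var type = response.headers.get("Content-Type") || "";
        if (type.indexOf("image/") === 0) {
          // A real image came back: hand it to the Image element.
          return response.blob().then(function (blob) {
            if (image.src) URL.revokeObjectURL(image.src); // avoid leaking old blob URLs
            image.src = URL.createObjectURL(blob); // triggers image.onload
          });
        }
        image.onerror(); // NOIMAGE (text/plain): treat it as the error case
      })
      .catch(function () {
        image.onerror();
      });
  }, timeout);
};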