context drawImage / canvas toDataURL lagging significantly on Opera Mobile - JavaScript

I have a bit of JavaScript that runs on an Android tablet, running Opera Mobile 12. The tablet is attached to the wall in an office, so it's used by everyone who works in this office as a timekeeping system (i.e. they use it to clock in/out of work). This JavaScript takes a photo of the user when a certain event is raised, then converts this photo to a data URL so it can be sent to a server. This all runs in the background, though - the video and canvas elements are both set to display:none;.
Here's the code that handles the webcam interaction:
var Webcam = function() {
    var video = document.getElementById('video');

    var stream_webcam = function(stream) {
        video.addEventListener('loadedmetadata', function(){
            video.play();
        }, false);
        video.src = window.URL.createObjectURL(stream);
    }

    // start recording
    if (navigator.getUserMedia)
        navigator.getUserMedia({video: true}, stream_webcam);
    else
        _error("No navigator.getUserMedia support detected!");

    var w = {};

    w.snap = function(callback) {
        var canvas = document.getElementById('canvas');
        canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
        // delay is incurred in writing to canvas on opera mobile
        setTimeout(function() {
            callback(canvas.toDataURL());
        }, 1);
    };

    return w;
}
w.snap(callback) gets called when the user presses a certain button. The problem is that there is a significant lag between the call to drawImage, and an image actually being drawn to the canvas. Here's the problem that occurs:
User 1 uses the tablet, presses the button, has their photo "snapped" and sent to the server. The server receives a blank, transparent image. (It's not the same issue as html5 canvas toDataURL returns blank image - the data being sent to the server is substantially less for the first image, because it's blank)
A few minutes later, user 2 uses the tablet and presses the button, and has their photo "snapped". The server receives the photo of user 1.
Later, user 3 does the same thing, and the server gets a photo of user 2.
So I'm guessing the call to toDataURL() is returning old data because drawImage() hasn't finished drawing yet. (I read somewhere that it's async, but I can't confirm that, and I can't find any way of attaching an event for when it finishes drawing.)
I've tried:
changing the timeout in snap() to a variety of values. The highest I tried was 2000, but the same issue occurred. (If anyone suggests higher numbers, I'm happy to try them, but I don't want to go much higher because if I go too high, there's potential for user 3 to have their photo taken before I've even processed user 2's photo, which means I might lose it entirely!)
having the canvas draw something when the page first loads, but that didn't fix it - when user 1 was photographed, the server received whatever photo was taken when the page loaded, instead of the photo of user 1.
"getting" the canvas element again, using getElementById, before calling toDataURL.
A few more notes:
This works fine on my computer (macbook air) in Chrome and Opera.
I can't reliably replicate it using a debugger (linking to the tablet using Opera Dragonfly).
AFAIK Opera Mobile is the only Android browser that supports getUserMedia and can thus take photos.
If I remove the display:none; from the video and canvas, it works correctly (on the tablet). The issue only occurs when the video and canvas have display:none; set.
Because the page is a single page app with no scrolling or zooming required, a possible workaround is to move the video and canvas below the page fold, then disable scrolling with JavaScript (i.e. scroll back to the top whenever the user scrolls), as sketched below.
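For reference, a minimal sketch of that workaround (the element IDs and positioning values are assumptions, not taken from the original page):
// Sketch: keep the elements rendered (not display:none) but push them below
// the fold, and snap the page back to the top whenever the user scrolls.
var video = document.getElementById('video');
var canvas = document.getElementById('canvas');

[video, canvas].forEach(function(el) {
    el.style.display = 'block';     // they must stay rendered for drawImage to work
    el.style.position = 'absolute';
    el.style.top = '200%';          // well below the visible viewport
});

window.addEventListener('scroll', function() {
    window.scrollTo(0, 0);          // undo any scrolling caused by the extra height
}, false);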
But I would rather a proper fix than a workaround, if possible!

I don't have a very good solution, but I would suggest not keeping the invisible canvas in the DOM and hiding it with display:none at all.
Instead of
var canvas = document.getElementById('canvas');
Try
var canvas = document.createElement('canvas');
canvas.width = video.videoWidth || video.width;   // prefer the stream's intrinsic size if available
canvas.height = video.videoHeight || video.height;
canvas.getContext('2d').drawImage(video, 0, 0);
This way it's separate from the DOM.
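Applied to the question's snap function, that could look roughly like this (a sketch only; the videoWidth/videoHeight fallback is my assumption, not part of the original code):
w.snap = function(callback) {
    // throwaway canvas that never enters the DOM
    var canvas = document.createElement('canvas');
    canvas.width = video.videoWidth || video.width;
    canvas.height = video.videoHeight || video.height;
    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    callback(canvas.toDataURL());
};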
Also, why not use
canvas.toDataURL("image/jpeg")?
EDIT: Also, if you're designing it, have you tried the other browsers out there? What's restricting you to Opera over the other browsers available, or over using PhoneGap?
EDIT2: Thinking about it, canvas also has two other options for getting that photo into place that you may want to look into. Those two being:
var imgData = canvas.getContext('2d').getImageData(0,0,canvas.width,canvas.height);
-or-
canvas.getContext('2d').putImageData(imgData,0,0);
These two ignore any scaling you've done to the context, but I've found that they are much more direct and often faster. They are a solid alternative to toDataURL and drawImage, but the image data you put and get from these is encoded as a flat array in the form (see the sketch after the links below):
[r1, g1, b1, a1, r2, g2, b2, a2, ...]
you can find documentation for them here:
http://www.w3schools.com/tags/canvas_putimagedata.asp
http://www.w3schools.com/tags/canvas_getimagedata.asp
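For illustration, a minimal sketch of moving pixel data between two canvases with those calls (the canvas ID and the destination size are assumptions):
// Copy the current contents of the visible canvas into a second, detached canvas
// using raw pixel data instead of drawImage.
var srcCtx = document.getElementById('canvas').getContext('2d');
var dstCanvas = document.createElement('canvas');
dstCanvas.width = 320;
dstCanvas.height = 240;

var imgData = srcCtx.getImageData(0, 0, dstCanvas.width, dstCanvas.height);
dstCanvas.getContext('2d').putImageData(imgData, 0, 0);
// imgData.data is the flat pixel array: [r1, g1, b1, a1, r2, g2, b2, a2, ...]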

Some versions of Android have problems with toDataURL(); maybe this is the solution?
http://code.google.com/p/todataurl-png-js/wiki/FirstSteps
The script: http://todataurl-png-js.googlecode.com/svn/trunk/todataurl.js
Paste this in your head:
<script src="todataurl.js"></script>

Related

Create.js and Animate CC -> Navigate to a frame in a "Bitmap video"

I am working on what will hopefully be my first create.js project in Animate CC at my current employer.
I am trying to load a .mp4 into a Bitmap object, using
HTML5ElementForVideo = document.createElement('video');
HTML5ElementForVideo.src = 'bridge-animation-resized-794x652.mp4';
HTML5ElementForVideo.autoplay = false;
video = new createjs.Bitmap(HTML5ElementForVideo);
video.x = 110.00;
video.y = 42.5;
stage.addChild(video);
...which works okay, and as expected. In the video there is a series of steps which we would like the user to be able to go between, using "Previous" and "Next" step buttons.
I assumed that, navigation-wise, I would be able to use something along the lines of:
[video].gotoAndPlay(x)
To move to the right frame in the video. However, this does not seem to work or be supported? I only seem to be able to play the video or stop it with .play and .pause?
Any suggestions, please?
Dave
The video is not an Animate or CreateJS object - only the Bitmap drawing it to the stage is. Check out the HTML Video API for how to control it. To set the position, use the video element's currentTime property.
Hope that helps!
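As a rough sketch of that idea, reusing the question's HTML5ElementForVideo and stage (the seconds-per-step value and the goToStep helper are assumptions, not part of the CreateJS API):
var SECONDS_PER_STEP = 2; // assumption: how far apart the steps sit in the video

function goToStep(stepIndex) {
    HTML5ElementForVideo.pause();
    // wait for the seek to finish before redrawing, otherwise the Bitmap
    // may still show the previous frame
    HTML5ElementForVideo.addEventListener('seeked', function onSeeked() {
        HTML5ElementForVideo.removeEventListener('seeked', onSeeked);
        stage.update();
    });
    HTML5ElementForVideo.currentTime = stepIndex * SECONDS_PER_STEP;
}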

Twilio video: how to increase video size?

I have a button that allows a user to preview their video that comes through their camera. The video stream is successfully displayed but I am struggling to find out how to alter the dimensions of the displayed video. This is what I have:
HTML:
<div id="local-media"></div>
JavaScript:
previewMedia = new Twilio.Conversations.LocalMedia();
Twilio.Conversations.getUserMedia().then(
    function (mediaStream) {
        previewMedia = new Twilio.Conversations.LocalMedia();
        previewMedia.on('trackAdded', function (track) {
            if (track.kind === "video") {
                track.dimensions.height = 1200;
                track.on('started', function (track) { // DOES NOT FIRE
                    console.log("Track started");
                });
                track.on('dimensionsChanged', function (videoTrack) { // DOES NOT FIRE
                    console.log("Track dimensions changed");
                });
            }
        });
        previewMedia.addStream(mediaStream);
        previewMedia.attach('#local-media');
    },
    function (error) {
        console.error('Unable to access local media', error);
    }
);
The trackAdded event fires but I don't get the started or dimensionsChanged events firing and setting the track.dimensions.height does not work.
I can shrink the video by using:
div#local-media {
    width: 270px;
    height: 202px;
}
div#local-media video {
    max-width: 100%;
    max-height: 100%;
}
but I cannot increase it beyond 640x375 pixels.
Based upon some interactions with our support team it seems you should first try setting the size of a <div> using CSS before attaching the video track. This technique is used in the quickstart application.
https://www.twilio.com/docs/api/video/guide/quickstart-js
Then, try passing in the optional localStreamConstraints when calling inviteToConversation
https://media.twiliocdn.com/sdk/js/conversations/releases/0.13.5/docs/Client.html#inviteToConversation
It looks like you can specify the dimensions for video:
https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
which is then used by getUserMedia (the WebRTC function)
Keep in mind that you can adjust the capture size locally. This is the size of the Video Track being captured from the camera.
However, depending on network conditions, the WebRTC engine in your browser (and the receiver's browser) may decide that the video resolution being captured is too high to send across the network at the desired frame rate (you can also set frame rate constraints on the capturer if you'd like to trade off temporal vs spatial resolution). This means that the receiving side may receive a video feed that is smaller than what you intended to send. To overcome this, you can use CSS to style the <video> element to ensure that it stays at a certain size, which will result in video upscaling/downscaling where required on the receiving side.
We plan to update our documentation with more of these specifics in the future. But you can always find additional support from help#twilio.com.
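As a sketch of the constraints idea, the standard getUserMedia call described on the MDN page linked above accepts width/height constraints like this (the values, and whether this SDK version forwards them unchanged, are assumptions worth checking against the linked docs):
navigator.mediaDevices.getUserMedia({
    audio: true,
    video: {
        width: { ideal: 1280 },   // ask the camera for a larger capture size
        height: { ideal: 720 }
    }
}).then(function (mediaStream) {
    // hand the higher-resolution stream to LocalMedia as in the question
    previewMedia.addStream(mediaStream);
    previewMedia.attach('#local-media');
});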
You can adjust the screen size using the following CSS. You can find this CSS file in Quickstart->public->index.css.
Remote media video size:
div#remote-media video
{
    width: 50%;
    height: 15%;
    background-color: #272726;
    background-repeat: no-repeat;
}

Mobile App "unable to load previous operation due to low memory" after loading image?

I have a simple mobile app. It starts by prompting the user to select an image which is then loaded locally via the following code and displayed on a canvas:
function handleFileSelect(evt) {
    var files = evt.target.files;
    var f = files[0];
    console.log(evt);
    var ctx = document.getElementById("myCanvas").getContext('2d'),
        img = new Image(),
        f = document.getElementById("uploadbutton").files[0],
        url = window.URL || window.webkitURL,
        src = url.createObjectURL(f);
    img.src = src;
    lastImage = img;
    img.onload = function() {
        window.requestAnimationFrame(function(){
            var w = window.innerWidth;
            var h = window.innerHeight;
            ctx.clearRect(0, 0, w, h);
            ctx.drawImage(lastImage, 0, 0);
            //url.revokeObjectURL(src);
        });
    }
}
In my web browser and on my phone, the image is loaded and displayed in a canvas properly, however I have heard from a user who was running it on a Razr Maxx HD with Google Chrome, that they would receive "Error: unable to load previous operation due to low memory." I suspect that this issue cannot be solved by first reading in the image and then scaling it prior to displaying because the full image will have already been loaded into memory even if not displayed. Is there any way to load a scaled version of an image into memory without first loading the entire thing? Any idea what is causing this issue as I am able to load massive images on my Galaxy S3 in chrome and the Galaxy S3 has the same amount of RAM as the Razr Maxx HD?
It may depend on which applications the user is running as to how much memory is available.
One solution is to have the application request the image at a certain width so the server can return a scaled-down image, which should reduce memory requirements.
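A sketch of the client side of that idea, assuming a hypothetical server endpoint that accepts a width query parameter (the URL and parameter name are illustrative only, not a real API):
// Ask the server for an image already scaled to the device width, so the
// full-size original never has to be decoded on the phone.
var targetWidth = Math.round(window.innerWidth * (window.devicePixelRatio || 1));
var img = new Image();
img.onload = function () {
    var ctx = document.getElementById('myCanvas').getContext('2d');
    ctx.drawImage(img, 0, 0);
};
img.src = '/images/photo.jpg?width=' + targetWidth; // hypothetical resizing endpoint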
I have had the same problem on my Samsung Galaxy S4. I found the solution on my phone and have had no issues since:
1. Go to the Settings on your phone.
2. Select More in the top-right task item.
3. Scroll to Developer Options. If you don't have Developer Options enabled, follow steps 1 and 2, then scroll down to About Device and tap it. The phone information is displayed; scroll down until you get to Build Number and tap it several times. Each time you tap it, an info bar tells you how many more taps are needed for Developer Options. Once it is unlocked, continue with the steps above.
4. Once you are in Developer Options, scroll down to the apps options near the bottom of the Developer Options menu.
5. If "Do not keep activities" is ticked, untick it so the box has no tick in it. When it is ticked, every app or running operation is ended as soon as it leaves the page, which is why photos won't upload to Facebook etc. With it unticked, you should now be able to upload photos.

HTML5: Manually "playing" a video inside a canvas, in IOS Safari

So, here's what I'm trying to do:
I want to load a video into a video element, but not have it played in the "normal" way.
Using a timed interval, calculated according to the movie's framerate, I want on each iteration to
A. Manually advance the video one 'frame' (or as close as possible to that).
B. Draw that frame into a canvas.
Thereby having the video "play" inside the canvas.
Here's some code:
<video width="400" height="224" id="mainVideo" src="urltovideo.mp4" type="video/mp4"></video>
<canvas width="400" height="224" id="videoCanvas"></canvas>
<script type="text/javascript">
var videoDom = document.querySelector("#mainVideo");
var videoCanvas = document.querySelector("#videoCanvas");
var videoCtx = null;
var interval = null;
videoDom.addEventListener('canplay',function() {
// The video's framerate is 24fps, so we should move one frame each 1000/24=41.66 ms
interval = setInterval(function() { doVideoCanvas(); }, 41.66);
});
videoDom.addEventListener('loadeddata',function() {
videoCtx = videoCanvas.getContext('2d');
});
function doVideoCanvas() {
videoCtx.drawImage(videoDom,0,0);
//AFAIK and seen, currentTime is in seconds
videoDom.currentTime += 0.0416;
}
</script>
This works perfectly in Google Chrome, but it doesn't work in iPhone Safari;
the video frames do not get drawn to the canvas at all.
I made sure that:
The video events I hooked into do get triggered (did 'alerts' and they were shown).
I have control over the canvas (did a 'fillRect' and it filled).
[I also tried specifying dimensions in the drawImage - it didn't help]
Is drawImage with a video object not applicable at all in iPhone Safari...?
Even if I manage to find a way to capture the video frames, there are also some other issues in the iPhone's browser:
Access to the currentTime property of the video is only granted once the video has started playing (in a standard way). I thought about maybe somehow "playing it hidden" and then capturing, but didn't manage to do that. I also thought of maybe somehow starting to play the video and then immediately stopping it;
There doesn't seem to be any way to forcefully start playing a video in iOS Safari. video.play() or autoplay doesn't seem to do anything. Only when the user taps the "play circle" on the video does the video start playing (taking over the whole screen, as usual with videos in the iPhone's browser).
Once the video plays, the currentTime property does get advanced on the video itself. When you pause the video and go back to the normal HTML page, you can see the frames on the video element changing, though at a slow pace (unlike in Google Chrome, where the rate seems smooth enough to make it look like it's playing); on the iPhone it looks like a rate of roughly 2-3 frames per second. It stays the same even if I change the interval timing, so I guess there's a minimum time limit the iPhone's browser can handle.
"Bonus question" :)
- When the frames on the video element progress via the event, the circle "play button" is visible on the video element (since it is not actually 'playing'). Is there any way to hide it and make it invisible?
This has been tested on an iPhone 3GS (with both 3G and Wi-Fi) and an iPhone 4, both running iOS 5, both with the same results as described.
Unfortunately I don't have an iOS device to test this, but I don't think you need to actually capture the video frames in the way that you're attempting using the currentTime property. The usual process looks something like this:
create a video element without controls (omit the controls attribute)
hide the element
when the video is playing draw it to the canvas on an interval
As you've discovered, iOS devices do not support autoplay on HTML5 video (or indeed audio) but you can create a separate control to initiate playback of the video using the play() method.
This approach should solve the issue you're having with the play button being visible since in this case you are actually playing the video.
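A minimal sketch of that approach, assuming element IDs, a separate play button, and a 24fps draw interval (using visibility:hidden rather than display:none is also an assumption):
var video = document.querySelector('#mainVideo');    // <video> without the controls attribute
var canvas = document.querySelector('#videoCanvas');
var ctx = canvas.getContext('2d');
var drawInterval = null;

video.style.visibility = 'hidden';   // keep the element rendered but out of sight

// playback has to be started from a user gesture on iOS
document.querySelector('#playButton').addEventListener('click', function () {
    video.play();
}, false);

// while the video is playing, copy frames to the canvas on an interval
video.addEventListener('playing', function () {
    drawInterval = setInterval(function () {
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }, 1000 / 24);
}, false);

video.addEventListener('pause', function () {
    clearInterval(drawInterval);
}, false);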
I don't believe the loadeddata event is called on iOS, try the loadedmetadata event instead. I also found it necessary on iOS to call the videoDom.load() method after setting videoDom.src.
For my use case, I need to do a "dRAF" (double requestAnimationFrame) after the seeked event to ensure something was actually drawn to the canvas rather than a transparent rectangle.
Try something like:
videoDom.onloadedmetadata = () => {
    videoCanvas.height = videoDom.videoHeight
    videoCanvas.width = videoDom.videoWidth
    videoDom.currentTime = 0
}

videoDom.onseeked = () => {
    // delay the drawImage call, otherwise we get an empty videoCanvas on iOS
    // see https://stackoverflow.com/questions/44145740/how-does-double-requestanimationframe-work
    window.requestAnimationFrame(() => {
        window.requestAnimationFrame(() => {
            videoCanvas.getContext('2d').drawImage(videoDom, 0, 0, videoCanvas.width, videoCanvas.height)
            videoDom.currentTime += 0.0416
        })
    })
}

videoDom.load()
From this gist

Why would images in the Mootools Multibox have scrollbars the FIRST time they are shown?

We are using Mootools Multibox to display images.
The first time we view it with Chrome and Safari, the images are zoomed in and have scrollbars.
When we reload the page, the images display correctly without scrollbars.
What could be the cause of this?
How can we fix this so that the images are displayed with their correct sizes the first time viewed in Chrome and Safari?
In this block of code:
showContent: function(){
    this.box.removeClass('MultiBoxLoading');
    this.removeContent();
    this.contentContainer = new Element('div', {
        'id': 'MultiBoxContentContainer',
        'styles': {
            opacity: 0,
            width: this.contentObj.width,
            height: (Number(this.contentObj.height) + this.contentObj.xH)
        }
    }).inject(this.box, 'inside');
It sets the width of the content box to contentObj.width directly, which is fine if the browser has the image in its cache - at which point it will work - but not so fine when it does not.
It uses Asset.js to load an image here:
load: function(element){
    this.box.addClass('MultiBoxLoading');
    this.getContent(element);
    if (this.type == 'image'){
        var xH = this.contentObj.xH;
        this.contentObj = new Asset.image(element.href, {onload: this.resize.bind(this)});
        this.contentObj.xH = xH;
    } else {
        this.resize();
    };
},
The problem is that only after the onload fires does the browser know the actual width and height of the image (available through this.width / this.height if not bound to class scope). Although this returns an image object early (into contentObj), it probably shouldn't assign it just yet and should do so after the onload fires. The onload here should be what injects the image into the container and sets the container's width and height to host it; instead, it just calls this.resize(image).
I hope this gives you some ideas as to how to refactor the class to make it work better.
ADDITIONALLY: var xH = this.contentObj.xH; and this.contentObj.xH = xH; -> storing data for elements directly on the object? This pre-dates MooTools 1.2, which introduced closure-based, UID-specific storage per element. It's bad practice and can cause slowness in IE, memory leaks, etc.
Refactor to this.contentObj.store("xH", something), with this.contentObj.retrieve("xH") to get it back.
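A rough sketch of the load method with that storage swapped in (everything else follows the snippet above; this is not the actual MultiBox source):
load: function(element){
    this.box.addClass('MultiBoxLoading');
    this.getContent(element);
    if (this.type == 'image'){
        // use element storage instead of writing arbitrary properties onto the object
        var xH = this.contentObj.retrieve('xH');
        this.contentObj = new Asset.image(element.href, {onload: this.resize.bind(this)});
        this.contentObj.store('xH', xH);
    } else {
        this.resize();
    }
},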
