HTML5 Media Capture crashes on Android when uploading multiple files - javascript

I am having problems with my HTML5 app for decoding barcodes with JavaScript. For testing, I run the implemented algorithms against a database of 1055 pictures (the Muenster BarcodeDB, with a resolution of 600x800px). It works fine in Chrome on Windows and in Safari on an iPad 2, but Chrome on my Moto G (Android) crashes after 20-30 pictures without any message. When I use HTML5 Media Capture with camera photos, it also crashes after taking several pictures, and Chrome reports that there isn't enough memory for the previous operation. It crashes right when the picture is taken and the camera app closes; the browser is then shown again with a reload of the page.
Did anyone face the same problems? Below is some code on how to use the pictures.
HTML Media Capture Input:
<input id="upload" type="file" accept="image/*" capture style="display:none;">
JavaScript Handler (exif.js, megapixImg.js for rotating/scaling image):
fileInput.onchange = function () {
    var file = fileInput.files[0];
    imgOrientation = null;
    // get orientation of image from exif data
    EXIF.getData(file, function () {
        imgOrientation = EXIF.getTag(this, "Orientation");
    });
    // MegaPixImage constructor accepts File/Blob object.
    megapixImg = new MegaPixImage(file);
    // Render resized image into image element using quality option.
    // Quality option is valid when rendering into image element.
    megapixImg.render(tempImg, { maxWidth: maxDimension, maxHeight: maxDimension, quality: 1.0 });
};

tempImg.onload = function () {
    // Render resized image into canvas element.
    megapixImg.render(tempCanvas, { maxWidth: maxDimension, maxHeight: maxDimension, orientation: imgOrientation });
    // TRIGGER ALGORITHM
};
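For reference, the maxWidth/maxHeight options scale the image down to fit within the given bounds. A minimal sketch of that scaling rule (my assumption about the intended behavior, not MegaPixImage's actual source):

```javascript
// Sketch of a fit-within-bounds scaling rule, as the maxWidth/maxHeight
// options imply (assumed behavior, not MegaPixImage's actual implementation).
function fitWithin(width, height, maxDimension) {
    // Never upscale; shrink so the larger side is at most maxDimension.
    var scale = Math.min(1, maxDimension / Math.max(width, height));
    return { width: Math.round(width * scale), height: Math.round(height * scale) };
}
```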

For your HTML5 media capture problem, you are likely running into the following bug:
https://code.google.com/p/android/issues/detail?id=53088
Unfortunately, the Android developers have marked the issue as obsolete, which it is not. A few potential workarounds are discussed in the aforementioned thread, but none of them really suit my needs. Maybe you will have better luck. :)
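One mitigation often suggested for memory-pressure crashes like this is to decode strictly one image at a time and drop references between items so the garbage collector can reclaim the decoded bitmaps. A hedged sketch of that pattern (processOne and cleanup are hypothetical placeholders for your decode/render and release logic; this is not a guaranteed fix for the Chrome bug):

```javascript
// Hypothetical sketch: process images strictly one at a time, releasing
// references between items (e.g. revoking object URLs, nulling canvases)
// before starting the next decode.
function makeSequentialProcessor(processOne, cleanup) {
    var queue = [];
    var busy = false;
    function next() {
        if (busy || queue.length === 0) return;
        busy = true;
        var item = queue.shift();
        Promise.resolve(processOne(item)).then(function () {
            cleanup(item); // drop large references here
            busy = false;
            next();        // only then start the next image
        });
    }
    return function enqueue(item) {
        queue.push(item);
        next();
    };
}
```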

Related

Recording in HD - Ziggeo Recorder v2 JS

I am having difficulty capturing HD video with the Ziggeo Recorder.
I have set up a Recorder; basically, it’s a clone of Ziggeo's hosted solution:
<script>
$(document).ready(function () {
    //assigning the event handler for click on the Next button on first screen
    $("#step1 button").on("click", function () {
        //hide first screen
        $("#step1").hide();
        //show second screen
        $("#step2").show();
        //add our embedding to the page
        var recorder = new ZiggeoApi.V2.Recorder({
            //we find the element with id="recorder" to attach v2 recorder to it
            element: document.getElementById("recorder"),
            attrs: {
                //we make the recorder responsive
                responsive: true,
                //we set the max time for recording to 120 seconds
                timelimit: 2 * 60,
                theme: "modern",
                video_width: 1920,
                video_height: 1080,
                video_profile_transcoding: "HDcapture",
                hd: true,
                //we add name and email to the video as custom data
                "custom-data": {
                    name: $("#name").val(),
                    email: $("#email").val()
                }
            }
        });
        //we activate the recorder
        recorder.activate();
        recorder.on("verified", function () {
            //once the video is uploaded and verified as processable,
            //we show the button to Submit the form
            $("#step2 button").show();
        });
    });
    //When Submit button is clicked
    $("#step2 button").on("click", function () {
        //hide second screen showing recorder
        $("#step2").hide();
        //show the "Thank you" screen
        $("#step3").show();
    });
});
</script>
I’ve tried the following in the attrs object to no avail.
video_width: 1920,
video_height: 1080,
video_profile_transcoding: "HDcapture",
hd: true,
I set up a video transcoding profile (and made it the default), but it isn’t taking effect.
All videos are coming through at:
video_width: 640,
video_height: 480,
hd: false,
These Ziggeo support resources don’t seem to answer how to record HD (w/ v2 & JS)...
https://support.ziggeo.com/hc/en-us/articles/206452028-How-do-I-record-in-HD-
https://ziggeo.com/blog/record-video-in-hd/
And I don't see reference to HD anywhere here:
https://ziggeo.com/docs/api
Thanks in advance for any help or guidance. The promise of the Ziggeo product is awesome–I just need to get it to deliver HD!
It is awesome that you included the links and the code, Jon. Looking at them, I can see why it is not working for you. My suggestion would be to check out the code from one of your links: https://support.ziggeo.com/hc/en-us/articles/206452028-How-do-I-record-in-HD-
This is the code currently shown there:
<ziggeorecorder
ziggeo-recordingwidth=1280
ziggeo-recordingheight=720
ziggeo-theme="modern"
ziggeo-themecolor="red"
ziggeo-video-profile="_name_of_your_hd_profile">
</ziggeorecorder>
The key parameters for recording HD videos are recordingwidth and recordingheight, set to the values you wish to use. The default is 640x480.
The HTML embed codes require the ziggeo- prefix, while the JavaScript ones do not.
So changing the above example results in code that looks like this:
var recorder = new ZiggeoApi.V2.Recorder({
    //we find the element with id="recorder" to attach v2 recorder to it
    element: document.getElementById("recorder"),
    attrs: {
        theme: "modern",
        recordingwidth: 1920,
        recordingheight: 1080,
        'video-profile': "_HDcapture"
    }
});
I have removed most other parameters to show the most basic parameters you need to set.
Now, in the above code you can also notice that I used an underscore before the video profile name, resulting in _HDcapture. This is because tokens (IDs generated by our system) are used without an underscore, while an ID you created yourself is a key (your unique ID), and our system recognizes a key by its leading underscore. So if you do not put the underscore in your embedding, the profile will simply be ignored.
You will see "identifier: _HDcapture" shown to you when you create the video profile in the dashboard, which tells you exactly what you should use.
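To illustrate the token-versus-key rule (this helper is purely illustrative and not part of the Ziggeo SDK):

```javascript
// Illustrative only, not part of the Ziggeo SDK: a user-created profile key
// needs a leading underscore so the API treats it as a key, not a token.
function toProfileIdentifier(name, isUserCreatedKey) {
    if (isUserCreatedKey && name.charAt(0) !== "_") {
        return "_" + name;
    }
    return name;
}
```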
Now, looking at the parameters you used, I believe you took them from the video data and then added them to your embedding.
That video data just shows you what to expect in JavaScript functions like recorder.get() or in a webhook payload. For the actual parameters you can use, you should check out the docs here.
One thing to point out: you can only record 1080 if your camera supports 1080; likewise, 720 requires a camera that supports 720. Most cameras support 640x480, which is why it is our default. I am saying this because:
You need a camera that can record in the resolution you wish
You might also want an alternative for people who do not have HD cameras
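The fallback idea in the second point can be sketched as follows (illustrative only; actual capability detection is browser- and device-specific):

```javascript
// Illustrative only: choose the first preferred resolution the device
// supports, falling back to 640x480 when no HD option is available.
function pickResolution(supportedList, preferredList) {
    for (var i = 0; i < preferredList.length; i++) {
        if (supportedList.indexOf(preferredList[i]) !== -1) {
            return preferredList[i];
        }
    }
    return "640x480"; // the common default
}
```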
For anything JavaScript- and HTML-related, I would suggest checking out the docs here: https://ziggeo.com/docs/sdks/javascript/. Still, do post here, or reach out to the Ziggeo support team, either over email (support#ziggeo.com) or through the forum: https://support.ziggeo.com/hc/en-us/community/topics
PS: I am part of the Ziggeo team, we all like to help, and I hope the above is helpful to you and anyone else looking for the same :)

Unable to create thumbnail from video on iOS Safari - works on desktop (Javascript)

I am having trouble getting code that works on desktop Chrome to work on an iPhone. I've created a demo page (https://jsfiddle.net/0nryc7uf/) that allows users to select (or capture) a video. Once selected, the video is displayed in a video element and a thumbnail image is displayed in an IMG element. This works well on desktop, but not on the iPhone. The video will play on the iPhone (no poster is shown), but the thumbnail doesn't appear. Any thoughts?
function fileSelected(e) {
    //User captured or selected file: Event
    var file = e.target.files[0];
    if (file)
        readFile(file);
    else
        alert('Not a valid image!');
}

function readFile(file) {
    //set up a FileReader
    var reader = new FileReader();
    reader.onloadend = function () {
        makeThumbnail(reader.result);
    };
    reader.onerror = function () {
        alert('There was an error reading the file!');
    };
    //read in the file data
    reader.readAsDataURL(file);
}

function makeThumbnail(fileData) {
    //Find target elements in page
    var myvid = document.getElementById('myvid');
    var mythumb = document.getElementById('mythumb');
    //Create a video element and load it with the file data
    var video = document.createElement('video');
    video.oncanplay = function () { //need this for iphone
        this.currentTime = 0.1;
        video.oncanplay = null;
    };
    video.onseeked = function (e) {
        //Create canvas to hold thumbnail
        var canvas = document.createElement('canvas');
        canvas.height = video.videoHeight;
        canvas.width = video.videoWidth;
        var ctx = canvas.getContext('2d');
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        //display thumbnail
        mythumb.src = canvas.toDataURL("image/png");
        myvid.src = fileData;
        myvid.play();
    };
    video.src = fileData;
    video.load(); //need this for iphone
}
UPDATE:
The issue appears to have something to do with which camera (front/back) was used to capture the video and which orientation was used. All seem to work fine except the portrait-mode with the back camera. I have updated the jsFiddle here: https://jsfiddle.net/roknjohn/j0gyn52k/4/ and have the same test on github: https://roknjohn.github.io/thumbnail/index.html to better illustrate the problem.
UPDATE 2:
I have given up on this functionality. FYI, my PWA allows users to capture video and upload it to the server. The app uses an <input type="file" accept="video/mp4,video/x-m4v,video/*"> tag to capture the video and JavaScript to post it with an AJAX call. Before posting, the PWA created a thumbnail of the video and sent it along as well. Since producing the thumbnail on the client side was problematic for the reasons mentioned above, I now create the thumbnail on the server once the video arrives. I use FFMPEG to extract a PNG image from the uploaded movie and store it for later use.
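For the server-side route, the frame extraction can be sketched like this (a hypothetical Node.js fragment; the ffmpeg flags shown are standard, but the paths and seek time are placeholders to adapt to your setup):

```javascript
// Hypothetical server-side sketch: build the ffmpeg arguments that extract
// a single PNG frame from an uploaded video at the given timestamp.
function thumbnailArgs(videoPath, pngPath, seconds) {
    return ["-ss", String(seconds), "-i", videoPath, "-frames:v", "1", pngPath];
}

// usage (assumes ffmpeg is installed on the server):
// var spawn = require("child_process").spawn;
// spawn("ffmpeg", thumbnailArgs("upload.mp4", "thumb.png", 0.1));
```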
But, alas, I have discovered another issue with iOS Safari. PWA apps on this device lose access to the camera once the app loses focus (i.e. is sent to the background). When the app is first launched, the camera works nicely until you open a different app and come back. Then the camera screen is totally black, requiring the user to quit the app and re-launch.
UPDATE 3: Well, it gets worse. I just upgraded to iOS 13.4 and now the camera doesn't work at all with an <input> tag. Guess I'll look into WebRTC...
UPDATE 4: I may have spoken too soon. The 13.4 update actually broke the way I was styling the <input> tag, as I had it wrapped in a <label> and hid the standard (ugly) one. I just added a JavaScript routine to call the hidden input's click method, and all seems to be working now. Fingers crossed... this has been a real doozy.

Can't save canvas as image on Edge browser

I am using Fabric.js for my web application. For saving an HTML5 canvas as an image using Fabric.js, I found the following code, which works in Chrome and Firefox. But when I run the same code in Microsoft Edge, the browser just opens a blank new window without any image in it.
$(".save").click(function () {
    canvas.deactivateAll().renderAll();
    window.open(canvas.toDataURL('png'));
});
However, I also tested many fiddle projects that show how to convert a canvas to an image. Those fiddles work in Chrome but not in Edge. An example fiddle is here
If you use FileSaver.js, it will download on Edge. To use FileSaver.js you need to convert the base64 data into blob data. To do that, please check this post on StackOverflow
Here is an updated fiddle
You need to include FileSaver.js into your project, and your save button will have the following code:
$("#canvas2png").click(function () {
    canvas.isDrawingMode = false;
    if (!window.localStorage) {
        alert("This function is not supported by your browser.");
        return;
    }
    var blob = new Blob(
        [b64toBlob(canvas.toDataURL('png').replace(/^data:image\/(png|jpg);base64,/, ""), "image/png")],
        { type: "image/png" }
    );
    saveAs(blob, "testfile1.png");
});
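The b64toBlob helper is referenced above but not shown here; one common minimal shape of it looks like this (a sketch of the assumed helper; production versions often decode in slices for large payloads):

```javascript
// Minimal b64toBlob sketch (assumed shape of the helper the answer links to):
// decodes a base64 string into bytes and wraps them in a Blob.
function b64toBlob(b64Data, contentType) {
    var byteChars = atob(b64Data);
    var bytes = new Uint8Array(byteChars.length);
    for (var i = 0; i < byteChars.length; i++) {
        bytes[i] = byteChars.charCodeAt(i);
    }
    return new Blob([bytes], { type: contentType });
}
```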
An alternative, quick-and-dirty solution is to write the HTML data into a new tab, then right-click on the image and save it. This solution does not require any plugins or libraries.
Your save handler simply changes to:
$("#canvas2png").click(function () {
    canvas.isDrawingMode = false;
    if (!window.localStorage) {
        alert("This function is not supported by your browser.");
        return;
    }
    var html = "<img src='" + canvas.toDataURL() + "' alt='canvas image'/>";
    var newTab = window.open();
    newTab.document.write(html);
});
Check the modified fiddle:
http://jsfiddle.net/9hrcp/164/
document.getElementById('test').src = canvas.toDataURL('image/png');
I added an image tag and set its src attribute to the data URL of the image, and it loads fine.
So Edge is generating a correct PNG but refusing to load it as a data-URL redirect. This may be a feature and not a bug.

How to initialise SoundJS for IOS 9

We've been using the CreateJS suite for a while now, but have just realised that our audio is not working on iOS 9. Unfortunately we only have one iOS 9 test device, running iOS 9.2.4, but are getting mixed results with audio playback.
The rough process I'm using is;
Preload all assets (scripts/images/audio) via PreloadJS
Construct an initial splash screen in EaselJS, including a start button
Continue with the main content
It would be advantageous to be able to preload all audio before presenting that splash screen. The splash screen was added initially to allow audio to play on mobile Safari, with an empty sound played on click. This works for iOS 7/8, but not for 9.
I've created this test case on codepen as an attempt to track down the issue and try some options.
Codepen sample
HTML
<p id="status">Hello World</p>
<canvas id="canvas" width="200" height="200"></canvas>
JS
var canvas, stage, rect;

function init() {
    canvas = document.getElementById('canvas');
    canvas.style.background = "rgb(10,10,30)";
    stage = new createjs.Stage("canvas");
    createjs.Touch.enable(stage);

    rect = new createjs.Shape();
    rect.graphics.f("#f00").dr(50, 75, 100, 50);
    rect.on("mousedown", handleStart, null, true);
    rect.on("touchend", handleStart, null, true);
    //rect.on("click", handleStart, null, true);
    stage.addChild(rect);
    stage.update();

    $('#status').text("Touch to Start");

    createjs.Sound.initializeDefaultPlugins();
    //createjs.Sound.alternateExtensions = ["ogg"];
    createjs.Sound.registerSound("https://www.freesound.org/data/previews/66/66136_606715-lq.mp3", "ding1");
}

function handleStart(event) {
    createjs.WebAudioPlugin.playEmptySound();
    $('#status').text("Touch to Play");
    rect.graphics._fill.style = '#0f0';
    rect.removeAllEventListeners();
    rect.on('click', handlePlay);
    stage.update();
}

function handlePlay() {
    createjs.Sound.play("ding1");
    $('#status').text("Playing");
}

init();
Apologies for the lack of ogg version, was struggling to get test files to load x-domain.
With this, audio partially works for us on iOS 9. If we delay clicking the red rectangle (a stand-in start button) for ~20 seconds and then click the green button, no audio plays. If we click it immediately, audio plays fine.
I have been reviewing this bug/thread and attempting to follow Grant's suggestions. I gather SoundJS v0.6.2 now automatically attempts to play the empty sound as appropriate when plugins are initialized; however, moving the initializeDefaultPlugins and registerSound calls into the handleStart function appears to make no difference. If I'm understanding the issue correctly, calling the WebAudioPlugin.playEmptySound method should be sufficient?
I have also been looking at the event binding, trying mousedown/touchend instead of click, but the result is the same with the 20-second wait. The event also appears to fire twice, although I could dig deeper into that if I could get it to work correctly.
I'm aware of the Mobile Safe Approach article aimed at this issue, but needing a namespace at that level would mean a substantial rewrite of our existing content. Could someone advise whether this approach is completely necessary? I'm under the impression it should be feasible by correctly playing some empty audio within that initial handler.
I can't actually get the main project to this point, but if I can get a working example, perhaps I'll be a step closer.
Any thoughts would be appreciated!
The Mobile-safe tutorial is not really relevant anymore since the updates in 0.6.2. It will likely be updated or removed in the near future.
You should never need initializeDefaultPlugins(), unless you want to act on the result (i.e., check the activePlugin before doing something). Plugin initialization happens automatically the first time you try to register a sound.
playEmptySound is also no longer necessary. SoundJS listens for the first document mousedown/click and automatically does this in the background.
Touch events are not directly propagated from the EaselJS stage; they are instead turned into mouse events (touchstart=mousedown, touchmove=mousemove, touchend=pressup/click).
Based on this, you should be able to play sound once anywhere in the document has been clicked, regardless of whether you listen for it or not.
That is not to say that there isn't a bug, just that the steps you are taking shouldn't be necessary. I did some testing, and it appears to work in iOS9. Here is a simplified fiddle.
https://jsfiddle.net/lannymcnie/b4k19fwc/ (link updated Dec 3, 2018)

Event Listeners in HTML5 Video on the iPad Safari not working?

I've got this in the <head>:
<script>
function log(event) {
    var Url = "./log.php?session=<?php echo session_id(); ?>&event=" + event;
    xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET", Url, true);
    xmlHttp.send(null);
}
</script>
And this in the <body>:
<video id="video" src="./video/LarryVideo.mp4"
controls="controls"
poster="./img/video_poster.jpg"
onabort="log('onabort');"
oncanplay="log('oncanplay');"
oncanplaythrough="log('oncanplaythrough');"
ondurationchange="log('ondurationchange');"
onemptied="log('onemptied');"
onended="log('onended');"
onerror="log('onerror');"
onloadeddata="log('onloadeddata');"
onloadedmetadata="log('onloadedmetadata');"
onloadstart="log('onloadstart');"
onpause="log('onpause');"
onplay="log('onplay');"
onplaying="log('onplaying');"
onprogress="log('onprogress');"
onratechange="log('onratechange');"
onreadystatechange="log('onreadystatechange');"
onseeked="log('onseeked');"
onseeking="log('onseeking');"
onstalled="log('onstalled');"
onsuspend="log('onsuspend');"
ontimeupdate="log('ontimeupdate');"
onvolumechange="log('onvolumechange');"
onwaiting="log('onwaiting');">
<script>
QT_WriteOBJECT('./video/LarryVideo.mp4',
'380px', '285px', // width & height
'', // required version of the ActiveX control, we're OK with the default value
'scale', 'tofit', // scale to fit element size exactly so resizing works
'emb#id', 'video_embed', // ID for embed tag only
'obj#id', 'video_obj'); // ID for object tag only
</script>
</video>
My desktop Safari creates nice log-file entries as expected. Mobile Safari on the iPad, however, doesn't do anything at all.
What could be wrong with this?
I have not been able to get hold of readyState on an iPad either, but you can get other events that more or less let you infer the readyState.
var audio = new Audio("someSource.mp3");
// attach listeners before calling play() so early events aren't missed
audio.addEventListener("canplay", handleCanPlay, false);
audio.addEventListener("durationchange", handleDurationChange, false);
audio.play();
/* you may need to use .load() depending on how the event was initiated */
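As a reference for inferring readyState from events, the HTML media spec ties each of these events to a minimum readyState value, roughly:

```javascript
// Per the HTML media spec: the minimum readyState implied when each of
// these events fires (HAVE_METADATA=1 ... HAVE_ENOUGH_DATA=4).
var readyStateFromEvent = {
    loadedmetadata: 1,  // HAVE_METADATA
    loadeddata: 2,      // HAVE_CURRENT_DATA
    canplay: 3,         // HAVE_FUTURE_DATA
    canplaythrough: 4   // HAVE_ENOUGH_DATA
};
```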
But let's be clear: the problem is Apple pretty much telling the whole world they're using the internet wrong. Granted, everyone hates sites that start playing music the second they load, but then Apple goes nuts and kills ANY/ALL buffering of audio/video that isn't explicitly initiated by a user gesture, because Apple apparently thinks their users can't hit "back" if a site bothers them; fanboys agree too. This basically leaves the rest of us to hack up our applications if we dare try to manage any kind of sound effects. I know this isn't the place to rant, but building ANY sort of interesting/interactive experience in HTML5 on the iPad is one facepalm after another -- be it the 5MB cache limit that simply crashes the browser if a page has "too many" (according to Apple) images, or the difficulty of preloading any sort of media to enhance the UI. Seriously, outside of Wordpress blogs and RSS readers, mobile Safari's implementation of HTML5 is pretty worthless. And so the dream of HTML5's "build once, play anywhere" value proposition is dead, and we go back to developing native apps. At least this gives us good job security! /rant
