I have a custom Node.js addon that transfers a JPEG capture to my application, and it's working great: if I write the buffer contents to disk, it's a proper JPEG image, as expected.
var wstream = fs.createWriteStream(filename);
wstream.write(getImageResult.imagebuffer);
wstream.end();
I'm struggling to display that image as an img.src rather than saving it to disk, with something like:
var image = document.createElement('img');
var b64encoded = btoa(String.fromCharCode.apply(null, getImageResult.imagebuffer));
image.src = 'data:image/jpeg;base64,' + b64encoded;
The data in b64encoded after the conversion IS correct; I tried it on http://codebeautify.org/base64-to-image-converter and the correct image does show up. I must be missing something stupid. Any help would be amazing!
Thanks for the help!
Is this what you want?
// Buffer containing the jpg data
var buf = getImageResult.imagebuffer;

// Create an HTML img element
var imageElem = document.createElement('img');

// Just use the toString() method of your Buffer instance
// to get the data as a base64 string
imageElem.src = 'data:image/jpeg;base64,' + buf.toString('base64');
Facepalm... there was an extra leading space in the text...
var getImageResult = addon.getlatestimage();
var b64encoded = btoa(String.fromCharCode.apply(null, getImageResult.imagebuffer));
var datajpg = "data:image/jpg;base64," + b64encoded;
document.getElementById("myimage").src = datajpg;
Works perfectly.
You need to add the img to the DOM.
Also, if you are generating the buffer in the main process, you need to pass it over to Electron's renderer process, as sketched below.
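A minimal sketch of that hand-off, assuming a reasonably recent Electron where ipcMain.handle/ipcRenderer.invoke are available and the renderer has nodeIntegration; the channel name 'latest-image' and the addon path are made up for illustration:

// main process
const { ipcMain } = require('electron');
const addon = require('./build/Release/capture');   // hypothetical native addon

ipcMain.handle('latest-image', function () {
    return addon.getlatestimage().imagebuffer;      // the Buffer is serialized across the IPC boundary
});

// renderer process
const { ipcRenderer } = require('electron');

async function showLatestImage() {
    const buf = await ipcRenderer.invoke('latest-image');   // arrives as a Buffer/Uint8Array
    const imageElem = document.createElement('img');
    imageElem.src = 'data:image/jpeg;base64,' + Buffer.from(buf).toString('base64');
    document.body.appendChild(imageElem);                   // remember to attach it to the DOM
}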
I'm fighting with PDF.js. All I want is to avoid passing the PDF file in a GET parameter; seriously, who does that today?
So my problem is: I'm trying to load a PDF file into my already-loaded pdf.js. I want to use viewer.html and viewer.js. The file is served as a base64-encoded string. For testing, I'm loading the base64 data directly into the HTML so I can access it from JavaScript.
The files I'm loading:
build/pdf.js
build/pdf.worker.js
web/viewer.js
No loading errors
var BASE64_MARKER = ';base64,';
var pdfAsArray = convertDataURIToBinary("data:application/pdf;base64, " + document.getElementById('pdfData').value);

pdfjsLib.getDocument(pdfAsArray).then(function (pdf) {
    //var url = URL.createObjectURL(blob);
    console.log(pdfjsLib);
    pdf.getPage(1).then(function (page) {
        // you can now use *page* here
        var scale = 1.5;
        var viewport = page.getViewport({ scale: scale });

        // Prepare canvas using PDF page dimensions.
        var canvas = document.getElementById('viewer');
        var context = canvas.getContext('2d');
        canvas.height = viewport.height;
        canvas.width = viewport.width;

        // Render PDF page into canvas context.
        var renderContext = {
            canvasContext: context,
            viewport: viewport
        };
        page.render(renderContext);
    });
    //pdfjsLib.load(pdf);
});

function convertDataURIToBinary(dataURI) {
    var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
    var base64 = dataURI.substring(base64Index);
    var raw = window.atob(base64);
    var rawLength = raw.length;
    var array = new Uint8Array(new ArrayBuffer(rawLength));
    for (var i = 0; i < rawLength; i++) {
        array[i] = raw.charCodeAt(i);
    }
    return array;
}
The console.log here is:
{build: "d7afb74a", version: "2.2.228", getDocument: ƒ, LoopbackPort: ƒ, PDFDataRangeTransport: ƒ, …}
The PDF loads correctly and JavaScript gives me a sensible console.log: I can see it has 2 pages and more than 1 MB of data. So I think the code and the PDF are okay.
And now I don't want to use a canvas. (I have only seen tutorials where people work with a canvas, not with viewer.html.) I'm not going to work with iframes, canvases, or objects. I just want viewer.js to load MY pdf rather than any other (example.pdf).
I want pdf.js to load the PDF I'm producing with PHP, and on load it should simply appear. PHP provides the PDF base64-encoded.
I saw the article on the docu of pdf.js: https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#file
You can use raw binary data to open a PDF document: use Uint8Array instead of URL in the PDFViewerApplication.open call. If you have base64 encoded data, please decode it first -- not all browsers have atob or data URI scheme support. (The base64 conversion operation uses more memory, so we recommend delivering raw PDF data as typed array in first place.)
What a nice tip. BUT nobody says where you get access to PDFViewerApplication. If I do this:
pdfjsLib.PDFViewerApplication.open(pdfAsArray);
I get an error like 'open is not a function' (I tried load() too).
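My best guess, as a rough and untested sketch: PDFViewerApplication doesn't seem to live on pdfjsLib at all, but is a global that viewer.js attaches to window inside viewer.html. So something like this, run from within viewer.html after viewer.js has initialized, is probably what the FAQ means (the exact open() signature may differ between pdf.js versions):

// Inside viewer.html, once viewer.js has set up its application object
window.PDFViewerApplication.open(pdfAsArray);              // pdfAsArray is the Uint8Array from convertDataURIToBinary()
// some versions expect an options object instead:
// window.PDFViewerApplication.open({ data: pdfAsArray });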
Sorry for my bad English; I hope you understand my problem and can help.
I'm trying to base64 encode a local file. It's next to my .js file so there's no uploading going on. Solutions like this (using XMLHttpRequest) get a cross-site scripting error.
I'm trying something like this (which doesn't work but it might help explain my problem):
var file = 'file.jpg';
var reader = new FileReader();
reader.onload = function (e) {
    var res = e.target.result;
    console.log(res);
};
// Doesn't work: readAsDataURL() expects a Blob or File object, not a filename string
var f = reader.readAsDataURL(file);
Anyone have any experience doing this locally?
Solutions like this (using XMLHttpRequest) get a cross-site scripting error.
If you are using Chrome or a Chromium-based browser, you can launch it with the --allow-file-access-from-files flag set to allow requests for resources on the local filesystem via XMLHttpRequest() or canvas.toDataURL().
You can use an <img> element together with the <canvas> element's .toDataURL() method to create a data URL for a local image file without using XMLHttpRequest():
var file = "file.jpg";
var img = new Image;
var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");
img.onload = function() {
canvas.width = this.naturalWidth;
canvas.height = this.naturalHeight;
ctx.drawImage(this, 0, 0);
var res = canvas.toDataURL("image/jpeg", 1); // set image `type` to `image/jpeg`
console.log(res);
}
img.src = file;
You could alternatively use XMLHttpRequest() as described at Convert local image to base64 string in Javascript.
See also How to print all the txt files inside a folder using java script.
For details of how the returned data URI differs between the two approaches, see canvas2d toDataURL() different output on different browser.
As described by @Kaiido in a comment below:
it will first decode it; at this stage it's still your file. Then it will paint it to the canvas (now it's just raw pixels), and finally it will re-encode it (it has nothing to do with your original file anymore). Check the data URI strings... they're completely different, and even if you do the canvas operation in two different browsers you'll get different outputs, while FileReader will always give you the same output, since it encodes the file directly; it doesn't decode it.
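If you do need the file's original bytes rather than a re-encoded copy, here is a minimal sketch of the FileReader route, assuming the file is reachable over HTTP (or Chrome was launched with the flag above):

var xhr = new XMLHttpRequest();
xhr.open("GET", "file.jpg");
xhr.responseType = "blob"; // get the raw bytes as a Blob
xhr.onload = function () {
    var reader = new FileReader();
    reader.onload = function (e) {
        console.log(e.target.result); // "data:image/jpeg;base64,..." of the original file
    };
    reader.readAsDataURL(xhr.response);
};
xhr.send();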
I'm attempting to use CefSharp (Offscreen) to get image information in a webpage. I'd like to avoid downloading the content twice (I'm aware I can pull the src string from an image tag, and then download the image again). Right now, this is my code:
using (var browser = new ChromiumWebBrowser("http://example.com"))
{
    // All this does is wait for the entire frame to be loaded.
    await LoadPageAsync(browser);

    var res1 = await browser.EvaluateScriptAsync("document.getElementsByTagName('img')[0].src");
    // res1.Result = the source of the image (a string)

    var res2 = await browser.EvaluateScriptAsync("document.getElementsByTagName('img')[0]");
    // This causes an APPCRASH on CefSharp.BrowserSubprocess.exe
}
The way I figure it, CefSharp has already downloaded these images in order to render the page. I'd like to avoid making a second request for them and instead pull them directly out of the browser. Is this possible? What are the limitations of the JavascriptResponse object, and why is it causing an APPCRASH here?
Some thoughts: I thought about base64-encoding each image and pulling it out that way, but this would require generating and filling a canvas for every image I want, producing a base64 string, bringing it into C# as a string, and then decoding it back to an image. I don't know how efficient that would be, but I'm hoping there is a better solution.
This is how I solved it:
result = await browser.EvaluateScriptAsync(#"
;(function() {
var getDataFromImg = function(img) {
var canvas = document.createElement('canvas');
var context = canvas.getContext('2d');
context.drawImage(img, 0, 0 );
var dataURL = canvas.toDataURL('image/png');
return dataURL.replace(/^data:image\/(png|jpg);base64,/, '');
}
var images = document.querySelectorAll('.image');
var finalArray = {};
for ( var i=0; i<images.length; i++ )
{
//I just filled in array. Depending on what you're grabbing, you may want to fill
//This with objects instead with text to identify each image.
finalArray.push(getDataFromDiv(images[i]));
}
return finalArray;
})()");
// Helper function for below (requires System.Text; the image APIs further down need System.IO and System.Drawing)
private static string FixBase64ForImage(string image)
{
    var sbText = new StringBuilder(image, image.Length);
    sbText.Replace("\r\n", string.Empty);
    sbText.Replace(" ", string.Empty);
    return sbText.ToString();
}

// In C#, convert the data to a memory stream and then load the bitmap from it.
var bitmapData = Convert.FromBase64String(FixBase64ForImage(image));
var streamBitmap = new MemoryStream(bitmapData);
var sourceImage = (Bitmap)Image.FromStream(streamBitmap);
Try executing the JavaScript from How to get base64 encoded data from html image.
CefSharp should have the FileReader API, so you can have the EvaluateScriptAsync call return the base64-encoded image data, as sketched below.
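A rough sketch of what that script might look like (untested in CefSharp; it assumes the image is same-origin, and the request will normally be answered from the HTTP cache rather than hitting the network again). Getting the Promise's result back into C# may need extra plumbing depending on your CefSharp version:

// Base64-encode an image's original bytes via fetch + FileReader (no canvas re-encoding)
function imageToBase64(url) {
    return fetch(url)
        .then(function (response) { return response.blob(); })
        .then(function (blob) {
            return new Promise(function (resolve, reject) {
                var reader = new FileReader();
                reader.onload = function () { resolve(reader.result); }; // "data:image/...;base64,..."
                reader.onerror = reject;
                reader.readAsDataURL(blob);
            });
        });
}

imageToBase64(document.getElementsByTagName('img')[0].src).then(function (dataUrl) {
    console.log(dataUrl);
});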
I have a webpage that rapidly streams JSON from the server and displays bits of it, about 10 times/second. One part is a base64-encoded PNG image. I've found a few different ways to display the image, but all of them cause unbounded memory usage: it rises from 50 MB to 2 GB within minutes. This happens with Chrome, Safari, and Firefox; I haven't tried IE.
I discovered the memory usage first by looking at Activity Monitor.app -- the Google Chrome Renderer process continuously eats memory. Then, I looked at Chrome's Resource inspector (View > Developer > Developer Tools, Resources), and I saw that it was caching the images. Every time I changed the img src, or created a new Image() and set its src, Chrome cached it. I can only imagine the other browsers are doing the same.
Is there any way to control this caching? Can I turn it off, or do something sneaky so it never happens?
Edit: I'd like to be able to use the technique in Safari/Mobile Safari. Also, I'm open to other methods of rapidly refreshing an image if anyone has any ideas.
Here are the methods I've tried. Each one resides in a function that gets called on AJAX completion.
Method 1 - Directly set the src attribute on an img tag
Fast. Displays nicely. Leaks like crazy.
$('#placeholder_img').attr('src', 'data:image/png;base64,' + imgString);
Method 2 - Replace img with a canvas, and use drawImage
Displays fine, but still leaks.
var canvas = document.getElementById("placeholder_canvas");
var ctx = canvas.getContext("2d");
var img = new Image();
img.onload = function() {
ctx.drawImage(img, 0, 0);
}
img.src = "data:image/png;base64," + imgString;
Method 3 - Convert to binary and replace canvas contents
I'm doing something wrong here: the images display small and look like random noise. This method uses a bounded amount of memory (it grows to about 100 MB and stops), but it is slow, especially in Safari (~50% CPU usage there, 17% in Chrome). The idea came from this similar SO question: Data URI leak in Safari (was: Memory Leak with HTML5 canvas)
// NOTE: atob() yields the *compressed PNG file bytes*, not decoded RGBA pixels,
// which is presumably why copying them straight into ImageData shows up as noise.
var img = atob(imgString);
var binimg = [];
for (var i = 0; i < img.length; i++) {
    binimg.push(img.charCodeAt(i));
}
var bytearray = new Uint8Array(binimg);

// Grab the existing image from canvas
var ctx = document.getElementById("placeholder_canvas").getContext("2d");
var width = ctx.canvas.width,
    height = ctx.canvas.height;
var imgdata = ctx.getImageData(0, 0, width, height);

// Overwrite it with new data
for (var i = 8, len = imgdata.data.length; i < len; i++) {
    imgdata.data[i - 8] = bytearray[i];
}

// Write it back
ctx.putImageData(imgdata, 0, 0);
I know it's been years since this issue was posted, but the problem still exists in recent versions of Safari. So I have a definitive solution that works in all browsers, and I think this could save jobs or lives!
Copy the following code somewhere in your html page:
// Methods to address the memory leak problems in Safari
var BASE64_MARKER = ';base64,';
var temporaryImage;
var objectURL = window.URL || window.webkitURL;

function convertDataURIToBlob(dataURI) {
    // Validate input data
    if (!dataURI) return;

    // Convert image (in base64) to binary data
    var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
    var base64 = dataURI.substring(base64Index);
    var raw = window.atob(base64);
    var rawLength = raw.length;
    var array = new Uint8Array(new ArrayBuffer(rawLength));
    for (var i = 0; i < rawLength; i++) {
        array[i] = raw.charCodeAt(i);
    }

    // Create and return a new blob object using the binary data
    return new Blob([array], {type: "image/jpeg"});
}
Then, when you receive a new frame/image base64Image in base64 format (e.g. data:image/jpeg;base64,LzlqLzRBQ...) and you want to update an HTML <img /> element imageElement, use this code:
// Destroy old image
if(temporaryImage) objectURL.revokeObjectURL(temporaryImage);
// Create a new image from binary data
var imageDataBlob = convertDataURIToBlob(base64Image);
// Create a new object URL object
temporaryImage = objectURL.createObjectURL(imageDataBlob);
// Set the new image
imageElement.src = temporaryImage;
Repeat this last block as often as needed and no memory leaks will appear. This solution doesn't require the canvas element, but you can adapt the code to work with one, as sketched below.
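For instance, a minimal sketch of the canvas variant, assuming ctx is a 2D canvas context and base64Image arrives as above:

var imageDataBlob = convertDataURIToBlob(base64Image);
var blobUrl = objectURL.createObjectURL(imageDataBlob);

var frame = new Image();
frame.onload = function () {
    ctx.drawImage(frame, 0, 0);
    objectURL.revokeObjectURL(blobUrl); // release the blob once it has been painted
};
frame.src = blobUrl;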
Try setting img.src = "" after drawing.
var canvas = document.getElementById("placeholder_canvas");
var ctx = canvas.getContext("2d");
var img = new Image();
img.onload = function() {
ctx.drawImage(img, 0, 0);
//after drawing set src empty
img.src = "";
}
img.src = "data:image/png;base64," + imgString;
This might help.
I don't think there are any guarantees given about the memory usage of data URLs. If you can figure out a way to get them to behave in one browser, that guarantees little, if anything, about other browsers or versions.
If you put your image data into a blob and then create a blob URL, you can then deallocate that data.
Here's an example which turns a data URI into a blob URL; you may need to change or drop the webkit-/WebKit- prefixes on browsers other than Chrome, and possibly in future versions of Chrome.
var parts = dataURL.match(/data:([^;]*)(;base64)?,([0-9A-Za-z+/]+)/);

// assume base64 encoding
var binStr = atob(parts[3]);

// might be able to replace the following lines with just
//   var view = new Uint8Array(binStr);
// haven't tested.

// convert to binary in ArrayBuffer
var buf = new ArrayBuffer(binStr.length);
var view = new Uint8Array(buf);
for (var i = 0; i < view.length; i++)
    view[i] = binStr.charCodeAt(i);
// end of the possibly unnecessary lines

var builder = new WebKitBlobBuilder();
builder.append(buf);

// create blob with mime type, create URL for it
var URL = webkitURL.createObjectURL(builder.getBlob(parts[1]));
return URL;
Deallocating is as easy as:
webkitURL.revokeObjectURL(URL);
And you can use your blob URL as your img's src.
Unfortunately, blob URLs do not appear to be supported in IE prior to v10.
API reference:
http://www.w3.org/TR/FileAPI/#dfn-createObjectURL
http://www.w3.org/TR/FileAPI/#dfn-revokeObjectURL
Compatibility reference:
http://caniuse.com/#search=blob%20url
I had a very similar issue.
Setting img.src to dataUrl Leaks Memory
Long story short, I simply worked around the Image element: I use a JavaScript decoder to decode and display the image data directly on a canvas. Unless the user tries to download the image, they'll never know the difference either. The other downside is that you're limited to modern browsers. The upside is that this method doesn't leak like a sieve :)
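A minimal sketch of that idea, substituting the built-in createImageBitmap() for a hand-rolled JS decoder (so it assumes a browser that supports createImageBitmap, which rules out older Safari):

var ctx = document.getElementById("placeholder_canvas").getContext("2d");

function drawFrame(imgString) {
    // base64 -> bytes -> Blob, then let the browser decode it without an Image element
    var raw = atob(imgString);
    var bytes = new Uint8Array(raw.length);
    for (var i = 0; i < raw.length; i++) bytes[i] = raw.charCodeAt(i);

    createImageBitmap(new Blob([bytes], { type: "image/png" })).then(function (bitmap) {
        ctx.drawImage(bitmap, 0, 0);
        bitmap.close(); // free the decoded pixels right away
    });
}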
Patching up ellisbben's answer, since BlobBuilder is obsolete and https://developer.mozilla.org/en-US/Add-ons/Code_snippets/StringView provides what appears to be a nice, quick conversion from base64 to Uint8Array:
in html:
<script src='js/stringview.js'></script>
in js:
window.URL = window.URL || window.webkitURL;

function blobify_dataurl(dataURL) {
    var parts = dataURL.match(/data:([^;]*)(;base64)?,([0-9A-Za-z+/]+)/);

    // assume base64 encoding; convert straight to a Uint8Array via StringView
    var view = StringView.base64ToBytes(parts[3]);

    // create blob with the original mime type, then create a URL for it
    var blob = new Blob([view], {type: parts[1]});
    var outURL = URL.createObjectURL(blob);
    return outURL;
}
I still don't see it actually updating the image in Mobile Safari, but Chrome can receive data URLs rapid-fire over a websocket and keep up with them far better than when manually iterating over the string. And if you know you'll always have the same type of data URL, you could even swap the regex out for a substring (likely quicker...?)
Running some quick memory profiles, it looks like Chrome is even able to keep up with deallocations (if you remember to do them...):
URL.revokeObjectURL(outURL);
I have used different methods to solve this problem and none of them worked. It seems that memory leaks when img.src = base64string, and that memory never gets released. Here is my solution:
// Write the decoded JPEG buffer to disk, then point the img at the file,
// using img_step as a cache-busting query parameter.
fs.writeFile('img0.jpg', img_data, function (err) {
    // console.log("saved img!");
});
document.getElementById("my-img").src = 'img0.jpg?' + img_step;
img_step += 1;
Note that you should convert the base64 string to a JPEG buffer first.
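For example, a minimal sketch of that conversion, assuming the frame arrives as a data-URL string called base64Image:

var img_data = Buffer.from(base64Image.replace(/^data:image\/\w+;base64,/, ''), 'base64');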
My Electron app updates the img every 50 ms and memory doesn't leak.
Forget about the disk usage; Chrome's memory management pisses me off.
Unless Safari or Mobile Safari don't leak data urls, server-side might be the only way to do this on all browsers.
Probably the most straightforward approach would be to make a URL for your image stream: GETting it gives a 302 or 303 response redirecting to a single-use URL that serves the desired image (see the sketch below). You will probably have to destroy and re-create the image tags to force a reload of the URL.
You will also be at the mercy of the browser regarding its img caching behavior. And the mercy of my understanding (or lack of understanding) of the HTTP spec. Still, unless server-side operation doesn't fit your requirements, try this first. It adds to the complexity of the server, but this approach uses the browser much more naturally.
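A rough sketch of that server side, using Express purely for illustration; the /stream and /frame/:id routes, the in-memory frame store, and getLatestFrame() are all made up here:

const express = require('express');
const crypto = require('crypto');
const app = express();

const frames = new Map(); // id -> PNG buffer, each entry served exactly once

app.get('/stream', (req, res) => {
    const id = crypto.randomBytes(16).toString('hex');
    frames.set(id, getLatestFrame());   // hypothetical: returns the newest PNG buffer
    res.redirect(303, '/frame/' + id);  // "see other" redirect to a single-use URL
});

app.get('/frame/:id', (req, res) => {
    const png = frames.get(req.params.id);
    frames.delete(req.params.id);       // single use
    if (!png) return res.sendStatus(404);
    res.type('png').send(png);
});

app.listen(3000);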
But what about using the browser un-naturally? Depending on how browsers implement iframes and handle their associated content, you might be able to get data urls working without leaking the memory. This is kinda Frankenstein shit and is exactly the sort of nonsense that no one should have to do. Upside: it could work. Downside: there are a bazillion ways to try it and uneven, undocumented behavior is exactly what I'd expect.
One idea: embed an iframe containing a page; this page and the page that it is embedded in use cross document messaging (note the GREEN in the compatibility matrix!); embeddee gets the PNG string and passes it along to the embedded page, which then makes an appropriate img tag. When the embeddee needs to display a new message, it destroys the embedded iframe (hopefully releasing the memory of the data url) then creates a new one and passes it the new PNG string.
If you want to be marginally more clever, you could actually embed the source for the embedded frame in the embeddee page as a data url; however, this might leak that data url, which I guess would be poetic justice for trying such a reacharound.
"Something that works in Safari would be better." Browser technology keeps on moving forward, unevenly. When they don't hand the functionality to you on a plate, you gotta get devious.
// Fragment from a video-frame animation; vid, arr, and contflag are defined elsewhere.
var inc = 1;
var Bulk = 540;
var tot = 540;
var audtot = 35.90;
var canvas = document.getElementById("myCanvas");
//var imggg = document.getElementById("myimg");
canvas.width = 550;
canvas.height = 400;
var context = canvas.getContext("2d");
var variation = 0.2;
var interval = 65;

function JLoop() {
    if (inc < tot) {
        if (vid.currentTime < ((audtot * inc) / tot) - variation || (vid.currentTime > ((audtot * inc) / tot) + variation)) {
            contflag = 1;
            vid.currentTime = ((audtot * inc) / tot);
        }

        // Draw the animation
        try {
            context.clearRect(0, 0, canvas.width, canvas.height);
            if (arr[inc - 1] != undefined) {
                context.drawImage(arr[inc - 1], 0, 0, canvas.width, canvas.height);
                // Clear the src after drawing so the data URL can be released
                arr[inc - 1].src = "";
                //document.getElementById("myimg" + inc).style.display = "block";
                //document.getElementById("myimg" + (inc - 1)).style.display = "none";
                //imggg.src = arr[inc - 1].src;
            }
            $("#audiofile").val(inc);
            //clearInterval(ref);
        } catch (e) {
        }

        inc++;
        //interval = 60;
        //setTimeout(JLoop, interval);
    }
    else {
    }
}

var ref = setInterval(JLoop, interval);
// }); // closes an outer handler not shown in this snippet
Worked for me for the memory leak, thanks.