I am trying to decrease the resolution of a video to under 500x500. I don't want to resize it to exactly 500x500 because that would distort the video. So what I am trying to do is scale the resolution down to 75% of its current size in a loop, and that loop would only stop when the video is under 500x500. In theory that shouldn't be hard, but I can't seem to figure it out.
var vidwidth = 501;  // Create variables and set them to 501
var vidheight = 501; // so the first check won't pass
fs.copyFile(filepath2, './media/media.mp4', (err: any) => { // Copy given file to directory
    console.log('filepath2 was copied to media.mp4'); // Log confirmation (not appearing for some reason, but the file is copied)
})
while (true) {
    getDimensions('./media/media.mp4').then(function (dimensions: any) { // Get dimensions of copied video
        var vidwidth = parseInt(dimensions.width)   // Parse to int
        var vidheight = parseInt(dimensions.height) // and put in variables
    })
    ffmpeg('./media/media.mp4')      // Call ffmpeg function with copied video path
        .output('./media/media.mp4') // Set output to the same file so we can loop it
        .size('75%')                 // Scale to 75% of the current size
        .on('end', function() {      // Log confirmation on end
            console.log('Finished processing'); // (not appearing)
        })
        .run();                      // Run function
    if (vidwidth < 500 && vidheight < 500) { // Check if both the width and height are under 500px
        break; // If true, break the loop and continue
    }
}
This is the current code I am using, with comments. Basically what happens is that it gets stuck in the while loop because the dimensions of the video never change (tested with a console.log() line). I think that if I can fix the ffmpeg problem somehow, it will all be fixed.
I'd appreciate any help :)
PS: This is all written in TypeScript, then built into JS using npx tsc
The problem is that the loop prevents the callbacks from ever being called, because JavaScript runs on one thread (read more about this in this other SO question: Callback of an asynchronous function is never called). One of those callbacks that doesn't get called is the callback of then, where the variables vidwidth and vidheight get assigned, so the condition that checks whether they're less than 500 (and would eventually break the loop) is never true, and the loop keeps running forever. This is not the proper way to deal with asynchronous functions anyway (read more about this in this other SO question: How do I return the response from an asynchronous call?).
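To see that starvation in isolation, here is a minimal sketch (mine, not from the original post) of a busy loop that never lets a then callback run:

let done = false;
Promise.resolve().then(() => { done = true; }); // queued behind the currently running code
while (!done) {
    // spins forever: the single thread is never released,
    // so the .then callback above never gets a chance to execute
}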
By the way, copyFile and the while loop are not necessary at all for this kind of work. You can just use getDimensions to get the dimensions of the video, calculate the desired dimensions from them, and start one ffmpeg task (ffmpeg handles the creation of the resulting file without altering the input file, so there is no need for copyFile). Like so:
getDimensions(filepath2).then((dimensions: any) => { // get the dimensions of the input file
    // if width is smaller than height, reduce the height to 500 and calculate the width
    // based on that; same goes for the other way around
    let sizeStr = dimensions.width < dimensions.height ? "?x500" : "500x?";
    ffmpeg(filepath2) // the input is the original video; don't worry, 'ffmpeg' won't alter the input file
        .output('./media/media.mp4') // the output file path
        .size(sizeStr) // use the 'sizeStr' string calculated above (read more about it here: https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#video-frame-size-options)
        .on('end', () => console.log('Finished processing'))
        .run();
});
As simple as that!
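If the caller needs to know when the conversion has finished, one option (my sketch, not part of the original answer) is to wrap fluent-ffmpeg's 'end' and 'error' events in a Promise:

function resizeUnder500(input: string, output: string): Promise<void> {
    return new Promise((resolve, reject) => {
        getDimensions(input).then((dimensions: any) => {
            const sizeStr = dimensions.width < dimensions.height ? '?x500' : '500x?';
            ffmpeg(input)
                .output(output)
                .size(sizeStr)
                .on('end', () => resolve())
                .on('error', reject)
                .run();
        }, reject);
    });
}

Callers can then await resizeUnder500(filepath2, './media/media.mp4') inside an async function.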
Related
I have a method that runs every 2 seconds to capture a video stream to canvas and write it to file:
function capture(streamName, callback) {
    var buffer,
        dataURL,
        dataSplit,
        _ctx;
    _ctx = _canvas[streamName].getContext('2d');
    _ctx.drawImage(_video[streamName], 0, 0);
    dataURL = _canvas[streamName].toDataURL('image/png');
    dataSplit = dataURL.split(",")[1];
    buffer = new Buffer(dataSplit, 'base64');
    fs.writeFileSync(directory + streamName + '.png', buffer);
}

setInterval(function() {
    // Called from here
    captureState.capture(activeScreens[currentScreenIndex]);
    gameState.pollForState(processId, activeScreens[currentScreenIndex], function() {
        // do things...
    });
}, 2000);
Assume _video[streamName] exists as a running <video> and _canvas[streamName] exists as a <canvas>. The method works; it just causes a memory leak.
The issue:
Garbage collection can't keep up with the amount of memory the method uses; a memory leak ensues.
I have narrowed it down to this line:
buffer = new Buffer(dataSplit, 'base64');
If I comment that out, there is some accumulation of memory (~100MB) but it drops back down every 30s or so.
What I've tried:
Some posts suggested buffer = null; to remove the reference and mark for garbage collection, but that hasn't changed anything.
Any suggestions?
Timeline:
https://i.imgur.com/wH7yFjI.png
https://i.imgur.com/ozFwuxY.png
Allocation Profile:
https://www.dropbox.com/s/zfezp46um6kin7g/Heap-20160929T140250.heaptimeline?dl=0
Just to quantify: after about 30 minutes of run time it sits at 2 GB of memory used. This is an Electron (Chromium desktop) app.
SOLVED
Pre-allocating the buffer is what fixed it. This means that in addition to scoping buffer outside of the function, you need to reuse the created buffer with buffer.write. In order to keep proper headers, make sure that you use the encoding parameter of buffer.write.
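For illustration, a minimal sketch of what that might look like (my reconstruction, not the asker's exact code; the 16 MB size is an arbitrary assumption and must be at least as large as the biggest frame):

var buffer = Buffer.alloc(16 * 1024 * 1024); // allocated once, reused for every capture

function capture(streamName) {
    var dataSplit = _canvas[streamName].toDataURL('image/png').split(',')[1];
    // reuse the pre-allocated buffer; 'base64' is the encoding parameter mentioned above
    var bytesWritten = buffer.write(dataSplit, 0, 'base64');
    fs.writeFileSync(directory + streamName + '.png', buffer.slice(0, bytesWritten));
}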
Matt, I am not sure what was not working with the pre-allocated buffers, so I've posted an algorithm showing how such pre-allocated buffers could be used. The key thing here is that the buffers are allocated only once, so there should not be any memory leak.
var buffers = [];
var bsize = 10000;
// allocate the buffer pool once, up front
for(var i = 0; i < 10; i++ ){
    buffers.push({free: true, buf: new Buffer(bsize)}); // Buffer.alloc(bsize) in modern Node
}

// sample method that picks one of the buffers into use
function useOneBuffer(data){
    // find a free buffer
    var theBuf;
    var i = 0; // start scanning from the first pool entry
    while((typeof theBuf === 'undefined') && i < 10){
        if(buffers[i].free){
            theBuf = buffers[i];
        }
        i++;
    }
    if(typeof theBuf === 'undefined'){
        // return or throw... no free buffers left for now
        return;
    }
    theBuf.free = false;
    // start doing whatever you need with the buffer; write data in the needed format to it first,
    // BUT do not allocate.
    // Also, you may want to clear-write the existing data in the buffer, just in case,
    // before reuse or after use.
    theBuf.buf.write(data);
    // .... continue using
    // don't forget to pass the reference to the pool entry along, because
    // when you are done you have to mark it as free so that it can be used again:
    // theBuf.free = true;
    return theBuf;
}
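A hedged usage sketch (assuming useOneBuffer returns the pool entry, as in the version above; someBase64Data and somePath are placeholders):

var entry = useOneBuffer(someBase64Data);
if (entry) {
    fs.writeFileSync(somePath, entry.buf); // hypothetical consumer of the buffer
    entry.free = true;                     // release the entry back to the pool
}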
Did you try something like this? Where did it fail?
There is no leak of the Buffer object in your code.
Any Buffer object that you no longer retain a reference to will be immediately available for garbage collection.
The problem is caused by the callback and how you use it outside the capture function.
Notice that the GC cannot clean up the buffer, or any other variable, as long as the callback is running.
I have narrowed it down to this line:
buffer = new Buffer(dataSplit, 'base64');
The short solution is not to use Buffer at all, as it is not necessary for writing the file to the filesystem: the file's contents already exist in the base64 portion of the data URI. Also, setInterval does not appear to be cleared anywhere. You can keep a reference to the setInterval, then call clearInterval() at the <video> element's ended event.
You can perform the function without declaring any variables: remove the data:, MIME type, and base64 portions of the data URI returned by HTMLCanvasElement.prototype.toDataURL(), as described at NodeJS: Saving a base64-encoded image to disk and this Answer at NodeJS write base64 image-file:
function capture(streamName, callback) {
    _canvas[streamName].getContext("2d")
        .drawImage(_video[streamName], 0, 0);
    fs.writeFileSync(directory + streamName + ".png"
        , _canvas[streamName].toDataURL("image/png").split(",")[1], "base64");
}

var interval = setInterval(function() {
    // Called from here
    captureState.capture(activeScreens[currentScreenIndex]);
    gameState.pollForState(processId, activeScreens[currentScreenIndex]
        , function() {
            // do things...
        });
}, 2000);

video[/* streamName */].addEventListener("ended", function(e) {
    clearInterval(interval);
});
I was having a similar issue recently with an app that uses ~500MB of data in ArrayBuffer form. I thought I had a memory leak, but it turns out Chrome was trying to do optimizations on a set of large-ish ArrayBuffers and corresponding operations (each buffer ~60MB in size, plus some slightly larger objects). The CPU usage appeared to never allow the GC to run, or at least that's how it appeared; I have not read any specific spec for when the GC gets scheduled, so I can't prove or disprove that. To resolve my issue, I had to do two things:
I had to break the reference to the data in my arrayBuffers and some other large objects.
I had to force Chrome to have downtime, which appeared to give it time to schedule and then run the GC.
After applying those two steps, things ran for me and were garbage collected. Unfortunately, when applying those two things independently of each other, my app kept crashing (exploding into GBs of memory used before doing so). The following are my thoughts on what I'd try with your code.
The problem with the garbage collector is that you cannot force it to run. So you can have objects that are ready to be freed, but for whatever reason the browser doesn't give the garbage collector the opportunity. Another approach to buffer = null would be to break the reference explicitly with the delete operator -- this is what I did, but in theory ... = null is equivalent. It's important to note that delete cannot be used on any variable created with var. So something like the following would be my suggestion:
function capture(streamName, callback) {
    this._ctx = _canvas[streamName].getContext('2d');
    this._ctx.drawImage(_video[streamName], 0, 0);
    this.dataURL = _canvas[streamName].toDataURL('image/png');
    this.dataSplit = this.dataURL.split(",")[1];
    this.buffer = new Buffer(this.dataSplit, 'base64');
    fs.writeFileSync(directory + streamName + '.png', this.buffer);
    delete this._ctx;      // because the context with the image used still exists
    delete this.dataURL;   // because the data used in dataSplit exists here
    delete this.dataSplit; // because the data used in buffer exists here
    delete this.buffer;
    // again, ... = null would likely work as well; I used delete
}
Second, the small break. It appears you've got some intensive processing going on and the system cannot keep up. It isn't actually hitting the 2s save mark, because it needs more than 2 seconds per save: there is always a function on the queue waiting to execute the captureState.capture(...) method, so it never has time to garbage collect. Some helpful posts on the scheduler and the differences between setInterval and setTimeout:
http://javascript.info/tutorial/settimeout-setinterval
http://ejohn.org/blog/how-javascript-timers-work/
If that is indeed the case, why not use setTimeout with a simple check that roughly 2 seconds (or more) have passed, and execute then? That check always forces your code to wait a set period of time between saves, giving the browser time to schedule/run the GC -- something like what follows (100 ms setTimeout in the pollForState):
var MINIMUM_DELAY_BETWEEN_SAVES = 100;
var POLLING_DELAY = 100;
//get the time in ms
var ts = Date.now();
function intervalCheck(){
    // check if 2000 ms have passed
    if(Date.now() - ts > 2000){
        // reset the timestamp of the last time save was run
        ts = Date.now();
        // Called from here
        captureState.capture(activeScreens[currentScreenIndex]);
        // upon callback, force the system to take a break.
        setTimeout(function(){
            gameState.pollForState(processId, activeScreens[currentScreenIndex], function() {
                // do things...
                // and then schedule the intervalCheck again, but give it some time
                // to potentially garbage collect.
                setTimeout(intervalCheck, MINIMUM_DELAY_BETWEEN_SAVES);
            });
        }, MINIMUM_DELAY_BETWEEN_SAVES);
    }else{
        // reschedule the check in 1/10th of a second,
        // or after whatever may be executing next.
        setTimeout(intervalCheck, POLLING_DELAY);
    }
}
intervalCheck(); // kick off the polling loop
This means that a capture will happen no more than once every 2 seconds, but will also in some sense trick the browser into having the time to GC and remove any data that was left.
Last thoughts: entertaining a more traditional definition of memory leak, the candidates based on what I see in your code would be activeScreens, _canvas, or _video, which appear to be objects of some sort. It might be worthwhile to explore those if the above doesn't resolve your issue (I can't make any assessment based on what is currently shared).
Hope that helps!
In general, I would recommend keeping a local map keyed by UUID (or something similar) that allows you to control your memory when dealing with getImageData and other buffers.
The UUID can be a pre-defined identifier, e.g. "current-image" and "prev-image" if comparing between slides.
E.g
existingBuffers: Record<string, Uint8ClampedArray> = {}
existingBuffers[ptrUid] = imageData.data // (or something equivalent)
then if you want to override ("current-image") you can (overkill here):
existingBuffers[ptrUid] = new Uint8ClampedArray();
delete existingBuffers[ptrUid]
In addition, you will always be able to check your buffers and make sure they are not going out of control.
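A minimal sketch of such a registry (the names here are illustrative, not from the original post):

const existingBuffers: Record<string, Uint8ClampedArray> = {};

function trackPixels(uid: string, imageData: ImageData): void {
    existingBuffers[uid] = imageData.data; // keep exactly one tracked reference
}

function releasePixels(uid: string): void {
    delete existingBuffers[uid]; // drop the reference so the GC can reclaim the memory
}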
Maybe it is a bit old-school, but I found it comfortable.
I have already checked multiple answers about asynchronous JavaScript behaviour and experimented with callbacks (promises are next), but in parallel I want to send an SOS request here :/
Because the first answers are often "What do you want to do?", here it is:
I have to find all the non-empty (non-transparent) pixels in an image (kind of like a mask) and give back an array of binary arithmetical numbers.
What my program does in short:
it gets an image URL and creates a new Image() from it.
it creates a canvas and adds it to the DOM
in image.onload() it draws the image on the canvas
after the image is on the canvas, it scans every pixel for its color and gives back a data array.
My problem is forcing the calculation to WAIT until the image is loaded. I tried to do it with something like this:
return getDataArray(image.onload = function(){
    // ...init things...
});
Still, it goes into the getDataArray function before image.onload happens.
I'm gonna take a break and walk outside because I'm out of productive ideas.
Here is the original function:
getImageScan: function() {
    this.myImage.src = imageScanner.imgUrl;
    var is = this;
    return is.getArrayFromCanvas(this.myImage.onload = function () {
        is.imageHeight = is.myImage.height;
        is.imageWidth = is.myImage.width;
        is.appendCanvasToBody();
        is.initGraphicalContent();
        is.drawImageOnCanvas();
        console.log("image is loaded");
    })
},

getArrayFromCanvas: function () {
    console.log("request for calculations");
    var booleanJson = this.getJsonFromCanvas();
    return this.getBinaryArithmeticFromBooleans(booleanJson);
}
and this is the result
request for calculations
[]
the image is loaded
here is the entire *.js if you want more information (it's my private project in my slack time, so no copyright issues):
https://github.com/Vilkaz/gridToImage/blob/master/web/resources/js/imageScanner.js
You try to pass something to getArrayFromCanvas although it has no parameters. I don't understand why you do this, but I guess you want something like this:
getImageScan: function(callback) {
    this.myImage.src = imageScanner.imgUrl;
    var is = this;
    this.myImage.onload = function () {
        is.imageHeight = is.myImage.height;
        is.imageWidth = is.myImage.width;
        is.appendCanvasToBody();
        is.initGraphicalContent();
        is.drawImageOnCanvas();
        console.log("image is loaded");
        callback(is.getArrayFromCanvas());
    }
}
One difference between the asynchronous behavior above and your original code is that getImageScan returns immediately and a callback is called later to "return" the result.
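A hedged usage sketch (assuming the same imageScanner object as in the question):

imageScanner.getImageScan(function (binaryArray) {
    // the result arrives here, only after the image has loaded
    console.log(binaryArray);
});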
I've been playing with Readable and Transform streams, and I can't solve a mystery of disappearing lines.
Consider a text file in which the lines contain sequential numbers, from 1 to 20000:
$ seq 1 20000 > file.txt
I create a Readable stream and a LineStream (from a library called byline: npm install byline; I'm using version 4.1.1):
var file = (require('fs')).createReadStream('file.txt');
var lines = new (require('byline').LineStream)();
Consider the following code:
setTimeout(function() {
    lines.on('readable', function() {
        var line;
        while (null !== (line = lines.read())) {
            console.log(line);
        }
    });
}, 1500);

setTimeout(function() {
    file.on('readable', function() {
        var chunk;
        while (null !== (chunk = file.read())) {
            lines.write(chunk);
        }
    });
}, 1000);
Notice that it first attaches a listener to the 'readable' event of the file Readable stream, which writes to the lines stream, and only half a second later it attaches a listener to the 'readable' event of the lines stream, which simply prints lines to the console.
If I run this code, it will only print 16384 (which is 2^14) lines and stop. It won't finish the file. However, if I change the 1500ms timeout to 500ms -- effectively swapping the order in which the listeners are attached, it will happily print the whole file.
I've tried playing with highWaterMark, specifying a number of bytes to read from the file stream, and attaching listeners to other events of the lines stream, all in vain.
What can explain this behavior?
Thanks!
I think this behaviour can be explained by two things:
How you use streams.
How byline works.
What you do is manual piping. The problem with it is that it doesn't respect highWaterMark and forces the whole file to be buffered.
All this causes byline to behave badly. See this: https://github.com/jahewson/node-byline/blob/master/lib/byline.js#L110-L112. It means that byline stops pushing lines when the buffer's length exceeds highWaterMark. But this doesn't make any sense! It doesn't prevent memory usage from growing (the lines are still stored in a special line buffer), but the stream doesn't know about those lines, and if it ends in an overflowed state, they are lost forever.
What you can do:
Use pipe (see the sketch below)
Modify highWaterMark: lines._readableState.highWaterMark = Infinity;
Stop using byline
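For the first option, a minimal sketch of the piped version (mine; pipe lets the streams negotiate backpressure instead of the manual read/write loops above):

var fs = require('fs');
var LineStream = require('byline').LineStream;

var file = fs.createReadStream('file.txt');
var lines = new LineStream();

file.pipe(lines); // pipe respects highWaterMark and pauses the source when needed
lines.on('readable', function() {
    var line;
    while (null !== (line = lines.read())) {
        console.log(line);
    }
});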
I need to load an array of images in JavaScript, but I want to make sure all the images are loaded before starting to draw them. So I busy-wait for every image's onload event to be called. First I create the images and set their source and onload function:
// Load images from names
for (i = 0; i < this.nImages; i++) {
    this.imagesArray[i] = new Image();
    this.imagesArray[i].onload = this.onLoad(i);
    this.imagesArray[i].src = images[i];
}
This is the onLoad function, a member of the class I'm developing (the first two steps were in the constructor):
MyClass.prototype.onLoad = function (nimage) {
    console.log("Image completed? ", nimage, " ", this.imagesArray[nimage].complete);
    this.imagesLoaded++;
}
Then I busy-wait for all the onLoad functions to increment the counter (again, in the constructor):
while (this.imagesLoaded < this.nImages) {
    // This is busy wait, and I don't like it.
    continue;
}
So far, so good. But when I try to draw it on an HTML5 canvas with my drawClass:
MyClass.prototype.refresh = function () {
    // Gets one of the images in the range
    var imageNum = this.GetImageNum();
    // Test for completeness. This gives FALSE :(
    console.log("completeness for image number ", imageNum, " is: ", this.imagesArray[imageNum].complete);
    this.drawClass.draw(this.imagesArray[imageNum], this.xOrigin, this.yOrigin);
}
The console.log line gives false and I get the infamous NS_ERROR_NOT_AVAILABLE exception.
Please note that the refresh() function is called after the onLoad() function, according to Firebug.
What am I missing here?
You need to assign onload before setting the source, otherwise the loading may be completed before the script gets to set the handler. Maybe that already fixes it.
Re the busy waiting, that is indeed never a good thing. It's hard to suggest alternatives, as you are not showing why you need to wait in the first place. But what might be a good idea is extending the onload handler to detect whether the image array is complete, and if it is, to start the following action - that would make the busy waiting unnecessary.
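For example, a minimal sketch of that counting approach (startDrawing is a hypothetical stand-in for whatever should run once everything is loaded):

var imagesLoaded = 0;
var imagesArray = [];
for (var i = 0; i < images.length; i++) {
    imagesArray[i] = new Image();
    imagesArray[i].onload = function () {
        imagesLoaded++;
        if (imagesLoaded === images.length) {
            startDrawing(imagesArray); // hypothetical "following action"
        }
    };
    imagesArray[i].src = images[i]; // assign src only after onload is set
}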
I have some JavaScript functions that take about 1 to 3 seconds (some loops or mooML templating code).
During this time, the browser is just frozen. I tried showing a "loading" animation (a GIF image) before starting the operation and hiding it afterwards, but it just doesn't work: the browser freezes before it can render the image and hides it immediately when the function ends.
Is there anything I can do to tell the browser to update the screen before going into JavaScript execution? Something like Application.DoEvents or background worker threads.
Any comments/suggestions on how to show JavaScript execution progress would be welcome. My primary target browser is IE6, but it should also work in all the latest browsers.
This is due to everything in IE6 being executed in the same thread - even animating the gif.
The only way to ensure that the gif is displayed prior to starting is by detaching the execution.
function longRunningProcess(){
    ....
    hideGif();
}

displayGif();
window.setTimeout(longRunningProcess, 0);
But this will still render the browser frozen while longRunningProcess executes.
In order to allow interaction, you will have to break your code into smaller fragments, perhaps like this:
var process = {
    steps: [
        function(){
            // step 1
            // display gif
        },
        function(){
            // step 2
        },
        function(){
            // step 3
        },
        function(){
            // step 4
            // hide gif
        }
    ],
    index: 0,
    nextStep: function(){
        this.steps[this.index++]();
        if (this.index != this.steps.length) {
            var me = this;
            window.setTimeout(function(){
                me.nextStep();
            }, 0);
        }
    }
};
process.nextStep();
Perhaps you can put in a delay between showing the animated gif and running the heavy code.
Show the gif, and call:
window.setTimeout(myFunction, 100)
Do the heavy stuff in "myFunction".
You have to use a little more sophisticated technique to show the progress of the long running function.
Let's say you have a function like this that runs long enough:
function longLoop() {
    for (var i = 0; i < 100; i++) {
        // Here the actual "long" code
    }
}
To keep the interface responsive and to show progress (and also to avoid "script is taking too long..." messages in some browsers), you have to split the execution into several parts.
function longLoop() {
    // We get the loopStart variable from the _function_ instance.
    // arguments.callee - a reference to function longLoop in this scope
    var loopStart = arguments.callee.start || 0;
    // Then we're not doing the whole loop, but only 10% of it;
    // note that we're not starting from 0, but from the point where we finished last
    for (var i = loopStart; i < loopStart + 10; i++) {
        // Here the actual "long" code
    }
    // Next time we'll start from the next index
    var next = arguments.callee.start = loopStart + 10;
    if (next < 100) {
        updateProgress(next); // Draw progress bar, whatever.
        setTimeout(arguments.callee, 10);
    }
}
I haven't tested this actual code, but I have used this technique before.
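For what it's worth, the same chunking works without arguments.callee (which is disallowed in strict mode) by passing the start index explicitly; a sketch under the same assumptions (updateProgress and the loop bound of 100 come from the code above):

function longLoop(start) {
    start = start || 0;
    // do only the next 10% of the work
    for (var i = start; i < start + 10 && i < 100; i++) {
        // Here the actual "long" code
    }
    var next = start + 10;
    if (next < 100) {
        updateProgress(next); // Draw progress bar, whatever.
        setTimeout(function () { longLoop(next); }, 10);
    }
}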
Try setting a wait cursor before you run the function, and removing it afterwards. In jQuery you can do it this way:
var body = $('body');
body.css("cursor", "wait");
lengthyProcess();
body.css("cursor", "");