I'm using OfflineAudioContext to render WebAudio signals into a buffer so I can analyze the results. When I do this repeatedly, it seems that the associated memory is never released, eventually causing the browser tab to crash.
Here's a minimal reproduction:
// render 10 minutes of audio into a big buffer
var ctx = new OfflineAudioContext(1, 44100 * 600, 44100);
var osc = ctx.createOscillator();
osc.connect(ctx.destination); // connect so the oscillator is actually rendered
osc.start();
ctx.startRendering().then(buffer => {
  // attempt to clean up
  osc.stop();
  osc = null;
  buffer = null;
  ctx = null;
});
Running that in a JS console will render a ~100MB buffer that never gets released. Running it repeatedly will chew up memory until the tab eventually crashes (tested in Chrome and Firefox on macOS, and in Chrome, Firefox, and Edge on Windows).
How can I get browsers to free the memory associated with an OfflineAudioContext?
This has been confirmed as a bug, with no workarounds. Until it's fixed, it looks like this is a fact of life.
This one was a major headache, but I finally found a workaround: create an iframe in which you run the audio rendering code, return the result to your main window using postMessage, and remove the iframe from the DOM as soon as you receive the result. That frees all resources associated with the iframe, including the OfflineAudioContext.
Of course, that's only practical if your use case is to do a relatively small number of relatively long renders.
Note that to transfer the data back in an efficient manner, you should send ArrayBuffer objects, which are Transferable:
context.oncomplete = function (e) {
  // collect each channel's samples as Float32Arrays
  var channels = [];
  for (var c = 0; c < e.renderedBuffer.numberOfChannels; c++) {
    channels.push(e.renderedBuffer.getChannelData(c));
  }
  // send the underlying ArrayBuffers, which are Transferable
  channels = channels.map(function (c) { return c.buffer; });
  window.parent.postMessage({
    action: 'renderResult',
    data: channels
  }, "*", channels);
};
and recreate Float32Arrays from them on the receiving end.
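For reference, a minimal sketch of the parent-window side; the iframe URL, message shape, and handler names here are illustrative, not from the original code:
function renderInIframe(onResult) {
  // load a same-origin page that runs the OfflineAudioContext code
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = '/render.html'; // hypothetical renderer page
  window.addEventListener('message', function handler(e) {
    if (e.data && e.data.action === 'renderResult') {
      window.removeEventListener('message', handler);
      // removing the iframe releases everything it allocated,
      // including the OfflineAudioContext
      document.body.removeChild(frame);
      // rebuild Float32Arrays from the transferred ArrayBuffers
      onResult(e.data.data.map(function (buf) {
        return new Float32Array(buf);
      }));
    }
  });
  document.body.appendChild(frame);
}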
I'm trying to make a music player with the ability to change the output device live (headphones or speakers).
I have a working function that changes the destination device with setSinkId.
I also have working biquad filters (low-pass, high-pass...) and an audio processor that generates gain-level bars (image below).
(Image: filter sliders and gain bars)
Some of the code (there's a lot of it).
Setting output device:
if (audiooutput_ch1active == 0) {
  // currently on speakers: switch to headphones if we know their id
  if (typeof audiooutput_headphonesid !== "undefined") {
    audiooutput_ch1active = 1;
    await audio_ch1.setSinkId(audiooutput_headphonesid);
    return;
  }
} else if (audiooutput_ch1active == 1) {
  // currently on headphones: switch back to speakers
  if (typeof audiooutput_speakersid !== "undefined") {
    audiooutput_ch1active = 0;
    await audio_ch1.setSinkId(audiooutput_speakersid);
    return;
  }
}
Defining filters:
var filter_ch1_lowpass = audioCtx_ch1.createBiquadFilter();
filter_ch1_lowpass.type = "lowpass";
filter_ch1_lowpass.frequency.value = 12000;
var filter_ch1_highpass = audioCtx_ch1.createBiquadFilter();
filter_ch1_highpass.type = "highpass";
filter_ch1_highpass.frequency.value = 0;
var filter_ch1_lowshelf = audioCtx_ch1.createBiquadFilter();
filter_ch1_lowshelf.type = "lowshelf";
filter_ch1_lowshelf.frequency.value = 100;
filter_ch1_lowshelf.gain.value = 0;
Connecting filters and processor:
audio_ch1.src = path;
source_ch1 = audioCtx_ch1.createMediaElementSource(audio_ch1);
source_ch1.connect(filter_ch1_lowpass);
filter_ch1_lowpass.connect(filter_ch1_highpass);
filter_ch1_highpass.connect(filter_ch1_lowshelf);
filter_ch1_lowshelf.connect(processor_ch1);
filter_ch1_lowshelf.connect(audioCtx_ch1.destination);
processor_ch1.connect(filter_ch1_lowshelf);
When I connect the filters to my audio context, I can't use setSinkId; I get: Uncaught (in promise) DOMException: The operation could not be performed and was aborted
When I skip the code that connects the filters, setSinkId works fine.
Does setSinkId not support audio context filters?
I'm new to JavaScript audio.
You are setting the sink of an <audio> element after you routed that <audio>'s output to your audio context. The <audio> element has already lost control over its output; it's now the audio context that controls it, so you can no longer set the sink of the <audio> element.
The fact that you can call this method at all when not using the filters is part of a larger Chrome bug in the Web Audio API, which I believe they're actively working on: basically, they only really create the connections when the audio graph gets connected to a destination.
You could prevent that error from being thrown by creating a MediaStream from your <audio> (by calling its captureStream() method) and then connecting a MediaStreamAudioSourceNode made from that MediaStream object to your audio graph. However, you'd still only be setting the sink of the input <audio>, not of the output you've been modifying in your audio context.
So instead, what you probably want is to set the sink of the audio context directly.
There is an active proposal to add a setSinkId() method on the AudioContext interface, and the latest Chrome Canary does expose it. However, it's still a proposal, and AFAICT only Canary exposes it. So in the near future you'll be able to just do
audioCtx_ch1.setSinkId(audiooutput_speakersid).catch(handlePossibleError);
but for now you'll need to do some gymnastics: first create a MediaStreamAudioDestinationNode and connect your audio graph to it, then set its MediaStream as the srcObject of another <audio> element, on which you call setSinkId().
audio_ch1.src = path;
// might be better to use createMediaStreamSource(audio_ch1.captureStream()) anyway here
source_ch1 = audioCtx_ch1.createMediaElementSource(audio_ch1);
source_ch1.connect(filter_ch1_lowpass);
filter_ch1_lowpass.connect(filter_ch1_highpass);
filter_ch1_highpass.connect(filter_ch1_lowshelf);
filter_ch1_lowshelf.connect(processor_ch1);
processor_ch1.connect(filter_ch1_lowshelf);
const outputNode = audioCtx_ch1.createMediaStreamDestination();
filter_ch1_lowshelf.connect(outputNode);
const outputElem = new Audio();
outputElem.srcObject = outputNode.stream;
outputElem.setSinkId(audiooutput_speakersid);
outputElem.play();
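If you want to try the proposed context-level API when present and fall back to the <audio> bridge above otherwise, a rough feature test might look like this (a sketch, assuming the unprefixed AudioContext constructor and the method name from the proposal):
if (typeof AudioContext.prototype.setSinkId === 'function') {
  // proposed API: route the whole context's output directly
  audioCtx_ch1.setSinkId(audiooutput_speakersid).catch(handlePossibleError);
} else {
  // fall back to the MediaStreamAudioDestinationNode + <audio> bridge above
  outputElem.setSinkId(audiooutput_speakersid).catch(handlePossibleError);
}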
Browsers impose limits on the number of active WebGL contexts. Exceed the limit and the browser will start to dump older contexts. My understanding is that there are limits per domain as well as an overall max.
Two questions:
Is there a way to determine how close you are to the limits, i.e., how many active WebGL contexts there are and how many are still available?
I've found scraps of info here and there, but haven't been able to nail down exactly what the limits are for each browser, both per-domain and max. What are the limits for Chrome, Firefox, Safari, Edge and IE 11?
There is no reliable way to know how many contexts a browser supports. Even if you did know, it could change tomorrow, or it could change based on various conditions of the machine the browser is running on, or on the browser's own heuristics: maybe if VRAM is low it allows fewer contexts, or maybe if the newest context uses too many resources it tries to free up space by losing other contexts.
My personal rule of thumb is browsers support at least 8 contexts. That's what I've built my sites to assume.
You can probably understand why there's a limit. WebGL apps use tons of resources. Maybe not all of them do, but games in particular can easily use gigabytes of VRAM, and that VRAM is not as easily virtualized as regular RAM is, especially since, in order to display the results in the browser itself, the results all have to make it to the same process. So, since those resources are likely not virtualized and can be so large, the browser needs to limit how many contexts can exist at once to free up resources for the latest page the user visits.
There are plenty of techniques for using a single context to display lots of things all over a webpage; they are covered or referenced in the Q&A you linked to. One such technique is sketched below.
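Loosely sketched (placeholder names, not a complete renderer), the single-context idea uses one large canvas and the scissor test to draw each view into the rectangle of a placeholder element:
const gl = sharedCanvas.getContext('webgl'); // one context for the whole page
function drawView(placeholder, drawScene) {
  // convert the element's page rectangle into GL window coordinates
  // (GL's origin is the bottom-left corner)
  const rect = placeholder.getBoundingClientRect();
  const left = rect.left;
  const bottom = gl.canvas.clientHeight - rect.bottom;
  // clip rendering to that rectangle, then draw the view's scene
  gl.enable(gl.SCISSOR_TEST);
  gl.viewport(left, bottom, rect.width, rect.height);
  gl.scissor(left, bottom, rect.width, rect.height);
  drawScene();
}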
You can count them, though, like this:
const canvases = [];
let done;
function createCanvas() {
  const canvas = document.createElement('canvas');
  // every canvas gets a listener; whichever context the browser drops
  // tells us the limit was just exceeded
  canvas.addEventListener('webglcontextlost', () => {
    done = true;
    console.log('num contexts:', canvases.length - 1);
  });
  const gl = canvas.getContext('webgl');
  canvases.push(canvas);
}
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));
async function main() {
  while (!done) {
    createCanvas();
    await wait(0);
  }
}
main();
On my Macbook Pro Chrome 80, Firefox 75, Safari 13.1, and Safari on an iPhone 11, all reported 16 contexts. Chrome 81 on Android on a Pixel 2 XL reported 8, Firefox on the same device reported 9. But like it says above, all those numbers could change tomorrow or even today under different conditions.
Follow Up
Whatever the browser's limit is, it seems to be per domain in Chrome but per page in Firefox and Safari. You can test this here. Open 2 windows and create 16 contexts in one window, then create one more in the other. In Chrome you'll see the first window lose a context as soon as the extra context is created in the 2nd window. In Firefox and Safari each window has its own limit.
I also found lots of variation in other behavior. In Chrome I expected that if I created 16 contexts in one window and 1 in another, I'd see one lost in the first window, and that when I closed the 2nd window I'd see that lost context restored in the first. I don't, and I don't know what, if anything, would trigger that context to get restored.
In Firefox, with the code linked above, as soon as I create the 17th context in the same window it goes into an infinite loop: it loses the first context, that context registers to be restored, Firefox restores it immediately, which loses another context, and so on. This seems to make this approach impossible to use in Firefox.
Trying another example where I don't keep a reference to the canvases (which means they can be garbage collected), I see that in Firefox I never get a context lost event. That makes some sense: since I no longer have an effective reference to the context, there's no reason to send a lost context event. Chrome, on the other hand, does still send the event, which is also technically not wrong: I registered for the event, so the event itself still holds a reference, and if I didn't want to know I should have unregistered.
Apparently this is an under-specified and under-tested part of the WebGL spec.
It looks like the only thing you can really do about a lost context is notify the user they got one and provide a button to start over (create a new context or refresh the page):
const groups = [];
let done;
function createCanvas(parent, lostCallback) {
  const canvas = document.createElement('canvas');
  parent.appendChild(canvas);
  canvas.addEventListener('webglcontextlost', lostCallback);
  const gl = canvas.getContext('webgl');
  return {canvas, gl};
}
function createGroup() {
  const div = document.createElement('div');
  const group = {div};
  div.className = 'c';
  document.body.appendChild(div);
  function restore() {
    // (re)create this group's canvas; if its context is ever lost,
    // show a message and let the user click to try again
    div.innerHTML = '';
    const {canvas, gl} = createCanvas(div, () => {
      done = true;
      group.gl = undefined;
      div.innerHTML = "context lost, click to restore";
      div.addEventListener('click', restore, {once: true});
    });
    group.gl = gl;
    group.canvas = canvas;
  }
  restore();
  groups.push(group);
}
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));
async function main() {
  // keep adding groups until some context gets lost
  while (!done) {
    createGroup();
    await wait(0);
  }
}
function render() {
  for (const {gl} of groups) {
    if (!gl) {
      continue; // this group's context is currently lost
    }
    gl.clearColor(Math.random() * 0.5 + 0.5, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
main();
.c {
  display: inline-block;
  border: 1px solid black;
  margin: 1px;
  font-size: xx-small;
}
canvas {
  width: 100px;
  height: 10px;
  display: block;
}
Also note, the original point of WebGL losing the context is that WebGL has no control over the OS. In Windows for example, if any app does something on the GPU that takes too long the OS itself will reset the GPU which effectively loses the contexts for all apps, not just your browser. ALL APPS. There's nothing a browser can do to prevent that and so the browser just has to pass that info down to your webpage. Also in Windows, you can enable/disable GPUs without rebooting. That's another case where the browser has no control and just has to tell you the context was lost.
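For reference, the way a page opts in to recovery is to cancel the default action in webglcontextlost, which permits the browser to fire webglcontextrestored later (though, as the experiments above show, whether and when that happens varies); all GPU resources must then be recreated. A minimal sketch, with initGLResources as a hypothetical re-init function:
canvas.addEventListener('webglcontextlost', (e) => {
  // without preventDefault() the context stays lost for good
  e.preventDefault();
});
canvas.addEventListener('webglcontextrestored', () => {
  // every buffer, texture, and shader is gone; recreate them all
  initGLResources(); // hypothetical: re-run your setup code
});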
I have a long-polling application written in JS that fetches XML files to update a web page. It fetches every 5 seconds, about 8KB of data each time. I have had this web page open for about 1 week straight (although the computer goes to sleep in the evening).
When first opened, the Chrome tab starts at about 33K of my PC's memory. After I left it open for a week, constantly updating while the PC was awake, it was consuming 384K for just one tab. Leaving the web page open for very long periods of time is a common way my application will be run.
I feel like I'm hindering Chrome's GC or failing to do some specific memory management (or maybe I even have a memory leak). I don't really know how a memory leak would even be achievable in JS.
My app paradigm is very typical, following this endless sequence:
function getXml(file){
  return $.get(file);
}
function parseXml(Xml){
  return {
    someTag : $(Xml).find('someTag').attr('val'),
    someOtherTag: $(Xml).find('someOtherTag').attr('val')
  };
}
function polling(modules){
  var defer = $.Deferred();
  function module1(){
    getXml('myFile.xml').done(function(xmlData){
      var data = parseXml(xmlData);
      modules.module1.update(data);
    }).fail(function(){
      alert('error getting XML');
    }).always(function(){
      module2();
    });
  }
  function module2(){
    getXml('myFile.xml').done(function(xmlData){
      var data = parseXml(xmlData);
      modules.module2.update(data);
    }).fail(function(){
      alert('error getting XML');
    }).always(function(){
      defer.resolve(modules);
    });
  }
  module1(); // kick off the chain for this polling pass
  return defer.promise(modules);
}
$(document).on('ready', function(){
  var myModules = {
    module1 : new Module(),
    module2 : new ModuleOtherModule()
  };
  // Begin polling
  var update = null;
  polling(myModules).done(function(modules){
    update = setInterval(polling.bind(this, modules), 5000);
  });
});
That's the gist of it... Is there some manual memory management I should be doing for apps built like this? Do I need to manage my variables or memory better? Or is this just a typical symptom of leaving a web browser (Chrome/FF) open for 1-2 weeks?
Thanks
Your code seems OK, but you didn't post what happens in the "update" method inside "modules". Why do I say that? Because that method could be what's leaking in your app.
I recommend two things:
Dig into the update method and look at how you are updating the DOM (be careful if there are a lot of nodes). Check whether the content you are updating has events associated with it: if you assign an event listener to a node and then remove the DOM node, the listener can still be kept in memory (until the JavaScript garbage collector trashes it). See the sketch after these points.
Read this article; it's the best way to find your memory leak: https://developer.chrome.com/devtools/docs/javascript-memory-profiling
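A minimal sketch of the leak pattern warned about in the first point; the names are illustrative, and whether memory is actually retained depends on how the nodes and listeners are removed:
function leakyUpdate(container, data) {
  var node = document.createElement('div');
  node.textContent = data.someTag;
  node.addEventListener('click', function () {
    console.log(data); // this closure keeps `data` alive
  });
  // wiping the container detaches the old node, but anything still
  // referencing it (listeners, jQuery's data cache, your own vars)
  // keeps it in memory, and this runs every 5 seconds
  container.innerHTML = '';
  container.appendChild(node);
}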
I have a web page which is chock full of javascript, and a few references to resources like images for the javascript to work with. I use a websocket to communicate with the server; the javascript parses the socket's data and does things with the page presentation accordingly. It all works fine, except when it doesn't.
The problem appears to be that the page contains images which I want to display parts of, under javascript control. No matter how I play with defer, there are apparently situations in which the images aren't fully downloaded before the javascript tries to use them. The result is that images are missing from the rendered page some small percentage of the time.
I'm not very used to languages and protocols where you don't have strict control over what happens when, so the server and browser shipping and executing things in an uncontrolled, asynchronous order annoys me. So I'd like to stop depending on apparently unreliable tricks like defer. What I'd like to do is download the whole page, then open my websocket and send my images and other resources down through it. When that process is complete, I'll know it's safe to accept other commands from the websocket and get on with doing what the page does. In other words, I want to subvert the browser's asynchronous handling of resources and handle it all serially under javascript control.
Pouring an image file from the server down a socket is easy, and I have no trouble coming up with protocols to do it. Capturing the data as byte arrays is also easy.
But how do I get them interpreted as images?
I know there are downsides to this approach: I won't get browser caching of my images, and the initial page won't load as quickly. I'm OK with that. I'm just tired of 95%-working solutions and having to wonder whether what I did works in every browser imaginable. (Working on everything from IE 8 to next year's Chrome is a requirement for me.)
Is this approach viable? Are there better ways to get strict, portable control of resource loading?
You still haven't been very specific about what resources you are waiting for other than images, but if they are all images, then you can use this loadMonitor object to monitor when N images are done loading:
function loadMonitor(/* img1, img2, img3 */) {
  var cntr = 0, doneFn, self = this;
  function checkDone() {
    if (cntr === 0 && doneFn) {
      // clear out doneFn so nothing that is done in the doneFn callback
      // accidentally causes the callback to get called again
      var f = doneFn;
      doneFn = null;
      f.call(self);
    }
  }
  function handleEvents(obj, eventList) {
    var events = eventList.split(" "), i;
    function handler() {
      --cntr;
      for (i = 0; i < events.length; i++) {
        obj.removeEventListener(events[i], handler);
      }
      checkDone();
    }
    for (i = 0; i < events.length; i++) {
      obj.addEventListener(events[i], handler);
    }
  }
  this.add = function(/* img1, img2, img3 */) {
    if (doneFn) {
      throw new Error("Can't call loadMonitor.add() after calling loadMonitor.start(fn)");
    }
    var img;
    for (var i = 0; i < arguments.length; i++) {
      img = arguments[i];
      if (!img.src || !img.complete) {
        ++cntr;
        handleEvents(img, "load error abort");
      }
    }
  };
  this.start = function(fn) {
    if (!fn) {
      throw new Error("must pass completion function as loadMonitor.start(fn)");
    }
    doneFn = fn;
    checkDone();
  };
  // process constructor arguments
  this.add.apply(this, arguments);
}
// example usage code
var cardsImage = new Image();
cardsImage.src = ...
var playerImage = new Image();
playerImage.src = ...
var tableImage = new Image();
var watcher = new loadMonitor(cardsImage, playerImage, tableImage);
// .start() tells the monitor that all images are now in the monitor
// and passes it our callback so it can now tell us when things are done
watcher.start(function() {
// put code here that wants to run when all the images are loaded
});
// the .src value can be set before or after the image has been
// added to the loadMonitor
tableImage.src = ...
Note: you must make sure that every image you put in the loadMonitor gets a .src assigned, or the loadMonitor will never call its callback, because that image will never finish loading.
Working demo: http://jsfiddle.net/jfriend00/g9x74d2j/
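As for turning the raw bytes from the websocket into images (the literal question), one route, assuming binary frames arrive as ArrayBuffers, is a Blob plus an object URL; note this sketch needs IE10+, so it won't cover the IE8 end of the stated range:
var ws = new WebSocket('wss://example.test/images'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';
ws.onmessage = function (e) {
  var blob = new Blob([e.data], { type: 'image/png' }); // use the real MIME type
  var img = new Image();
  img.onload = function () {
    URL.revokeObjectURL(img.src); // release the blob once decoded
    // the img can now be drawn or inserted into the page
  };
  img.src = URL.createObjectURL(blob);
};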
I am trying to implement a cross-tab mutex for my needs. I found an implementation here, which seems quite promising. Basically, it implements Leslie Lamport's algorithm, which needs atomic reads/writes to create a mutex.
However, it relies on localStorage providing atomic reads/writes. This works well in most browsers, except for Chrome.
So my question is: can I use cookie reads/writes instead? Are cookie reads/writes atomic in all mainstream browsers (IE, Chrome, Safari, Firefox)?
Neither cookies nor localStorage provide atomic transactions.
I think you might have misunderstood that blog post: it doesn't say that his implementation doesn't work in Chrome, and it does not rely on localStorage providing atomic reads/writes. He says that normal localStorage access is more volatile in Chrome. I'm assuming this is related to the fact that Chrome uses a separate process for each tab, whereas most other browsers tend to use a single process for all tabs. His code implements a locking system on top of localStorage which should protect against things getting overwritten.
Another solution would be to use IndexedDB, which does provide atomic transactions. Being a newer standard, it is not supported in as many browsers as localStorage, but it has good support in recent versions of Firefox, Chrome, and IE10.
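For instance, a rough sketch of an atomic read-modify-write in IndexedDB (database, store, and key names are placeholders): a single readwrite transaction is serialized against other transactions on the same store, so the get-then-put below cannot interleave with another tab's.
var open = indexedDB.open('mutexDb', 1); // hypothetical database
open.onupgradeneeded = function () {
  open.result.createObjectStore('locks');
};
open.onsuccess = function () {
  var tx = open.result.transaction('locks', 'readwrite');
  var store = tx.objectStore('locks');
  var get = store.get('myLock');
  get.onsuccess = function () {
    if (get.result === undefined) {
      // no one holds the lock; claim it within the same transaction
      store.put(Date.now(), 'myLock');
    }
  };
  tx.oncomplete = function () { /* lock state is settled here */ };
};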
No. Even if browsers probably implement a read and a write lock on the cookie, that won't protect you from changes that happen between a read and a subsequent write. This is easy to see by looking at the javascript API for cookies: there is no mutex functionality there...
I ran into this concurrency issue using localStorage today (two years later...).
Scenario: multiple tabs of a browser (e.g. Chrome) have identical script code that gets executed at basically the same time (triggered by e.g. SignalR). The code reads from and writes to localStorage. Since the tabs run in different processes but access the shared local storage concurrently, reading and writing leads to undefined results because a locking mechanism is missing. In my case I wanted to make sure that only one of the tabs actually works with the local storage, not all of them.
I tried the locking mechanism by Benjamin Dumke-von der Ehe mentioned in the question above but got undesired results, so I decided to roll my own experimental code:
localStorage lock:
Object.getPrototypeOf(localStorage).lockRndId = new Date().getTime() + '.' + Math.random();
Object.getPrototypeOf(localStorage).lock = function (lockName, maxHold, callback) {
  var that = this;
  var wait = setInterval(function () {
    // re-read the lock on every attempt so a lock that was released
    // or has expired in the meantime can be acquired
    var now = new Date().getTime();
    var value = that.getItem(lockName);
    if ((value == null) || (parseInt(value.split('_')[1]) + maxHold < now)) {
      // claim the lock with our unique id and a timestamp
      var claim = that.lockRndId + '_' + now;
      that.setItem(lockName, claim);
      // wait a bit, then verify that no other tab overwrote our claim
      setTimeout(function () {
        if (that.getItem(lockName) == claim) {
          clearInterval(wait);
          try { callback(); }
          catch (e) { throw 'exception in user callback'; }
          finally { that.removeItem(lockName); }
        }
      }, 100);
    }
  }, 200);
};
usage:
localStorage.lock(lockName, maxHold, callback);
lockName - a globally unique name for the lock (string)
maxHold - the maximum time to protect the script, in milliseconds (integer)
callback - the function containing the script that gets protected
example: "only play a sound in one tab"
//var msgSound = new Audio('/sounds/message.mp3');
localStorage.lock('lock1', 5000, function(){
  // only one of the tabs / browser processes gets here at a time
  console.log('lock acquired: ' + new Date().getTime());
  // work with local storage here using getItem / setItem
  // e.g. only one tab is supposed to play a sound, and only if none played it within 3 seconds
  var tm = new Date().getTime();
  if ((localStorage.lastMsgBeep == null) || (localStorage.lastMsgBeep < tm - 3000)) {
    localStorage.lastMsgBeep = tm;
    //msgSound.play();
    console.log('beep');
  }
});