Recording from the microphone is no problem: I connect the analyser for a nice VU meter, resample the massive amount of data down to something we can handle (8 kHz, mono) using 'xaudio.js' from the speex.js library, and wrap it in an appropriate WAV envelope.
Stopping the recorder is a different story, because the recording process severely lags behind the onaudioprocess callbacks. But even this is not a problem, as I can calculate the number of missing samples and wait for them to arrive before I actually store the data.
But what now? How do I stop the audio processing from calling onaudioprocess? Disconnecting all nodes doesn't make a difference. How can I re-initialize all buffers to create a clean, fresh jump-in point for the next recording? Should I destroy the AudioContext? How would I do that? Or is it enough to null out the createMediaStreamSource result?
What needs to be done to truly set everything up for sequential independent recordings?
Any hint is appreciated.
I'm not sure about all of your code structure; personally, I'd try to hang on to the AudioContext and the input stream (from the getUserMedia callback), even if I removed the MediaStreamSourceNode.
To get rid of the ScriptProcessor, though, set .onaudioprocess on the script processor node to null. That will stop it calling the callback; then, if you disconnect it and release all references, it should be garbage-collected as usual.
[edit] Oh, and the only way to delete an AudioContext is to get rid of any processing that's happening (disconnect all nodes, remove any onaudioprocess), remove any references to it, and wait for it to be garbage-collected.
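For illustration, a minimal teardown sketch along those lines (scriptNode, sourceNode and audioContext are assumed names for your nodes, not taken from your code):

scriptNode.onaudioprocess = null;  // stop the processing callback from firing
scriptNode.disconnect();           // detach the ScriptProcessor from the graph
sourceNode.disconnect();           // detach the MediaStreamSourceNode
scriptNode = null;                 // drop references so both can be garbage-collected
sourceNode = null;
// keep audioContext and the getUserMedia stream alive for the next recording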
I have a performance problem in JavaScript that has lately been causing crashes at work. With the aim of modernising our applications, we are looking into running them as web servers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS web pages.
To give you an overview of our performance needs: our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a web server written in C++ which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
To simplify the communication between the two applications, I then open a WebSocket between the web page and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several megabytes.
It all works pretty well as long as the FPS stays relatively low. When the FPS increases, one of the following two things happens.
Either the C++ web server's memory footprint grows very fast, and it crashes when no more memory is available. After investigation, this happens when the network is saturated and the WebSocket's send buffer fills up. I think this is due to the WebSocket's TCP nature, as the socket must wait for one message to be sent and acknowledged before sending the next one.
Or the browser crashes after a while, showing the "Aw, snap" screen (see figure below). In that case, more or less the same thing seems to happen, but this time it appears to be due to the garbage collection strategy. The other figure below shows a screenshot of the memory usage while the application is running, clearly showing a sawtooth pattern. It seems to indicate that garbage collection runs at intervals that grow further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a high rate. The bigger the messages, the faster it happens. To use a message I receive, I start a web worker and pass it the Blob I received; the web worker uses a FileReaderSync to convert the message to an ArrayBuffer and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not yet well versed enough in JS to be sure of that. Also, I initially did the same thing without the web worker (using FileReader), but the frame rate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    // please no comments about this, it's actually a bit nicer on the CPU than reusing the same worker :-)
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js');
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the web worker code:
function DecodeMessage(msg)
{
    // convert the incoming Blob to an ArrayBuffer (synchronous, but we're in a worker)
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to reduce how much memory the GC has to collect, for instance by telling some of the components I use to reuse buffers instead of recreating them, or by keeping the GC's work intervals fixed? This is something I'd know how to do in C++, but in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading Blob data as ArrayBuffers that would be faster than what I did?
Thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new Worker line and the new FileReaderSync line in the web worker.
Removing these greatly improved the performance!
Also, it turns out that this decoding operation is not necessary at all if I receive the WebSocket data as an ArrayBuffer directly: I just need to set the WebSocket's binaryType attribute to "arraybuffer"...
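For example (socket standing in for the actual WebSocket instance; binaryType is the standard WebSocket attribute):

socket.binaryType = "arraybuffer";
socket.onmessage = function (e) {
    // e.data is already an ArrayBuffer here - no FileReader or worker needed
    comManager.OnDataMessageReceived(e.data);
};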
So all in all, a very simple solution to a pain in the *** problem :-)
I am working on a real-time trading application using Node.js (v0.12.4) and Socket.io (1.3.2). I am facing a time delay of nearly 100 ms when a response is emitted from Node.js to the GUI (Socket.io).
I don't have a clue why there is a time delay while emitting data from Node.js to the GUI (Socket.io).
This is happening on the production site. We also tried debugging from the production server's location to rule out network latency, but got the same result.
Can anyone please help me with this?
One huge thing to note before doing the following: when measuring timing from the back end (server side) to the front end (client side), you need to run both on the same computer, so that they share the same timing crystal. Quartz-crystal-driven clocks differ from one another, even on high-quality motherboards.
If you find no delay when measuring from back end to front end on the same PC, then the delay you originally found was caused either by the network connection or by the difference between the motherboards' timing crystals. That would eliminate Node.js and Socket.io as the cause of the time delay.
Basically you need to find out where the delay is happening before you can solve the problem.
What you need to do is find out what is causing the largest performance hit in your project. To do this, you need to isolate the time each process takes. You also need to measure the delay from the initial retrieval of the data to the release of the data, then measure the delay of each function, method, and process. Try to isolate the problem by asking: what is taking the most time?
In order to find out where your performance hit is coming from you need to do the following.
Find out how long it takes for your code to get the information it needs.
Find out how long each data-manipulation step (i.e. each function/method) takes before the data is sent out.
After the data is sent, find out how long it takes to reach the client side and load.
Add all of these times up and make sure the total equals your observed delay, to ensure you have all the data you need to isolate the performance leak.
Order every method, function, and process by its time cost, from most time-consuming to least. When you find which processes are causing the largest delays, you will be able to fix the problem... or at least explore tangible solutions...
Here are some tools you can use to measure the performance throughout your code.
First: Chrome has a really good tool that lets you see the performance of each piece of executed code.
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool?hl=en
https://developer.chrome.com/devtools/docs/timeline
Second: the performance-now Node package. You can use it for dev performance testing.
Third: this Stack Overflow post on measuring time/performance in JS.
Fourth: you can use things like console.time()
https://nodejs.org/api/console.html
https://developer.mozilla.org/en-US/docs/Web/API/Console/time
https://developer.chrome.com/devtools/docs/console-api
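For instance, a minimal console.time() sketch (the label and function name are illustrative, and this only captures the synchronous part of the call):

console.time("emit-to-gui");
processAndEmitTick(data);       // hypothetical: the step you suspect is slow
console.timeEnd("emit-to-gui"); // logs something like: emit-to-gui: 12ms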
I have found out where the time delay is happening.
Once I have emitted the data from the client socket to Node, I show an alert message ("Data Processed"). The alert message takes time to render in the GUI.
This alert message was blocking the response data from Node to the socket.
Thanks for your help, guys.
I have an application that does the following:
WebSocket.onmessage:
    put bytes into a queue (Array)
on requestAnimationFrame:
    flush the queue, rendering the received bytes using canvas/WebGL
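For context, a minimal sketch of that pipeline (socket and drawBytes stand in for my actual WebSocket and rendering code):

var queue = [];

socket.onmessage = function (e) {
    queue.push(e.data);            // onmessage only enqueues; no rendering here
};

function render() {
    while (queue.length > 0) {
        drawBytes(queue.shift());  // canvas/WebGL plotting of the received bytes
    }
    requestAnimationFrame(render);
}
requestAnimationFrame(render);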
The application receives and plots real-time data. It has a bit of jerk/jank, and while profiling my rendering code, I noticed that while the actual rendering seems to execute quickly, there are large chunks of idle time during the WebSocket.onmessage handler.
I tried shrinking the window, as mentioned in Nat Duca's post, to check whether I am "GPU bound". But even with a small window, the timeline gives pretty much the same results.
What I am suspecting now is garbage collection. The application reads data from the WebSocket, plots it, then discards it. So to me, it seems unsurprising that I have a sawtooth memory profile.
So my question is now two-fold:
1) In the browser, is it even possible to avoid this memory footprint? In other languages, I know I'd be able to have a buffer that is allocated once and read from the socket straight into it. But with the WebSocket interface, this doesn't seem possible; I'm getting a newly allocated buffer of bytes that I use briefly and then no longer need.
Update: per pherris' suggestion, I removed the WebSocket from the equation, and while I see some improvement, the issue still persists. See the screenshots below:
2) Is this even my main issue? Are there other things in an application like this that I can do to avoid this blocking/idle time?
We know that when you invoke #.stop() on an AudioBufferNode, you cannot then call #.start(). Why does it behave this way?
This issue came up when playing around with the Web Audio API, as we all find out sooner or later when trying to implement pause functionality. What piqued my interest was: sure, I understand that it's a stream and you can't simply "pause" a stream. But why does it get destroyed? Internally, is there no longer a pointer to the data, or does the data simply get pushed to the destination and forgotten about by the buffer?
You're not calling .stop() on an AudioBuffer, you're calling .stop() on a BufferSourceNode - the AudioBuffer can be used multiple times.
The short version is that it's an optimization that allows fire-and-forget playback of buffers in a very lightweight way - you can build a higher-level media player around it, but in and of themselves, BufferSourceNodes are very lightweight. The sound data itself is not forgotten and can be reused - in fact, used simultaneously by other BufferSourceNodes - because it lives in the separate AudioBuffer object.
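To make that concrete, a quick sketch of the fire-and-forget pattern (assuming ctx is an existing AudioContext and buffer a decoded AudioBuffer):

function playBuffer(ctx, buffer) {
    var source = ctx.createBufferSource();  // one-shot node, cheap to create
    source.buffer = buffer;                 // the same AudioBuffer can back many sources
    source.connect(ctx.destination);
    source.start(0);
    return source;                          // .stop() this, then create a new source to replay
}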
I'm using web workers to do some CPU-intensive work, but have the requirement that the worker respond to messages from the parent script while it is still processing.
The worker, however, will not respond to messages while it is locked in a processing loop, and I have not found a way to, say, poll the message queue. Thus it seems the only solution is to break processing at intervals to allow any queued messages to be serviced.
The obvious option is to use a timer (say, with setInterval), but I have read that the minimum delay between firings is quite long (http://ajaxian.com/archives/settimeout-delay), which is unfortunate, as it will slow down processing a lot.
What are other people's thoughts on this? I'm going to try having the worker dispatch onmessage to itself at the end of each onmessage, effectively implementing one step of the processing loop per event received from itself, but I just wanted to see if anyone had other ideas.
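Something like this sketch - though since self.postMessage in a worker posts to the parent rather than to the worker itself, a zero-delay setTimeout may be the closer equivalent (TOTAL_STEPS and doOneSlice are illustrative):

var cancelled = false;

self.onmessage = function (e) {
    if (e.data === "cancel") {
        cancelled = true;          // picked up between chunks, since we yield regularly
    } else if (e.data === "start") {
        cancelled = false;
        processChunk(0);
    }
};

function processChunk(i) {
    if (cancelled || i >= TOTAL_STEPS) return;
    doOneSlice(i);                 // one slice of the CPU-intensive work
    // yield to the worker's event loop so pending messages are serviced first
    setTimeout(function () { processChunk(i + 1); }, 0);
}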
Thanks,
A worker can spawn sub workers. You can have your main worker act as your message queue, and when it receives a request for a long-running operation, spawn a sub worker to process that data. The sub worker can then send the results back to the main worker, which removes the event from the queue and returns the results to the main thread. That way your main worker is always free to listen for new messages, and you have complete control over the queue.
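A rough sketch of that pattern (file names and message shapes are illustrative):

// main-worker.js: stays responsive, delegating heavy work to sub workers
self.onmessage = function (e) {
    var sub = new Worker("sub-worker.js");  // one sub worker per long-running task
    sub.onmessage = function (result) {
        self.postMessage(result.data);      // hand the result back to the main thread
        sub.terminate();                    // done with this sub worker
    };
    sub.postMessage(e.data);                // the main worker is free again immediately
};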
--Nick
I ran into this issue myself when playing with workers for the first time. I also debated using setInterval, but felt that would be a rather hacky approach to the problem (and I had already gone this way for my emulated multithreading). Instead, I settled on terminating the workers from the main thread (worker.terminate()) and recreating them whenever the task they are involved in needs to be interrupted. Garbage collection etc. seemed to be handled fine in my testing.
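In code, the approach looks roughly like this (the file name is illustrative):

var worker = new Worker("task.js");

function restartTask(data) {
    worker.terminate();              // abandon the in-flight computation immediately
    worker = new Worker("task.js");  // a fresh worker with fresh state
    worker.postMessage(data);
}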
If there is data from these tasks that you want to save, you can always post it back to the main thread for storage at regular intervals; and if there is some logic you wish to implement regarding whether they are terminated or not, you can post the relevant data back often enough to allow it.
Spawning sub workers would lead to the same set of issues anyway; you'd still have to terminate the sub workers (or create new ones) according to some logic, and I'm not sure they're as well supported (in Chrome, for example).
James
Having the same problem, I searched the Web Workers draft and found something in the Processing model section, steps 9 to 12. As far as I understand, a worker that starts processing a task will not process another one until the first is completed. So, if you don't care about stopping and resuming a task, nciagra's answer should give better performance than rescheduling each iteration of the task.
Still investigating, though.