We know that when you invoke #.stop() on an AudioBufferSourceNode, you cannot then #.start() it again. Why does it behave this way?
This issue came up when playing around with the WebAudio API, as we all find out sooner or later when trying to implement pause functionality. What piqued my interest was: sure, I understand that it's a stream and you can't simply "pause" a stream. But why does the node get destroyed? Internally, is there not a pointer to the data, or does the data simply get pushed to the destination and forgotten about by the buffer?
You're not calling .stop() on an AudioBuffer, you're calling .stop() on an AudioBufferSourceNode - the AudioBuffer itself can be used multiple times.
The short version is that it's an optimization that allows for fire-and-forget playback of buffers in a very lightweight way - you can build a higher-level media player around it, but in and of themselves, AudioBufferSourceNodes are very lightweight. The sound data itself is not forgotten and can be reused - in fact, used simultaneously by other AudioBufferSourceNodes - because it lives in the separate AudioBuffer object.
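For example, here's a minimal sketch of that reuse (assuming audioCtx is an existing AudioContext and buffer is a decoded AudioBuffer; playBuffer is just an illustrative name):

function playBuffer(audioCtx, buffer) {
  // Source nodes are cheap, one-shot objects: create a new one per playback.
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;               // the same AudioBuffer can be shared
  source.connect(audioCtx.destination);
  source.start();
  return source; // call .stop() on this, then make a new one to play again
}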
I was reading through the MDN docs on requestAnimationFrame, and I see that when you call the function you are returned an integer ID for the entry, and that you can use that ID to later cancel the request, similar to intervals and timeouts.
I also know you can call the function multiple times to create multiple request entries.
My question is: is there a way to see all of the requested entities in the browser session?
I know I can push each of the IDs onto an array and use that to track and manage the requests, but is there a native way to see all of the requests in the browser without having to roll my own list? Considering that the typical pattern for requestAnimationFrame in things like Three.js is something like:
function animate() {
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
It seems like it would be beneficial to be able to check and see the list of requests actually made.
I'm afraid there's no native way, currently. Neither the WHATWG Living Standard section on animation frames nor the W3C spec mentions anything beyond requestAnimationFrame() and cancelAnimationFrame(). The browser is definitely supposed to keep a list of animation frame callbacks, but I see no way to access it.
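If you do end up rolling your own, one approach is to wrap the native functions and keep your own registry (a sketch; the pendingFrames set is our own bookkeeping, not a browser API):

const pendingFrames = new Set();
const nativeRAF = window.requestAnimationFrame.bind(window);
const nativeCAF = window.cancelAnimationFrame.bind(window);

window.requestAnimationFrame = (callback) => {
  const id = nativeRAF((time) => {
    pendingFrames.delete(id); // the callback has fired, so it's no longer pending
    callback(time);
  });
  pendingFrames.add(id);
  return id;
};

window.cancelAnimationFrame = (id) => {
  pendingFrames.delete(id);
  nativeCAF(id);
};

// At any point, pendingFrames holds the IDs of requests not yet run or cancelled.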
In a recent SO question, I outlined an OOM condition that I'm running into while processing a large number of CSV files with millions of records in each.
The more I look into the problem and read up on Node.js, the more convinced I become that the OOM isn't happening because of a memory leak, but because I'm not throttling the data input into the system.
The code just blindly sucks in all the data, creating a single callback event for each line. Those events keep piling up in the main event queue, which eventually grows so large that it exhausts all available memory.
What are Node's idiomatic patterns for dealing with this scenario? Should I be tying the reading of CSV files to a blocking queue of some sort that, once full, will block the file reader from parsing more data? Are there any good examples of processing large data sets?
Update: To put this differently and more simply: Node can ingest input faster than it can process output, and the slack is being stored in memory (queued as events on the event queue). Because there is a lot of slack, memory eventually gets exhausted. So the question is: what's the idiomatic way of throttling the input down to the output's rate?
Your best bet is to set things up as streams and rely on the built-in backpressure semantics to do the throttling. The Streams Handbook has a really good overview of it.
As in Unix, the Node stream module's primary composition operator is .pipe(), and you get a backpressure mechanism for free to throttle writes for slow consumers.
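As an illustration, something along these lines lets the file reader pause automatically when the slow writer's buffer fills (a sketch only; the line splitting is naive, and a real CSV parser should handle chunk boundaries and quoting):

const fs = require('fs');
const { Transform, Writable } = require('stream');

// Parse raw chunks into rows; pipe() propagates backpressure automatically.
const parseLines = new Transform({
  readableObjectMode: true,
  transform(chunk, encoding, callback) {
    for (const line of chunk.toString().split('\n')) {
      if (line) this.push(line.split(','));
    }
    callback();
  }
});

// Stand-in for a slow consumer such as a DB writer.
const slowWriter = new Writable({
  objectMode: true,
  write(row, encoding, callback) {
    setTimeout(callback, 10); // signal completion only when the row is persisted
  }
});

fs.createReadStream('records.csv').pipe(parseLines).pipe(slowWriter);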
Update
I've not used the readline module for anything other than terminal input before, but reading the docs, it looks like it accepts an input stream and an output stream. If you frame your DB writer as a writable stream, you should be able to let readline pipe to it for you internally.
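A rough sketch of that idea (dbWriter is a hypothetical Writable stream; pausing on a full buffer and resuming on 'drain' is the manual form of the same backpressure):

const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
  input: fs.createReadStream('records.csv')
});

rl.on('line', (line) => {
  // write() returns false once the writable's internal buffer is full.
  if (!dbWriter.write(line.split(','))) {
    rl.pause();                                // stop emitting 'line' events
    dbWriter.once('drain', () => rl.resume()); // resume when the consumer catches up
  }
});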
I am trying to measure the time it takes for an image in WebGL to load.
I was thinking about using gl.finish() to get a timestamp before and after the image has loaded and subtracting the two to get an accurate measurement; however, I couldn't find a good example of this kind of usage.
Is this sort of thing possible, and if so, can someone provide some sample code?
It is now possible to time WebGL2 executions with the EXT_disjoint_timer_query_webgl2 extension.
const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
const query = gl.createQuery();
gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
/* gl.draw*, etc */
gl.endQuery(ext.TIME_ELAPSED_EXT);
Then sometime later, you can get the elapsed time for your query:
const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
if (available) {
  const elapsedNanos = gl.getQueryParameter(query, gl.QUERY_RESULT);
}
A couple of things to be aware of:
- Only one timing query may be in progress at once.
- Results may become available asynchronously. If you have more than one call to time per frame, you may consider using a query pool.
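For example, a small polling helper along these lines (a sketch; pollQuery is just an illustrative name) defers reading the result until the GPU has produced it, and checks the disjoint flag that can invalidate a measurement:

function pollQuery(gl, ext, query, onResult) {
  function check() {
    const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT); // result invalid if disjoint
    if (available && !disjoint) {
      onResult(gl.getQueryParameter(query, gl.QUERY_RESULT)); // nanoseconds
      gl.deleteQuery(query);
    } else if (!disjoint) {
      requestAnimationFrame(check); // try again next frame
    }
  }
  requestAnimationFrame(check);
}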
No, it is not.
In fact, in Chrome, gl.finish is effectively just a gl.flush. See the code and search for "::finish".
Because Chrome is multi-process and actually implements security in depth, the actual GL calls are issued in a separate process from your JavaScript. So even if Chrome did call gl.finish, it would happen in another process, and from the point of view of JavaScript it would not be accurate for timing in any way, shape, or form. Firefox is apparently moving to a similar architecture for similar reasons.
Even outside of Chrome, every driver handles gl.finish differently. Timing with gl.finish is not useful because it's not representative of actual speed: it stalls the GPU pipeline, so it includes lots of overhead that wouldn't happen in real use and is not an accurate measurement of how fast something would execute under normal circumstances.
There are GL extensions on some GPUs to get timing info. Unfortunately they (a) are not available in WebGL and (b) will likely never be, as they are not portable: they can't really work on tiled GPUs like those found in many mobile phones.
Instead of asking how to time GL calls, what specifically are you trying to achieve by timing them? Maybe people can suggest a solution to that.
Being client-based, WebGL event timings depend on the current loading of the client machine (CPU load), GPU load, and the implementation of the client itself. One way to get a very rough estimate is to measure the round-trip latency from server to client using an XMLHttpRequest (http://en.wikipedia.org/wiki/XMLHttpRequest). By finding the delay between server-measured time and local time, a possible measure of loading can be obtained.
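A rough sketch of such a probe (the /ping URL is a hypothetical endpoint that responds immediately; note that this measures network plus client load, not GPU time):

function measureRoundTrip(url, callback) {
  const start = performance.now();
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = () => callback(performance.now() - start);
  xhr.send();
}

measureRoundTrip('/ping', (ms) => console.log('round trip took', ms, 'ms'));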
I have no problems recording the microphone, connecting the analyzer for a nice VU meter, re-sampling the massive amount of data down to something we can handle (8 kHz, mono) using 'xaudio.js' from the speex.js lib, and wrapping it in an appropriate WAV envelope.
Stopping the recorder seems to be a different story, because the recording process severely lags behind the onaudioprocess callback. But even this is not a problem, as I can calculate the missing samples and wait for them to arrive before I actually store the data.
But what now? How do I stop the audio processing from calling onaudioprocess? Disconnecting all nodes doesn't make a difference. How can I re-initialize all buffers to create a clean, fresh jump-in point for the next recording? Should I destroy the AudioContext? How would I do that? Or is it enough to set the createMediaStreamSource result to null?
What needs to be done to truly set everything up for sequential independent recordings?
Any hint is appreciated.
I'm not sure of your overall code structure; personally, I'd try to hang on to the AudioContext and the input stream (from the getUserMedia callback), even if I removed the MediaStreamSourceNode.
To get rid of the ScriptProcessor, though - set the .onaudioprocess in the script processor node to null. That will stop it calling the callback - then if you disconnect it and release all references, it should be garbage collected as usual.
[edit] Oh, and the only way to delete an AudioContext is to get rid of any processing that's happening (disconnect all nodes, remove any onaudioprocess), remove any references to it, and wait for it to be garbage-collected.
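Putting that together, a minimal teardown sketch (assuming source is your MediaStreamAudioSourceNode and processor is your ScriptProcessorNode from the recording setup):

processor.onaudioprocess = null;  // stop the callback from firing
source.disconnect();
processor.disconnect();
source = null;                    // drop references so both can be collected
processor = null;
// Keep the AudioContext and the getUserMedia stream around for the next recording.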
I'm playing around a bit with the concept of Comet on node.js, but I'm still a bit confused and I'm wondering if anyone here can point me in the right direction.
Think of a game app where the client code should ask for its turn to make a move (for example, in a chess app). What I'm thinking here is to use something like this:
When a match starts, a method on the Node server is called to create an element in a matches array with the id of the match and the initial player.
When a player makes a move, a method is called to update the current player on the array element referencing this match. This method should fire an event when the change occurs.
Before being able to make any move, the client code should call a method on the server that checks whether it's the user's turn, and that waits for the player-change event if it wasn't.
I'm not sure if this is a good approach within the event loop, and if it is, I don't see how I can make the method wait for the event before returning.
Any suggestions?
Node.js and Socket.io are what you need! I have written several games similar to what you describe.
A real-time example: example
Another thread: Tutorial on Socket.io
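For the turn-based flow specifically, here's a minimal server-side sketch (the event names, the matches map, and the otherPlayer helper are all hypothetical): rather than having a client method block until it's the user's turn, the server pushes a turn event whenever the current player changes.

const io = require('socket.io')(3000);
const matches = {};
const otherPlayer = (p) => (p === 'white' ? 'black' : 'white'); // chess example

io.on('connection', (socket) => {
  socket.on('join', ({ matchId, player }) => {
    socket.join(matchId);
    matches[matchId] = matches[matchId] || { current: player };
  });

  socket.on('move', ({ matchId, player, move }) => {
    const match = matches[matchId];
    if (!match || match.current !== player) return; // ignore out-of-turn moves
    match.current = otherPlayer(player);
    // Push the change to everyone in the match; nothing waits inside the event loop.
    io.to(matchId).emit('turn', { current: match.current, move });
  });
});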