I'm trying to create a "generative score" using Beep.js, based on some map data I have. I'm using new Beep.Voice as a placeholder for notes associated with specific types of data (7 voices total). As data is displayed, a voice should be played. I'm doing things pretty "brute force" so far and I'd like it to be cleaner:
// in the data processing function
var voice = voices[datavoice];
voice.play();
setTimeout(function () { killVoice(voice); }, 20);
// and the killVoice:
function killVoice(voice) {
    voice.pause();
}
I'd like to just "play" the voice, assuming it would have a duration of, say, 20 ms (basically just beep on data). I saw the duration property of voices but couldn't make it work.
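In the meantime I could at least wrap the play/pause pair in a helper so the data loop stays clean; a minimal sketch of what I mean (blip is just a name I made up):
// hypothetical helper: play a voice as a short blip (default 20 ms)
function blip(voice, ms) {
    voice.play();
    setTimeout(function () { voice.pause(); }, ms || 20);
}
// in the data processing function
blip(voices[datavoice]);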
The code is here (it uses Grunt/Node/CoffeeScript):
https://github.com/mgiraldo/inspectorviz/blob/master/app/scripts/main.coffee
This is how it looks so far:
https://vimeo.com/126519613
The reason Beep.Voice.duration is undocumented in the README is that it's not finished yet! ;) There's a line in the source code that literally says "Right now these do nothing; just here as a stand-in for the future." This applies to .duration, .attack, etc. There's a pull request to implement some of this functionality here, but I've had to make some significant structural changes since that request was submitted; I'll need to take a closer look soon, once I've finished fixing some larger structural issues. (It's in the pipeline, I promise!)
Your approach in the meantime seems right on the money. I've reduced it a bit here and made it 200 milliseconds, rather than 20, so I could hear it ring a bit more:
var voice = new Beep.Voice('4D♭')
voice.play()
setTimeout( function(){ voice.pause() }, 200 )
I saw you were using some pretty low notes in your sample code, like '1A♭' for example. If you're just testing this out on normal laptop speakers (a position I often find myself in) you might find the tone is too low for your speakers; you'll either hear a tick or dead silence. So don't worry: it's not a bug, just a hardware issue :)
Forget everything I said ;)
Inspired by your inquiry, and by Sam's old pull request, I've just completed a big ADSR push which includes support for Voice durations. So now, with the latest Beep.js, getting a quick "chiptune-y" chirp can be done like this:
var voice = new Beep.Voice( '4D♭' )
.setOscillatorType( 'square' )
.setAttackDuration( 0 )
.setDecayDuration( 0 )
.setSustainDuration( 0.002 )
.setReleaseDuration( 0 )
.play()
I’ve even included an ADSR ASCII-art diagram in the new Beep.Voice.js file for easy referencing. I hope this helps!
I've been toying with:
PeoplePerception/PeopleDetected()
PeoplePerception/PopulationUpdated()
PeoplePerception/PeopleList()
PeoplePerception/NonVisiblePeopleList()
PeoplePerception/VisiblePeopleList()
Yet I can't seem to figure out how to detect whether there is someone in front of Pepper. Those events trigger when the population is updated, but I can't make sense of the returned values.
What I'm trying to accomplish is to make Pepper remain in a certain state as long as someone is within detection area number 2, and to make it go to a "screensaver" when it doesn't detect anyone for 1 minute.
I'm fairly new to Pepper development, so any help would be appreciated. Thank you!
It sounds like you want to combine the ALPeoplePerception API with the ALEngagementZones API. This is described in some detail here. There's a key in ALMemory (Pepper's memory) that does what you want: it stores a list of all people in engagement zone 2 (EngagementZones/PeopleInZone2).
You've tagged the question as javascript, so I'll give a brief example of how to access this.
QiSession(function (session) {
    session.service("ALMemory").then(function (mem) {
        mem.getData("EngagementZones/PeopleInZone2").then(function (data) {
            // now you can access data and do something with it...
            // it should be a list of the IDs of the people in the engagement zone,
            // so you could check data.length > 0 to see if there are any people
        }, console.log);
    }, console.log);
}, console.log);
There are also other events that might be useful, like EngagementZones/PersonEnteredZone2. If you haven't found it yet, there are more details about the JavaScript API here.
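For your screensaver behaviour specifically, you could poll that key and reset a one-minute timer whenever zone 2 is occupied. A rough sketch building on the snippet above (goToScreensaver and leaveScreensaver are hypothetical functions you'd implement yourself):
QiSession(function (session) {
    session.service("ALMemory").then(function (mem) {
        var idleTimer = null;
        function resetIdleTimer() {
            clearTimeout(idleTimer);
            idleTimer = setTimeout(goToScreensaver, 60 * 1000); // 1 minute with nobody in zone 2
        }
        resetIdleTimer();
        setInterval(function () {
            mem.getData("EngagementZones/PeopleInZone2").then(function (data) {
                if (data && data.length > 0) {
                    leaveScreensaver(); // someone is within zone 2
                    resetIdleTimer();
                }
            }, console.log);
        }, 1000); // check once a second
    }, console.log);
}, console.log);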
I am writing a Whack-A-Mole game for class using HTML5, CSS3 and JavaScript. I have run into a very interesting bug where, at seemingly random intervals, my moles will stop changing their "onBoard" variables and, as a result, will stop being assigned to the board. Something similar has also happened with the holes, but not as often in my testing. All of this is completely independent of user interaction.
You guys and gals are my absolute last hope before I scrap the project and start completely from scratch. This has frustrated me to no end. Here is the CodePen, and my GitHub if you prefer to have the images.
Since CodePen links apparently require accompanying code, here is the function where I believe the problem is occurring.
// Run the game
function run() {
    var interval = (Math.floor(Math.random() * 7) * 1000);
    if (firstRound) {
        renderHole(mole(), hole(), lifeSpan());
        firstRound = false;
    }
    setTimeout(function() {
        renderHole(mole(), hole(), lifeSpan());
        run();
    }, interval);
}
What I believe is happening is this: the function runs at random intervals of between 0 and 6 seconds. If the function runs too quickly, the data that was passed to my renderHole() function gets overwritten with the new data, causing the previous hole and mole never to be taken off the board (variable-wise, at least).
EDIT: It turns out that my issue came from not having returns on my recursive function calls. Having come from a different language, I was not aware that, in JavaScript, functions return "undefined" if nothing else is indicated. I am, however, marking GameAlchemist's answer as the correct one, because my original code was convoluted and confusing, as well as redundant in places. Thank you all for your help!
You have made some design mistakes here and there in your code that, one after another, make the code hard to read and follow, and quite impossible to debug.
The mole() function might return a mole... or not... or create a timeout to call itself later. What will be done with the result when mole() calls itself again? Nothing, so the mole will just be marked as onBoard, never to be seen again.
--->>> Have a clear definition and a single responsibility for mole(): for instance, 'returns an available non-displayed mole character, or null'. And that's all: no count, no marking of the objects, just KISS (Keep It Simple, S...): it should always return a value and never trigger a timeout.
Quite the same goes for hole(): return a free hole or null; no marking, no timeout set.
render should be simplified: get a mole, get a hole; if either couldn't be found, bye bye. If a mole + hole pair was found, just set up the new mole/hole couple + event handler (in a separate function). Your main run function will ensure moles are spawned again and again; a sketch of that split follows.
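Here is a rough sketch (your moles/holes arrays, renderHole and lifeSpan are assumed from your code; the onBoard/occupied flags are illustrative):
// single responsibility: return an available, non-displayed mole, or null
function getMole() {
    for (var i = 0; i < moles.length; i++) {
        if (!moles[i].onBoard) return moles[i];
    }
    return null;
}
// same for holes: return a free hole, or null
function getHole() {
    for (var i = 0; i < holes.length; i++) {
        if (!holes[i].occupied) return holes[i];
    }
    return null;
}
// get a mole, get a hole; if either couldn't be found, bye bye
function spawn() {
    var mole = getMole();
    var hole = getHole();
    if (!mole || !hole) return;
    renderHole(mole, hole, lifeSpan()); // all marking happens in one place
}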
Years ago, I heard about a nice 404 page and implemented a copy.
Working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond making the page unresponsive to bringing the computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging down? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video is that the synthetic DOM is lightning-fast, fast enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction over a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work" to create and render an element and do a single document.getElementById. Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical, attempting to change the smallest area possible, but the letter and spirit of the ReactJS videos I've seen are squarely "Crunch all you want; we'll make more."
Off to try the suggested edits to see what they will do...
I didn't look at all the code but, for starters, this is rather inefficient:
var update = function() {
    React.render(React.createElement(Pragmatometer, null),
        document.getElementById('main'));
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work, and it is being done every 100 ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100 ms, effectively bringing the computer to its knees (and lower).
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times. Once the component is rendered into the DOM, use setProps or forceUpdate to cause it to render changes.
Here's an example of what I mean:
// React.render returns the mounted component instance,
// which is what has the forceUpdate method
var mainComponent = React.render(
    React.createElement(Pragmatometer, null),
    document.getElementById('main'));

var update = function() {
    mainComponent.forceUpdate();
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout, with the callback setting up the next setTimeout, like this:
var update = function() {
    // do some stuff
    // when the work is done, set up the next timeout
    setTimeout(update, 100);
};
setTimeout(update, 100);
I've recently come across a lot of great examples of interactive shorts made with three.js.
One example is http://www.dilladimension.com/
So I wanted to ask: how does the timing in those actually work? Are there any known libraries for that?
The music and visuals are synchronized perfectly, and I would love to know how.
I think you may be overthinking this.
// pseudo-code...
// on start
music.start();
var startMs = now();

// animation loop, run every frame
var currentMs = now();
events.forEach(function (event) {
    if (!event.handled && (currentMs - startMs) > event.startMs) {
        event.doStuff();
        event.handled = true;
    }
});
Time marches on pretty predictably and measurably. If you know when you started, it's pretty easy to figure out where you are right now. Then simply compare that to an array of timestamped events and execute their instructions.
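As a slightly more concrete sketch, assuming an HTML5 <audio> element (using the audio clock itself as the master avoids drifting away from the music):
var audio = document.querySelector('audio');
var events = [ // timestamped instructions, in milliseconds
    { startMs: 1000, handled: false, doStuff: function () { /* cue first visual */ } },
    { startMs: 4500, handled: false, doStuff: function () { /* cue second visual */ } }
];
audio.play();
(function loop() {
    var currentMs = audio.currentTime * 1000; // where we are right now
    events.forEach(function (event) {
        if (!event.handled && currentMs > event.startMs) {
            event.doStuff();
            event.handled = true;
        }
    });
    requestAnimationFrame(loop);
})();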
Is there a possibility to render a visualization of an audio file?
Maybe with SoundManager2 / Canvas / HTML5 Audio?
Do you know of some techniques?
I want to create something like this:
You have a ton of samples and tutorials here: http://www.html5rocks.com/en/tutorials/#webaudio
For the moment it works in the latest Chrome and the latest Firefox (Opera?).
Demos : http://www.chromeexperiments.com/tag/audio/
To do it now, for all visitors of a web site, you can check SoundManagerV2.js, which passes through a Flash "proxy" to access audio data: http://www.schillmania.com/projects/soundmanager2/demo/api/ (they are already working on the HTML5 audio engine, to release it as soon as major browsers implement it).
It's up to you to draw, in a canvas, three different kinds of audio data: waveform, equalizer and peak.
soundManager.defaultOptions.whileplaying = function () { // AUDIO analyzer !!!
    $document.trigger({ // assuming $document = $(document); dispatch all data relative to the audio stream
        type: 'musicLoader:whileplaying',
        sound: {
            position: this.position, // in milliseconds
            duration: this.duration,
            waveformDataLeft: this.waveformData.left, // array of 256 floating-point (three-decimal-place) values from -1 to 1
            waveformDataRight: this.waveformData.right,
            eqDataLeft: this.eqData.left, // two arrays of 256 floating-point (three-decimal-place) values from 0 to 1,
            eqDataRight: this.eqData.right, // the result of an FFT on the waveform data; can be used to draw a spectrum (frequency range)
            peakDataLeft: this.peakData.left, // floating-point values from 0 to 1, indicating the "peak" (volume) level
            peakDataRight: this.peakData.right
        }
    });
};
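On the receiving side, you can then listen for that event and draw; for example, a minimal waveform sketch (assuming jQuery and a <canvas> element on the page):
var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
$(document).on('musicLoader:whileplaying', function (e) {
    var wave = e.sound.waveformDataLeft; // 256 values from -1 to 1
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    for (var i = 0; i < wave.length; i++) {
        var x = (i / wave.length) * canvas.width;
        var y = ((1 - wave[i]) / 2) * canvas.height; // map -1..1 to bottom..top
        if (i === 0) { ctx.moveTo(x, y); } else { ctx.lineTo(x, y); }
    }
    ctx.stroke();
});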
With HTML5 you can get:
// assuming an AnalyserNode already wired into a Web Audio graph:
// var audioContext = new AudioContext();
// var analyser = audioContext.createAnalyser();
// source.connect(analyser);
var freqByteData = new Uint8Array(analyser.frequencyBinCount);
var timeByteData = new Uint8Array(analyser.frequencyBinCount);
function onaudioprocess() {
    analyser.getByteFrequencyData(freqByteData);
    analyser.getByteTimeDomainData(timeByteData);
    /* draw your canvas */
}
Time to work! ;)
Run the samples through an FFT, and then display the energy within a given range of frequencies as the height of the graph at a given point. You'll normally want the frequency ranges going from around 20 Hz at the left to roughly half the sampling rate at the right (or 20 kHz if the sampling rate exceeds 40 kHz).
I'm not so sure about doing this in JavaScript, though. Don't get me wrong: JavaScript is perfectly capable of implementing an FFT, but I'm not at all sure about doing it in real time. OTOH, for user viewing, you can get by with around 5-10 updates per second, which is likely to be a considerably easier target to reach. For example, 20 ms of samples updated every 200 ms might be halfway reasonable to hope for, though I certainly can't guarantee that you'll be able to keep up with that.
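To make the "energy as bar height" idea concrete, here is a minimal sketch, assuming a Web Audio AnalyserNode and a canvas 2D context (analyser, canvas and ctx) are already set up as in the earlier snippet:
function drawSpectrum() {
    var freqByteData = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(freqByteData); // bins run from 0 Hz up to sampleRate / 2
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / freqByteData.length;
    for (var i = 0; i < freqByteData.length; i++) {
        var barHeight = (freqByteData[i] / 255) * canvas.height; // energy -> height
        ctx.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
    }
    requestAnimationFrame(drawSpectrum); // far more than the 5-10 updates/sec needed
}
requestAnimationFrame(drawSpectrum);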
http://ajaxian.com/archives/amazing-audio-sampling-in-javascript-with-firefox
Check out the source code to see how they're visualizing the audio.
This isn't possible yet, except by fetching the audio as binary data and unpacking the MP3 (not JavaScript's forte), or maybe by using Java or Flash to extract the bits of information you need (it seems possible, but it also seems like more headache than I personally would want to take on).
But you might be interested in Dave Humphrey's audio experiments, which include some cool visualization stuff. He's doing this by modifying the browser source code and recompiling it, so this is obviously not a realistic solution for you. But those experiments could lead to new features being added to the <audio> element in the future.
For this you would need to do a Fourier transform (look for FFT), which will be slow in JavaScript, and is not possible in real time at present.
If you really want to do this in the browser, I would suggest doing it in Java/Silverlight, since they deliver the fastest number-crunching speed in the browser.