I'm writing something like a web audio editor, and I think I'm close to finishing the basics. I can play and stop an audio file, but when I try to call the play method again it just won't work, even though the parameters haven't changed.
I'd be glad if someone could look into my problem, as I cannot see why it doesn't work when similar code in other projects does. Unfortunately I cannot create a jsfiddle, because I would need to load an external mp3 file and it seems I'm not allowed to do that. But I have pasted the javascript here and the corresponding html file here.
All you need is a server you can upload the files to and an mp3 on it. Alternatively you could use this link, but I will delete the files eventually.
You can't call start() on an AudioBufferSourceNode more than once. They're one-time-use only.
Really the only way to do this is to create a new AudioBufferSourceNode from the underlying AudioBuffer every time you need to start playback.
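For example, a minimal sketch, assuming audioCtx is your AudioContext and buffer is your already-decoded AudioBuffer:

let source = null;

function play() {
    source = audioCtx.createBufferSource(); // a fresh one-time-use node
    source.buffer = buffer;                 // the decoded buffer itself is reusable
    source.connect(audioCtx.destination);
    source.start();
}

function stop() {
    if (source) {
        source.stop();
        source = null; // a stopped node can never be started again
    }
}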
You can also leave it running and simply disconnect it from the gain node; the node (an oscillator, say) would presumably just idly/silently produce its waveform.
To avoid creating a new AudioBufferSourceNode, you can simply use the playbackRate like this:
source.playbackRate.value = 0; // pause
source.playbackRate.value = 1; // play
How can I capture the datastream of a JS / Leaflet animation and download it to MP4?
I am looking for output that looks something like the smooth path traced in these demos:
https://github.com/IvanSanchez/Leaflet.Polyline.SnakeAnim
Their author appears to have made them with ffcast or some other screencasting software.
However, I am looking for an automated solution that can be run as a script, ideally one that works on the data stream itself (not the screen), perhaps with a headless browser.
I have tried puppeteer-gif and puppeteer-gif-cast but the best frame rate is jumpy.
I have tried WebRTC-Experiment but it requires me to set manual permissions. Ditto the Screen Capture API mentioned here, though this at least seems to work on the data stream itself.
The canvas captureStream method combined with the MediaRecorder API should do the trick.
Mind you, Chrome only supports webm as a container format (though it does record h264), so you might need a postprocessing step with ffmpeg.
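A minimal sketch of what I mean (the canvas id and the timings are placeholders, and it assumes the map is rendered with Leaflet's canvas renderer so there is a canvas element to capture):

const canvas = document.getElementById('map'); // hypothetical canvas id
const stream = canvas.captureStream(30);       // capture at 30 fps
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];

recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
    // bundle the recorded chunks into a downloadable webm file
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'animation.webm';
    a.click();
};

recorder.start();
// ... run the animation, then stop (10 seconds here, just as an example):
setTimeout(function () { recorder.stop(); }, 10000);

The recording can then be converted as a postprocessing step, e.g. ffmpeg -i animation.webm animation.mp4.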
Is single stream audio (or video) via Chrome's WebRTC possible when you strip a=ssrc lines from the SDP?
I have tried filtering out a=ssrc lines (with the code below), but single-stream audio did not work. I also tried single-stream video, and renaming the lines instead of removing them, with the same result. I modify both the offer and the answer SDPs. Interestingly, this filtering works when you send SDPs with both audio & video - audio (only) will work in such a scenario. However, I had issues with re-negotiation in that scenario in our app, so it is probably not a valid solution.
You can see a minimal example with single-stream audio / video in this repo: https://github.com/Tev-work/webrtc-audio-demo.
If it is possible, can you please provide a minimal example of working audio code? Preferably using the repo above: what should the modifySdp function (in public/client.js) do?
Currently it modifies the SDP with this code:
sdp = sdp.replace(/a=ssrc/g, 'a=xssrc');
sdp = sdp.replace(/a=msid-semantic/g, 'a=xmsid-semantic');
sdp = sdp.replace(/a=mid/g, 'a=xmid');
sdp = sdp.replace(/a=group:BUNDLE/g, 'a=xgroup:BUNDLE');
If it is not possible, do you know whether such a limitation has been officially stated somewhere (please link it), or did it just become unworkable at some point? It seems like it was working before (around M29, see the comments at https://bugs.chromium.org/p/webrtc/issues/detail?id=1941 - no mention that this was not supposed to be working).
Motivation: We are sometimes sending SDPs via SIP PBXs, which sometimes filter out SSRC lines. Supporting multiple streams in such situations is obviously out of the question (maybe with some server-side stream hacking?), but supporting at least audio-only in such scenarios would be useful for us.
That should still be possible, even though there are some side-effects, like (legacy) getStats not recognizing the stream; see this bug: https://bugs.chromium.org/p/webrtc/issues/detail?id=3342.
What you are attempting is to remove the a=ssrc lines before calling setLocalDescription. This is probably not going to work. If you want to simulate the scenario try removing them before calling setRemoteDescription with the SDP.
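Something like this, for example (a minimal sketch; handleRemoteOffer and the variable names are mine, not from the repo):

async function handleRemoteOffer(pc, description) {
    // strip a=ssrc lines from the *remote* SDP only; leave the local SDP untouched
    const sdp = description.sdp
        .split('\r\n')
        .filter(function (line) { return line.indexOf('a=ssrc') !== 0; })
        .join('\r\n');
    await pc.setRemoteDescription({ type: description.type, sdp: sdp });
}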
In order to fully implement my custom html5 video player, I need the exact frame rate of a video. However, I have not been able to find it yet and am using a standard value of 25.
Typically videos carry a frame rate value in their metadata, so I tried accessing the metadata using something like this:
var vid = document.getElementById("myVideo");
vid.onloadedmetadata = function(e) {
    console.log(e); // logs the loadedmetadata event; no frame rate property in sight
};
However, I can't find the frame rate here. Maybe I am not reading the metadata correctly at all.
I could use your help.
Thanks!
Try https://mediainfo.js.org (github)
It works on the UI side only; no backend is needed.
I just implemented it and it looks like it works perfectly fine (at least in Chrome v70.0.3538.77) for getting extensive media information.
It looks like modern browsers are beginning to work well with such binary (WebAssembly) libraries.
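For reference, the usage pattern looks roughly like this - a sketch only, since the exact factory call and result shape vary between mediainfo.js releases, so check the docs for the version you load (the 'fileinput' element is a placeholder):

MediaInfo({ format: 'object' }, function (mediainfo) {
    const file = document.getElementById('fileinput').files[0];

    // mediainfo.js pulls the file in chunks via these two callbacks
    const getSize = function () { return file.size; };
    const readChunk = function (chunkSize, offset) {
        return new Promise(function (resolve, reject) {
            const reader = new FileReader();
            reader.onload = function (e) { resolve(new Uint8Array(e.target.result)); };
            reader.onerror = reject;
            reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        });
    };

    mediainfo.analyzeData(getSize, readChunk).then(function (result) {
        // the video track object should include a FrameRate field
        console.log(result);
    });
});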
I'm 95% sure the standard html5 video API does not expose the fps information, from what I've read in the past months - other APIs like MPEG-DASH and jwplayer do present more / different data.
Your best bet would be to snoop around w3schools.com/tags/ref_av_dom.asp and similar MDN pages.
You can also calculate this in realtime yourself, and it should work most of the time, but I can imagine there's a case or two where it wouldn't. Look at PresentedFrames and then do something like:
fps = PresentedFrames / video.time
View more about PresentedFrames here (currently a proposal) and similar attributes at the same link.
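As a rough sketch of that realtime approach, using getVideoPlaybackQuality() (which browsers already expose on video elements; note it counts frames actually presented, so it only approximates the encoded frame rate while playback is smooth):

const video = document.getElementById("myVideo");

video.addEventListener("playing", function () {
    const startFrames = video.getVideoPlaybackQuality().totalVideoFrames;
    const startTime = video.currentTime;

    // sample again after a couple of seconds of playback
    setTimeout(function () {
        const frames = video.getVideoPlaybackQuality().totalVideoFrames - startFrames;
        const elapsed = video.currentTime - startTime;
        if (elapsed > 0) {
            console.log("estimated fps:", frames / elapsed);
        }
    }, 2000);
});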
mediainfo.js works pretty well - even when used locally in a browser over 'http(s)://'.
To use it locally, just make sure you also download the accompanying mediainfo.wasm and put it into the same directory as mediainfo.min.js.
Alternatively you can install media-info using npm.
The only caveat is that it doesn't run from the 'file://' protocol.
I have exported my Flash CS6 game to CreateJS using the "Toolkit for CreateJS". All sounds exported to the directory successfully.
The following code loads and plays the sounds:
var manifest = [
    {src: "sounds/cutter.wav", id: "cutter"}
];
var loader = new createjs.PreloadJS(false);
loader.installPlugin(createjs.SoundJS);
loader.onComplete = handleComplete;
loader.loadManifest(manifest);

function playSound(name, loop) {
    createjs.SoundJS.play(name, createjs.SoundJS.INTERRUPT_EARLY, 0, 0, loop);
}
Chrome and Opera play the sounds correctly, but Firefox does not.
Thanks in advance :)
I would recommend trying the latest code available at http://www.soundjs.com. You'll also find helpful tutorials and examples there that work in Firefox. It doesn't offer direct Toolkit support, but it can help you understand what the exported code is doing and how to alter it.
My best guess without seeing the code in context is that you are trying to call play without waiting for the load to complete. This creates a race condition: sometimes, when the sound is cached, it will work, and other times it will fail.
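A minimal sketch of what I mean, reusing your own calls (the soundReady flag is mine):

var soundReady = false;

loader.onComplete = function handleComplete() {
    soundReady = true; // manifest finished loading; playback is safe now
};

function playSound(name, loop) {
    if (!soundReady) {
        return; // avoids the race: never play before the load completes
    }
    createjs.SoundJS.play(name, createjs.SoundJS.INTERRUPT_EARLY, 0, 0, loop);
}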
It's also possible that it has something to do with the wav encoding. With mp3s we've found you mostly need to stick to default encoding to ensure browsers can actually play the audio. You might also want to consider using mp3 and ogg files for the broadest audio support.
Hope that helps.
I know there have been tons of questions about this topic already, but none of them have solved my issue; perhaps I am just missing something.
Anyway, here is the deal. I have a happy little html5 game that plays some audio and sound effects etc., and it works great in every browser that supports html5. However, those that don't support it require a flash fallback. No big deal, right? Apparently not… I've made a small swf that should accept the mp3 url from JS and then fetch the mp3 and play it. I have to do it this way as there are a lot of audio files and I would like to avoid making a swf file for each one.
Here is the AS - I'm using ExternalInterface to receive the variable from JS.
import flash.external.*;
import flash.media.Sound;
import flash.net.URLRequest;

ExternalInterface.addCallback("callFlash", playSound);

function playSound(file:String):void {
    var s:Sound = new Sound();
    s.load(new URLRequest(file));
    s.play();
}
And then my JS to pass the variable:
var flash = $('#fbplayer')[0];
console.log(flash); //returns flash object so jquery is not the issue
flash.callFlash(fallSource);
So theoretically everything should work fine (if I understand ExternalInterface correctly). However, the following error is thrown:
TypeError: flash.callFlash is not a function
flash.callFlash(fallSource);
I can't seem to find where the issue is.
I'm open to any answers or even a completely different way of doing this, as long as it works - this is holding up the delivery of the project :C
Thanks!
I know this is really old, but I've never had success finding my Flash objects properly with jQuery. It's better to go with getElementById. Also, one other crazy thing I ran into with some modern browsers just a couple of months ago: I actually needed to tell Flash to wait a frame after initializing any callbacks via ExternalInterface.
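Something like this classic lookup usually does the trick (a sketch; it assumes your object/embed element has the id/name "fbplayer"):

function getFlashMovie(movieName) {
    // IE exposes the movie on window, most other browsers on document
    return window[movieName] || document[movieName] || document.getElementById(movieName);
}

var flash = getFlashMovie("fbplayer"); // instead of $('#fbplayer')[0]
flash.callFlash(fallSource);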