So I'm trying to make a simple Digital Audio Workstation (DAW) kind of thing in React.
To solve this problem I'm using the Tone.js Transport to play and sync multiple songs on the Transport timeline, and that works fine, but the problem arises when I try to pause it. It does pause when I click pause, but it also crashes the application with: Error: buffer is either not set or not loaded
I tried to do something like this, but it didn't solve the problem:
Tone.Buffer.on('load', () => {
    if (this.props.play)
        Tone.Transport.start()
    if (!this.props.play)
        Tone.Transport.stop()
})
This is where I'm setting up the player:
Player = (src, startTime, volume) => {
    Tone.Transport.bpm.value = 108;
    Tone.Transport.loop = false;
    let buff = new Tone.Buffer({
        "url": `/MP3s/${src}`,
    });
    let play = new Tone.Player(buff).toMaster().sync().start(startTime)
    Tone.Buffer.on('load', () => {
        if (this.props.play)
            Tone.Transport.start()
        if (!this.props.play)
            Tone.Transport.stop()
    })
}
It's showing the error on the let play line, and I just couldn't figure out on my own what causes the problem or how to resolve it. Also, I'm playing these files from the public folder of my React app.
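One possible direction (a minimal sketch, not a confirmed fix, assuming the older Tone.js API with toMaster() used above; src and startTime are the same names as in the question): let the Player load the URL itself and only sync it to the Transport once its buffer is ready, then control the Transport from props.
// Sketch only: the Player's onload callback fires when its buffer is loaded,
// so syncing/starting and pausing the Transport here avoids the unloaded-buffer error.
const player = new Tone.Player(`/MP3s/${src}`, () => {
    player.sync().start(startTime);
    if (this.props.play) {
        Tone.Transport.start();
    } else {
        Tone.Transport.pause();
    }
}).toMaster();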
I am a neophyte JS developer with a past in server-side programming.
I am creating a simple web app that allows various users to engage in live audio chat with one another. Whenever a new user logs into an audio chat room, the following ensures they can hear everyone talking:
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.id = 'audio-stream-' + item.streamID;
        audio.muted = false;
    }));
}
Essentially I pass a list of streams into that function and play all of them.
Now if a user leaves the environment, I feel the prudent thing to do is to destroy their <audio> element.
To achieve that, I tried:
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        let audio_elem = document.getElementById('audio-stream-' + stream_id);
        if (audio_elem) {
            audio_elem.stop();
        }
    });
}
Unfortunately, audio_elem is always null in the function above. It is not that the streamIDs are mismatched; I have checked them.
Maybe this issue has to do with scoping? I am guessing the <audio> elements created within playStreams are scoped within that function, and thus stopStreams is unable to access them.
I need a domain expert to clarify whether this is actually the case. Moreover, I also need a solution regarding how to better handle this situation - one that cleans up successfully after itself.
P.S. A similar SO question came close to asking the same thing, but their case did not involve numerous <audio> elements being dynamically created and destroyed as users come and go. I do not know how to use that answer to solve my issue; my concepts are unclear.
It turned out the <audio> elements were never appended to the document, so document.getElementById could not find them; keeping direct references to them works around that. I created a global dictionary like so -
const liveStreams = {};
Next, when I play live streams, I save all the <audio> elements in the aforementioned global dictionary -
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.muted = false;
        // log the audio object in a global dictionary, keyed by stream ID
        liveStreams[item.streamID] = audio;
    }));
}
I destroy the streams by accessing them from the liveStreams dictionary, like so -
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        // Check if liveStreams contains the audio element associated to stream_id
        if (liveStreams.hasOwnProperty(stream_id)) {
            let audio_elem = liveStreams[stream_id];
            // Stop the playback
            audio_elem.pause(); // now the object becomes subject to garbage collection
            // Remove the audio element's reference from the dictionary
            delete liveStreams[stream_id];
        }
    });
}
And that does it.
I'm trying to implement WebRTC and simple-peer in my chat. Everything works, but I would like to add a screen sharing option. For that I tried this:
$("#callScreenShare").click(async function(){
if(captureStream != null){
p.removeStream(captureStream)
p.addStream(videoStream)
captureStreamTrack.stop()
captureStreamTrack =captureStream= null
$("#callVideo")[0].srcObject = videoStream
$(this).text("screen_share")
}else{
captureStream = await navigator.mediaDevices.getDisplayMedia({video:true, audio:true})
captureStreamTrack = captureStream.getTracks()[0]
$("#callVideo")[0].srcObject = captureStream
p.removeStream(videoStream)
console.log(p)
p.addStream(captureStream)
$(this).text("stop_screen_share")
}
})
But it stops the camera and after that nothing happens, and my video stream on my peer's computer freezes. No errors, nothing, only that.
I've put a console.log in the handler for the stream event. It fires the first time, but when I call the addStream method it doesn't fire again.
If someone could help me it would be really helpful.
What I do is replace the track, instead of removing and adding the stream:
p.streams[0].getVideoTracks()[0].stop()
p.replaceTrack(p.streams[0].getVideoTracks()[0], captureStreamTrack, p.streams[0])
This will replace the stream's video track with the one from the display capture.
simple-peer docs
The function below will do the trick. Simply call replaceTrack, passing it the new track and the remote peer instance.
function replaceTrack(newTrack, recipientPeer) {
    // swap the video track currently being sent for newTrack
    recipientPeer.replaceTrack(
        recipientPeer.streams[0].getVideoTracks()[0],
        newTrack,
        recipientPeer.streams[0]
    )
}
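For example, a hypothetical call from inside an async click handler, swapping in a screen-capture track (getDisplayMedia as in the question; p is the simple-peer instance):
// hypothetical usage sketch
const captureStream = await navigator.mediaDevices.getDisplayMedia({ video: true })
replaceTrack(captureStream.getVideoTracks()[0], p)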
I worked on an HTML5 game. I used createjs to preload images. When the images finish preloading, a BGM needs to play automatically.
I know about Chrome's new audio policies. And I am very confused: I use the function handleComplete to trigger the audio, so why does it still not work? How should I make it work?
Here is my code:
function Sound(id_str) {
    this.id = id_str;
}
Sound.prototype.play = function () {
    // there is other logic, but it can be simplified like below
    let audio = document.getElementById(this.id);
    audio.play().catch(e => console.log(e));
};
var audio_bg = new Sound('bgm_id');
window.onload = function preload() {
    let manifest = [...];
    loader = new createjs.LoadQueue(false);
    loader.setMaxConnections(100);
    loader.maintainScriptOrder = true;
    loader.installPlugin(createjs.Sound);
    loader.loadManifest(manifest);
    loader.addEventListener('progress', handleFileProgress);
    loader.addEventListener('complete', handleComplete);
};
function handleFileProgress(){...}
function handleComplete() {
    //...
    audio_bg.play();
}
The error I caught was:
NotAllowedError: play() failed because the user didn't interact with the document first.
The error you are seeing is due to changes in security in the browser, where you can't play audio without a user action to kick it off. SoundJS already listens for a browser input event (mouse/keyboard/touch) to unlock audio, but if you try to play audio before this happens, then it will fail.
To show this, quickly click in the browser before the file is loaded, and it will likely play.
The recommendation is to put audio behind a user event, like "click to get started" or something similar. It is an unfortunate side-effect of the security improvements in browsers :)
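For instance, a minimal sketch of the "click to get started" idea, using a hypothetical start-button element and the audio_bg Sound instance from the question:
// Play the BGM from inside a user gesture so the autoplay policy allows it
document.getElementById('start-button').addEventListener('click', () => {
    audio_bg.play();
}, { once: true });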
I'm studying WebRTC and trying to figure out how it works.
I modified this sample on WebRTC.github.io to make getUserMedia the source of leftVideo and stream it to rightVideo. It works.
And I want to add a feature for when I press pause on leftVideo (my browser is Chrome 69).
I changed a part of Call():
...
stream.getTracks().forEach(track => {
    pc1Senders.push(pc1.addTrack(track, stream));
});
...
And added an onpause handler to leftVideo:
leftVideo.onpause = () => {
    pc1Senders.map(sender => pc1.removeTrack(sender));
}
I don't want to close the connection, I just want to turn off only video or audio.
But after I pause leftVideo, the rightVideo still receives the track.
Am I doing something wrong here, or maybe somewhere else?
Thanks for your help.
First, you need to get the stream of the peer. You can mute/hide the stream using the enabled attribute of the MediaStreamTrack. Use the code snippet below to toggle media.
/* stream: MediaStream, type: trackType ('audio'/'video') */
toggleTrack(stream, type) {
    stream.getTracks().forEach((track) => {
        if (track.kind === type) {
            track.enabled = !track.enabled;
        }
    });
}
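A hypothetical call, assuming toggleTrack is exposed as a plain function (or called as a method on your class) and remoteStream is the MediaStream received from the peer:
toggleTrack(remoteStream, 'video'); // hides or shows the remote video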
const senders = pc.getSenders();
senders.forEach((sender) => pc.removeTrack(sender));
newTracks.forEach((tr) => pc.addTrack(tr));
Get all the senders;
Loop through and remove each sending track;
Add new tracks (if so desired).
Note that removeTrack and addTrack trigger renegotiation, so the offer/answer exchange must be repeated before the change reaches the other peer; a minimal sketch follows.
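A minimal sketch of that renegotiation step; the signaling transport (signalingChannel) is hypothetical and depends on your app:
pc.onnegotiationneeded = async () => {
    // create and send a fresh offer whenever tracks are added or removed
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signalingChannel.send({ description: pc.localDescription }); // hypothetical signaling
};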
Edit: or, if you won't need renegotiation (conditions listed below), use replaceTrack (https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack).
Not all track replacements require renegotiation. In fact, even changes that seem huge can be done without requiring negotiation. Here are the changes that can trigger negotiation:
The new track has a resolution which is outside the bounds of the current track; that is, the new track is either wider or taller than the current one.
The new track's frame rate is high enough to cause the codec's block rate to be exceeded.
The new track is a video track and its raw or pre-encoded state differs from that of the original track.
The new track is an audio track with a different number of channels from the original.
Media sources that have built-in encoders, such as hardware encoders, may not be able to provide the negotiated codec. Software sources may not implement the negotiated codec.
async switchMicrophone(on) {
    if (on) {
        console.log("Turning on microphone");
        const stream = await navigator.mediaDevices.getUserMedia({audio: true});
        this.localAudioTrack = stream.getAudioTracks()[0];
        const audioSender = this.peerConnection.getSenders().find(e => e.track?.kind === 'audio');
        if (audioSender == null) {
            console.log("Initiating audio sender");
            this.peerConnection.addTrack(this.localAudioTrack); // will create a sender; the streamless track must be handled on the other side
        } else {
            console.log("Updating audio sender");
            await audioSender.replaceTrack(this.localAudioTrack); // replaceTrack does it gently, no new negotiation is triggered
        }
    } else {
        console.log("Turning off microphone");
        this.localAudioTrack.stop(); // this turns off the mic and makes sure you don't have an active on-air indicator
    }
}
This is simplified code. It solves most of the issues described in this topic.
In the attached example, I load BVH and audio data together into a player. It's actually about an animated face that speaks. The example here is comparable. Because voice and animation have to run synchronously, I have to load both before I can play. Now I'm looking for a solution where everything is loaded first, before playback can start. The loading status should also be displayed. Can someone help me? Many thanks.
jsfiddle.net/uj740958/
Simple solution: you can just use two booleans that switch to true when the audio and the BVH are loaded, something like:
let audioLoaded = false,
    bvhLoaded = false

audioLoader.load(url, () => {
    audioLoaded = true;
    play()
})

bvhLoader.load(url, () => {
    bvhLoaded = true;
    play()
})

const play = () => {
    if (!audioLoaded || !bvhLoaded) return console.log("Not everything is loaded!")
    console.log("Everything is loaded!")
    audio.play()
    video.play()
}
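For the loading-status part of the question, a possible sketch, assuming the loaders are the three.js AudioLoader/BVHLoader used in the fiddle, which accept an onProgress callback as the third argument:
audioLoader.load(url, () => {
    audioLoaded = true;
    play()
}, (xhr) => {
    // xhr.loaded / xhr.total report download progress for this file
    console.log(`Audio ${Math.round((xhr.loaded / xhr.total) * 100)}% loaded`)
})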
EDIT
Okay, so I have updated your JSFiddle: http://jsfiddle.net/nLvc2www/2/
Open your console
Click load_clip_1 or load_clip_2
Watch "Audio loaded!" then "BVH loaded!" show up in the console
Click play
Enjoy :)