Set name to Track - javascript

Is there any way to put a name on an Agora track? I need to differentiate whether it's a screen-share video track or not.
const videoTrack = stream.getTracks()[0];
this.rtc.screenVideoTrack = AgoraRTC.createCustomVideoTrack({
  mediaStreamTrack: videoTrack,
});
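One workaround, sketched under the assumption that createCustomVideoTrack does not accept a name field: the returned track is an ordinary JavaScript object, so you can tag it yourself and check that tag wherever you need to tell a screen-share track from a camera track. The customName property below is my own convention, not part of the Agora SDK:
const videoTrack = stream.getTracks()[0];
const screenTrack = AgoraRTC.createCustomVideoTrack({
  mediaStreamTrack: videoTrack,
});
screenTrack.customName = 'screen-share'; // hypothetical tag, a plain JS property

// later, when deciding how to render a local track:
function isScreenShare(track) {
  return track.customName === 'screen-share';
}
Note the tag only exists on your side of the call; if remote users need to know a track is a screen share, you would have to tell them over your own signaling (for example a data message keyed by the track's ID).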

Related

How to dynamically create and play audio with "AudioContext" in a web browser (using PyScript)?

Using the JavaScript AudioContext interface I want to create an audio stream that continuously plays a dynamically created 1-second-long waveform. That waveform is supposed to be updated when I change a slider on the HTML page, etc.
So basically I want to feed in a vector containing 44100 floats that represents that 1-second long waveform.
So far I have
const audio = new AudioContext({
  latencyHint: "interactive",
  sampleRate: 44100,
});
but I am not sure how to feed it the vector/list/data structure containing my actual waveform.
Hint: I want to add audio to this PyScript example.
This example might help you: it generates a random float array and plays it (I think). You can click "go" multiple times to generate a new wave.
WARNING: BE CAREFUL WITH THE VOLUME IF YOU RUN THIS. IT CAN BE DANGEROUSLY LOUD!
function makeWave(audioContext) {
  const floats = []
  for (let i = 44000; i--;) floats.push(Math.random() * 2 - 1)
  console.log("New waveform done")
  const sineTerms = new Float32Array(floats)
  const cosineTerms = new Float32Array(sineTerms.length)
  const customWaveform = audioContext.createPeriodicWave(cosineTerms, sineTerms)
  return customWaveform
}

let audioCtx, oscillator, gain, started = false

document.querySelector("#on").addEventListener("click", () => {
  // Initialize only once
  if (!gain) {
    audioCtx = new AudioContext() // Controls speakers
    gain = audioCtx.createGain() // Controls volume
    oscillator = audioCtx.createOscillator() // Controls frequency
    gain.connect(audioCtx.destination) // connect gain node to speakers
    oscillator.connect(gain) // connect oscillator to gain
    oscillator.start()
  }
  const customWaveform = makeWave(audioCtx)
  oscillator.setPeriodicWave(customWaveform)
  gain.gain.value = 0.02 // ☠🕱☠ CAREFUL WITH VOLUME ☠🕱☠
})

document.querySelector("#off").addEventListener("click", () => {
  gain.gain.value = 0
})
<b>SET VOLUME VERY LOW BEFORE TEST</b>
<button id="on">GO</button>
<button id="off">STOP</button>
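Another way to approach this, sketched here rather than taken from the answer above: createPeriodicWave interprets its arrays as Fourier coefficients, not time-domain samples, so if the goal is literally to play a vector of 44100 floats as one second of audio, an AudioBuffer looped by an AudioBufferSourceNode maps onto it more directly. myFloats below stands for your own Float32Array of samples:
const ctx = new AudioContext({ sampleRate: 44100 });

// 1 channel, 44100 frames at 44100 Hz = exactly one second of audio
const buffer = ctx.createBuffer(1, 44100, 44100);
buffer.copyToChannel(myFloats, 0); // myFloats: your Float32Array of 44100 samples

const source = ctx.createBufferSource();
source.buffer = buffer;
source.loop = true; // repeat the 1-second waveform continuously
source.connect(ctx.destination);
source.start(); // browsers may require a prior user gesture / ctx.resume()
When a slider changes, the simplest robust approach is to stop this source, fill a fresh buffer with the new samples, and start a new AudioBufferSourceNode; source nodes are cheap, one-shot objects.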

Use separate channels or streams for emit in socket.io

I have a multi-client chat application where clients can share both text and images.
But I'm facing an issue: when a user sends an image that is quite large and sends a text message just after it, the other users have to wait until the image is fully received.
Is there a way to emit and receive the text and image data separately? For example, the text is received while the image is still being received.
Currently I'm using one emitter for both the text and image.
socket.emit('message', data, type, uId, msgId);
And if I have to use another protocol like UDP or WebRTC, which one would be best? As far as I know, UDP cannot be used in browser scripts.
So, what I've figured out is:
Split the large image data into many (e.g. 100) parts, then use socket.emit():
let image = new Image();
image.src = selectedImage;
image.onload = async function() {
  let resized = resizeImage(image, image.mimetype); // resize it before sending
  // send the image in 100 parts
  let partSize = resized.length / 100;
  let partArray = [];
  socket.emit('fileUploadStart', 'image', resized.length, id);
  for (let i = 0; i < resized.length; i += partSize) {
    partArray.push(resized.substring(i, i + partSize));
    socket.emit('fileUploadStream', resized.substring(i, i + partSize), id);
    await sleep(100); // wait a bit so other events can be sent while the image is being sent
  }
  socket.emit('fileUploadEnd', id);
};
Then finally gather the image parts back together.
I've used Map() on the server side and it's pretty fast.
Server:
const imageDataMap = new Map();

socket.on('fileUploadStart', (...) => {
  imageDataMap.set(id, { metaData, data: '' }); // start with an empty string so chunks can be appended
});

socket.on('fileUploadStream', (...) => {
  imageDataMap.get(id).data += chunk;
});

socket.on('fileUploadEnd', (...) => {
  // emit the data to other clients
});
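A minimal sketch of how that last handler could forward the image; the imageMessage event name and the broadcast call are assumptions, not from the original:
socket.on('fileUploadEnd', (id) => {
  const entry = imageDataMap.get(id);
  // forward the reassembled image to everyone else
  socket.broadcast.emit('imageMessage', { id, data: entry.data, meta: entry.metaData });
  imageDataMap.delete(id); // free the memory once delivered
});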

Mixing Audio elements into one stream destination for use with MediaRecorder

MediaRecorder only lets you record media of one type per track. So I'm using jQuery to get a list of all audio elements and connect them to the same audio context destination in order to mix all the audio tracks into one audio stream to later be recorded by MediaRecorder. What I have so far works but only captures the first track and none of the others.
Any idea why only one track comes through?
my code:
function gettracks(stream){
  var i = 0;
  var audioTrack;
  var audioctx = new AudioContext();
  var SourceNode = [];
  var dest = audioctx.createMediaStreamDestination();
  $('audio').each(function() {
    //the audio element id
    var afid = $(this).attr('id');
    var audio = $('#' + afid)[0];
    console.log('audio id ' + afid + ' Audio= ' + audio);
    SourceNode[i] = audioctx.createMediaElementSource(audio);
    //dont forget to connect the wires!
    SourceNode[i].connect(audioctx.destination);
    SourceNode[i].connect(dest);
    audioTrack = dest.stream.getAudioTracks()[0];
    stream.addTrack(audioTrack);
    i++;
  });
}
//from a mousedown event I call
stream = canvas.captureStream();
video.srcObject = stream;
gettracks(stream);
startRecording()
function startRecording() {
  recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm'
  });
  recorder.start();
}
I would do it like this:
var ac = new AudioContext();
var mediaStreamDestination = new MediaStreamAudioDestinationNode(ac);
document.querySelectorAll("audio").forEach((e) => {
  var mediaElementSource = new MediaElementAudioSourceNode(ac, { mediaElement: e });
  mediaElementSource.connect(mediaStreamDestination);
});
console.log(mediaStreamDestination.stream.getAudioTracks()[0]); // give this to MediaRecorder
Breaking down what the above does:
var ac = new AudioContext();: create an AudioContext, to be able to route audio somewhere other than the default audio output.
var mediaStreamDestination = new MediaStreamAudioDestinationNode(ac);: from this AudioContext, get a special type of destination node that, instead of sending the output of the AudioContext to the audio output device, sends it to a MediaStream that holds a single track of audio.
document.querySelectorAll("audio").forEach((e) => {: get all the <audio> elements and iterate over them.
var mediaElementSource = new MediaElementAudioSourceNode(ac, { mediaElement: e });: for each of those media elements, capture its output and route it into the AudioContext. This gives you an AudioNode.
mediaElementSource.connect(mediaStreamDestination);: connect the AudioNode carrying the media element's output to the destination that feeds the MediaStream.
mediaStreamDestination.stream.getAudioTracks()[0]: get the first audio MediaStreamTrack from this MediaStream. It has only one anyway.
Now, I suppose you can do something like stream.addTrack(mediaStreamDestination.stream.getAudioTracks()[0]), passing in the audio track above.
What if you create a gain node and connect your source nodes to that:
var gain = audioctx.createGain();
gain.connect(dest);
and in the loop
SourceNode[i].connect(gain);
Then your sources flow into a single gain node, which flows to your destination.
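Putting the gain-node suggestion together with the question's loop, here is one way it could be wired up (a sketch; variable names follow the question's code, and stream is the canvas stream from the question):
var audioctx = new AudioContext();
var dest = audioctx.createMediaStreamDestination();
var gain = audioctx.createGain();
gain.connect(dest); // mixed signal -> MediaStream destination

$('audio').each(function () {
  var sourceNode = audioctx.createMediaElementSource(this); // 'this' is the <audio> element
  sourceNode.connect(audioctx.destination); // keep hearing it locally
  sourceNode.connect(gain); // mix it into the shared gain node
});

// add the single mixed audio track to the canvas stream once, outside the loop
stream.addTrack(dest.stream.getAudioTracks()[0]);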

RTCMultiConnection get back to current stream

I'm using the RTCMultiConnection v3 plugin and I can't figure out how to get the current stream reopened.
Scenario:
UserA opens his cam and the stream is automatically appended to $("#webcam"). The stream is open.
UserA wants to join UserB and calls connection.join("UserB") (assuming UserB also has a cam stream). The new stream is written with html() into $("#webcam"), so UserA's $("#webcam") is overridden by UserB's stream, but UserA's own stream is still live. So far so good.
Now I want to re-append UserA's own stream, as if UserA could do connection.join("UserA") on his own stream.
I hope someone knows how to do this?
I don't want to reopen the whole stream.
The person who opens a room must NOT join the same room.
You can access the videos (media streams) using the following methods:
1) if you know the "video-id" commonly named as "stream-id"
var streamEvent = connection.streamEvents['stream-id'];
var mediaStreamObject = streamEvent.stream;
var videoFreshURL = URL.createObjectURL(mediaStreamObject);
yourVideo.src = videoFreshURL;
2) if you know the remote user's unique ID, commonly named "userid"
var userPeer = connection.peers['user-id'].peer;
var allIncomingVideos = userPeer.getRemoteStreams();
var firstIncomingVideo = allIncomingVideos[0];
var firstVideoFreshURL = URL.createObjectURL(firstIncomingVideo);
yourVideo.src = firstVideoFreshURL;
3) store event video in your own object
window.yourOwnGlobalObject = {};

connection.onstream = function(event) {
  window.yourOwnGlobalObject[event.userid] = event;
  document.body.appendChild(event.mediaElement);
};

function getUserXYZVideo(userid) {
  var userVideo = window.yourOwnGlobalObject[userid].mediaElement;
  // var userStream = window.yourOwnGlobalObject[userid].stream;
  return userVideo;
}
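Tying method 3 back to the question, a small sketch of how UserA could put his own, still-live video back into $("#webcam") without reopening the camera. connection.userid is RTCMultiConnection's ID for the local user; the #webcam selector comes from the question:
function showMyOwnVideoAgain() {
  var myEvent = window.yourOwnGlobalObject[connection.userid];
  if (myEvent) {
    // re-append the existing <video> element; the underlying stream was never closed
    $('#webcam').empty().append(myEvent.mediaElement);
  }
}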

Using ChannelSplitter and ChannelMerger nodes in Web Audio API

I am attempting to use a ChannelSplitter node to send an audio signal into both a ChannelMerger node and to the destination, and then trying to use the ChannelMerger node to merge two different audio signals (one from the split source, one from the microphone using getUserMedia) into a recorder using Recorder.js.
I keep getting the following error: "Uncaught SyntaxError: An invalid or illegal string was specified."
The error is at the following line of code:
audioSource.splitter.connect(merger);
Where audioSource is an instance of ThreeAudio.Source from the library ThreeAudio.js, splitter is a channel splitter I instantiated myself by modifying the prototype, and merger is my merger node. The code that precedes it is:
merger = context.createChannelMerger(2);
userInput.connect(merger);
Where userInput is the stream from the user's microphone. That one connects without throwing an error. Sound is getting from the audioSource to the destination (I can hear it), so it doesn't seem like the splitter is necessarily wrong - I just can't seem to connect it.
Does anyone have any insight?
I was struggling to understand the ChannelSplitterNode and ChannelMergerNode API. Finally I found the missing part: the 2nd and 3rd optional parameters of the connect() method - the output and input channel indexes.
connect(destinationNode: AudioNode, output?: number, input?: number): AudioNode;
When using the connect() method with splitter or merger nodes, specify the output/input channel. This is how you split and merge the audio data.
You can see in this example how I load audio data, split it into 2 channels, and control the left/right output. Notice the 2nd and 3rd parameters of the connect() method:
const audioUrl = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/858/outfoxing.mp3";
const audioElement = new Audio(audioUrl);
audioElement.crossOrigin = "anonymous"; // cross-origin - if file is stored on remote server
const audioContext = new AudioContext();
const audioSource = audioContext.createMediaElementSource(audioElement);
const volumeNodeL = new GainNode(audioContext);
const volumeNodeR = new GainNode(audioContext);
volumeNodeL.gain.value = 2;
volumeNodeR.gain.value = 2;
const channelsCount = 2; // or read from: 'audioSource.channelCount'
const splitterNode = new ChannelSplitterNode(audioContext, { numberOfOutputs: channelsCount });
const mergerNode = new ChannelMergerNode(audioContext, { numberOfInputs: channelsCount });
audioSource.connect(splitterNode);
splitterNode.connect(volumeNodeL, 0); // connect OUTPUT channel 0
splitterNode.connect(volumeNodeR, 1); // connect OUTPUT channel 1
volumeNodeL.connect(mergerNode, 0, 0); // connect INPUT channel 0
volumeNodeR.connect(mergerNode, 0, 1); // connect INPUT channel 1
mergerNode.connect(audioContext.destination);
let isPlaying;

function playPause() {
  // check if context is in suspended state (autoplay policy)
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }
  isPlaying = !isPlaying;
  if (isPlaying) {
    audioElement.play();
  } else {
    audioElement.pause();
  }
}

function setBalance(val) {
  volumeNodeL.gain.value = 1 - val;
  volumeNodeR.gain.value = 1 + val;
}
<h3>Try using headphones</h3>
<button onclick="playPause()">play/pause</button>
<br><br>
<button onclick="setBalance(-1)">Left</button>
<button onclick="setBalance(0)">Center</button>
<button onclick="setBalance(+1)">Right</button>
P.S.: The audio track isn't a real stereo track, but left and right copies of the same mono playback. You can try this example with a real stereo recording for a real balance effect.
Here's some working splitter/merger code that creates a ping-pong delay - that is, it sets up separate delays on the L and R channels of a stereo signal, and crosses over the feedback. This is from my input effects demo on webaudiodemos.appspot.com (code on github).
var merger = context.createChannelMerger(2);
var leftDelay = context.createDelay();
var rightDelay = context.createDelay();
var leftFeedback = context.createGain();
var rightFeedback = context.createGain();
var splitter = context.createChannelSplitter(2);
// Split the stereo signal.
splitter.connect( leftDelay, 0 );
// If the signal is dual copies of a mono signal, we don't want the right channel -
// it will just sound like a mono delay. If it was a real stereo signal, we do want
// it to just mirror the channels.
if (isTrueStereo)
splitter.connect( rightDelay, 1 );
leftDelay.delayTime.value = delayTime;
rightDelay.delayTime.value = delayTime;
leftFeedback.gain.value = feedback;
rightFeedback.gain.value = feedback;
// Connect the routing - left bounces to right, right bounces to left.
leftDelay.connect(leftFeedback);
leftFeedback.connect(rightDelay);
rightDelay.connect(rightFeedback);
rightFeedback.connect(leftDelay);
// Re-merge the two delay channels into stereo L/R
leftFeedback.connect(merger, 0, 0);
rightFeedback.connect(merger, 0, 1);
// Now connect your input to "splitter", and connect "merger" to your output destination.
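For completeness, a small sketch of that final hook-up; inputNode stands for whatever source node you already have (for example a MediaStreamAudioSourceNode created from a getUserMedia stream):
inputNode.connect(splitter); // feed the stereo signal into the ping-pong delay
merger.connect(context.destination); // hear the delayed stereo result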
