How to record aux-in audio using Electron? - javascript

In an Electron application that I'm working on, I'm trying to record the audio that comes in through my computer's auxiliary input (3.5mm jack). I don't want to pick up any audio through the built-in microphone.
I was running the following code to record five seconds of audio. It works fine, but instead of recording the aux-in, it records the sound from the microphone.
const fs = require("fs");

function fiveSecondAudioRecorder() {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
      const mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.start();

      const chunks = [];
      mediaRecorder.addEventListener("dataavailable", event => {
        chunks.push(event.data);
      });

      mediaRecorder.addEventListener("stop", () => {
        const blob = new Blob(chunks, { type: "audio/ogg" });
        blob.arrayBuffer().then(arrayBuffer => {
          const buffer = Buffer.from(arrayBuffer);
          fs.writeFile("recorded-audio.ogg", buffer, error => {
            if (error) {
              console.error("Failed to save recorded audio to disk", error);
            } else {
              console.log("Recorded audio was saved to disk");
            }
          });
        });
      });

      // Stop recording after five seconds.
      setTimeout(() => {
        mediaRecorder.stop();
      }, 5000);
    })
    .catch(error => {
      console.error("Failed to access microphone", error);
    });
}
fiveSecondAudioRecorder()
I plugged in a 3.5mm jack with an active audio source and expected this to automatically switch the audio input. However, that is not the case, and I still only pick up the sounds from my microphone. Hence my question: how can I specifically record my aux-in?
Thanks a lot for your help.
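For reference, getUserMedia({ audio: true }) simply takes the default input device. The available inputs can be listed with navigator.mediaDevices.enumerateDevices() and a specific one requested through a deviceId constraint. Below is a minimal sketch; the label match for the aux-in is an assumption, since how the line-in shows up depends on the OS and sound hardware, and device labels are only populated once the page has microphone permission.

// Sketch: record from a specific input device instead of the default one.
// The /aux|line/ label test is an assumption - log the device list and
// adjust it to match how the aux-in is actually named on your system.
async function getAuxInStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const auxIn = devices.find(
    d => d.kind === "audioinput" && /aux|line/i.test(d.label)
  );
  if (!auxIn) {
    throw new Error("No aux-in input device found");
  }
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: auxIn.deviceId } }
  });
}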

Related

How can I convert opus packets to mp3/wav

I created a Discord bot with discord.js v13. I'm having trouble converting the opus packets to other file types, and even the official discord.js examples haven't been updated for v13, so I have no idea how to deal with it. Here is part of my code:
async function record(interaction, opts = {}) {
  // get voice connection, if there isn't one, create one
  let connection = getVoiceConnection(interaction.guildId);
  if (!connection) {
    if (!interaction.member.voice.channel) return false;
    connection = joinVoice(interaction.member.voice.channel, interaction)
  }
  const memberId = interaction.member.id;
  // create the stream
  const stream = connection.receiver.subscribe(memberId, {
    end: {
      behavior: EndBehaviorType.Manual
    }
  });
  // create the file stream
  const writableStream = fs.createWriteStream(`${opts.filename || interaction.guild.name}.${opts.format || 'opus'}`);
  console.log('Created the streams, started recording');
  // todo: set the stream into client and stop it in another function
  return setTimeout(() => {
    console.log('Creating the decoder')
    let decoder = new prism.opus.Decoder();
    console.log('Created');
    stream.destroy();
    console.log('Stopped recording and saving the stream');
    stream
      .pipe(writableStream)
    stream.on('close', () => {
      console.log('Data Stream closed')
    });
    stream.on('error', (e) => {
      console.error(e)
    });
  }, 5000);
}
Try setting frameSize, channels and rate for the Decoder:
const opusDecoder = new prism.opus.Decoder({
  frameSize: 960,
  channels: 2,
  rate: 48000,
})
Also, not sure if it is intended, but you seem to destroy the stream just before you pipe it into the writable stream.
Here is my example that gives a stereo, 48 kHz, signed 16-bit PCM stream:
const writeStream = fs.createWriteStream('samples/output.pcm')
const listenStream = connection.receiver.subscribe(userId)
const opusDecoder = new prism.opus.Decoder({
  frameSize: 960,
  channels: 2,
  rate: 48000,
})
listenStream.pipe(opusDecoder).pipe(writeStream)
You can then use Audacity to play the PCM file. Use File -> Import -> Raw Data...
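Since the question asked for mp3/wav, one way to get there from the raw PCM is to run the file through ffmpeg afterwards. A rough sketch, assuming ffmpeg is installed and on the PATH (the file names are placeholders):

// Sketch: wrap the raw PCM capture in a WAV container with ffmpeg.
// The -f/-ar/-ac flags must match the Decoder options above
// (signed 16-bit little-endian, 48 kHz, 2 channels).
const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  '-f', 's16le',
  '-ar', '48000',
  '-ac', '2',
  '-i', 'samples/output.pcm',
  'samples/output.wav',
]);

ffmpeg.on('close', code => console.log(`ffmpeg exited with code ${code}`));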

Adding screen share option to WebRTC app, unable to get it on other users

I was adding screen share functionality to my app, but it is not working. It only shows the screen share on my side, not for the other user.
Here is the code:
try {
  navigator.mediaDevices
    .getDisplayMedia({
      video: true,
      audio: true
    })
    .then((stream) => {
      const video1 = document.createElement("video");
      video1.controls = true;
      addVideoStream(video1, stream);
      socket.on("user-connected", (userId) => {
        const call = peer.call(userId, stream);
        stream.getVideoTracks()[0].addEventListener("ended", () => {
          video1.remove();
        });
        call.on("close", () => {});
      });
      stream.getVideoTracks()[0].addEventListener("ended", () => {
        video1.remove();
      });
    });
} catch (err) {
  console.log("Error: " + err);
}
The issue could be related to signaling, and that depends on each project.
You could start from a working example that streams the webcam/microphone and then switch the source to the screen.
In this HTML5 Live Streaming example, you can switch the source between camera and desktop - the transmission is the same. So you could achieve something similar by starting from an example for camera streaming and testing that first.
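If the camera stream already reaches the other side, one common way to switch to the screen without renegotiating the call from scratch is RTCRtpSender.replaceTrack(). A sketch only; pc is a placeholder for the underlying RTCPeerConnection of the call, which you would need to get hold of from your peer library.

// Sketch: swap the outgoing camera track for a screen-share track on an
// existing connection. `pc` is a placeholder for the RTCPeerConnection.
async function switchToScreenShare(pc) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];
  const sender = pc.getSenders().find(s => s.track && s.track.kind === "video");
  if (sender) {
    await sender.replaceTrack(screenTrack); // no new offer/answer needed
  }
  // When the user stops sharing, switch back to the camera track here.
  screenTrack.addEventListener("ended", () => { /* restore camera track */ });
}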

Streaming ffmpeg output directly to dispatcher

I'm trying to play music with my Discord bot, and I want to use ffmpeg to specify the starting point of the music, which works perfectly fine. But at the moment I can only download the music with ffmpeg and then play it; I want ffmpeg to process the audio and also stream it directly so the music can be played.
Here is the code I use to download and then play the music:
message.member.voiceChannel.join().then((con, err) => {
  ytPlay.search_video(op, (id) => {
    let stream = ytdl("https://www.youtube.com/watch?v=" + id, {
      filter: "audioonly"
    });
    let audio = fs.createWriteStream('opStream.divx');
    proc = new ffmpeg({
      source: stream
    })
    proc.withAudioCodec('libmp3lame')
      .toFormat('mp3')
      .seekInput(35)
      .output(audio)
      .run();
    proc.on('end', function() {
      let input = fs.createReadStream('opStream.divx');
      console.log('finished');
      guild.queue.push(id);
      guild.isPlaying = true;
      guild.dispatcher = con.playStream(input);
    });
  });
})
Is it possible to do what I want, and if so, how?
Instead of using ffmpeg to specify the starting point of the music, you could use the seek StreamOptions of discord.js, like:
const dispatcher = connection.play(ytdl(YOUR_URL, { filter: 'audioonly' }) , {seek:35})
This worked pretty well for me.
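If you do want to keep ffmpeg in the pipeline (for example to transcode), fluent-ffmpeg can also stream its output instead of writing a file: calling .pipe() with no arguments returns a PassThrough stream. A sketch based on the code in the question; con, ytdl, id and guild are assumed to be the same variables as above.

// Sketch: stream the ffmpeg output directly to the dispatcher instead of
// writing opStream.divx to disk first.
let stream = ytdl("https://www.youtube.com/watch?v=" + id, { filter: "audioonly" });
let proc = new ffmpeg({ source: stream })
  .withAudioCodec('libmp3lame')
  .toFormat('mp3')
  .seekInput(35);
// pipe() without arguments returns a PassThrough stream of the encoded output
let output = proc.pipe();
guild.dispatcher = con.playStream(output);
guild.isPlaying = true;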
Yes, it is possible; I did it in my bot.
First of all, you need to install ytdl-core.
Then create a play.js file where the streaming function will live.
This code will take the YouTube URL and stream it without downloading the song, add the song to a queue, and make the bot leave when the song is finished.
Edit the code for what you need.
const ytdl = require('ytdl-core');

exports.run = async (client, message, args, ops) => {
  if (!message.member.voiceChannel) return message.channel.send('You are not connected to a voice channel!');
  if (!args[0]) return message.channel.send('Insert a URL!');
  let validate = await ytdl.validateURL(args[0]);
  let info = await ytdl.getInfo(args[0]);
  let data = ops.active.get(message.guild.id) || {};
  if (!data.connection) data.connection = await message.member.voiceChannel.join();
  if (!data.queue) data.queue = [];
  data.guildID = message.guild.id;
  data.queue.push({
    songTitle: info.title,
    requester: message.author.tag,
    url: args[0],
    announceChannel: message.channel.id
  });
  if (!data.dispatcher) play(client, ops, data);
  else {
    message.channel.send(`Added to queue: ${info.title} | requested by: ${message.author.tag}`)
  }
  ops.active.set(message.guild.id, data);
}

async function play(client, ops, data) {
  client.channels.get(data.queue[0].announceChannel).send(`Now Playing: ${data.queue[0].songTitle} | Requested by: ${data.queue[0].requester}`);
  client.user.setActivity(`${data.queue[0].songTitle}`, {type: "LISTENING"});
  data.dispatcher = await data.connection.playStream(ytdl(data.queue[0].url, {filter: 'audioonly'}));
  data.dispatcher.guildID = data.guildID;
  data.dispatcher.once('end', function() {
    end(client, ops, this);
  });
}

function end(client, ops, dispatcher) {
  let fetched = ops.active.get(dispatcher.guildID);
  fetched.queue.shift();
  if (fetched.queue.length > 0) {
    ops.active.set(dispatcher.guildID, fetched);
    play(client, ops, fetched);
  } else {
    ops.active.delete(dispatcher.guildID);
    let vc = client.guilds.get(dispatcher.guildID).me.voiceChannel;
    if (vc) vc.leave();
  }
}

module.exports.help = {
  name: "play"
}

Audio stream from cordova-plugin-audioinput to Google Speech API

For a cross-platform app project using the Meteor framework, I'd like to record microphone input and extract speech from it with the Google Speech API.
Following the Google documentation, I'm specifically trying to build an audio stream to feed the Google Speech client.
On client side, a recording button triggers the following startCapture function (based on cordova audioinput plugin):
export var startCapture = function () {
  try {
    if (window.audioinput && !audioinput.isCapturing()) {
      setTimeout(stopCapture, 20000);
      var captureCfg = {
        sampleRate: 16000,
        bufferSize: 2048,
      }
      audioinput.start(captureCfg);
    }
  }
  catch (e) {
  }
}
audioinput events allow me to get chunks of audio data as it is recorded:
window.addEventListener('audioinput', onAudioInputCapture, false);

var audioDataQueue = [];

function onAudioInputCapture(evt) {
  try {
    if (evt && evt.data) {
      // Push the data to the audio queue (array)
      audioDataQueue.push(evt.data);
      // Here should probably be a call to a Meteor server method?
    }
  }
  catch (e) {
  }
}
I'm struggling to convert the recorded audio data into a ReadableStream that I could pipe to the Google Speech API client on the server side.
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

const request = {
  config: {
    encoding: "LINEAR16",
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  },
  interimResults: true,
};

export const recognizeStream = client
  .streamingRecognize(request)
  .on('error', console.error)
  .on('data', data =>
    console.log(data.results)
  );
I tried the following approach, but it doesn't feel like the right way to proceed:
const Stream = require('stream')

var serverAudioDataQueue = [];

const readable = new Stream.Readable({
  objectMode: true,
});

readable._read = function(n) {
  this.push(audioDataQueue.splice(0, audioDataQueue.length))
}

readable.pipe(recognizeStream);

Meteor.methods({
  'feedGoogleSpeech': function(data) {
    data.forEach(item => serverAudioDataQueue.push(item));
  },
  ...
});
Any insight on this?
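One direction that may be cleaner (a sketch only, not a tested Meteor integration): keep a server-side PassThrough stream piped into recognizeStream, and have the Meteor method write each incoming chunk to it. The conversion below assumes the chunks are arrays of 16-bit PCM samples matching the LINEAR16, 16 kHz config above; check what cordova-plugin-audioinput actually delivers and normalize if needed.

// Sketch: server side. Pipe a PassThrough stream into the Google Speech
// client and write each chunk received from the client into it.
// Assumes each chunk is an array of 16-bit PCM samples (LINEAR16, 16 kHz).
const { PassThrough } = require('stream');

const audioStream = new PassThrough();
audioStream.pipe(recognizeStream);

Meteor.methods({
  'feedGoogleSpeech': function (chunk) {
    const buffer = Buffer.from(Int16Array.from(chunk).buffer);
    audioStream.write(buffer);
  },
});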

Can a MediaStream be used immediately after getUserMedia() returns?

I'm trying to capture the audio from a website user's phone, and transmit it to a remote RTCPeerConnection.
Assume that I have a function to get the local MediaStream:
function getLocalAudioStream(): Promise<*> {
  const devices = navigator.mediaDevices;
  if (!devices) {
    return Promise.reject(new Error('[webrtc] Audio is not supported'));
  } else {
    return devices
      .getUserMedia({
        audio: true,
        video: false,
      })
      .then(function(stream) {
        return stream;
      });
  }
}
The following code works fine:
// variable is in 'global' scope
var LOCAL_STREAM: any = null;

// At application startup:
getLocalAudioStream().then(function(stream) {
  LOCAL_STREAM = stream;
});

...

// Later, when the peer connection has been established:
// `pc` is an RTCPeerConnection
LOCAL_STREAM.getTracks().forEach(function(track) {
  pc.addTrack(track, LOCAL_STREAM);
});
However, I don't want to have to keep a MediaStream open the whole time, and I would like to delay fetching the stream until it is needed, so I tried this:
getLocalAudioStream().then(function(localStream) {
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
});
This does not work (the other end does not receive the sound).
I tried keeping the global variable around, in case of a weird scoping / garbage collection issue:
// variable is in 'global' scope
var LOCAL_STREAM: any = null;

getLocalAudioStream().then(function(localStream) {
  LOCAL_STREAM = localStream;
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
});
What am I missing here ?
Is there a delay to wait between the moment the getUserMedia promise resolves and the moment the stream can be added to an RTCPeerConnection? Or is there a specific event I can wait for?
-- EDIT --
As @kontrollanten suggested, I made it work under Chrome by resetting the local description of the RTCPeerConnection:
getLocalAudioStream().then(function(localStream) {
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
  pc
    .createOffer({
      voiceActivityDetection: false,
    })
    .then(offer => {
      return pc.setLocalDescription(offer);
    })
});
However:
it does not work on Firefox
I must still be doing something wrong, because I cannot stop the capture when I want to hang up:
I tried stopping with:
getLocalAudioStream().then(stream => {
  stream.getTracks().forEach(track => {
    track.stop();
  });
});
No, there's no such delay. As soon as you have the media returned, you can send it to the RTCPeerConnection.
In your example
getLocalAudioStream().then(function(localStream) {
  pc.addTrack(track, localStream);
});
it's unclear how track is defined. Can it be that it's undefined?
Why can't you go with the following?
getLocalAudioStream()
  .then(function (stream) {
    stream
      .getTracks()
      .forEach(function(track) {
        pc.addTrack(track, stream);
      });
  });
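A related pattern worth knowing when tracks are added after the initial offer (a sketch only; sendToRemotePeer is a placeholder for whatever signaling channel the application uses): let the negotiationneeded event drive renegotiation, so every addTrack call results in a fresh offer being sent to the other side.

// Sketch: renegotiate whenever a track is added to an existing connection.
// `pc` is the RTCPeerConnection; sendToRemotePeer() stands in for the
// application's signaling channel.
pc.addEventListener('negotiationneeded', async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer({ description: pc.localDescription });
});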
