Adding screen share option to WebRTC app, unable to get it on other users - javascript

I was adding screen share functionality to my app, but it is not working. It only shows the screen share on my side, not on the other user's side.
here is code :
try {
  navigator.mediaDevices
    .getDisplayMedia({
      video: true,
      audio: true
    })
    .then((stream) => {
      const video1 = document.createElement("video");
      video1.controls = true;
      addVideoStream(video1, stream);
      socket.on("user-connected", (userId) => {
        const call = peer.call(userId, stream);
        stream.getVideoTracks()[0].addEventListener("ended", () => {
          video1.remove();
        });
        call.on("close", () => {});
      });
      stream.getVideoTracks()[0].addEventListener("ended", () => {
        video1.remove();
      });
    });
} catch (err) {
  console.log("Error: " + err);
}

The issue could be related to signaling, and that depends on each project.
You could start from a working example that streams the webcam/microphone and then switch the source to the screen.
In this HTML5 Live Streaming example, you can switch the source between camera and desktop - the transmission is the same. So you could achieve something similar by starting from an example for camera streaming and testing that first.
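As an illustration only, here is a minimal sketch of switching an outgoing camera source to the screen on an already established connection. It assumes a standard RTCPeerConnection named pc that is already sending a camera video track (the question uses PeerJS, so adapt accordingly):

// Sketch: replace the camera track already being sent with a screen track,
// so the existing connection keeps transmitting without a new call.
async function switchToScreenShare(pc) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];

  // Find the sender that currently transmits video and swap its track.
  const sender = pc.getSenders().find((s) => s.track && s.track.kind === "video");
  if (sender) {
    await sender.replaceTrack(screenTrack);
  }

  // When the user stops sharing, you could switch back to the camera here.
  screenTrack.addEventListener("ended", () => {
    console.log("Screen sharing stopped");
  });
}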

Related

How to record aux-in audio using Electron?

In an Electron application that I'm working on, I'm trying to record the audio that I get through the auxiliary input (3.5mm jack) on my computer. I don't want to pick up any audio through the built-in microphone.
I was running the following code in order to record five seconds of audio. It works fine, but instead of recording the aux-in, it records the sound from the microphone.
const fs = require("fs");

function fiveSecondAudioRecorder() {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
      const mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.start();
      const chunks = [];
      mediaRecorder.addEventListener("dataavailable", event => {
        chunks.push(event.data);
      });
      mediaRecorder.addEventListener("stop", () => {
        const blob = new Blob(chunks, { type: "audio/ogg" });
        blob.arrayBuffer().then(arrayBuffer => {
          const buffer = Buffer.from(arrayBuffer);
          fs.writeFile("recorded-audio.ogg", buffer, error => {
            if (error) {
              console.error("Failed to save recorded audio to disk", error);
            } else {
              console.log("Recorded audio was saved to disk");
            }
          });
        });
      });
      setTimeout(() => {
        mediaRecorder.stop();
      }, 5000);
    })
    .catch(error => {
      console.error("Failed to access microphone", error);
    });
}

fiveSecondAudioRecorder();
I plugged in a 3.5mm jack with an active audio source and expected this to automatically switch the audio input. However, this is not the case and I only manage to pick up the sounds from my microphone. Hence my question: How can I specifically record my aux-in?
Thanks a lot for your help.
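A sketch of one possible approach, assuming the aux-in shows up as a separate audio input device: enumerate the inputs and pass its deviceId as a constraint instead of audio: true. The label check below is an assumption - actual labels depend on the OS and audio hardware:

// Sketch only: pick a specific input device instead of the default microphone.
async function getAuxInStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const auxIn = devices.find(
    d => d.kind === "audioinput" && /aux|line in/i.test(d.label)
  );
  if (!auxIn) {
    throw new Error("No aux-in device found");
  }
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: auxIn.deviceId } }
  });
}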

Interaction of a Chrome extension based on a React TSX UI with the Chrome API

I'm attempting to build an extension which contains a form and an option to capture the screen with desktopCapture, which looks like this:
The form is written in React TypeScript, and the code for capturing the screen (taken from here) is the following:
chrome.runtime.onMessage.addListener(
  (message, sender, senderResponse) => {
    if (message.name === "stream" && message.streamId) {
      let track, canvas;
      navigator.mediaDevices
        .getUserMedia({
          video: {
            mandatory: {
              chromeMediaSource: "desktop",
              chromeMediaSourceId: message.streamId,
            },
          },
        })
        .then((stream) => {
          track = stream.getVideoTracks()[0];
          const imageCapture = new ImageCapture(track);
          return imageCapture.grabFrame();
        })
        .then((bitmap) => {
          track.stop();
          canvas = document.createElement("canvas");
          canvas.width = bitmap.width;
          canvas.height = bitmap.height;
          let context = canvas.getContext("2d");
          context.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height);
          return canvas
            .toDataURL()
            .then((url) => {
              //TODO download the image from the URL
              chrome.runtime.sendMessage(
                { name: "download", url },
                (response) => {
                  if (response.success) {
                    alert("Screenshot saved");
                  } else {
                    alert("Could not save screenshot");
                  }
                  canvas.remove();
                  senderResponse({ success: true });
                }
              );
            })
            .catch((err) => {
              alert("Could not take screenshot");
              senderResponse({ success: false, message: err });
            });
        });
    }
    return true;
  }
);
My intention is that when the user clicks on "take screen shot", the code above will run, and then, on save, the image will be presented in that box.
I was able to 'grab' the two elements: both the box where I wish the image to appear after the screenshot, and the "TAKE SCREEN SHOT" button.
As far as I'm aware, a content_script only injects into web pages (the browser) and has no access to the extension, therefore that's not the way to add the code.
What am I missing? How could I add an event listener so that if the button is clicked, the screen-capturing code runs, and I'll be able to set the box to show the captured image?
Best regards!
As I understand it, you want to take a screenshot of the tab's page content.
(I assume you don't need to grab playing video or audio content.)
Fix 1:
Use the chrome.tabs.captureVisibleTab API to capture the screenshot.
API link: chrome.tabs
Add this in background.js:
const takeShot = async (windowId) => {
  try {
    let imgUrl64 = await chrome.tabs.captureVisibleTab(windowId, { format: "jpeg", quality: 80 });
    console.log(imgUrl64);
    chrome.runtime.sendMessage({ msg: "update_screenshot", imgUrl64: imgUrl64 });
  } catch (error) {
    console.error(error);
  }
};

chrome.runtime.onMessage.addListener(async (req, sender, sendResponse) => {
  if (req.msg === "take_screenshot") takeShot(sender.tab.windowId);
});
Fix 2:
A content_script has limited API access.
Check this page to understand content script capabilities.
Solution:
Send a message from the content_script to the background script and ask it to capture the screenshot.
The background script captures the screenshot.
content.js:
chrome.runtime.sendMessage({ msg: "take_screenshot" });
popup.js:
chrome.runtime.onMessage.addListener(async (req, sender, sendResponse) => {
  if (req.msg === "update_screenshot") console.log(req.imgUrl64);
});

JavaScript - Web Bluetooth API GATT Error: Not supported

This question might have been asked by many people, but I have had no luck getting an answer from my research.
My ultimate plan is to run a web app with the Web Bluetooth API on a smartphone, with a FLIC button to control which audio plays. One click, play one audio clip.
I'm testing the program on my Mac laptop with my iPhone X first, because I'm thinking that if I can get the two of them connected, then when I run the web app on a smartphone I can connect to the FLIC button.
However, I got this error:
Something went wrong. NotSupportedError: GATT Error: Not supported.
Am I missing something? I saw someone mention that an iPhone cannot connect to a laptop; hopefully this is not true.
Below is the code:
$("#bluetooth").on("click", function(){
const controlServiceUUID = '00001805-0000-1000-8000-00805f9b34fb'; // Full UUID
const commandCharacteristicUUID = '00002a0f-0000-1000-8000-00805f9b34fb'; //
var myCharacteristic;
navigator.bluetooth.requestDevice({
acceptAllDevices: true,
optionalServices: [controlServiceUUID]
})
.then(device => {
console.log("Got device name: ", device.name);
console.log("id: ", device.id);
return device.gatt.connect();
console.log("Here");
})
.then(server => {
serverInstance = server;
console.log("Getting PrimaryService");
return server.getPrimaryService(controlServiceUUID);
})
.then(service => {
console.log("Getting Characteristic");
return service.getCharacteristic(commandCharacteristicUUID);
})
.then(characteristic => {
// 0x01,3,0x02,0x03,0x01
myCharacteristic = characteristic;
return myCharacteristic.startNotifications().then(_ => {
log('Notifications started');
myCharacteristic.addEventListener('characteristicvaluechanged', test);
});
})
.catch(function(error) {
console.log("Something went wrong. " + error);
});
function test(event) {
if (myCharacteristic) {
myCharacteristic.startNotifications()
.then(_ => {
console.log("Notification stopped!");
})
.catch(error => {
console.log("Argh!" + error);
});
}
}
});
The Web Bluetooth API is only available on ChromeOS and Android 6 or later with a flag option.
(https://developer.mozilla.org/en-US/docs/Web/API/Web_Bluetooth_API)
Different platforms are at different points in implementation. I have been using this repo for updates on the status of the API:
WebBluetoothCG/web-bluetooth
Note the lack of support for iOS.
Not sure if this fixes your problem (I'm working on Muse EEG), but one "hack" to get rid of this error is to wait some time (e.g. 500 ms) after each characteristic write. Most platforms don't support write responses yet, and writing multiple commands in parallel will cause this error.
https://github.com/WebBluetoothCG/web-bluetooth/blob/master/implementation-status.md
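A minimal sketch of that workaround (the 500 ms pause and helper names are illustrative, not from any specific library):

// Sketch only: pause after each GATT write so back-to-back writes don't
// trigger "GATT Error: Not supported" on platforms without write responses.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function writeWithPause(characteristic, value, pauseMs = 500) {
  await characteristic.writeValue(value); // standard Web Bluetooth call
  await delay(pauseMs);                   // give the device time to settle
}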
Is your command characteristic UUID incorrect? Try replacing it with one that can be written, for example:
const controlServiceUUID = 0xfff0; // Full UUID
const commandCharacteristicUUID = 0xfff4; //

Can a MediaStream be used immediately after getUserMedia() returns?

I'm trying to capture the audio from a website user's phone, and transmit it to a remote RTCPeerConnection.
Assume that I have a function to get the local MediaStream:
function getLocalAudioStream(): Promise<*> {
  const devices = navigator.mediaDevices;
  if (!devices) {
    return Promise.reject(new Error('[webrtc] Audio is not supported'));
  } else {
    return devices
      .getUserMedia({
        audio: true,
        video: false,
      })
      .then(function(stream) {
        return stream;
      });
  }
}
The following code works fine:
// variable is in 'global' scope
var LOCAL_STREAM: any = null;

// At application startup:
getLocalAudioStream().then(function(stream) {
  LOCAL_STREAM = stream;
});

...

// Later, when the peer connection has been established:
// `pc` is an RTCPeerConnection
LOCAL_STREAM.getTracks().forEach(function(track) {
  pc.addTrack(track, LOCAL_STREAM);
});
However, I don't want to have to keep a MediaStream open, and I would like to delay fetching the stream until later, so I tried this:
getLocalAudioStream().then(function(localStream) {
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
});
This does not work (the other end does not receive the sound).
I tried keeping the global variable around, in case of a weird scoping / garbage-collection issue:
// variable is in 'global' scope
var LOCAL_STREAM: any = null;

getLocalAudioStream().then(function(localStream) {
  LOCAL_STREAM = localStream;
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
});
What am I missing here?
Is there a delay to wait between the moment the getUserMedia promise resolves and the moment the stream can be added to an RTCPeerConnection? Or is there a specific event I can wait for?
-- EDIT --
As #kontrollanten suggested, I made it work under Chrome by resetting my local description of the RTCPeerConnection:
getLocalAudioStream().then(function(localStream) {
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
  pc
    .createOffer({
      voiceActivityDetection: false,
    })
    .then(offer => {
      return pc.setLocalDescription(offer);
    })
});
However:
it does not work on Firefox
I must still be doing something wrong, because I cannot stop the stream when I want to hang up.
I tried stopping with:
getLocalAudioStream().then(stream => {
  stream.getTracks().forEach(track => {
    track.stop();
  });
});
No, there's no such delay. As soon as you have the media returned, you can send it to the RTCPeerConnection.
In your example
getLocalAudioStream().then(function(localStream) {
  pc.addTrack(track, localStream);
});
It's unclear how track is defined. Can it be that it's undefined?
Why can't you go with the following?
getLocalAudioStream()
  .then(function (stream) {
    stream
      .getTracks()
      .forEach(function(track) {
        pc.addTrack(track, stream);
      });
  });
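For the renegotiation mentioned in the edit, a minimal sketch (assuming a standard RTCPeerConnection and your own signaling channel, here a hypothetical sendToRemotePeer helper) is to let the negotiationneeded event drive the new offer whenever tracks are added later:

// Sketch only: re-offer whenever addTrack changes what is being sent.
// `sendToRemotePeer` is a placeholder for your signaling transport.
pc.addEventListener("negotiationneeded", async () => {
  try {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToRemotePeer({ type: "offer", sdp: pc.localDescription });
  } catch (err) {
    console.error("Renegotiation failed", err);
  }
});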

Real-time transcription Google Cloud Speech API with gRPC from Electron

What I want to achieve is the same real-time transcription process as the Web Speech API, but using the Google Cloud Speech API.
The main goal is to transcribe live recording through an Electron app with the Speech API, using the gRPC protocol.
This is a simplified version of what I implemented:
const { desktopCapturer } = window.require('electron');
const speech = require('@google-cloud/speech');

const client = speech.v1({
  projectId: 'my_project_id',
  credentials: {
    client_email: 'my_client_email',
    private_key: 'my_private_key',
  },
});

desktopCapturer.getSources({ types: ['window', 'screen'] }, (error, sources) => {
  navigator.mediaDevices
    .getUserMedia({
      audio: true,
    })
    .then((stream) => {
      let fileReader = new FileReader();
      let arrayBuffer;
      fileReader.onloadend = () => {
        arrayBuffer = fileReader.result;
        let speechStreaming = client
          .streamingRecognize({
            config: {
              encoding: speech.v1.types.RecognitionConfig.AudioEncoding.LINEAR16,
              languageCode: 'en-US',
              sampleRateHertz: 44100,
            },
            singleUtterance: true,
          })
          .on('data', (response) => response);
        speechStreaming.write(arrayBuffer);
      };
      fileReader.readAsArrayBuffer(stream);
    });
});
The error response from the Speech API is that the audio stream is too slow and we are not sending it in real time.
I feel the reason is that I passed the stream without any formatting or object initialization, so the streaming recognition cannot be performed.
This official sample project on GitHub appears to match what you're looking for: https://github.com/googleapis/nodejs-speech/blob/master/samples/infiniteStreaming.js
This application demonstrates how to perform infinite streaming using the streamingRecognize operation with the Google Cloud Speech API.
See also my comment for an alternative in Electron, using OtterAI's transcription service. (It's the approach I'm going to try soon.)
You may use the node-record-lpcm16 module to record audio and pipe it directly to a speech recognition system like Google's.
In the repository, there is an example using wit.ai.
For Google Speech recognition, you may use something like this:
'use strict'

const { SpeechClient } = require('@google-cloud/speech')
const recorder = require('node-record-lpcm16')

const RECORD_CONFIG = {
  sampleRate: 44100,
  recorder: 'arecord'
}

const RECOGNITION_CONFIG = {
  config: {
    sampleRateHertz: 44100,
    languageCode: 'en-US',
    encoding: 'LINEAR16'
  },
  interimResults: true
}

const client = new SpeechClient(/* YOUR CREDENTIALS */)

// Build the streaming recognition request and return the stream
// so the recording can be piped into it.
const recognize = () => {
  return client
    .streamingRecognize(RECOGNITION_CONFIG)
    .on('error', err => {
      console.error('Error during recognition: ', err)
    })
    .once('writing', data => {
      console.log('Recognition started!')
    })
    .on('data', data => {
      console.log('Received recognition data: ', data)
    })
}

const recording = recorder.record(RECORD_CONFIG)

recording
  .stream()
  .on('error', err => {
    console.error('Error during recording: ', err)
  })
  .pipe(recognize())
