Web Audio API and Bluetooth - javascript

I programmed a website which uses a text-to-speech engine to generate audio files.
These mp3 files are then played using the Web Audio API.
Everything works fine when the audio comes from the speakers of a computer or a smartphone.
However, as soon as I connect my Bluetooth headset to the smartphone, the audio is not played.
Is it a known issue that the Web Audio API doesn't work with Bluetooth devices, or does the issue come from my code?
Do I need to change the context's destination? How can I set it to Bluetooth? (e.g. https://www.html5rocks.com/en/tutorials/webaudio/intro/)
source.connect(context.destination); // connect the source to the context's destination (the speakers)
Thanks,
LL
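For reference, a minimal sketch of the playback path described above (URL and function name are illustrative). The destination node always follows the output device the operating system has selected, so there is no per-device setting in the basic API; one thing worth checking is that the context is resumed from a user gesture, since mobile browsers often create it suspended.

// Minimal sketch (assumed setup): fetch and decode an mp3, then play it
// through the context's destination. The destination follows the OS-level
// output routing (speakers, wired headphones, or Bluetooth).
const context = new (window.AudioContext || window.webkitAudioContext)();

async function playClip(url) {
  // Mobile browsers often create the context in a "suspended" state;
  // it has to be resumed from a user gesture before any sound is produced.
  if (context.state === 'suspended') {
    await context.resume();
  }
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await context.decodeAudioData(arrayBuffer);
  const source = context.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(context.destination); // routed by the OS, not by the page
  source.start();
}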

Related

Web Audio API integration with Web Speech API - stream speaker/soundcard output to voice recognition API

Problem:
Ideally I would acquire the streaming output from the soundcard (generated by an mp4 file being played) and send it to both the microphone and the speakers. I know I can use getUserMedia and createChannelSplitter (in the Web Audio API) to acquire the user media and split it into two outputs (an Audacity analysis shows the original signal is stereo; see the sketch after the list below), which leaves me with two problems:
getUserMedia can only get streaming input from the microphone, not from the soundcard (from what I have read)
streaming output can only be recorded/sent to a buffer, not sent to the microphone directly (from what I have read)
Is this correct?
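For reference, a minimal sketch of the capture-and-split step mentioned above (assumptions: a stereo capture source; names are illustrative):

// Minimal sketch: capture the microphone with getUserMedia and split it
// into two mono outputs with a ChannelSplitterNode.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

async function captureAndSplit() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const sourceNode = audioCtx.createMediaStreamSource(stream);
  const splitter = audioCtx.createChannelSplitter(2); // outputs 0 and 1
  sourceNode.connect(splitter);
  splitter.connect(audioCtx.destination, 0); // channel 1 -> user's ears
  // splitter output 1 (the second channel) could feed an analyser, a
  // MediaStreamAudioDestinationNode, or another processing chain, but it
  // cannot be pushed back into the microphone input.
  return { stream, splitter };
}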
Possible workaround - stalled:
The user will most likely have a headset microphone on, but one workaround I have thought of is to switch to the device's built-in microphone, capture what comes out of the speakers, and then switch back to the headset for user input. However, I haven't found a way to switch between the built-in microphone and the headset microphone without asking the user every time.
Is there a way to do this that I haven't found?
What other solutions would you suggest?
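For what it's worth, a minimal sketch of switching capture devices by deviceId (label matching is illustrative; whether the browser re-prompts depends on how it persists the microphone permission):

// Minimal sketch: pick an audio input whose label contains the given text
// and open it with getUserMedia. Once the origin holds a persistent
// microphone permission, requesting a specific deviceId normally does not
// prompt again, but that behaviour is browser-dependent.
async function getMicByLabel(labelFragment) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mic = devices.find(
    (d) => d.kind === 'audioinput' && d.label.includes(labelFragment)
  );
  return navigator.mediaDevices.getUserMedia({
    audio: mic ? { deviceId: { exact: mic.deviceId } } : true,
  });
}

// e.g. switch to the built-in microphone, then back to the headset:
// const builtIn = await getMicByLabel('Built-in');
// const headset = await getMicByLabel('Headset');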
Project Explanation:
I am creating a Spanish language practice program/website written in HTML & JavaScript. An mp4 plays, the speech recognition API displays what is said on screen (as it is spoken, in Spanish), and it is translated into English, so the user hears, sees, and understands what the person speaking in the mp4 is saying. The user then answers the mp4 person through the headset microphone (the built-in microphone often doesn't give good enough quality for voice recognition, depending on the device, hence the headset).
Flow chart of my workaround using the built-in microphone:
mp4 -> soundcard -> Web Audio API -> channel 1 -> user's ears
                                     channel 2 -> microphone input -> Web Speech API -> html -> text onscreen
Flow chart of the ideal situation, skipping the microphone input:
mp4 -> soundcard -> Web Audio API -> channel 1 -> user's ears
                                     channel 2 -> Web Speech API -> html -> text onscreen -> user's eyes
Another potential workaround:
I would like to avoid having to manually strip an mp3 from each mp4 and then try to sync them so the voice recognition happens as the mp4 person speaks. I have read that I can run an mp3 through the voice recognition API.
The short answer is that there is not currently (December 2019) a way to accomplish this on this platform with the tools and budget I have. I have opted for the laborious way to do this, which is setting up individual divs with text blocks to be revealed on a timer as the person is speaking. I will still use the Speech API to capture what the user says so the program can run the correct video in response.
Switching between the speaker and the user's headset is a definite no-go.
Speech recognition software usually requires clean and well-captured audio. So if the sound is coming from speakers, the user's microphone is not likely to pick it up very well. And if the user is wearing headphones, there is no way for the microphone to capture the audio at all.
As far as I know, you cannot send audio files to the Web Speech API directly (I may be wrong here).
The Web Speech API is not supported by all browsers, so that is a downside to consider too: https://caniuse.com/#feat=speech-recognition
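For completeness, a quick feature check (Chrome exposes the interface under a webkit prefix; the language code is illustrative):

// Feature-detect the Web Speech API recognition interface and start a
// simple recognition session if it is available.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  recognition.lang = 'es-ES'; // Spanish, to match the project above
  recognition.onresult = (event) => {
    console.log(event.results[0][0].transcript);
  };
  recognition.start();
} else {
  console.warn('Speech recognition is not supported in this browser.');
}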
What I would recommend is checking out Google's Speech-to-Text API: https://cloud.google.com/speech-to-text/
With this service you can send the audio file directly and they will send back the transcription.
It does support streaming, so you could have the audio transcribed at the same time it is playing. The timing wouldn't be perfect, though.
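As a rough sketch of what that could look like with Google's Node.js client library (assumptions: @google-cloud/speech is installed, application credentials are configured, and the file name, encoding, and language code are illustrative; mp3 input may need converting first):

// Rough sketch: send a short WAV file and log the transcription.
const fs = require('fs');
const speech = require('@google-cloud/speech');

async function transcribe() {
  const client = new speech.SpeechClient();
  const audioBytes = fs.readFileSync('clip.wav').toString('base64');
  const [response] = await client.recognize({
    audio: { content: audioBytes },
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'es-ES',
    },
  });
  const transcript = response.results
    .map((r) => r.alternatives[0].transcript)
    .join('\n');
  console.log(transcript);
}

transcribe();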

Streaming external (USB3) camera footage to a video element in Chrome for Android

I'm developing a web application for remote assistance.
The webpage uses WebRTC to stream footage from the camera of one client to the other client.
Everything is working fine, that is, with the default internal phone cameras (via navigator.mediaDevices.getUserMedia()).
The problem is that I'm developing this for a smartglass connected to the phone via USB3. I want to use the camera of the smartglass instead of the internal cameras.
When I enumerate all devices, I only get two audio devices and two video devices (the two internal cameras), even with the smartglass plugged in. I'm logging this remotely in Chrome for desktop.
Also, navigator.usb.getDevices() won't give me any results.
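For reference, a minimal sketch of the enumeration-and-selection path (the choice of camera is illustrative; an external camera will only appear here if Android exposes it as a regular camera device):

// Minimal sketch: list the video inputs, then request a specific one
// by deviceId.
async function pickCamera() {
  // Labels are only populated once the page has camera permission.
  await navigator.mediaDevices.getUserMedia({ video: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((d) => d.kind === 'videoinput');
  cameras.forEach((c) => console.log(c.label, c.deviceId));

  const chosen = cameras[cameras.length - 1]; // illustrative choice
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: chosen.deviceId } },
  });
}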
Is it possible to access USB cameras from JavaScript in Chrome for Android?
This is not a duplicate question. In contrast to the other question, I DO have access to the cameras, just not the camera I want, which is an external camera connected to my phone by USB3.

Reading output audio data from Spotify Web Playback stream

I am currently playing around with audio visualization and I am trying to work with Spotify's Web Playback SDK to stream and analyze songs directly on my site.
However, I am unsure what the limitations are when it comes to actually reading the streamed data. I've noticed that an iframe is generated for the Spotify player, and I've read that Spotify uses Encrypted Media Extensions to stream the audio in Chrome.
Is it even possible to read the music data from the Spotify API? Maybe I can read the output audio from the browser?
According to the Web API documentation, you aren't able to play back full songs and get the audio data like you desire (for obvious reasons). However, 30-second "song previews" are allowed through URL streaming, as well as full song playback on desktop browsers (excluding Safari at the time of this post) with the Web Playback SDK.
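As a rough sketch of what visualizing one of those previews could look like (assumptions: previewUrl comes from a track object's preview_url field, and the CDN sends CORS headers so the analyser receives real data):

// Rough sketch: feed a 30-second preview into an AnalyserNode.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

function visualizePreview(previewUrl) {
  const audio = new Audio(previewUrl);
  audio.crossOrigin = 'anonymous';
  const source = ctx.createMediaElementSource(audio);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  analyser.connect(ctx.destination);
  audio.play();

  const data = new Uint8Array(analyser.frequencyBinCount);
  (function draw() {
    analyser.getByteFrequencyData(data); // feed this into the canvas code
    requestAnimationFrame(draw);
  })();
}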
However, on the mobile API it is now possible to get the raw PCM data (Android or iOS). This will require registering for a developer account and setting up the access tokens if you haven't already done so.
For quick reference, on Android it involves using the AudioController class.
EDIT: Thanks to @Leme for the Web Playback SDK link.

Is there a command to keep the iPhone camera recording?

Basically, I have an idea for my app: I would like the iPhone camera to keep recording video even when the user is doing something else (checking Twitter, for example), like a spy cam. I have many coding solutions available.
Is there a way I can code this with HTML5, CSS, JavaScript, or Xcode?
iOS will not allow you to run the camera constantly in the background. This is because once an app enters the background state, it only has a very short time to wrap up its processes and prepare to be suspended (iOS does this to conserve memory).
From the Apple developer docs:
In iOS, only specific app types are allowed to run in the background:
Apps that play audible content to the user while in the background, such as a music player app
Apps that record audio content while in the background
Apps that keep users informed of their location at all times, such as a navigation app
Apps that support Voice over Internet Protocol (VoIP)
Apps that need to download and process new content regularly
Apps that receive regular updates from external accessories
The only other way to achieve what you want is to jailbreak your device and distribute your app on Cydia (the jailbroken App Store). Jailbreaking will free your device from the restrictions of iOS but will also make your phone a lot more vulnerable to being hacked...

Audio players in Chrome packaged apps

I'm trying to embed a music player in a Chrome app, and at the moment I'm using the default audio tag to embed music, but the problem is that I can't seek within the song unless it has been fully downloaded before playing (I'm streaming it from a file host).
The file host has its own music player, but it is made in Flash and so doesn't work in Chrome apps. What should I do?
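For reference, a minimal sketch of the setup described above (the URL is a placeholder, and remote media in a packaged app is subject to the app's permissions/CSP). Seeking into not-yet-downloaded regions only works if the host answers HTTP range requests, which shows up in the element's seekable ranges:

// Minimal sketch: play a remote file with an audio element and log
// which ranges are seekable before the download completes.
const audio = new Audio('https://example-host/song.mp3'); // placeholder URL
audio.addEventListener('loadedmetadata', () => {
  for (let i = 0; i < audio.seekable.length; i++) {
    console.log('seekable:', audio.seekable.start(i), '-', audio.seekable.end(i));
  }
});
audio.play(); // may need to be triggered from a user gesture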
