Hello, can you help me with speech synthesis? In normal use of a browser like Chrome it works, but when I run Google Chrome inside a kiosk-mode app on Android I get the error `SpeechSynthesisUtterance is not defined`. Has anyone encountered this?
Please help.
Thank you.
const speak = () => {
  const speech = new SpeechSynthesisUtterance(data.description);
  speech.voice = voices?.find((x: any) => x.name === 'Google US English');
  speech.rate = rate;
  window.speechSynthesis.speak(speech);
};
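Kiosk-mode shells often wrap the page in a WebView-like environment where the speech APIs may simply be absent, which is consistent with the "not defined" error. A minimal defensive sketch (the `getSpeechSynthesis` helper name is my own, and the `win` parameter is injected purely so the check can run outside a browser):

```javascript
// Hypothetical helper: returns a small speak() wrapper if the host
// environment provides the speech-synthesis API, or null so the caller
// can fall back gracefully (e.g. just show the text).
function getSpeechSynthesis(win) {
  if (win &&
      typeof win.SpeechSynthesisUtterance === 'function' &&
      win.speechSynthesis &&
      typeof win.speechSynthesis.speak === 'function') {
    return {
      speak: function (text) {
        var utterance = new win.SpeechSynthesisUtterance(text);
        win.speechSynthesis.speak(utterance);
        return utterance;
      }
    };
  }
  return null; // API missing, e.g. in some kiosk/WebView shells
}
```

In the page you would call `getSpeechSynthesis(window)` once and only wire up the speak button when it returns non-null, instead of letting the constructor throw.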
Related
I am currently implementing speech input on a website, using the Web Speech API for it.
Voice recognition works as expected in Chrome on desktop and Android. Firefox does not support the API, so it is not used there.
The problem is with Chrome on iOS, where the service throws a "service-not-allowed" error.
This error seems to be distinctly different from the "not-allowed" error that is thrown when the browser does not have permission to use the microphone.
In my case Chrome has all the permissions it needs (microphone permission is granted, pop-ups are not blocked).
At first I thought the problem was that, for some reason, Chrome on iOS does not show me the permission pop-up for microphone usage directly on the web page, but now I am not so sure anymore.
Does anyone have experience with this problem, or a solution for it?
Here is the working code for Android/desktop; the function is triggered by a button click:
function startDictation() {
  var recognition;
  try {
    var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    recognition = new SpeechRecognition();
  } catch (e) {
    console.log(e);
  }
  if (recognition) {
    recognition.continuous = false;
    recognition.interimResults = true;
    recognition.lang = $('html').attr('lang');
    recognition.start();
    recognition.onresult = function (e) {
      $('#searchInput').val(e.results[0][0].transcript);
      console.log(e.results[0][0].transcript);
    };
    recognition.onerror = (e) => {
      console.error('Speech recognition error detected: ' + e.error);
      recognition.stop();
    };
    recognition.onend = () => {
      console.log('Speech recognition service disconnected');
    };
  }
}
A few helpful links:
Web Speech API error values
Web Speech API demo from Google, which also doesn't work on iOS for me
I have tried several devices at this point (two different iPads and an iPhone), and the same error is thrown on all of them.
Any help is appreciated, thanks!
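One thing worth keeping in mind: every browser on iOS, including Chrome, is required to use Apple's WebKit engine, so the constructor may exist (or not) independently of what the same Chrome UI supports on Android, and the recognition *service* can still refuse with "service-not-allowed" even when the constructor is present. A pragmatic approach is to feature-detect up front and hide the mic button when nothing usable is found; a small sketch, with the window object injected so the logic is testable:

```javascript
// Capability check: returns the recognition constructor the engine
// exposes, or null. A non-null result does not guarantee the backing
// service will accept requests (as the iOS "service-not-allowed" case
// shows), so the onerror handler is still needed as a second line of
// defense.
function getSpeechRecognition(win) {
  if (!win) return null;
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}
```

In the page, you would call `getSpeechRecognition(window)` before rendering the dictation button, and additionally hide the feature from the `onerror` handler when "service-not-allowed" fires.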
Hello, I've been working on a website where I use the JavaScript text-to-speech API to play some text. It works great in web browsers, but now the project is being ported to Android, and text-to-speech is not working there at all.
The Android developer is using Android's WebView to open the site inside the app.
For porting from web to Android, I used the Mobile Detect library from mobiledetect.net, so I can detect whether the client is Android or not.
Since I can detect "if mobile", I was wondering whether there is a way to place some Android code in the onclick attribute when it's mobile, which would call Android's text-to-speech API, and otherwise call the JS function.
The function grabs plain text from a div and passes it to text-to-speech. I was wondering if the same could happen on Android: by detecting mobile, I could place some Android code in the onclick attribute and let Android take care of the rest.
Here is the code of my JavaScript text-to-speech:
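One common pattern for this is for the Android side to inject a JavaScript interface into the WebView (via `addJavascriptInterface`), which the page then prefers over `speechSynthesis`. A sketch of the page-side dispatch; note that the `AndroidTTS` object and its `speak()` method are hypothetical names of my own and must match whatever the Android developer actually registers:

```javascript
// Page-side dispatch between native Android TTS and the browser API.
// `win` is injected so the routing logic can be tested outside a WebView.
// AndroidTTS is an assumed name for the object the Android app would
// inject with addJavascriptInterface.
function speakText(win, text) {
  if (win.AndroidTTS && typeof win.AndroidTTS.speak === 'function') {
    win.AndroidTTS.speak(text);   // native Android TextToSpeech path
    return 'android';
  }
  if (win.speechSynthesis && typeof win.SpeechSynthesisUtterance === 'function') {
    var u = new win.SpeechSynthesisUtterance(text);
    win.speechSynthesis.speak(u); // regular browser path
    return 'web';
  }
  return 'none';                  // no TTS available at all
}
```

With this shape you don't need to branch in the onclick attribute at all: the same `speakText(window, text)` call works everywhere, and the WebView case is handled simply by the injected object being present.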
// area_id == id of the div with the text; btn_id == id of the button that was clicked
function textToAudio(area_id, btn_id) {
  var amISpeaking = window.speechSynthesis.speaking;
  if (amISpeaking) {
    // Already speaking: toggle between pause and resume
    var etext = $('#' + btn_id).html();
    if (etext == 'Play') {
      window.speechSynthesis.resume();
      $('#' + btn_id).html('Pause');
    } else if (etext == 'Pause') {
      window.speechSynthesis.pause();
      $('#' + btn_id).html('Play');
    }
    return false;
  } else {
    $('#' + btn_id).html('Pause');
  }
  /* playing sound */
  // .text() grabs the plain text, without any markup, which is what we
  // want to pass to text-to-speech
  var msg = $('#' + area_id).text();
  msg = $.trim(msg);
  // empty_check() and NO_VAL are defined elsewhere in the project
  if (empty_check(msg) || msg == NO_VAL) {
    msg = 'Either there is no text or something went wrong during processing.';
  }
  let speech = new SpeechSynthesisUtterance();
  speech.lang = 'en-US';
  speech.text = msg;
  speech.volume = 1;
  speech.rate = 1;
  speech.pitch = 1;
  window.speechSynthesis.speak(speech);
}
Anyone have any ideas?
I'm trying to build a JavaScript bot using Puppeteer to open an https URL, where I can listen to the microphone and output a transcript from the browser's built-in SpeechRecognition API. The code below logs transcripts in regular Chrome, but in Chromium I get nothing, even though the feature is apparently supported according to Modernizr. I've allowed microphone permissions, but the console.log never fires.
window.SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.interimResults = true;

recognition.addEventListener('result', e => {
  const transcript = Array.from(e.results)
    .map(result => result[0])
    .map(result => result.transcript)
    .join('');
  // I get nothing logged here in Chromium
  console.log(transcript);
});

recognition.addEventListener('end', recognition.start);
recognition.start();
UPDATE
After adding the following...
recognition.addEventListener('error', function (event) {
  console.log('Speech recognition error detected: ' + event.error);
});
I'm getting a network error, and I don't know what to do about this in Chromium.
Chromium no longer supports SpeechRecognition: the actual recognition is performed by a Google cloud service, and plain Chromium builds don't ship with the credentials needed to reach it, which is why you see a network error. I believe it's got something to do with Google wanting people to use their cloud services instead.
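Since the service works in branded Chrome, one workaround is to point Puppeteer at an installed Google Chrome instead of its bundled Chromium, via the `executablePath` launch option. A sketch; the path below is an assumption and must be adjusted for your system (on macOS it is usually `/Applications/Google Chrome.app/Contents/MacOS/Google Chrome`):

```javascript
// Launch options that point Puppeteer at a full Google Chrome install
// instead of the bundled Chromium. The executablePath is an assumption;
// adjust it for your OS and install location.
const launchOptions = {
  headless: false,                           // speech needs a real session
  executablePath: '/usr/bin/google-chrome',  // installed Chrome, not Chromium
  args: [
    '--use-fake-ui-for-media-stream',        // auto-accept the mic prompt
  ],
};
// Then: const browser = await puppeteer.launch(launchOptions);
```

`--use-fake-ui-for-media-stream` only suppresses the permission dialog; the recognition itself still needs the branded Chrome binary to reach Google's service.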
I am using the Dialogflow V2 API.
Everything works perfectly in testing via the Actions on Google simulator. Please find the picture attached.
However, when I try it using the test console (right column) within Dialogflow, and also via the web integration link, it does not work.
The agent is able to detect the appropriate entity from user input, but it is unable to call the intent declared in the webhook.
https://bot.dialogflow.com/acc64a26-8d1d-4459-8ce0-24c890acb6d7
I have attempted to post in the Dialogflow forum, but there was an error posting. The same happened when raising a support ticket within Dialogflow.
Image of Google simulator agent (works):
Image of public link agent (fails):
Image of Response declared in both webhook js file and within console:
Please find part of my index.js webhook below. I am using Dialogflow's inline editor.
'use strict';

const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');

const app = dialogflow();

app.intent('Default Welcome Intent', conv => {
  conv.ask('Welcome to Zera! We provide medicine and drug advice. What seems to be bothering you today?');
});

app.intent('QSpecific Problem', (conv, {SpecificProb}) => {
  conv.contexts.set('specificprob', 10, {SpecificProb: SpecificProb});
  conv.ask(`Do you have these problems before?`);
});

app.intent('QRecurring', (conv, {Recurring}) => {
  conv.contexts.set('recurring', 10, {Recurring: Recurring});
  if (Recurring === "Recur") {
    conv.ask(`Have you taken any medication for this?`);
  } else {
    const specProb = conv.contexts.get('specificprob');
    conv.ask(`How long have you been having this ${specProb.parameters.SpecificProb}?`);
  }
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
I actually wrote in to Dialogflow's support team to seek help. I spoke with Riel, who was very helpful. Please see his reply below:
Your agent works as expected in the Actions on Google simulator because the library you used is specifically for Actions on Google: the Actions on Google Node.js client library. If you want to also use the web demo integration for your responses, you can use Dialogflow's fulfillment library, which has an integration with the Google Assistant using the AoG client library.
You can refer to this example code for your fulfillment:

'use strict';

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

process.env.DEBUG = 'dialogflow:debug';

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function welcome(agent) {
    let conv = agent.conv();
    conv.ask('Welcome to my agent!');
    agent.add(conv);
  }

  let intentMap = new Map();
  intentMap.set('Default Welcome Intent', welcome);
  agent.handleRequest(intentMap);
});
Dialogflow's support team is very helpful and their replies are quick. I recommend writing in, since everyone's issue is different and quite specific! You can reach them at support@dialogflow.com.
So iOS 11 Safari was supposed to add support for the Web Audio API, but it still doesn't seem to work with this JavaScript code:
// called on page load
var get_user_media = navigator.getUserMedia;
get_user_media = get_user_media || navigator.webkitGetUserMedia;
get_user_media = get_user_media || navigator.mozGetUserMedia;
get_user_media.call(navigator, { "audio": true }, use_stream, function () { });

function use_stream(stream) {
  var audio_context = new AudioContext();
  var microphone = audio_context.createMediaStreamSource(stream);
  window.source = microphone; // Workaround for https://bugzilla.mozilla.org/show_bug.cgi?id=934512
  var script_processor = audio_context.createScriptProcessor(1024, 1, 1);
  script_processor.connect(audio_context.destination);
  microphone.connect(script_processor);
  // do more stuff which involves processing the data from the user's microphone...
}
I copy-pasted most of this code, so I only have a cursory understanding of it. I know that it's supposed to (and does, on other browsers) capture the user's microphone for further processing. I know that the code breaks on the `var audio_context = new AudioContext();` line (no code after it runs), but I don't have any error messages, because debugging iOS Safari requires a Mac, which I don't have. Does anyone know what's going on and/or how to fix it?
Edit: I forgot to mention that I looked it up, and apparently I need the "webkit" prefix to use the Web Audio API in Safari, but changing the line to `var audio_context = new webkitAudioContext();` doesn't work either.
#TomW was on the right track: basically, the webkitAudioContext is suspended unless it's created in direct response to the user's tap (before you get the stream).
See my answer at https://stackoverflow.com/a/46534088/933879 for more details and a working example.
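In other words, the context has to be constructed (or resumed) synchronously inside the gesture handler, before the asynchronous getUserMedia call. A sketch under that assumption; `ensureRunning` is a helper name of my own, written so the resume logic can be exercised with a stub context:

```javascript
// Resume a suspended AudioContext. Only effective when called from
// inside a user gesture (tap/click), which is exactly where Safari
// requires the context to be started.
function ensureRunning(ctx) {
  if (ctx.state === 'suspended' && typeof ctx.resume === 'function') {
    ctx.resume(); // allowed here because we are inside a user gesture
    return 'resumed';
  }
  return 'already-running';
}

// In the page (browser-only, shown for context):
// button.addEventListener('touchend', function () {
//   var Ctx = window.AudioContext || window.webkitAudioContext;
//   var audio_context = new Ctx();   // created inside the gesture
//   ensureRunning(audio_context);
//   get_user_media.call(navigator, { audio: true }, use_stream, function () {});
// });
```

The key design point is ordering: create and start the context first, then request the stream, so that by the time `use_stream` runs asynchronously, the context is already allowed to produce audio.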
Nothing works on mobile except save-to-home-screen apps. I filed a bug report with Apple Developer and got a response that it was a duplicate (which means they know; no clue if or when they will actually fix it).