I'm testing the native speech synthesizer in Firefox and it works fine; only the pause() function fails. In Chrome I have more errors: it only plays a short part of the text (roughly the first 200 characters) and skips everything else. I tried the external library meSpeak.js, and it does play everything, but it takes a long time to load the 1,842-character text I tested with. I'm on Ubuntu with Chrome version 81.0.4044.92.
URL for testing: https://mdn.github.io/web-speech-api/speak-easy-synthesis/
Any solution for Chrome? Thanks.
I believe this is an issue with the speech synthesizer chosen. If you choose a local voice (one for which the audio can be generated simply by running in-browser or in-OS code on your local computer), then there should be no character limit, unless the voice you chose imposes one itself, which doesn't seem likely.
For example, on Windows, the two local voices I have are Microsoft David Desktop and Microsoft Zira Desktop. If I select these voices, the whole text gets played, regardless of length.
In contrast, other voices, such as those from Google (e.g. Google US English), are not local; they may need to make a network request to turn the text into audio. Google's speech API seems to have a limit of roughly 200 characters on how much text can be converted at once.
If the character limit is a problem, select a voice which is local, rather than one from Google.
Here's a snippet that'll list your available voices, as well as which are local or not:
const populateVoiceList = () => {
  console.log('Logging available voices');
  for (const voice of speechSynthesis.getVoices()) {
    console.log(voice.name, 'local: ' + voice.localService);
  }
};
populateVoiceList();
speechSynthesis.onvoiceschanged = populateVoiceList;
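If you need to keep a remote voice despite the limit, a common workaround is to split the text into chunks below the cutoff and queue one utterance per chunk. A minimal sketch, assuming the cutoff is around 200 characters (`chunkText` and the 180-character margin are my own choices, not part of the Web Speech API):

```javascript
// Split text into chunks no longer than maxLen, breaking at sentence
// ends or spaces so each utterance sounds natural.
function chunkText(text, maxLen = 180) {
  const chunks = [];
  let remaining = text.trim();
  while (remaining.length > maxLen) {
    const slice = remaining.slice(0, maxLen);
    // Prefer to break after a sentence, else at the last space.
    let cut = Math.max(slice.lastIndexOf('. '), slice.lastIndexOf(' '));
    if (cut <= 0) cut = maxLen - 1; // no boundary found: hard cut
    chunks.push(remaining.slice(0, cut + 1).trim());
    remaining = remaining.slice(cut + 1).trim();
  }
  if (remaining) chunks.push(remaining);
  return chunks;
}

// Browser-only usage: queue one utterance per chunk.
if (typeof speechSynthesis !== 'undefined') {
  const text = 'Your long text here...'; // e.g. the 1842-character test text
  for (const part of chunkText(text)) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(part));
  }
}
```

Breaking at sentence or word boundaries keeps the small pauses between queued utterances from landing mid-word.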
We generate HLS files with a segment size of 3 seconds. We use hls.js for non-Safari browsers; Safari has native HLS support.
In the hls.js world we were able to restrict how far ahead we buffer using maxMaxBufferLength, whereas we are unable to find a similar option for Safari. In Safari, after loading the video m3u8, even if I pause after a second, I can see in the network tab that all the segments are being fetched, which I would like to restrict.
I won't be able to share our own examples due to company policies, but a public example file from hls.js is below:
https://test-streams.mux.dev/x36xhzz/url_6/193039199_mp4_h264_aac_hq_7.m3u8
Try opening this URL in Safari and pausing the video; you'll see that it continues to download. Whereas if you open the same stream in https://hls-js.netlify.app/demo/ with maxMaxBufferLength: 5, that doesn't happen.
Is there an option in ffmpeg to make the buffering controllable, or some solution we should implement for Safari by listening to events?
I found the same question here: https://developer.apple.com/forums/thread/121074
Checking out that resource, it highlights the fact that:
hls.js tries to buffer up to a maximum number of bytes (60 MB by default) rather than to buffer up to a maximum nb of seconds.
this is to mimic the browser behavior (the buffer eviction algorithm is starting after the browser detects that video buffer size reaches a limit in bytes).
It is also worth checking lines 175 and 176 of the script file linked there, which show maxBufLen being set to 8 times maxBufferSize; you might think about changing this.
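For the non-Safari side, those hls.js knobs are set in the config object passed to the Hls constructor. A sketch (the specific values are illustrative, not recommendations):

```javascript
// Illustrative hls.js config: cap the forward buffer by time and by bytes.
const hlsConfig = {
  maxBufferLength: 5,              // target forward buffer, in seconds
  maxMaxBufferLength: 5,           // hard cap hls.js may stretch to
  maxBufferSize: 10 * 1000 * 1000, // byte-based cap (default is 60 MB)
};

// Browser-only usage:
// const hls = new Hls(hlsConfig);
// hls.loadSource('https://example.com/stream.m3u8');
// hls.attachMedia(videoElement);
```

None of this affects Safari's native HLS pipeline, which exposes no equivalent buffer controls.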
Try removing the src attribute from the video element:
video.removeAttribute('src');
You may also need to call the video element's load() method afterwards to avoid a browser crash:
video.load();
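Putting those steps together, a small helper might look like this (teardownVideo is a hypothetical name; pausing before detaching is just defensive):

```javascript
// Hypothetical helper: detach a <video> element's source so pending
// segment downloads stop, then reset the element.
function teardownVideo(video) {
  video.pause();                // stop playback first
  video.removeAttribute('src'); // detach the m3u8 source
  video.load();                 // reset the element so fetching stops
}
```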
Problem
I am recording webm chunks with MediaRecorder in Chrome 83 on Windows 10 and sending them to another computer. The chunks are played on another Chrome instance using Media Source Extensions (MSE).
sourceBuffer.appendBuffer(webmChunkData);
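(Side note: appendBuffer throws an InvalidStateError if called while sourceBuffer.updating is true, so incoming chunks are normally queued and drained on updateend. A minimal sketch of that pattern, with makeAppender a hypothetical helper name:)

```javascript
// Queue chunks so appendBuffer is never called while the SourceBuffer
// is still processing a previous append.
function makeAppender(sourceBuffer) {
  const queue = [];
  const pump = () => {
    if (!sourceBuffer.updating && queue.length) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  };
  // Drain the queue each time the previous append finishes.
  sourceBuffer.addEventListener('updateend', pump);
  return (chunk) => { queue.push(chunk); pump(); };
}
```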
Everything works fine for the first 1 to 1.2 seconds. After that, the audio/video syncing problem starts: the gap between audio and video is minimal at first, but it grows as time passes.
The weird thing is that we can see different behaviour on different browsers.
Chrome's version is 83+ in almost all operating systems.
Could the camera be the problem?
I don't think so: I dual-boot Fedora and Windows on the same machine, and the webm chunks play fine on Fedora.
Could the sample rate be the problem?
I doubt it. When I compare the sample rates the browsers use during playback, chrome://media-internals shows 48000 both with and without the syncing problem.
Warning message from Chrome
The Chrome instance that has the sync problem also shows a warning message in chrome://media-internals.
Question:
Why is there an audio/video syncing problem when both recording and playback are performed in Chrome on Windows 10?
How can I eliminate this syncing problem?
I believe I have a workaround for you. The problem seems specific to Chrome + MediaRecorder + VP8, and has nothing to do with MSE or the platform. I have the same issues on Chrome 98 on Mac 12.2.1. Additionally, if you decrease the .start(timeslice) argument, the issue will appear more rapidly and more severely.
However... when I use VP9 the problem does not manifest!
I'm using code like this:
function supportedMimeTypes(): string[] {
  // From least desirable to most desirable
  return [
    // most compatible, but Chrome creates bad chunks
    'video/webm;codecs=vp8,opus',
    // works in Chrome; Firefox falls back to vp8
    'video/webm;codecs=vp9,opus'
  ].filter(
    (m) => MediaRecorder.isTypeSupported(m)
  );
}

const mimeType = supportedMimeTypes().pop();
if (!mimeType) throw new Error("Could not find a supported mime-type");

const recorder = new MediaRecorder(stream, {
  // be sure to use a mimeType with a specific `codecs=` as otherwise
  // Chrome completely ignores it and uses video/x-matroska!
  // https://stackoverflow.com/questions/64233494/mediarecorder-does-not-produce-a-valid-webm-file
  mimeType,
});
recorder.start(1000);
The resulting VP9 appears to play in Firefox, and a VP8 recorded in Firefox plays well in Chrome too.
I'm looking for an HTML5 audio and video player that has a waveform. I found wavesurfer.js, but that looks like audio only. But hey, I thought I'd play around with it. Here is some very simple code (this is just me with an HTML file on my desktop, and the wav was converted to PCM, though I've also tried a plain wav and an mp3):
<html>
  <head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/wavesurfer.js/1.3.7/wavesurfer.min.js"></script>
    <script>
      var wavesurfer = Object.create(WaveSurfer);
      document.addEventListener('DOMContentLoaded', function () {
        wavesurfer.init({
          container: '#waveform',
          waveColor: '#A8DBA8',
          progressColor: '#3B8686'
        });
        wavesurfer.load('session.wav');
      });
      wavesurfer.on('ready', function () {
        wavesurfer.play();
      });
    </script>
  </head>
  <body>
    <div id="waveform"></div>
  </body>
</html>
This couldn't get any simpler! OK, let's open this in Firefox:
Great! It starts playing. I have a waveform. Awesome!
Now Chrome (or Edge - both do the same):
Absolutely nothing (scratches head). No sound. Nothing.
OK, I found this link here: https://wavesurfer-js.org/example/audio-element/
It says: wavesurfer.js will automatically fallback to HTML5 Media if Web Audio is not supported. However, you can choose to use audio element manually. Simply set the backend option to "MediaElement"
Without googling (listen, I'm jumping into the pool feet first here!), I guess I don't know the exact difference between HTML5 Media and Web Audio. My off-the-cuff assumption is that Web Audio means the HTML5 audio tag, which is different from HTML5 Media somehow? Not sure yet. I know nothing.
Regardless, I'll change that code I posted above and add one line of code to the init function:
wavesurfer.init({
  container: '#waveform',
  waveColor: '#A8DBA8',
  progressColor: '#3B8686',
  backend: 'MediaElement'
});
Running in Chrome now, I get:
It plays. But no waveform.
I mean, I go to the wavesurfer.js website with Chrome and all the demos work. I don't get it. On top of that, I'm concerned about forcing things with the 'MediaElement' backend property.
What am I missing?
EDIT: Oh, for goodness' sake. I took the same html5.html file (without the 'MediaElement' backend property) and session.wav file and placed them on a web server (IIS). Now that I'm fetching the page through a web server instead of working locally on my desktop, it works in Edge and Chrome (and Opera, tried that too!) with no problem. It must be something about working locally that Chrome and Edge don't like. I'll leave this question open; a green check mark awaits the person who adds valuable info!
Chrome (in an effort to maintain better security around file-system access) prevents the dynamic loading of anything over the file:// protocol. This (as well as a deep discussion about why this is both a good idea and a bad idea) is referenced here:
https://bugs.chromium.org/p/chromium/issues/detail?id=47416
My personal favorite quote is this one (I have so been this guy in the past):
Your local file policy is 'over the top' as regards security and I urge you to reconsider, please don't fall into the trap of making your browser so secure that it ceases to be useful or usable. Allow the user to decide as Microsoft do with a simple option choice or, God help me, another yellow bar.
You can disable this by launching Chrome with the command-line argument --allow-file-access-from-files, or (as you found out) by just spinning up a web server. If you want an even easier server, I recommend Python's built-in one, which you can start from any directory on Windows, macOS, or Linux: python -m SimpleHTTPServer 8000 on Python 2, or python3 -m http.server 8000 on Python 3.
I'm developing a webapp that (in part) records some audio using recorder.js, and sends it to a server. I'm trying to target Firefox, so I have to use this hack to keep the audio source from cutting off:
// Hack for a Firefox bug that stops input after a few seconds
window.source = audio_context.createMediaStreamSource(stream);
source.connect(audio_context.destination);
I think that this is causing audio to be played back through the computer speakers, but I'm not sure. I'm kind of a newbie when it comes to web audio. My goal is to eliminate the audio that is being played out of the speakers.
EDIT:
Here's a link to my JS file on Github: https://github.com/miller9904/Jonathan/blob/master/js/main.js#L87
You have to connect the source to the node through which you retrieve the data you are going to record. Replace this.node with whatever variable name you assigned to the node used for processing:
window.source.connect(this.node);
//this.node.connect(this.context.destination);
Edit: I just checked, and connecting to the destination is not compulsory. Also make sure your node variable does not get garbage collected (which I assume is happening in your case, since recording stops after a few seconds).
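One way to keep the Firefox hack while silencing the speakers is to route the graph through a zero-gain node before the destination, so the connection stays in place but nothing audible comes out. A sketch (connectSilently is a hypothetical name, not part of recorder.js; ctx, stream, and processorNode are assumed to come from your existing setup):

```javascript
// Keep the Web Audio graph "alive" without audible playback by routing
// through a muted gain node instead of straight to the speakers.
function connectSilently(ctx, stream, processorNode) {
  const source = ctx.createMediaStreamSource(stream);
  source.connect(processorNode);  // node that captures data to record
  const mute = ctx.createGain();
  mute.gain.value = 0;            // nothing audible reaches the speakers
  processorNode.connect(mute);
  mute.connect(ctx.destination);  // keeps input from being cut off
  return { source, mute };        // hold references to avoid GC
}
```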
I am trying to reduce the bitrate in an RTCPeerConnection in Firefox. I have successfully been able to do this in Chrome.
I am modifying the SDP string automatically generated by Firefox after calling the createOffer method. My callback modifies the SDP and then tries to set it on the generated RTCSessionDescription (the SDP is just a DOMString according to the protocol spec). In Chrome, I can modify the SDP string and then set it, all within a callback passed to createOffer:
desc.sdp = TransFormSDP(desc.sdp);
connection.setLocalDescription(desc);
However, this does not seem to work in Firefox: it does not pick up the SDP after my assignment and continues to use the string generated by the createOffer method.
Specifically, I am trying to add an fmtp max-fr=15; max-fs=400 restriction on the VP8 codec being offered, and a bandwidth limit by adding a b=AS:512 line in the video media section of the SDP.
Does Firefox not allow you to modify the SDP after it has been generated? Or does Firefox disallow specific SDP options that are part of the SDP standard (like bandwidth limits and codec settings)?
EDIT: Seriously, Firefox??
Well, it seems that for now it is not supported; at least I assume so, since there has yet to be a response to this bug. I guess I am stuck using Chrome for now.
Actually, the bitrate of the codec encoding is available through the API; however, it doesn't work very well in Firefox.
The proper API is the one described in the spec: https://www.w3.org/TR/webrtc/#dom-rtcrtpencodingparameters
RTCRtpSender.setParameters has been supported in Firefox since version 64, but as of v66 it is not supported correctly: bitrate works, but fps doesn't.
A snippet to modify the bitrate via the API:
const sender = peerConnection.getSenders().find((s) => s.track.kind === 'video');
const params = sender.getParameters();
// Modify the object returned by getParameters rather than rebuilding it:
// replacing the encodings array wholesale can throw in some browsers.
if (!params.encodings || !params.encodings.length) params.encodings = [{}];
params.encodings[0].maxBitrate = 50 * 1000; // bits per second
sender.setParameters(params);
However, changing the bitrate through the API has only a temporary effect in Firefox, as presented in the diagram below: the bitrate goes back to the default one after a few seconds. The reason is not clear; it might be connected with the degradationPreference codec property, since that behaves differently for balanced, maintain-framerate, and maintain-resolution. In Chrome, it works normally.