React Native: how to access files recorded with react-native-audio-toolkit - javascript

So I can record something with the following code:
let rec = new Recorder("filename.mp4").record();
// Stop recording after approximately 3 seconds
setTimeout(() => {
  rec.stop((err) => {
    // NOTE: In a real situation, handle possible errors here
    // Play the file after recording has stopped
    new Player("filename.mp4")
      .play()
      .on('ended', () => {
        // Enable button again after playback finishes
        this.setState({disabled: false});
      });
  });
}, 3000);
I can record audio perfectly, but how can I access, delete, or list recorded files like filename.mp4? Where is this saved on Android or iOS? Will these files remain there after each update to the app, each recompilation, etc.?

According to the react-native-audio-toolkit documentation, you can get the file path with the fsPath property.
fsPath - String (read only)
Get the filesystem path of file being recorded to. Available after
prepare() call has invoked its callback successfully.
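For example, you can read the path once prepare() has invoked its callback, and then hand it to a filesystem module to list or delete recordings; react-native-audio-toolkit itself only exposes the path. A sketch (the RNFS.unlink() call assumes react-native-fs is installed, which is not part of the toolkit):
let rec = new Recorder("filename.mp4");
rec.prepare((err) => {
  if (err) {
    return; // handle the error
  }
  console.log(rec.fsPath); // absolute path on the device's filesystem
  rec.record();
});

// later, delete the recording (assumes: import RNFS from 'react-native-fs')
RNFS.unlink(rec.fsPath).then(() => console.log('Recording deleted'));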

Related

Playing audio for a specific time in JavaScript

If you want to play a sound or audio file (for example as a notification on a button click, or for any other purpose) for a specific time (say 2 or 5 seconds, regardless of how long or short the audio file is) from your JavaScript code:
First, create a DOM Audio object for the audio file by providing the path to it.
Then set loop to true, call play(), and finally call pause() after the specified time with the help of setTimeout().
function play(audio_path, time_in_millisec) {
  let beep = new Audio(audio_path);
  beep.loop = true;
  beep.play();
  // stop playback after the requested duration
  setTimeout(() => { beep.pause(); }, time_in_millisec);
}
play('beep.mp3', 2000);
Did you want to code this yourself, or are you looking for a library to make this easier? You might want to check out WadJS:
let song = new Wad({source: 'https://www.myserver.com/audio/mySong.wav'});
song.play({
  env: {
    // The duration in seconds. It defaults to the length of the audio file,
    // but you can set it lower to make the sound stop after that amount of time.
    hold: 10
  }
});

How to use RTCPeerConnection.removeTrack() to remove video or audio or both?

I'm studying WebRTC and trying to figure out how it works.
I modified this sample on WebRTC.github.io to make getUserMedia the source of leftVideo and stream it to rightVideo. It works.
Now I want to add a feature: when I press pause on leftVideo (my browser is Chrome 69), the tracks should be removed from the connection.
I changed part of call():
...
stream.getTracks().forEach(track => {
  pc1Senders.push(pc1.addTrack(track, stream));
});
...
And added a handler on leftVideo:
leftVideo.onpause = () => {
  pc1Senders.map(sender => pc1.removeTrack(sender));
};
I don't want to close the connection; I just want to turn off only the video or the audio.
But after I pause leftVideo, rightVideo still receives the track.
Am I doing something wrong here, or somewhere else?
Thanks for your help.
First, you need to get the stream of the peer. You can mute/hide the stream using the enabled attribute of MediaStreamTrack. Use the code snippet below to toggle media.
/* stream: MediaStream, type: track kind ('audio'/'video') */
toggleTrack(stream, type) {
  stream.getTracks().forEach((track) => {
    if (track.kind === type) {
      track.enabled = !track.enabled;
    }
  });
}
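For example, to mute the audio you are currently sending (assuming localStream is the stream whose tracks you added to the connection):
toggleTrack(localStream, 'audio');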
If you want to actually remove the tracks from the connection instead:
const senders = pc.getSenders();
senders.forEach((sender) => pc.removeTrack(sender));
newTracks.forEach((tr) => pc.addTrack(tr));
Get all the senders.
Loop through and remove each sending track.
Add new tracks (if so desired).
Edit: or, if you won't need renegotiation (conditions listed below), use replaceTrack (https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack).
Not all track replacements require renegotiation. In fact, even changes that seem huge can be done without requiring negotiation. Here are the changes that can trigger negotiation:
The new track has a resolution which is outside the bounds of the current track; that is, the new track is either wider or taller than the current one.
The new track's frame rate is high enough to cause the codec's block rate to be exceeded.
The new track is a video track and its raw or pre-encoded state differs from that of the original track.
The new track is an audio track with a different number of channels from the original.
Media sources that have built-in encoders (such as hardware encoders) may not be able to provide the negotiated codec. Software sources may not implement the negotiated codec.
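As a concrete illustration, swapping cameras without renegotiation could look like this (a sketch; pc is the RTCPeerConnection and cameraDeviceId is a placeholder for the device you want to switch to):
async function switchCamera(pc, cameraDeviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {deviceId: {exact: cameraDeviceId}}
  });
  const newTrack = stream.getVideoTracks()[0];
  // find the sender that is currently transmitting video
  const videoSender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  await videoSender.replaceTrack(newTrack); // no renegotiation if the conditions above are avoided
}
The snippet below applies the same idea to turning a microphone on and off: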
async switchMicrophone(on) {
  if (on) {
    console.log("Turning on microphone");
    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    this.localAudioTrack = stream.getAudioTracks()[0];
    const audioSender = this.peerConnection.getSenders().find(e => e.track?.kind === 'audio');
    if (audioSender == null) {
      console.log("Initiating audio sender");
      this.peerConnection.addTrack(this.localAudioTrack); // will create a sender; the streamless track must be handled on the other side
    } else {
      console.log("Updating audio sender");
      await audioSender.replaceTrack(this.localAudioTrack); // replaceTrack will do it gently, no new negotiation will be triggered
    }
  } else {
    console.log("Turning off microphone");
    this.localAudioTrack.stop(); // this will turn off the mic and make sure you don't have an active on-air indicator
  }
}
This is simplified code; it solves most of the issues described in this topic.

Capture screenshot of Electron window before quitting

In an Electron app, I can take a screenshot of my window from the main process using this:
let win = new BrowserWindow(/* ... */);
let capturedPicFilePath = /* where I want that saved */
win.capturePage((img) => {
fs.writeFile(capturedPicFilePath, img.toPng(), () => console.log(`Saved ${capturedPicFilePath}`))
})
Awesome. Now I'd like to do that right before the app quits. Electron emits a dedicated event for that, which I tried to use:
Event: 'before-quit': emitted before the application starts closing its windows.
Problem: if I use the same code as above in a handler for that event, the file is created but empty.
I'm guessing that's because the screenshot is taken asynchronously, and the window is already closed by the time it happens.
So this does not work for me:
app.on('before-quit', (event) => {
  win.capturePage((img) => {
    fs.writeFile(capturedPicFilePath, img.toPng(), () => console.log(`Saved ${capturedPicFilePath}`));
  });
});
Edit 1: Doing this in the renderer process with window.onbeforeunload fails too; it is also too late to perform the screenshot. I get this in the main console (i.e. it goes to the terminal):
Attempting to call a function in a renderer window that has been closed or released.
Context: for now I'm exploring the limits of what is possible with screen capture (essentially for support purposes), and I ran into this edge case. I'm not sure yet what I'd do with it; besides support, I'm also considering displaying a blurred picture of the previous state at startup (some parts of my app take 1-2 seconds to load).
Any ideas?
I have had a similar problem before. I got around it by listening for the window close event and preventing the close; once my action had completed, I ran app.quit().
let quitting = false;
window.on('close', function (event) {
  if (quitting) return; // let the close triggered by app.quit() go through
  event.preventDefault();
  let capturedPicFilePath = /* where you want it saved */;
  window.capturePage((img) => {
    fs.writeFile(capturedPicFilePath, img.toPng(), () =>
      console.log(`Saved ${capturedPicFilePath}`));
    quitting = true;
    app.quit(); // quit once the screenshot has saved
  });
});
Hope this helps!

Chrome Extension - Use javascript to run periodically, and log data permanently

Currently, I have a script such that when the icon in the top-right tray is clicked (only for one specific allowed website), it scans the page's HTML and outputs some value. This scanning and outputting is a function in a single JS file, say checkData.js.
Is it possible, even if a user is not actively using a tab but it is open, to automatically have the script run every 10 seconds and log data to some place I can access later within the extension? This is because the page's HTML is constantly changing. I suppose I would use alarms or event pages, but I am not sure how to integrate them.
Chrome limits the frequency of repeating alarms to at most once per minute. If that is OK, here is how to do it:
See here on how to set up an event page.
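For reference, an event page is declared in manifest.json roughly like this (a sketch in Manifest V2 syntax, since event pages are an MV2 concept; the storage permission is only needed for the logging idea shown further down):
{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0",
  "permissions": ["alarms", "storage"],
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  }
}
The "persistent": false line is what makes the background script an event page rather than a persistent background page.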
In the background.js you would do something like this:
// event: called when the extension is installed or updated or Chrome is updated
function onInstalled() {
  // CREATE ALARMS HERE
  ...
}

// event: called when Chrome first starts
function onStartup() {
  // CREATE ALARMS HERE
  ...
}

// event: alarm raised
function onAlarm(alarm) {
  switch (alarm.name) {
    case 'updatePhotos':
      // get the latest for the live photo streams
      photoSources.processDaily();
      break;
    ...
    default:
      break;
  }
}

// listen for extension install or update
chrome.runtime.onInstalled.addListener(onInstalled);
// listen for Chrome starting
chrome.runtime.onStartup.addListener(onStartup);
// listen for alarms
chrome.alarms.onAlarm.addListener(onAlarm);
Creating a repeating alarm is done like this:
const MSEC_IN_DAY = 24 * 60 * 60 * 1000; // milliseconds in a day
const MIN_IN_DAY = 24 * 60; // minutes in a day

// create a daily alarm to update live photo streams
function _updateRepeatingAlarms() {
  // add a daily alarm to update 500px and flickr photos
  chrome.alarms.get('updatePhotos', function(alarm) {
    if (!alarm) {
      chrome.alarms.create('updatePhotos', {
        when: Date.now() + MSEC_IN_DAY,
        periodInMinutes: MIN_IN_DAY
      });
    }
  });
}
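As for logging the data permanently: inside the alarm handler you could run your checkData.js scan and persist the result with chrome.storage.local, which survives browser restarts. A sketch (logScanResult, the history key, and scanResult are placeholder names, not part of any API):
function logScanResult(scanResult) {
  // append the new result to the stored history
  chrome.storage.local.get({history: []}, (data) => {
    data.history.push({time: Date.now(), value: scanResult});
    chrome.storage.local.set({history: data.history});
  });
}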

Oscilloscope of speaker input stops rendering after a few seconds

The following script reads audio from the user's microphone and renders an oscilloscope on an HTML canvas.
The source is taken from an example of the mozilla developer network: Visualizations with Web Audio API
And here is the fiddle: http://jsfiddle.net/b7j8pktp/
mozGetUserMedia
(note: the code has no fork mechanism for different browsers; it works only with Firefox)
It works fine for a few seconds and then immediately stops rendering.
Whereas this one works in a totally stable way: http://mdn.github.io/voice-change-o-matic/
The problem can be reduced to the following code. The microphone activation icon (next to the address bar in Firefox) disappears after about 5 seconds:
navigator.mozGetUserMedia({audio: true},
  function() {}, function() {});
(http://jsfiddle.net/b7j8pktp/2/)
This is a known bug in Firefox. Just take the stream from the getUserMedia call and hook it up to the window like so:
navigator.mozGetUserMedia({audio: true}, function(stream) {
  window.stream = stream;
  // rest of the code
}, function err() {
  // handle error
});
Hopefully we can get it fixed soon. The problem is that we're failing to add a reference to the stream when we make the AudioContext.createMediaStreamSource call, so the stream is no longer referenced by anything when the getUserMedia callback returns, and it is collected by the cycle collector when it runs, that is, a couple of seconds later.
You can follow along in https://bugzilla.mozilla.org/show_bug.cgi?id=934512.
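Until the fix lands, the workaround combined with the MDN-style analyser setup looks roughly like this (a sketch; the canvas drawing loop is elided):
navigator.mozGetUserMedia({audio: true}, function(stream) {
  window.stream = stream; // keep a reference so the stream is not garbage collected
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  var analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  var dataArray = new Uint8Array(analyser.fftSize);
  // in the drawing loop, fill dataArray with the current waveform:
  // analyser.getByteTimeDomainData(dataArray);
}, function(err) {
  console.error(err);
});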
