I have a webapp that records audio from the user's microphone. It sends the data to the server. The relevant code looks like this:
class Recorder {
  // ...
  setRecorder() {
    this.recorder = new RecordRTC(this.stream, {
      type: 'audio',
      mimeType: this.mimeType,
      recorderType: StereoAudioRecorder,
      timeSlice: 2000, // interval (ms) at which to send recorded data
      ondataavailable: async blob => { // send the data to the server
        let seq = this.seq++
        let data = await blob.arrayBuffer()
        if (this.socket.connected) {
          try {
            this.socket.emit('audio', {
              id: this.id,
              seq,
              mimeType: this.mimeType,
              data
            })
          } catch (e) {
            console.error('caught an error', e)
            this.stopRecording()
          }
        }
      },
      sampleRate: 44100,
      desiredSampleRate: 44100,
      numberOfAudioChannels: 1
    })
  }
}
On the server side (Express.js), I send the data as it's received to any interested clients. Here's the relevant code:
app.get('/play', (req, res, next) => {
  try {
    let id = req.query.id
    let mimeType
    if (!recordings[id]) {
      // ...
    }
    emitter // the EventEmitter that's handling this
      .on(`audio ${id}`, data => {
        if (!mimeType) {
          mimeType = data.mimeType
          res.writeHead(200, {'Content-Type': mimeType})
        }
        res.write(data.data)
      })
      .on(`close ${id}`, () => {
        console.debug({type: 'audio close', id})
        res.end()
      })
  } catch (e) {
    next(e)
  }
})
The issue is that each chunk I get from the client appears to be a complete WAV file, and simply concatenating such files doesn't work: when you play the result, you only hear the first chunk.
I've been searching for hours for information about how to concatenate the input files (or any other method that would result in a stream that can be listened to). It seems that there's very little information out there about this topic.
I've been looking in particular at ffmpeg, but despite its purported ability to concatenate files, it expects them all to be listed on the command line. Since I'm receiving streaming data, I can't practically list filenames in advance; I'd have to send multiple files in on stdin, and that doesn't work.
Can anyone point me in the right direction? I would think that concatenating audio files would be a common need, but I can't find any tools that are capable of doing it without knowing in advance all of the data to be processed. Or am I barking up the wrong tree here?
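For reference, one direction I've been considering (a sketch, untested against my exact setup): since each chunk from StereoAudioRecorder appears to be a self-contained PCM WAV file with the canonical 44-byte RIFF header, I could forward the first chunk intact and strip the header from every later chunk before writing it, yielding one continuous WAV stream. The size fields in the first header would be wrong, which many players tolerate for live streams; files with extra chunks (LIST, fact, ...) would need real header parsing.

```javascript
// Sketch: strip the canonical 44-byte RIFF/WAVE header from every chunk
// after the first, so the concatenated output is one continuous WAV stream.
// Assumes each chunk is a canonical PCM WAV file (44-byte header).
const WAV_HEADER_BYTES = 44;

function appendWavChunk(out, chunk, isFirst) {
  const buf = Buffer.from(chunk);
  if (isFirst) {
    out.write(buf); // keep the header of the first chunk
  } else {
    out.write(buf.subarray(WAV_HEADER_BYTES)); // raw PCM samples only
  }
}
```

In the /play handler this would mean tracking whether a first chunk has been written for each listener, then calling appendWavChunk(res, data.data, isFirst).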
Related
In my Node backend, I have an endpoint that fetches a few videos from where they're stored in Google Cloud and returns them as the response. Here is my code for that:
app.get("/getGalleryVideos", async (req, res) => {
  const [files] = await bucket.getFiles(bucketName, { prefix: req.query.user });
  const fileNames = files.map((file) => file.name);
  const sumFiles = [];
  for (let i = 0; i < fileNames.length; i++) {
    await bucket
      .file(fileNames[i])
      .download()
      .then((data) => {
        sumFiles.push(data);
      });
  }
  res.json(sumFiles);
});
Then, when I receive these videos as the response in the React frontend, I want to convert them into a format that can be used as the source for an HTML video tag. For now I'm just trying to get one video to work, so when I receive the response as a list, I just take the first item and work with that. Here is my code for that:
function getGalleryVideos() {
  var user = "undefined";
  if (typeof Cookies.get("loggedInUser") != "undefined") {
    user = Cookies.get("loggedInUser");
  }
  axios({
    method: "get",
    url: `http://localhost:3001/getGalleryVideos`,
    params: {
      user: user,
    },
    timeout: 100000,
  }).then((res) => {
    console.log("res");
    console.log(res);
    console.log("res.data");
    console.log(res.data);
    console.log("res.data[1]");
    console.log(res.data[1]);
    console.log("res.data[1][0]");
    console.log(res.data[1][0]);
    console.log("res.data[1][0].data");
    console.log(res.data[1][0].data);
    const vid = new Blob(res.data[1][0].data, { type: "video/mp4" });
    console.log("vid");
    console.log(vid);
    console.log("url");
    console.log(URL.createObjectURL(vid));
    setVideo(URL.createObjectURL(vid));
  });
}
When I make this request, the console output looks plausible, but if I navigate to the object URL I printed in the console, the video isn't there. I'm not sure why this is happening, but one thing I have noticed is that my console prints show the buffer's size is 773491 before I convert it to a Blob, and 1987122 after, which seems quite far off.
Please help explain how I can fix this, or provide a different way I can send these videos over so that I can display them.
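One thing I suspect (a sketch, based on my reading of how res.json serializes binary data): a Node Buffer serializes to JSON as { type: 'Buffer', data: [...] }, an array of byte values. Passing that number array straight to the Blob constructor stringifies each number, which would explain the size jump; wrapping it in a Uint8Array should restore the original bytes.

```javascript
// Sketch: rebuild a Blob from a JSON-serialized Node Buffer.
// `bufferJson` is the { type: 'Buffer', data: [...] } object that
// res.json() produces for a Buffer.
function bufferJsonToBlob(bufferJson, mimeType) {
  const bytes = new Uint8Array(bufferJson.data); // bufferJson.data is number[]
  return new Blob([bytes], { type: mimeType });
}
```

In my frontend code that would replace the Blob line with something like const vid = bufferJsonToBlob(res.data[1][0], "video/mp4").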
I have a simple Node application that lets me pass it an AWS S3 URL to a file (in this case, video files). It uses the FFMPEG library to read the video file and return data like codecs, duration, bitrate, etc.
The script is called from a PHP script, which sends the request to the Node endpoint and passes along the Amazon S3 URL. Sometimes, for no obvious reason, the request fails to return the expected values for container, codec, duration, etc., and just returns '0'. But when I try the exact same file/request again, it returns the data correctly, e.g. container: mp4.
I'm not sure, but I think the script somehow needs the createWriteStream to be closed. The problem happens only sporadically, so it's hard to get to the bottom of it when it's difficult to replicate.
Any ideas?
router.post('/', async function(req, res) {
  const fileURL = new URL(req.body.file);
  var path = fileURL.pathname;
  path = 'tmp/' + path.substring(1); // removes the initial / from the path
  let file = fs.createWriteStream(path); // create the file locally
  const request = https.get(fileURL, function(response) {
    response.pipe(file);
  });
  // after the file has been saved
  file.on('finish', function () {
    var process = new ffmpeg(path);
    process.then(function (video) {
      let metadata = formatMetadata(video.metadata);
      res.send({
        status: '200',
        data: metadata,
        errors: errors,
        response: 'success'
      });
    }, function (err) {
      console.warn('Error: ' + err);
      res.send({
        status: '400',
        data: 'Something went wrong processing this video',
        response: 'fail',
      });
    });
  });
  file.on('error', function (err) {
    console.warn(err);
  });
});
function formatMetadata(metadata) {
  const data = {
    'video': metadata.video,
    'audio': metadata.audio,
    'duration': metadata.duration
  };
  return data;
}
// Expected output
{"data":{"video":{"container":"mov","bitrate":400,"stream":0,"codec":"h264","resolution":{"w":1280,"h":720},"resolutionSquare":{"w":1280,"h":720},"aspect":{"x":16,"y":9,"string":"16:9","value":1.7777777777777777},"rotate":0,"fps":25,"pixelString":"1:1","pixel":1},"audio":{"codec":"aac","bitrate":"127","sample_rate":44100,"stream":0,"channels":{"raw":"stereo","value":2}},"duration":{"raw":"00:00:25.68","seconds":25}}}
// Actual output
{"data":{"video":{"container":"","bitrate":0,"stream":0,"codec":"","resolution":{"w":0,"h":0},"resolutionSquare":{"w":0,"h":null},"aspect":{},"rotate":0,"fps":0,"pixelString":"","pixel":0},"audio":{"codec":"","bitrate":"","sample_rate":0,"stream":0,"channels":{"raw":"","value":""}},"duration":{"raw":"","seconds":0}}}
Note - this happens sporadically
You are not accounting for a failed fetch from AWS. You should check the status code of the response before you move on to your pipe.
const request = https.get(fileURL, function(response) {
  if (response.statusCode === 200) {
    response.pipe(file);
  } else {
    // Handle the error case
  }
});
I'm trying to get the contents of a file using the Google Drive API v3 in Node.js.
I read in this documentation that I get a stream back from drive.files.get({fileId, alt: 'media'}), but that isn't the case: I get a promise back.
https://developers.google.com/drive/api/v3/manage-downloads
Can someone tell me how I can get a stream from that method?
I believe your goal and situation are as follows:
You want to retrieve the stream type from the method drive.files.get.
You want to achieve this using googleapis with Node.js.
You have already done the authorization process for using the Drive API.
For this, how about this answer? In this case, please use responseType. Ref
Pattern 1:
In this pattern, the file is downloaded as the stream type and it is saved as a file.
Sample script:
var dest = fs.createWriteStream("###"); // Please set the filename of the saved file.
drive.files.get(
  {fileId: id, alt: "media"},
  {responseType: "stream"},
  (err, {data}) => {
    if (err) {
      console.log(err);
      return;
    }
    data
      .on("end", () => console.log("Done."))
      .on("error", (err) => {
        console.log(err);
        return process.exit();
      })
      .pipe(dest);
  }
);
Pattern 2:
In this pattern, the file is downloaded as the stream type and it is put to the buffer.
Sample script:
drive.files.get(
  {fileId: id, alt: "media"},
  {responseType: "stream"},
  (err, {data}) => {
    if (err) {
      console.log(err);
      return;
    }
    let buf = [];
    data.on("data", (e) => buf.push(e));
    data.on("end", () => {
      const buffer = Buffer.concat(buf);
      console.log(buffer);
    });
  }
);
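Note: drive.files.get also has a promise form, which may be what you are seeing. With responseType: "stream", the promise resolves to a response object whose data property is the readable stream. A sketch (assuming the same drive, id and dest as in the patterns above):

```javascript
// Sketch: promise form of the same call. The resolved response object
// carries the readable stream on its `data` property.
async function downloadFile(drive, id, dest) {
  const res = await drive.files.get(
    { fileId: id, alt: "media" },
    { responseType: "stream" }
  );
  return new Promise((resolve, reject) => {
    res.data
      .on("end", resolve)
      .on("error", reject)
      .pipe(dest);
  });
}
```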
Reference:
Google APIs Node.js Client
I am reading a file, zipping and encrypting it, and then uploading/writing it over the network. But I need to know the content-length of the final stream (the stream returned after passing through read, zip, and encryption) to make the POST request.
let gzip = zlib.createGzip(),
    encrypt = crypto.createCipheriv(....),
    input = fs.createReadStream('file.jpg');

function zipAndEncrypt() {
  let stream = input.pipe(gzip).pipe(encrypt);
  let options = {
    "stream_length": 0,
    headers: {
      "content-type": 'image/jpeg',
      "content-length": '123456', // need to get this length
      .....
    }
  }
  // post the stream
  needle('post', url, stream, options)
    .then(resp => { console.log("file length", resp.body.length); })
    .catch(err => {})
}
Above code works if I enter the correct content length in headers ( in this case I knew the length ). So I need to find the length of the stream.
so far I achieved the length by :
let chunk = [], conLength;
stream.on('data', (data) => {
  chunk.push(data);
})
.on('end', () => {
  conLength = Buffer.concat(chunk).length;
});
But then the POST request fails with a socket hang up error.
It looks like the stream has been drained or consumed, as it doesn't emit 'data' events again after finding the length with the code above.
I tried stream.resume(), but nothing works. Could you please suggest how to find the length of the stream without consuming it?
If you need to send the content length, the only way to know it is after the file has been zipped and encrypted.
So your solution works, but only if you send the buffer and not the stream, because you have already consumed all the data from the stream. And since you already have all the chunks in memory, you might as well send them.
let chunk = [];
stream.on('data', data => chunk.push(data))
  .on('end', () => {
    const buffer = Buffer.concat(chunk);
    const conLength = buffer.length;
    // Execute the request here, sending the whole buffer, not the stream
    needle(/*...*/)
  });
But if your file is too big, you're required to stream it, otherwise you will run out of memory. An easy workaround, with a little overhead, is to pipe it to a temporary file and then send that file. That way you can know the file size before performing the request, by accessing the stream.bytesWritten property or using fs.lstat.
function zipAndEncrypt(input) {
  const gzip = zlib.createGzip();
  const encrypt = crypto.createCipheriv(algo, key, iv);
  const stream = input.pipe(gzip).pipe(encrypt);
  const fileName = tmpFileName();
  const file = fs.createWriteStream(fileName);
  stream
    .pipe(file)
    .on('finish', () => {
      let options = {
        "stream_length": 0,
        headers: {
          "content-type": 'image/jpeg',
          "content-length": file.bytesWritten
        }
      };
      const readStream = fs.createReadStream(fileName);
      // post the stream
      needle('post', url, readStream, options)
        .then(resp => {
          console.log("file length", resp.body.length);
        })
        .catch(err => {})
        .finally(() => {
          // Remove the file from disk
        });
    });
}
Alright, so I'm fairly new to Node/Express. I've got an Angular chart, and a React-based Node exporting service.
When you click a chart's export button, it should create an SVG, pass it to the export service in an axios.post request, convert it to a React component, and export the HTML -> PNG via an in-memory image stream, which should be downloaded on the client when we get a successful response back.
I've tried a Blob, a Buffer, hacking an image.src, etc. I can't seem to get it.
The response passes back the raw image string, so all of that works (if that's what I should be returning)... but I can't get it to download on the client. After digging, I found that res.download only works with GET, but I specifically need a POST here to pass the SVG and other data up to the export server (too large for query params).
I'm not sure if it's just that I'm converting it incorrectly (stream) and I can't parse it correctly on the client, or if I'm missing something key.
Here are some snippets:
Angular Service
// ...stuff...
exportPNG(chart) {
  const exportServiceURL = 'someUrl.com';
  const svg = chart.toSvg();
  const params = { svg, ...someOtherData };
  const exportUrl = `${exportServiceURL}/exports/image/chart`;
  return axios.post(exportUrl, params).then(res => {
    // ... I need to download the response's image somehow, here?
    // res.data looks like "�PNG IHDR�9]�{pHYs��IDATx^�}T[Ǚ�w6�\(-��c[؍�� ')�...
  });
}
React/Export service:
// ...stuff...
async getChart(req, res) {
  const deferred = async.defer();
  const filename = 'reportingExport.png';
  // Returns HTML, wrapped around the SVG string.
  const html = this.getChartHtml(req.body);
  const stream = await this.htmlToImageStream(html, filename);
  stream.on('open', () => this.pipeStream(req, res, stream, filename));
}

async htmlToImageStream(html, tempFilename) {
  const deferred = async.defer();
  webshot(html, tempFilename, { siteType: 'html' }, err => {
    if (err) deferred.error(err);
    const stream = fs.createReadStream(tempFilename);
    // Does cleanup, fs.unlink, not important
    stream.on('end', () => this.imageExportStreamEnd(stream, tempFilename));
    deferred.resolve(stream);
  });
  return deferred.promise;
}

pipeStream(req, res, stream, filename) {
  res.set('Content-Type', 'application/png');
  // res.attachment(filename); // this didn't work
  res.download(`${path.join(__dirname, '../../../')}$(unknown)`, filename);
  stream.pipe(res); // Seems to return an actual PNG
}