I'm creating a button to record a canvas using FFmpeg. Here's the code that finalizes the download process.
const recordButton = document.querySelector("#record")
recordButton.addEventListener('click', function () {
    function startRecording() {
        const { createFFmpeg } = FFmpeg;
        const ffmpeg = createFFmpeg({
            log: true
        });

        // Transcode the recorded webm to mp4 and expose it for playback/download.
        var transcode = async (webcamData) => {
            var name = `record${id}.webm`;
            await ffmpeg.load();
            await ffmpeg.write(name, webcamData);
            await ffmpeg.transcode(name, `output${id}.mp4`);
            var data = ffmpeg.read(`output${id}.mp4`);
            var video = document.getElementById('output-video');
            video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
            dl.href = video.src;
        }

        fn().then(async ({ url, blob }) => {
            transcode(new Uint8Array(await (blob).arrayBuffer()));
        })
        ...
        id += 1
    }
The problem arises with the transcode function. The button works on the first click, but every subsequent attempt (within a single page load) fails at just that async function. I'm not well versed enough in it to know why it would only work once. That said, I do know it is the only bit of code that does not fire on a second attempt.
It could be a few things. This is borrowed code, and I've repurposed it for multiple uses. I may have messed up the declarations. It may be an async issue. I tried to use available values to rig up a secondary, similar function, but that would defeat the purpose of the first.
I tried clearing and appending the DOM elements affected, but that doesn't do anything.
It seems to have something to do with:
await ffmpeg.load();
While FFmpeg has to download and load the library the first time, it does not have to do so on the second initialization. That may be the step that is not firing on successive uses.
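For reference, a minimal sketch of how the load could be guarded so it only runs once; the loaded flag and ensureLoaded helper are made-up names, just to illustrate the idea, not code I already have:

// Sketch only: one shared ffmpeg instance, loaded at most once per page load.
const { createFFmpeg } = FFmpeg;
const ffmpeg = createFFmpeg({ log: true });
let loaded = false; // hypothetical flag, not part of the original code

async function ensureLoaded() {
    if (!loaded) {
        await ffmpeg.load(); // download and initialize the core only on the first call
        loaded = true;
    }
}

async function transcode(webcamData, id) {
    await ensureLoaded();
    const name = `record${id}.webm`;
    await ffmpeg.write(name, webcamData);
    await ffmpeg.transcode(name, `output${id}.mp4`);
    return ffmpeg.read(`output${id}.mp4`);
}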
I got an example working great.
Now I'm trying to modify that working example and come up with a way to extract specific data.
For example: the frame rate.
I'm thinking the syntax should be something like result.frameRate.
See below, where I tried console.log("Frame: " + result.frameRate) and also tried Buzz's suggestion of result.media.track[0].FrameRate; neither works.
<button class="btn btn-default" id="getframe" onclick="onClickMediaButton()">Get Frame Rate</button>
<script type="text/javascript" src="https://unpkg.com/mediainfo.js/dist/mediainfo.min.js"></script>

const onClickMediaButton = (filename) => {
    //const file = fileinput.files[0]
    const file = "D:\VLCrecords\Poldark-Episode6.mp4"
    if (file) {
        output222.value = 'Working…'
        const getSize = () => file.size
        const readChunk = (chunkSize, offset) =>
            new Promise((resolve, reject) => {
                const reader = new FileReader()
                reader.onload = (event) => {
                    if (event.target.error) {
                        reject(event.target.error)
                    }
                    resolve(new Uint8Array(event.target.result))
                }
                reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize))
            })
        mediainfo
            .analyzeData(getSize, readChunk)
            .then((result) => {
                consoleLog("Frame: " + result.media.track[0].FrameRate);
                output222.value = result;
            })
            .catch((error) => {
                output222.value = `An error occured:\n${error.stack}`
            })
    }
}
but I can't figure out the exact syntax. Can you help point me in the right direction?
Short answer
Use result.media.track[0].FrameRate.
Long answer
The type of result depends on how you instantiated the library. Your example does not provide enough information on how you are using the library.
From the docs:
MediaInfo(opts, successCallback, errorCallback)
Where opts.format can be object, JSON, XML, HTML or text. So, assuming you used format: 'object' (the default), result will be a JavaScript object.
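For illustration, a minimal sketch of that instantiation; the callback bodies are placeholders, and format: 'object' is the default and could be omitted:

MediaInfo({ format: 'object' }, (mediainfo) => {
    // success: mediainfo.analyzeData(getSize, readChunk) can be called from here
}, (error) => {
    // error: the library failed to load
    console.error(error);
});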
The structure of the result object depends on the data you provide to MediaInfoLib (or in this case mediainfo.js which is an Emscripten port of MediaInfoLib). The information about the framerate will only be available if you feed a file with at least one video track to the library.
Assuming this is the case, you can access the list of tracks using result.media.track. Given that the video track you are interested in has the index 0, the access to the desired property would be result.media.track[0].FrameRate. This is true for a large amount of video files that usually have at least one video track and have this track as the first available track. Note that this won't necessarily work for all video files and you must make sure your code is fault-tolerant in case these properties don't exist on the result object.
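To make that fault tolerance concrete, here is a small sketch; selecting the track whose '@type' is 'Video' instead of hard-coding index 0 is my addition, assuming the default object format:

const tracks = (result.media && result.media.track) || [];
const videoTrack = tracks.find((track) => track['@type'] === 'Video');
if (videoTrack && videoTrack.FrameRate !== undefined) {
    console.log(`Framerate: ${videoTrack.FrameRate}`);
} else {
    console.log('No video track / no FrameRate field in this file');
}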
Unfortunately, it seems there is no detailed list of available fields in the MediaInfoLib documentation. You could look at the source code. This might get tedious and lengthy and requires you to understand a fair amount of C++. The most convenient way for me though is to just feed MediaInfoLib a file and look at the result.
PS: The question was already answered in this GitHub issue.
Disclaimer: I'm the author of the aforementioned Emscripten port mediainfo.js.
EDIT:
I don't think you are correct when saying my answer is "partial-incomplete" and "does not work."
As a proof here is a working snippet. Obviously you need to use a media file with a video track.
const fileinput = document.getElementById('fileinput');

const onChangeFile = (mediainfo) => {
    const file = fileinput.files[0];
    if (file) {
        const getSize = () => file.size;
        const readChunk = (chunkSize, offset) =>
            new Promise((resolve, reject) => {
                const reader = new FileReader();
                reader.onload = (event) => {
                    resolve(new Uint8Array(event.target.result));
                }
                reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
            });
        console.log(`Processing ${file.name}`);
        mediainfo
            .analyzeData(getSize, readChunk)
            .then((result) => {
                const frameRate = result.media.track[0].FrameRate; // <- Here we read the framerate
                console.log(`Framerate: ${frameRate}`);
            })
    }
}

MediaInfo(null, (mediainfo) => {
    fileinput.addEventListener('change', () => onChangeFile(mediainfo));
    fileinput.disabled = false;
})

<script src="https://unpkg.com/mediainfo.js@0.1.4/dist/mediainfo.min.js"></script>
<input disabled id="fileinput" type="file" />
Yes, I know there are a lot of threads on here regarding this topic. However, I've found that in most cases it is kind of dependent on what your music code looks like. Therefore, I decided not to mess around too much and potentially break my code or clutter it with stuff I won't need.
So here I go.
Below is the code I have in my play.js file. Most of it I got from a guide I found just to get me going, then I tweaked it a bit to be more suitable for my use.
const discord = require('discord.js');
//Setting up constants
const ytdl = require("ytdl-core");
const ytSearch = require("yt-search");

const voiceChannel = msg.member.voice.channel;

// Check if user is in voiceChannel
if (!voiceChannel) return msg.channel.send(errorVoiceEmbed);

// Check if we have the correct permissions
const permissions = voiceChannel.permissionsFor(msg.client.user);
if (!permissions.has("CONNECT")) return msg.channel.send(permsEmbed);
if (!permissions.has("SPEAK")) return msg.channel.send(permsEmbed);

// Check if a second argument has been passed
if (!args.length) return msg.channel.send(playEmbed);

// Validating the passed URL
const validURL = (str) => {
    var regex = /(http|https):\/\/(\w+:{0,1}\w*)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%!\-\/]))?/;
    if (!regex.test(str)) {
        return false;
    } else {
        return true;
    }
}

// If we have a valid URL, load it and play the audio
if (validURL(args[0])) {
    const connection = await voiceChannel.join();
    const stream = ytdl(args[0], {filter: 'audioonly'});
    connection.play(stream, {seek: 0, volume: 1})
        // Leave when done
        .on('finish', () => {
            voiceChannel.leave();
            msg.channel.send(completedEmbed);
        });
    await msg.reply(playingEmbed);
    return
}

const connection = await voiceChannel.join();

// If a user enters a search query instead of a link, search YouTube and play a result.
const videoFinder = async (query) => {
    const videoResult = await ytSearch(query);
    return (videoResult.videos.length > 1) ? videoResult.videos[0] : null;
}

const video = await videoFinder(args.join(' '));
if (video) {
    const stream = ytdl(video.url, {filter: 'audioonly'});
    connection.play(stream, {seek: 0, volume: 1})
        .on('finish', () => {
            voiceChannel.leave();
            msg.channel.send(completedEmbed);
        });
    await msg.reply(playingEmbed);
} else {
    msg.channel.send(noResultsEmbed);
}
So, how would I go about adding a proper queue to this? I'm looking for a separate queue command that ties in cleanly with the play command, so I'm going to be using two different files that would need to communicate through a song list of some sort. How this would be done, though, I'm unsure. I looked into just using arrays for this but didn't manage to get that working.
And before you ask: the embeds used in those msg.channel.send statements are configured earlier in the file. I didn't include them here, but they're there.
PLEASE NOTE:
I'm not looking for a complete solution. I just want some tips and hints as to how I can solve this in a simple and effective way without having to clutter up the code I already have.
That being said, on its own the code for the play command works perfectly. It plays either the requested song from the link provided or a song from the search query. But when a new song is requested, the old one just stops and the new one plays. You all know how this should go: if no song is playing, play it; if a song is already playing, add the requested song to the queue and play it once the previous song is done, and so on.
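To make the idea a bit more concrete, this is roughly the kind of shared song list I was picturing when I tried using arrays. It is only a rough sketch with made-up names (queues, getQueue, playNext), not working code I have:

// Sketch of a shared song list (e.g. in queue.js), keyed by guild ID,
// that both play.js and a separate queue command could require:
const queues = new Map(); // guildId -> { connection, songs: [] }

function getQueue(guildId) {
    if (!queues.has(guildId)) {
        queues.set(guildId, { connection: null, songs: [] });
    }
    return queues.get(guildId);
}

// Sketch of how play.js could use it: push the new song, only start
// playback if nothing is playing yet, and on 'finish' play the next entry.
function playNext(queue, voiceChannel, msg) {
    const song = queue.songs.shift();
    if (!song) {
        voiceChannel.leave();
        return msg.channel.send(completedEmbed);
    }
    const stream = ytdl(song.url, { filter: 'audioonly' });
    queue.connection.play(stream, { seek: 0, volume: 1 })
        .on('finish', () => playNext(queue, voiceChannel, msg));
}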
How to give an argument to command explorer.newFile in Visual Studio Code extension development?
I just want to create a file with a certain name in the Explorer view, then open it and insert a snippet; these operations will be carried out in one function. Help ^_^
I could certainly be wrong but I am not sure explorer.newFile will take an argument.
Nevertheless, this function will do what you want: create a file, open it, and insert an existing snippet:
async function createFileOpen() {
    const we = new vscode.WorkspaceEdit();
    const thisWorkspace = vscode.workspace.workspaceFolders[0].uri.toString();

    // if you want it to be in some folder under the workspaceFolder: append a folder name
    // const uriBase = `${thisWorkspace}/folderName`;
    // let newUri1 = vscode.Uri.parse(`${uriBase}/index.js`);

    // create a Uri for the file to be created
    const newUri = vscode.Uri.parse(`${thisWorkspace}/myTestIndex.js`);

    // create an edit that will create a file
    we.createFile(newUri, { ignoreIfExists: false, overwrite: true });
    await vscode.workspace.applyEdit(we); // actually apply the edit: in this case file creation

    await vscode.workspace.openTextDocument(newUri).then(async document => {
        await vscode.window.showTextDocument(document);
        // if you are using a predefined snippet
        await vscode.commands.executeCommand('editor.action.insertSnippet', { name: 'My Custom Snippet Label Here' });
    });
}
I'm using Electron to develop an app. After some encryption operations are done, I need to show a dialog so the user can save the file. The filename I want to give the file is a random hash, but I've had no success with that either. I'm trying the code below, but the file is never saved. How can I fix this?
const downloadPath = app.getPath('downloads')

ipcMain.on('encryptFiles', (event, data) => {
    let output = [];
    const password = data.password;

    data.files.forEach((file) => {
        const buffer = fs.readFileSync(file.path);
        const dataURI = dauria.getBase64DataURI(buffer, file.type);
        const encrypted = CryptoJS.AES.encrypt(dataURI, password).toString();
        output.push(encrypted);
    })

    const filename = hash.createHash('md5').toString('hex');
    console.log(filename)

    const response = output.join(' :: ');

    dialog.showSaveDialog({ title: 'Save encrypted file', defaultPath: downloadPath }, () => {
        fs.writeFile(`${filename}.mfs`, response, (err) => console.log(err))
    })
})
The problem you're experiencing results from the asynchronous nature of Electron's UI functions: they do not take callback functions, but return promises instead. Thus, you do not have to pass in a callback function, but rather handle the promise's resolution. Note that this only applies to Electron >= version 6. If, however, you run an older version of Electron, your code would be correct -- but then you should really update to a newer version (Electron v6 was released well over a year ago).
Adapting your code like below can be a starting point to solve your problem. However, since you do not state how you generate the hash (where does hash.createHash come from? did you forget to declare/import hash? did you forget to pass any message string? are you using hash as an alias for Node.js' crypto module?), it is (at this time) impossible to debug why you do not get any output from console.log(filename) (I assume you mean this by "in the code, the random filename will not be created"). Once you provide more details on this problem, I'd be happy to update this answer accordingly.
As for the default filename: as per the Electron documentation, you can pass a file path into dialog.showSaveDialog() to provide the user with a default filename.
The file extension you're using should also be passed into the save dialog. Passing this extension as a filter will additionally prevent users from selecting any other file type, which is ultimately what you're currently doing by appending it to the filename.
Also, you could utilise CryptoJS for the filename generation: given some arbitrary string, which could really be random bytes, you could do filename = CryptoJS.MD5('some text here') + '.mfs';. However, remember to choose the input string wisely. MD5 has been broken and should thus no longer be used to store secrets -- using any known information which is crucial for the encryption of the files you're storing (such as data.password) is inherently insecure. There are some good examples of how to create random strings in JavaScript around the internet, along with this answer here on SO.
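As a small sketch of that last point, assuming CryptoJS is already imported as in your code (the 16-byte length is an arbitrary choice; Node's built-in crypto.randomBytes would work just as well in the main process):

// Hex-encode 16 random bytes and use the result as the filename.
const randomName = CryptoJS.lib.WordArray.random(16).toString() + '.mfs';
console.log(randomName); // 32 hex characters followed by ".mfs"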
Taking all these issues into account, one might end up with the following code:
const downloadPath = app.getPath('downloads'),
      path = require('path');

ipcMain.on('encryptFiles', (event, data) => {
    let output = [];
    const password = data.password;

    data.files.forEach((file) => {
        const buffer = fs.readFileSync(file.path);
        const dataURI = dauria.getBase64DataURI(buffer, file.type);
        const encrypted = CryptoJS.AES.encrypt(dataURI, password).toString();
        output.push(encrypted);
    })

    // not working:
    // const filename = hash.createHash('md5').toString('hex') + '.mfs';
    // alternative requiring more research on your end
    const filename = CryptoJS.MD5('replace me with some random bytes') + '.mfs';
    console.log(filename);

    const response = output.join(' :: ');

    dialog.showSaveDialog(
        {
            title: 'Save encrypted file',
            defaultPath: path.format({ dir: downloadPath, base: filename }), // construct a proper path
            filters: [{ name: 'Encrypted File (*.mfs)', extensions: ['mfs'] }] // filter the possible files
        }
    ).then((result) => {
        if (result.canceled) return; // discard the result altogether; user has clicked "cancel"
        else {
            var filePath = result.filePath;
            if (!filePath.endsWith('.mfs')) {
                // This is an additional safety check which should not actually trigger.
                // However, generally appending a file extension to a filename is not a
                // good idea, as they would be (possibly) doubled without this check.
                filePath += '.mfs';
            }
            fs.writeFile(filePath, response, (err) => console.log(err))
        }
    }).catch((err) => {
        console.log(err);
    });
})
For example, I want to load a 100 MB MP3 file into an AudioContext, and I can do that using XMLHttpRequest.
But with this solution I have to load the whole file before I can play it, because the onprogress handler doesn't return any data.
xhr.onprogress = function(e) {
    console.log(this.response); // returns null
};
I also tried to do it with fetch, but that approach has the same problem.
fetch(url).then((data) => {
    console.log(data); // returns a ReadableStream in body,
                       // but I can't find a way to use it
});
Is there any way to load an audio file as a stream in client-side JavaScript?
You need to handle the AJAX response in a streaming way.
There is no standard way to do this until fetch and ReadableStream have been properly implemented across all browsers.
I'll show you the most correct way, according to the new standard, to deal with streaming an AJAX response:
// only works in Blink right now
fetch(url).then(res => {
    let reader = res.body.getReader()
    let pump = () => {
        reader.read().then(({value, done}) => {
            value // chunk of data (push chunk to audio context)
            if (!done) pump()
        })
    }
    pump()
})
Firefox is working on implementing streams, but until then you need to use XHR and moz-chunked-arraybuffer.
IE/Edge has ms-stream, which you can use, but it's more complicated.
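For reference, a rough sketch of that Firefox-specific fallback (moz-chunked-arraybuffer is non-standard and was later removed from Firefox, so treat this purely as an illustration):

// Firefox only: each progress event exposes just the newly received bytes
// as an ArrayBuffer in xhr.response instead of buffering the whole body.
const xhr = new XMLHttpRequest()
xhr.open('GET', url)
xhr.responseType = 'moz-chunked-arraybuffer'
xhr.onprogress = () => {
    const chunk = xhr.response // ArrayBuffer with only the new chunk
    // push chunk to the audio context here
}
xhr.send()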
How can I send value.buffer to AudioContext?
This only plays the first chunk and it doesn't work correctly.
const context = new AudioContext()
const source = context.createBufferSource()
source.connect(context.destination)

const reader = response.body.getReader()
while (true) {
    await reader.read()
    const { done, value } = await reader.read()
    if (done) {
        break
    }
    const buffer = await context.decodeAudioData(value.buffer)
    source.buffer = buffer
    source.start(startTime)
}