I am currently using fluent-ffmpeg to merge video files. (https://github.com/fluent-ffmpeg/node-fluent-ffmpeg)
Unfortunately, the loop in which I queue files to merge fails at around the thousandth file with the exception below:
Error ENAMETOOLONG in /node_modules/fluent-ffmpeg.
My question is:
how can I get around this error when the generated command contains an unlimited number of characters?
This is an old question and unfortunately there's no code to pinpoint the problem for sure. But getting ENAMETOOLONG from ffmpeg on Windows usually means the command line really is too long, and merging thousands of files makes that quite plausible.
It still happens in 2020, but there is a workaround: put the source filenames (for the merge) into a text file and pass that text file to ffmpeg as the input.
A raw ffmpeg call would look like:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.wav
with mylist.txt being like:
file '/path/to/file1.wav'
file '/path/to/file2.wav'
file '/path/to/file3.wav'
With fluent-ffmpeg it's not intuitive but still feasible:
const ffmpeg = require('fluent-ffmpeg');

const cmd = ffmpeg();
cmd.input('mylist.txt')
  .inputOptions(['-f concat', '-safe 0'])
  .output('out.wav')
  .run();
Note: Be careful with absolute paths on Windows, both for the list file and for the source files inside it. Most ffmpeg versions prepend the directory of the list file to each source path, producing a corrupted path like:
c:/ffmpeg/lists/c:/audiofiles/file1.wav
But you can still solve this by using URL-formatted source paths:
file 'file:c:/audiofiles/file1.wav'
file 'file:c:/audiofiles/file2.wav'
file 'file:c:/audiofiles/file3.wav'
I'm sure this will be helpful to someone searching for this ffmpeg error :-)
I added a "scripts" folder as a passthrough copy to my Eleventy site and installed some npm dependencies inside it for scripts that run on page load.
So my .eleventy.js has some lines like this:
eleventyConfig.addPassthroughCopy("img");
eleventyConfig.addPassthroughCopy("scripts");
eleventyConfig.addPassthroughCopy("css");
But when I run npx eleventy, I get a build error that says,
Language does not exist: sh
Problem writing Eleventy templates: (more in DEBUG output)
> Having trouble rendering liquid (and markdown) template ./scripts/wb-service-worker/node_modules/bs-fetch/README.md
Why is it trying to "render liquid" in a passthrough copy? (I thought the whole point of passthrough copies is that Eleventy doesn't try to parse them.) How do I get it to stop?
addPassthroughCopy will copy the files through without parsing them, but if a file is also inside the input directory, Eleventy will still process it in its normal way.
You should keep the assets that you want passthrough-copied in a separate folder from the source files that you are feeding to Eleventy for processing.
See these docs for more help:
Input Directory
Ignoring Files
Passthrough's handling of input files
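A minimal .eleventy.js sketch of that separation, assuming the templates live in a src directory (the directory names here are an assumption, not from the question):

```javascript
// .eleventy.js — templates live in "src"; asset folders sit outside it,
// so Eleventy only passthrough-copies them and never tries to render
// anything inside them (including node_modules READMEs).
module.exports = function (eleventyConfig) {
  eleventyConfig.addPassthroughCopy("img");
  eleventyConfig.addPassthroughCopy("css");
  eleventyConfig.addPassthroughCopy("scripts");
  return {
    dir: { input: "src", output: "_site" },
  };
};
```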
I have a Discord bot that I've been maintaining for a year, and a couple of months ago I restructured the files a bit to clean them up and make it easier for me to know what's going on.
The thing is, whenever I try to require a file from a folder located in the bot's root directory, sometimes it works with "./" and other times it works with "../".
The current file structure is:
----commands
-------commands.js(multiple files)
----images
-------halloween
----------images.png/jpg(multiple images)
----logs
-------bot.log
----modules
-------logger.js
----settings
-------config.json
-emojis.json
-gifs.json
-index.js
Following the structure above, when I try to load one of the halloween images in a command, the logical path to me would be "../images/halloween/image.png", but instead I have to use "./images/halloween/image.png", as if the "images" folder were inside the "commands" folder.
In one of the commands I have to use:
const logs = require("../modules/logger");
const background = await Canvas.loadImage("./images/halloween/background.jpg");
I would like to know why this happens. It really messes with my brain to see a file-not-found error only because Node.js decided that this time the parent directory is "./" instead of "../".
Assuming your commands file is making file system calls (because you're loading an image from it), the directory you invoke your script from matters: require() resolves paths relative to the file that calls it, while file system calls resolve relative to the process's current working directory. Make sure you're using the path utility to resolve your file locations. See NodeJS accessing file with relative path for more details.
I've been reading through a lot of stuff on this site and I cannot seem to find an answer to my need... to the point:
I need to copy one file to every folder in the C:\Program Files dir, but I'm trying to find a way that doesn't require specifying the full path. For a rough example, I can run
REN F:\source\*.bat *.exe
(or .mp3 or .jpg or .vbs etc etc)
The above command renames all *.bat files to *.exe files without specifying each file's full path.
So I'm looking for a similar command line in a batch file to copy one specific file to multiple folders in a directory without spelling out each path...
I have tried %~d0\ and %programfiles%, but nothing seems to work for me.
I still do not fully understand the use case for this, but here is something that will copy a file into each subdirectory below the user's "Program Files" directory. Once you are sure that the copies you expect would be done, remove the -WhatIf from the Copy-Item cmdlet.
Get-ChildItem -Directory -Path $Env:ProgramFiles |
    ForEach-Object {
        Copy-Item -Path 'C:\src\t t t.txt' -Destination $_.FullName -WhatIf
    }
If you must run it from a cmd.exe shell, put the code above into a file named with a .ps1 extension and run:
powershell -NoProfile -File copyit.ps1
Notes:
It is important to use the environment variable ProgramFiles because the actual directory name may be in a language you do not know.
There may be permission issues with writing to these directories. Try using Run as Administrator.
Yes, there are certainly .bat file script ways to do this, but the future Microsoft direction is PowerShell. Might as well start grokking it now.
I am trying to make a server-side git pre-receive hook that checks the code quality of PHP and JavaScript files, so the repo server makes the git push fail if the pre-receive hook fails the test. Since the server doesn't have a physical checkout of the latest commit's content, I have tried reading the file contents and piping them to the PHP linting tools. That was successful.
For JavaScript files, I am using the jshint tool. But the issue with jshint is that it does not accept file content as an argument.
Is there any way I can make jshint accept file content instead of a file name? One solution I found is writing a temporary file, but that is not an ideal solution.
JSHint can also read its contents from STDIN if you specify - instead of a filename. So you can pipe your file contents to stdin and you won't need a temporary file.
$ jshint -
var a = 2
stdin: line 1, col 10, Missing semicolon.
1 error
I can minify on Windows with Node + UglifyJS, so I can say "this change adds 123 bytes after minification."
But I'd like to be able to say "this change adds 23 bytes after minification + gzipping."
How can I easily find out how gzipping will affect my file's size on Windows?
I use UnxUtils (http://sourceforge.net/projects/unxutils/files/unxutils/current/) on Windows.
From the zip file, simply unpack the /usr/local/wbin folder to any location on your disk (I use C:/Tools/Unx and make it part of the PATH).
It includes a gzip tool: simply run "gzip myscript.js" to get the file gzipped. You can easily include the tools in a command-line process to automate the gzipping and size comparison.
UPDATE: Here is the small checkgzip.cmd file I use with UnxUtils to see the difference:
@echo off
dir *.js | grep -Ei [0-9][0-9]/
gzip *.js
dir *.gz | grep -Ei [0-9][0-9]/
gunzip *.gz
Typical output then looks like this, with file lengths in bytes shown to the left of the file names:
C:\Tmp\>checkgzip
11/02/2012 09:32 PM 1,026 test.js
11/02/2012 09:32 PM 335 test.js.gz
Files are back intact after the run.
Without sending the file through a server that will gzip it, you can just use 7-Zip or WinZip to compress the file. While it's not necessarily the exact algorithm the browser uses, you'll get a very good approximation of how big the file will be over the wire.
As others pointed out, how much compression you actually get will depend on the type of file.