Firefox can't decode wav audio format - javascript

I'm currently creating a game in JavaScript that includes audio files. It works perfectly in Chrome, but Firefox has problems decoding some of the audio files.
Here is the error message:
NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006)
audio = new Audio("someaudio.wav") // Firefox has no problem with this one
audio = new Audio("someotheraudio.wav") // this one triggers the error above
Firefox can decode some of my audio files, but not all of them. I have read a post (link of the post) about junk chunks at the beginning of the file, but I don't really know how to clean them out or what else to do with them.
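For what it's worth, here is a minimal Node.js sketch of what "cleaning" the junk chunks could look like: it walks the RIFF chunk list and copies only the standard "fmt " and "data" chunks into a new file, dropping JUNK, LIST, and other extra chunks that Firefox's stricter demuxer may reject. The file names are placeholders.
const fs = require("fs");

function stripJunkChunks(inputPath, outputPath) {
  const buf = fs.readFileSync(inputPath);
  // A WAV file starts with "RIFF" <size> "WAVE", followed by a list of chunks.
  if (buf.toString("ascii", 0, 4) !== "RIFF" || buf.toString("ascii", 8, 12) !== "WAVE") {
    throw new Error("Not a RIFF/WAVE file");
  }
  const kept = [];
  let offset = 12;
  while (offset + 8 <= buf.length) {
    const id = buf.toString("ascii", offset, offset + 4);
    const size = buf.readUInt32LE(offset + 4);
    if (id === "fmt " || id === "data") {
      kept.push(buf.subarray(offset, offset + 8 + size)); // keep chunk header + payload
    }
    offset += 8 + size + (size % 2); // chunk payloads are padded to an even length
  }
  const body = Buffer.concat(kept);
  const header = Buffer.alloc(12);
  header.write("RIFF", 0, "ascii");
  header.writeUInt32LE(4 + body.length, 4); // RIFF size = "WAVE" + all kept chunks
  header.write("WAVE", 8, "ascii");
  fs.writeFileSync(outputPath, Buffer.concat([header, body]));
}

stripJunkChunks("someotheraudio.wav", "cleaned.wav");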

Related

how can I play .mov iPhone video as .mp4?

Noob question from a high school physics teacher: I have written a web app for my students to analyze videos using the HTML video player. If they take videos on their iPhones, copy them to their Chromebooks, and then edit the file extension from .mov to .mp4, things work fine. Is there a way to skip the extension change? I have seen examples like this:
<source src="https://www.somewebsite.com/videos/sample.mov" type="video/mp4">
where the call to the file is embedded in the code. But my students are making their own videos, so they need to be able to load them themselves.
I tried
inputFile.type = "video/mp4"
but the type seems to be read-only, because when I read it back, it is still "video/quicktime".
I figured out a solution to my own question, a personal first!
var file = files[0]; // this is the quicktime file I want to read
var dummyName = file.name;
file = new File([file], dummyName, { type: "video/mp4" }); // same bytes, new declared MIME type
Now the variable file points to the same underlying data, but it is recognized as mp4 and can be played.
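A quick sketch of how the re-typed File can then be handed to the player, assuming videoElement is the page's video element:
const url = URL.createObjectURL(file); // file is the re-typed File from above
videoElement.src = url;
videoElement.play();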

Is it possible to create an audio file based on CSV-data and an existing audio file?

I am working on a project in JavaScript and need to do a fairly strange task that I am not sure how to achieve. I have looked into the most popular audio libraries, but there doesn't seem to be an easy way to quickly export a generated file without recording it in real time. I might be wrong.
I am going to have some data as JSON or in CSV format with numbers in each row. That number corresponds to seconds elapsed. This data tells me when a certain audio file needs to be played. The audio file is just a 5-second long clip of beeps.
I need to generate a long audio file that is just silent, but where the audio clip plays when "instructed" by the data. So it could play at 10 seconds, 45 seconds, 267 seconds, etc.
The finished audio file could be over an hour long. I was hoping to create a system where I could just select an audio file and a data file from my computer, click a button, let it process, and then download the finished file.
I hope what I want to do is clear. Can I use the Web Audio API for this, or do I need a library? I am stuck at the first step: what to use, and how to "create" a file from nothing.
I think you could try MediaRecorder from the MediaStream Recording API - it lets you build a recording function, something like this:
const sleep = (s) => new Promise((resolve) => setTimeout(resolve, s * 1000)); // helper: seconds to ms
const stream = await navigator.mediaDevices.getUserMedia({ audio: true }); // getUserMedia returns a Promise
const mediaRecorder = new MediaRecorder(stream);
// then you can take your number from the JSON / CSV file and do the following
mediaRecorder.start();
await sleep(YOUR_NUMBER_FROM_FILE);
mediaRecorder.stop();
The example above just creates a recording that lasts a specific number of seconds. You can load your existing audio files and play them in the background; I suppose MediaRecorder will record them too.
See MDN Web Docs about MediaRecorder.
To summarize - the Web Audio API and the MediaStream Recording API should be enough for your case.
I hope this helps in some way.
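Since the question explicitly asks about avoiding real-time recording, here is an alternative, hedged sketch using the Web Audio API's OfflineAudioContext, which renders faster than real time. beepBuffer (the decoded 5-second clip) and startTimes (the seconds parsed from the CSV) are assumed inputs, and encoding the rendered AudioBuffer into a downloadable WAV file is left out.
async function renderTimeline(beepBuffer, startTimes, totalSeconds) {
  const sampleRate = 44100;
  // An OfflineAudioContext renders as fast as the machine allows, not in real time.
  const ctx = new OfflineAudioContext(2, Math.ceil(totalSeconds * sampleRate), sampleRate);
  for (const t of startTimes) {
    const src = ctx.createBufferSource();
    src.buffer = beepBuffer;
    src.connect(ctx.destination);
    src.start(t); // schedule the clip at t seconds; everything else stays silent
  }
  return ctx.startRendering(); // resolves with the fully rendered AudioBuffer
}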

How to stream live TS (transport stream) files on HTML player?

I want to stream a transport stream file in an HTML player. Is there a way to implement this?
I have tried the following approaches to play the TS file:
a) Put it in a video tag:
I simply wrote a video tag, but it showed me a blank screen.
b) I tried it with an iframe tag:
I wrote a simple tag; it actually downloaded the file, but the screen turned blank.
c) I used an HLS player (hls.js) to show the .ts file.
hls.js validated the file; however, it gave me a "manifestLoadError".
Can anyone help me with this HLS error, or suggest another way to show this TS file?
Browsers cannot play raw TS. You need to remux it to ISO BMFF (fragmented MP4) before playing.
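For what it's worth, here is a hedged sketch of that remuxing done in the browser with mux.js (the transmuxer hls.js itself builds on) and Media Source Extensions. The video element, the codec string, and tsBytes (a Uint8Array containing the .ts file) are assumptions you would need to adapt.
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener("sourceopen", () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f, mp4a.40.2"');
  const transmuxer = new muxjs.mp4.Transmuxer();
  transmuxer.on("data", (segment) => {
    // Prepend the fMP4 init segment to the media segment, then hand it to MSE.
    const bytes = new Uint8Array(segment.initSegment.byteLength + segment.data.byteLength);
    bytes.set(segment.initSegment, 0);
    bytes.set(segment.data, segment.initSegment.byteLength);
    sourceBuffer.appendBuffer(bytes);
  });
  transmuxer.push(tsBytes); // feed the raw transport stream bytes
  transmuxer.flush();
});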

Play local video file in electron html5 video player using node.js fs.readStream()

I am developing a video player application that plays videos (.mp4) from the local filesystem using Node.js and Electron (and therefore Chromium's HTML5 video player).
Playing videos larger than 2 GB seems to be a problem with my current approach.
I used to read the local video files using fs.readFileSync and pass the result, wrapped in a Blob, to the video player, like in this code:
this.videoNode = document.querySelector('video');
const file: Buffer = fs.readFileSync(video.previewFilePath);
this.fileURL = URL.createObjectURL(new Blob([file]));
this.videoNode.src = this.fileURL;
This works for video files smaller than 2 GB. Files larger than 2 GB trigger the following error:
ERROR RangeError [ERR_FS_FILE_TOO_LARGE]: File size (2164262704) is greater than possible Buffer: 2147483647 bytes
at tryCreateBuffer (fs.js:328)
at Object.readFileSync (fs.js:364)
at Object.fs.readFileSync (electron/js2c/asar.js:597)
I believe the solution is to pass a ReadStream to the HTML5 video player using fs.createReadStream(). Unfortunately, I cannot find any documentation on how to pass this stream to the video player.
As the topic says, you are using Electron, and from the comments above it is clear that you are avoiding a server. If you are just creating an offline video player, reading the file into a Buffer and then creating a new URL only makes things complex. You can achieve this by simply taking the video's path and using it as the src attribute of the video element.
Your code should look like this:
var path = "path/to/video.mp4"; // you can get it from a simple <input type="file"> or via Electron's dialogs
this.videoNode = document.querySelector('video'); // this should be a <video> element in your HTML
this.videoNode.src = path;
this.videoNode.oncanplay = () => {
  // do something...
};
This handles the complete file, and you don't need to change any webPreferences, given that videoNode is the video element in your HTML file.
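As a small sketch of the "simple input tag" route mentioned in the code comment: in Electron's renderer, File objects picked through an <input type="file"> expose a non-standard path property holding the real filesystem path (newer Electron versions replace it with webUtils.getPathForFile). The selector names here are placeholders.
const input = document.querySelector('input[type="file"]');
const video = document.querySelector('video');
input.addEventListener("change", () => {
  const file = input.files[0];
  video.src = file.path; // absolute path on disk; Chromium streams it itself
  video.oncanplay = () => video.play();
});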
You can take a look at this open-source media player project made using Electron:
https://github.com/HemantKumar01/ElectronMediaPlayer
Disclaimer: I am the owner of the above project, and everyone is invited to contribute to it.

WAV file from user microphone vs. WAV file from file: some difference is causing bugs, but what is different?

Right now I have two methods of sending a WAV file to the server: a user can directly upload a file, or make a recording with their microphone. Once the files are sent, they are processed in nearly the same way. The file is sent to S3 and can later be played by clicking a link (which plays the file via audio = new Audio('https://S3.url'); audio.play()).
When dealing with a file from the microphone:
audio.play() seems to work, and everything in the Audio object is identical (except for the URL itself), but the sound won't actually play through the speakers. For an uploaded file, on the other hand, the sound plays through the speakers.
When I visit the URLs directly, both of them open the sound player (in Chrome) or prompt a download of a WAV file (in Firefox). The sound player plays both sounds correctly, and the downloaded WAV files each contain their respective sound, which other programs can play.
If I download the file recorded from the user's microphone instead of sending it directly to the server, and then manually upload that WAV file, everything works as it should (as it would with any other uploaded WAV file). In any scenario where the microphone sound is uploaded somewhere and then downloaded, it comes down as a WAV file and plays accordingly, and anything that uses the re-uploaded WAV file works as intended.
Here's how I'm getting the sound from the user's microphone. First, I use WebAudioTrack to place a record button on my webpage. Once the user stops their recording, they hit the submit button which runs:
saveRecButton.addEventListener("click", function() {
  save_recording(audioTrack.audioData.getChannelData(0));
});
Here, audioTrack.audioData is an AudioBuffer containing the recorded sound, and getChannelData(0) returns a Float32Array representing the sound. I send this array to the server (Django) via AJAX:
function save_recording(channelData) {
    var uploadFormData = new FormData();
    uploadFormData.append('data', $('#some_field').val());
    ...
    // Appending a Float32Array to FormData serializes it as a comma-separated string.
    uploadFormData.append('audio', channelData);
    $.ajax({
        'method': 'POST',
        'url': '/soundtests/save_recording/',
        'data': uploadFormData,
        'cache': false,
        'contentType': false,
        'processData': false,
        success: function(dataReturned) {
            if (dataReturned != "success") {
                [- Do Some Stuff -]
            }
        }
    });
}
Then, using wavio, a WAV file is written from an array:
import wavio
import tempfile
from numpy import array

def save_recording(request):
    if request.is_ajax() and request.method == 'POST':
        form = SoundForm(request.POST)
        if form.is_valid():
            with tempfile.NamedTemporaryFile() as sound_recording:
                sound_array_string = request.POST.get('audio')
                sound_array = array([float(x) for x in sound_array_string.split(',')])
                wavio.write(sound_recording, sound_array, 48000, sampwidth=4)
                sound_recording.seek(0)
                s3_bucket.put_object(Key=some_key, Body=sound_recording, ContentType='audio/x-wav')
            return HttpResponse('success')
Then, when the sound needs to be listened to:
In Python:
import boto3
session = boto3.Session(aws_access_key_id='key', aws_secret_access_key='s_key')
bucket = session.resource('s3').Bucket(name='bucket_name')
url = session.client('s3').generate_presigned_url('get_object', Params={'Bucket': bucket.name, 'Key': 'appropriate_sound_key'})
Then, in JavaScript:
audio = new Audio('url_given_by_above_python')
audio.play()
The audio plays well if I upload a file, but doesn't play at all if I use the user's microphone. Is there something about WAV files I might be missing that happens when I upload the microphone sound to S3 and then re-download it? I have no clue where to go next; everything between the two files seems identical. A dump of the two Audio objects (one whose URL comes from the user's mic, and one created from a manually re-uploaded copy of that exact mic file) looks exactly the same, except for the URL, which, upon visiting or downloading, plays both sounds.
There has to be some difference here, but I have no idea what it is, and I have been struggling with this for a few days now. :(
The sound file you're creating is 32-bit PCM, which is an arguably non-standard audio codec. Chrome supports it (source) but Firefox does not (source, bug).
Encode it as 16-bit PCM and it'll be universally acceptable.
EDIT: As mentioned in the comments, the sampwidth argument to wavio.write is the parameter in question: sampwidth=2 produces 16-bit PCM instead of the 32-bit PCM (sampwidth=4) written above.
