<video>.currentTime doesn't want to be set - javascript

I'm trying to write a piece of JavaScript that switches between two videos at timed intervals (don't ask). To make matters worse, each video has to start at a specific place (about ten seconds in, and again, don't ask).
I got the basics working by just using the YUI Async library to fire the video switches at intervals:
YUI().use('async-queue', function (Y) {
    // AsyncQueue is available and ready for use.
    var cumulativeTime = 0;
    var q = new Y.AsyncQueue();
    for (var x = 0; x < settings.length; x++) {
        cumulativeTime = cumulativeTime + (settings[x].step * 1000);
        q.add({
            fn: runVideo,
            args: settings[x].mainWindow,
            timeout: cumulativeTime
        });
    }
    q.run();
});
So far, so good. The problem is that I can't seem to get the video to start at ten seconds in.
I'm using this code to do it:
function runVideo(videoToPlay) {
    console.log('We are going to play -> ' + videoToPlay);
    var video = document.getElementById('mainWindow');
    video.src = '/video?id=' + videoToPlay;
    video.addEventListener('loadedmetadata', function () {
        this.currentTime = 10; // <-- Offending line!
        this.play();
    });
}
The problem is that this.currentTime refuses to hold any value I set it to. I'm running it through Chrome (the file is served from Google Storage behind a Google App Engine Instance) and when the debugger goes past the line, the value is always zero.
Is there some trick I'm missing in order to set this value?
Thanks in advance.

Try using a proper HTTP server such as Apache.
Setting currentTime doesn't work with some simple servers,
e.g. the built-in development servers of Python or PHP.
The HTTP server must support partial content responses (HTTP status 206, i.e. range requests).
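If you want to check whether your server supports partial content, here is a quick sketch you can run from the browser console (the URL is a placeholder for your own video resource):
fetch('/video?id=example', { headers: { Range: 'bytes=0-1' } })
    .then(function (res) {
        // 206 means partial content is supported; 200 means the server
        // ignored the Range header, and seeking via currentTime will likely fail
        console.log('status:', res.status);
    });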

Related

iPhone 14 won't record using MediaRecorder

Our website records audio and plays it back for a user. It has worked for years with many different devices, but it started failing on the iPhone 14. I created a test app at https://nmp-recording-test.netlify.app/ so I can see what is going on. It works perfectly on all devices but it only works the first time on an iPhone 14. It works on other iPhones and it works on iPad and MacBooks using Safari or any other browser.
It looks like it will record only if that is the first audio you ever do. If I get an AudioContext somewhere else first, the audio playback will work for that, but then the recording won't.
The only symptom I can see is that it doesn't call MediaRecorder.ondataavailable when it is not working, but I assume that is because it isn't recording.
Here is the pattern that I'm seeing with my test site:
Click "new recording". (the level indicator moves, the data available callback is triggered)
Click "listen" I hear what I just did
Click "new recording". (no levels move, no data is reported)
Click "listen" nothing is played.
But if I do anything else first, like click the metronome on and off, then it won't record the FIRST time, either.
The "O.G. Recording" is the original way I was doing the recording, using deprecated method createMediaStreamSource() and createScriptProcessor()/createJavaScriptNode(). I thought maybe iPhone finally got rid of that, so I created the MediaRecorder version.
What I'm doing, basically, is (truncated to show the important part):
const chunks = [];
function onSuccess(stream: MediaStream) {
    mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
    };
    mediaRecorder.start(1000);
}
navigator.mediaDevices.getUserMedia({ audio: true }).then(onSuccess, onError);
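For reference, the stop/playback side looks roughly like this (a reconstructed sketch using the same chunks array, not the exact code):
mediaRecorder.onstop = function () {
    // combine the buffered chunks into one blob and play it back
    const blob = new Blob(chunks, { type: mediaRecorder.mimeType });
    const audio = new Audio(URL.createObjectURL(blob));
    audio.play();
};
mediaRecorder.stop();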
Has anyone else seen anything different in the way the iPhone 14 handles recording?
Does anyone have a suggestion about how to debug this?
If you have an iPhone 14, would you try my test program above and let me know if you get the same results? We only have one iPhone 14 to test with, and maybe there is something weird about that device.
If it works you should see a number of lines something like data {"len":6784} appear every second when you are recording.
--- EDIT ---
I reworked the code along the lines of Frank zeng's suggestion and I am getting it to record, but it is still not right. The volume is really low, it looks like there are some dropouts, and there is a really long pause when resuming the AudioContext.
The new code seems to work perfectly in the other devices and browsers I have access to.
--- EDIT 2 ---
There were two problems: one was that the deprecated use of createScriptProcessor stopped working; the second was an iOS bug that was fixed in version 16.2. So rewriting to use an AudioWorklet was needed, but keeping the recording going once it has started was not.
I had the same problem as you. I think the AudioContext.createScriptProcessor API no longer works on the iPhone 14, so I replaced it with the newer AudioWorkletNode API. Also, don't close the stream, because the second recording session on the iPhone 14 becomes too laggy; remember to destroy the data after recording instead. After testing, this solved the problem for me. Here's my code:
// get stream (excerpt from a class: `that`, `options` and `resolve` come from the surrounding scope)
window.navigator.mediaDevices.getUserMedia(options).then(async (stream) => {
    // that.stream = stream
    that.context = new AudioContext()
    await that.context.resume()
    const rate = that.context.sampleRate || 44100
    that.mp3Encoder = new lamejs.Mp3Encoder(1, rate, 128)
    that.mediaSource = that.context.createMediaStreamSource(stream)
    // createScriptProcessor is being phased out: keep using it while it is available,
    // otherwise fall back to the worklet approach to capture the audio data
    if (that.context.createScriptProcessor && typeof that.context.createScriptProcessor === 'function') {
        that.mediaProcessor = that.context.createScriptProcessor(0, 1, 1)
        that.mediaProcessor.onaudioprocess = event => {
            window.postMessage({ cmd: 'encode', buf: event.inputBuffer.getChannelData(0) }, '*')
            that._decode(event.inputBuffer.getChannelData(0))
        }
    } else { // use the new approach
        that.mediaProcessor = await that.initWorklet()
    }
    resolve()
})
// content of the audioworklet function
async initWorklet() {
    try {
        /* node that analyses the audio stream data */
        let audioWorkletNode;
        /* load the AudioWorkletProcessor module and add it to the current worklet */
        await this.context.audioWorklet.addModule('/get-voice-node.js');
        /* bind the AudioWorkletNode to the loaded AudioWorkletProcessor */
        audioWorkletNode = new AudioWorkletNode(this.context, "get-voice-node");
        /* the AudioWorkletNode and the AudioWorkletProcessor communicate via a MessagePort */
        console.log('audioWorkletNode', audioWorkletNode)
        const messagePort = audioWorkletNode.port;
        messagePort.onmessage = (e) => {
            let channelData = e.data[0];
            window.postMessage({ cmd: 'encode', buf: channelData }, '*')
            this._decode(channelData)
        }
        return audioWorkletNode;
    } catch (e) {
        console.log(e)
    }
}
// content of get-voice-node.js; remember to put it in the static resource directory
class GetVoiceNode extends AudioWorkletProcessor {
    /*
     * options are passed in by new AudioWorkletNode()
     */
    constructor() {
        super()
    }
    /*
     * `inputList` and `outputList` are arrays of inputs and outputs;
     * the catch is that each block is only 128 samples, and it is unclear how to configure that
     */
    process(inputList, outputList, parameters) {
        // console.log(inputList)
        if (inputList.length > 0 && inputList[0].length > 0) {
            this.port.postMessage(inputList[0]);
        }
        return true // return true so the system knows we are still active and ready to process audio
    }
}
registerProcessor('get-voice-node', GetVoiceNode)
Destroy the recording instance to free the memory; if you want to record again next time, you had better create a new instance:
this.recorder.stop()
this.audioDurationTimer && window.clearInterval(this.audioDurationTimer)
const audioBlob = this.recorder.getMp3Blob()
// Destroy the recording instance and free the memory
this.recorder = null

Real time audio streaming from ffmpeg to browser (am I missing something?)

I have tried a couple of solutions already, but nothing works for me.
I want to stream audio from my PC to another computer with almost zero latency. Things are working fine as far as lag goes: the sound is clear and not choppy at all. But there is a delay between the moment audio starts playing on my PC and on the remote PC. For example, when I click YouTube's 'play' button, audio starts playing on the remote machine only after 3-4 seconds. Likewise, when I click 'Pause', the sound on the remote PC stops only after a couple of seconds.
I've tried websockets and a plain audio tag, but no luck so far.
For example, this is my solution using websockets and pipes:
import asyncio
import websockets
import win32file
import win32pipe

def create_pipe():
    return win32pipe.CreateNamedPipe(r'\\.\pipe\__audio_ffmpeg', win32pipe.PIPE_ACCESS_INBOUND,
                                     win32pipe.PIPE_TYPE_MESSAGE |
                                     win32pipe.PIPE_READMODE_MESSAGE |
                                     win32pipe.PIPE_WAIT, 1, 1024 * 8, 1024 * 8, 0, None)

async def echo(websocket):
    pipe = create_pipe()
    win32pipe.ConnectNamedPipe(pipe, None)
    while True:
        data = win32file.ReadFile(pipe, 1024 * 2)
        await websocket.send(data[1])

async def main():
    async with websockets.serve(echo, "0.0.0.0", 7777):
        await asyncio.Future()  # run forever

if __name__ == '__main__':
    asyncio.run(main())
This is the way I start ffmpeg:
.\ffmpeg.exe -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -acodec libmp3lame -ab 320k -f mp3 -probesize 32 -muxdelay 0.01 -y \\.\pipe\__audio_ffmpeg
On the JS side the code is a little bit long, but essentially I am just reading a web socket and appending to a SourceBuffer:
this.buffer = this.mediaSource.addSourceBuffer('audio/mpeg')
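The rest follows the usual MediaSource pattern, roughly like this sketch (reconstructed, not the exact code; the socket URL and audio element are placeholders):
const audio = document.querySelector('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', () => {
    const buffer = mediaSource.addSourceBuffer('audio/mpeg');
    const queue = [];
    const ws = new WebSocket('ws://localhost:7777');
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (e) => {
        // append straight away if possible, otherwise queue the chunk
        if (buffer.updating || queue.length > 0) {
            queue.push(e.data);
        } else {
            buffer.appendBuffer(e.data);
        }
    };
    buffer.addEventListener('updateend', () => {
        if (queue.length > 0 && !buffer.updating) {
            buffer.appendBuffer(queue.shift());
        }
    });
});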
Also, as you can see, I tried the -probesize 32 -muxdelay 0.01 flags, but no luck there either.
I tried a plain <audio> tag as well, but the couple-of-seconds delay is still there.
What can I do? Am I missing something? Maybe I have to disable buffering somewhere?
I have some code, but all I learned came from the https://webrtc.github.io/samples/ website and some from MDN. It's pretty simple.
The idea is to connect 2 peers using a negotiating server just for the initial connection. Afterwards they can share streams (audio, video, data). When I say peers I mean client computers, like browsers.
So here's an example of connecting, broadcasting, and of course receiving.
Now for some of my code.
A sketch of the process
Note: the same code is used for connecting to and being connected from; this is how my app works because it's kind of like a chat. ClientOutgoingMessages and ClientIncomingMessages are just my wrappers around sending messages to the server (I use websockets, but AJAX is also possible).
Start: a peer initiates an RTCPeerConnection and sends an offer via the server, and also sets up events for receiving. The other peer is notified of the offer by the server, then sends an answer the same way (should it choose to), and finally the original peer accepts the answer and starts streaming. Along the way there is another event for ICE candidates; I never bothered to learn exactly what it is, and it works without knowing.
function create_pc(peer_id) {
    var pc = new RTCPeerConnection(configuration);
    var sender;
    var localStream = MyStreamer.get_dummy_stream();
    for (var track of localStream.getTracks()) {
        sender = pc.addTrack(track, localStream);
    }
    // when a remote user adds a stream to the peer connection, we display it
    pc.ontrack = function (e) {
        console.log("got a remote stream");
        remoteVideo.style.visibility = 'visible';
        remoteVideo.srcObject = e.streams[0];
    };
    // Setup ICE handling
    pc.onicecandidate = function (ev) {
        if (ev.candidate) {
            ClientOutgoingMessages.candidate(peer_id, ev.candidate);
        }
    };
    // status
    pc.oniceconnectionstatechange = function (ev) {
        var state = pc.iceConnectionState;
        console.log("oniceconnectionstatechange: " + state);
    };
    MyRTC.set_pc(peer_id, {
        pc: pc,
        sender: sender
    });
    return pc;
}
function offer_someone(peer_id, peer_name) {
    var pc = MyRTC.create_pc(peer_id);
    pc.createOffer().then(function (offer) {
        ClientOutgoingMessages.offer(peer_id, offer);
        pc.setLocalDescription(offer);
    });
}
function answer_offer(peer_id) {
    var pc = MyRTC.create_pc(peer_id);
    var offer = MyOpponents.get_offer(peer_id);
    pc.setRemoteDescription(new RTCSessionDescription(offer));
    pc.createAnswer().then(function (answer) {
        pc.setLocalDescription(answer);
        ClientOutgoingMessages.answer(peer_id, answer);
        // alert("rtc established!")
        MyStreamer.stream_current();
    });
}
Handling messages from the server
offer: function offer(data) {
    if (MyRTC.get_pc(data.connectedUser)) {
        // alert("Not accepting offers; already have a conn to " + data.connectedUser)
        // return;
    }
    MyOpponents.set_offer(data.connectedUser, data.offer)
},
answer: function answer(data) {
    var opc = MyRTC.get_pc(data.connectedUser)
    opc && opc.pc.setRemoteDescription(new RTCSessionDescription(data.answer)).catch(function (err) {
        console.error(err)
        // alert(err)
    });
    // alert("rtc established!")
    MyStreamer.stream_current();
},
candidate: function candidate(data) {
    var opc = MyRTC.get_pc(data.connectedUser)
    opc && opc.pc.addIceCandidate(new RTCIceCandidate(data.candidate));
},
leave: function leave(data) {
    MyRTC.close_pc(data.connectedUser);
},

networkState, readyState not working for audio stream

I'm trying to check an audio stream to see whether it is active: the server is either on and streaming audio, or off and not streaming. I found two properties, .networkState and .readyState, that can be read from an audio tag or an Audio() object. Things look okay when the stream is off and has never been listened to (not in the browser cache). I execute both checks every 4s to catch any change in the status of the stream, because I want to continuously monitor it and update the web page.
The trouble is that I can't get either property to correctly reflect the status of the stream once it has changed, that is, from off to on or on to off. I have to refresh the page to get a real update. And once you've listened to the stream and it is cached, it always shows as 'on'.
Why doesn't either of these properties correctly detect the change in the stream? Is there a way to force a check against the actual object instead of the cached object?
(Excuse my code; I quickly chunked this together and I am a newbie. I tried more than what's shown below, but I've kept this post succinct.)
<script>
var myTimerVar = setInterval(myTimer, 4000);
var myFunctionVar = setInterval(myFunction, 4000);
// var aud = new Audio('http://myserver.com:8080/live.mp3');
function myTimer() {
    var acheck = document.getElementById("myAudio").networkState;
    if (acheck == 1) {
        document.getElementById("netstate").innerHTML = "ONLINE " + acheck;
        document.getElementById("netstate").style.backgroundColor = "MediumSeaGreen";
    }
    else if (acheck == 3) {
        document.getElementById("netstate").innerHTML = "OFFLINE " + acheck;
        document.getElementById("netstate").style.backgroundColor = "yellow";
    }
}
function myFunction() {
    var x = document.getElementById("myAudio").readyState;
    if (x == 0) {
        document.getElementById("readystate").innerHTML = "OFFLINE " + x;
        document.getElementById("readystate").style.backgroundColor = "yellow";
    }
    else if (x == 4) {
        document.getElementById("readystate").innerHTML = "ONLINE " + x;
        document.getElementById("readystate").style.backgroundColor = "MediumSeaGreen";
    }
}
</script>
<audio id="myAudio" controls>
    <source src="http://myserver.com:8080/live.mp3" type="audio/mpeg">
</audio>
<!-- this is what does not update correctly on the page -->
<p>StreamN0 is: <span id="netstate">OFFLINE</span></p>
<p>StreamR0 is: <span id="readystate">OFFLINE</span></p>
I execute both checks every 4s to catch any change in the status of the stream
Firstly, consider using the readystatechange event instead.
const readyStates = {
    0: 'HAVE_NOTHING',
    1: 'HAVE_METADATA',
    2: 'HAVE_CURRENT_DATA',
    3: 'HAVE_FUTURE_DATA',
    4: 'HAVE_ENOUGH_DATA'
};
document.querySelector('audio#myAudio').addEventListener('readystatechange', (e) => {
    // readyState lives on the media element (the event target), not on the event itself
    document.querySelector('#readystate').innerText = readyStates[e.target.readyState];
});
The trouble is that I can't get either property to correctly reflect the status of the stream once it has changed, that is, from off to on or on to off.
What I think you're asking is whether or not the stream is actually live... online or not. That's not what readyState indicates. The browser doesn't actually know that you're playing a live stream. It assumes that the data it has cached is intended to be played. A readyState of HAVE_ENOUGH_DATA (4) just means that the browser thinks it has sufficiently buffered the media, and that the rate at which it's buffering means it can continue to play without interruption.
The events you're probably looking for include (wired up in the sketch after this list):
playing
waiting
ended
emptied
stalled
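As a minimal sketch, wiring those events to the question's status element might look like this (treating playing as online and everything else as offline is my assumption):
const audio = document.getElementById('myAudio');
const netstate = document.getElementById('netstate');
for (const name of ['playing', 'waiting', 'ended', 'emptied', 'stalled']) {
    audio.addEventListener(name, () => {
        const online = (name === 'playing');
        netstate.innerHTML = online ? 'ONLINE' : 'OFFLINE (' + name + ')';
        netstate.style.backgroundColor = online ? 'MediumSeaGreen' : 'yellow';
    });
}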

How to set the currentTime in HTML5 audio object when audio file is online?

I have a JavaScript audio player with skip forward/back 10 second buttons. I do this by setting the currentTime of my audio element:
function Player(skipTime)
{
    this.skipTime = skipTime;
    this.waitLoad = false;
    // initialise main narration audio
    this.narration = new Audio(getFileName(dynamicNarration));
    this.narration.preload = "auto";
    this.narration.addEventListener('canplaythrough', () => { this.loaded(); });
    this.narration.addEventListener('timeupdate', () => { this.seek(); });
    this.narration.addEventListener('ended', () => { this.ended(); });
    this.narration.addEventListener('waiting', () => { this.audioWaiting(); });
    this.narration.addEventListener('playing', () => { this.loaded(); });
}
Player.prototype = {
    rew: function rew()
    {
        if (!this.waitLoad) {
            this.skip(-this.skipTime);
        }
    },
    ffw: function ffw()
    {
        if (!this.waitLoad) {
            this.skip(this.skipTime);
        }
    },
    skip: function skip(amount)
    {
        const curTime = this.narration.currentTime;
        const newTime = curTime + amount;
        console.log(`Changing currentTime (${curTime}) to ${newTime}`);
        this.narration.currentTime = newTime;
        console.log(`Result: currentTime = ${this.narration.currentTime}`);
    },
    loaded: function loaded()
    {
        if (this.waitLoad) {
            this.waitLoad = false;
            playButton.removeClass('loading');
        }
    },
    audioWaiting: function audioWaiting()
    {
        if (!this.waitLoad) {
            this.waitLoad = true;
            playButton.addClass('loading');
        }
    },
}
(I'm including some of the event listeners I attach because I'd previously traced a similar problem to conflicting event listeners. Having thoroughly checked the event listeners this time, though, I don't think they're the root of the problem.)
Though this all works fine on my local copy, when I test an online version I get the following results:
Chrome: resets play position to 0. Final console line reads Result: currentTime = 0.
Safari: doesn't change play position at all. Final console.log line gives a value for currentTime equal to newTime (even though the play position actually doesn't change).
Firefox: skipping forward works; skipping backwards interrupts the audio for a few seconds, then it starts playing again from a couple of seconds before where the playhead had been. In both cases, final console.log line gives a value for currentTime equal to newTime
The issue must have something to do with the way the audio is loaded. I tried adding another console log line to show the start and end values of the buffered ranges.
In Chrome the end goes up to 2 seconds after the current play position; in Safari it goes up to ~170 seconds; and in Firefox it seems to buffer the full audio length.
However, in each case the start of the buffered range is 0.
Does anyone have any idea what might be going wrong?
There are some requirements for properly loading an audio file and seeking within it.
Your response while serving the file needs to have the following headers:
accept-ranges: bytes
Content-Length: BYTE_LENGTH_OF_YOUR_FILE
Content-Range: bytes 0-LAST_BYTE_INDEX/BYTE_LENGTH_OF_YOUR_FILE (byte ranges are inclusive, so the last index is the byte length minus one)
content-type: audio/mp3
My colleagues and I had been struggling with this for a few days, and this finally worked.
[Image: response headers for a working audio file]
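If you control the server, one way to get these headers right is to use a static file server that already implements range support instead of hand-rolling them. A minimal sketch, using Express purely as an example of such a server (my assumption, not something this answer prescribes):
const express = require('express');
const app = express();
// express.static answers Range requests with 206 partial-content responses
app.use(express.static('public'));
app.listen(8080);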
If your browser has not loaded the audio, then the audio cannot be seeked. The browser knows nothing about your audio file yet, and because of this it tries to play your audio from the start; for all it knows, your audio could be only 1 second long or even shorter.
Solution
You have to wait for the loadedmetadata event; after it fires you can start your audio from any time position, because by then your browser knows all the relevant information about your audio file.
Please change your code as follows:
function Player(skipTime)
{
    this.skipTime = skipTime;
    // initialise main narration audio
    this.narration = new Audio(getFileName(dynamicNarration));
    this.narration.preload = "auto";
    this.narration.addEventListener('canplaythrough', () => { this.loaded(); });
    this.narration.addEventListener('timeupdate', () => { this.seek(); });
    this.narration.addEventListener('ended', () => { this.ended(); });
    this.narration.addEventListener('waiting', () => { this.audioWaiting(); });
    this.narration.addEventListener('playing', () => { this.loaded(); });
    this.narration.addEventListener('loadedmetadata', () => { playButton.removeClass('loading'); });
    playButton.addClass('loading');
}
Player.prototype =
{
    rew: function()
    {
        this.skip(-this.skipTime);
    },
    ffw: function()
    {
        this.skip(this.skipTime);
    },
    skip: function(amount)
    {
        var curTime = this.narration.currentTime;
        var newTime = curTime + amount;
        console.log(`Changing currentTime (${curTime}) to ${newTime}`);
        this.narration.currentTime = newTime;
        console.log(`Result: currentTime = ${this.narration.currentTime}`);
    }
};
But if you do not want to wait long for the audio to load, then you have only one more option: convert all your audio files to data URL format, which looks like this:
var data = "data:audio/mp3;base64,...
But in this case you have to wait for your page to load even longer than you would for one audio file, because waiting for an audio file loads only its metadata, which is faster.
This solved my issue...
private refreshSrc() {
    const src = this.media.src;
    this.media.src = '';
    this.media.src = src;
}
I found a solution to my problem, if not exactly an explanation.
My hosting provider uses a CDN, for which it must replace resource's URLs with those of a different domain. The URLs of my audio resources are dynamically constructed by JS, because there's a random element to them; as such, the deployment process that replaces URLs wasn't catching those for my audio files. To get around this, I manually excluded the audio files from the CDN, meaning I could refer to them using relative file paths.
This was how things stood when I was having this issue.
Then, due to a separate issue, I took a different approach: I got the audio files back on the CDN and wrote a function to extract the domain name I needed to use to retrieve the files. When I did that, suddenly I found that all my problems to do with setting currentTime had disappeared. Somehow, not having the files on the CDN was severely interfering with the browser's ability to load them in an orderly manner.
If anyone can volunteer an explanation for why this might have been, I'd be very curious to hear it...
Edit
I've been working on another project which involves streaming audio, this time also with PWA support, so I had to implement a caching mechanism in my service worker for audio files. Through this guide I learned all about the pitfalls of range requests, and I now understand that failing to serve correct responses to requests with Range headers will break seeking in some browsers.
It seems that in the above case, when I excluded my files from the CDN, they were served from somewhere that didn't support Range headers. Moving them back onto the CDN fixed this, as it must have been built with explicit support for streaming media.
Here is a good explanation of correct responses to range requests. But for anyone having this issue while using a third-party hosting service, it suffices to know that they probably do not support range headers for streaming media. If you want to verify that this is the case, you can query the audio object's duration: at least in Safari's case, the duration is set to Infinity when it can't successfully make a range request, and at that point seeking is disabled.
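Building on that last point, a quick sketch of such a check (the file URL is a placeholder; treating a non-finite duration as missing range support follows the Safari behaviour just described):
const probe = new Audio('narration.mp3'); // placeholder URL
probe.addEventListener('loadedmetadata', () => {
    if (!isFinite(probe.duration)) {
        // Safari reports Infinity when its range request fails; seeking is then disabled
        console.warn('Server likely lacks range-request support; seeking may not work.');
    }
});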

Multiple synchronous XmlHttpRequests getting rejected/aborted

For a bit of fun and to learn more about JS and the new HTML5 specs, I decided to build a file uploader plugin for my own personal use.
After I select my files, I send the files to my Web Worker to be spliced into 1mb chunks and uploaded to my server individually. I opted to send them individually to benefit from individual progress callbacks and pause/resume functionality (later).
The problem comes when I select a lot of files to upload. If I only select 8, no problems. If I select 99, the server rejects/aborts after about the 20th file, although sometimes it stops after 22, 31, or 18: totally random.
Firefox can get away with more than Chrome before aborting. Chrome calls the requests 'failed' and Firefox calls them 'aborted'; Firefox usually aborts after about file 40. Not only that, but my test server becomes unresponsive and throws a 'the connection was reset' error, becoming responsive again less than 20 seconds later.
Because I'm using a Web Worker, I am setting my XmlHttpRequests to synchronous so that each request completes before a new one starts, and the PHP script is on the same domain, so I'm baffled to see the requests rejected and would love to hear what in my code is causing this to happen.
This is the plugin part that sends to the Worker. Pretty irrelevant but who knows:
var worker = new Worker('assets/js/uplift/workers/uplift-worker.js');
worker.onmessage = function (e) {
    console.log(e.data);
};
worker.postMessage({ 'files': filesArr });
And this is uplift-worker.js:
var files = [], p = true;

function upload(chunk) {
    var upliftRequest = new XMLHttpRequest();
    upliftRequest.onreadystatechange = function () {
        if (upliftRequest.readyState == 4 && upliftRequest.status == 200) {
            // do something
        }
    };
    // open() must be called before setRequestHeader(), otherwise the request throws
    upliftRequest.open('POST', '../php/uplift.php', false);
    upliftRequest.setRequestHeader("Cache-Control", "no-cache");
    upliftRequest.setRequestHeader("X-Requested-With", "XMLHttpRequest");
    upliftRequest.send(chunk);
}

function processFiles() {
    for (var j = 0; j < files.length; j++) {
        var blob = files[j];
        const BYTES_PER_CHUNK = 1024 * 1024; // 1mb chunk sizes.
        const SIZE = blob.size;
        var start = 0,
            end = BYTES_PER_CHUNK;
        while (start < SIZE) {
            var chunk = blob.slice(start, end);
            upload(chunk);
            start = end;
            end = start + BYTES_PER_CHUNK;
        }
        p = j == (files.length - 1);
        self.postMessage(blob.name + " uploaded successfully");
    }
}

self.addEventListener('message', function (e) {
    var data__Files = e.data.files;
    for (var j = 0; j < data__Files.length; j++) {
        files.push(data__Files[j]);
    }
    if (p) processFiles();
});
BUGS FOUND:
I managed to get this from Chrome console:
ERROR: Line 27 in http://xxxxxxx/demos/uplift/assets/js/uplift/workers/uplift-worker.js: Uncaught NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'http://xxxxxxx/demos/uplift/assets/js/uplift/php/uplift.php'.
Which points to the Worker script line: upliftRequest.send(chunk);.
Firebug didn't give me much to work with at all, but it did show the aborted requests and the header sent with each of them.
I initially thought it was a problem server-side, so I removed all PHP from uplift.php and left an empty page to simply test the upload-to-browser parts and posting the requests, but the problems continued.
UPDATE:
I'm beginning to think my hosting provider is limiting request rates using Apache mod_security rules, possibly to protect the server from brute-force attacks from a single IP. Adding to that, my uploader works fine on my localhost (MAMP).
I did a little more research into this suspicion. If my homemade upload plugin was having trouble sending multiple files/requests to my host, then surely some of the popular upload plugins that use the same technology and post files to the same host would have similar complaints posted online. That search yielded some good results, with many people describing the same experience. One guy uploads 'lots of images' to the same host using Uploadify HTML5 (which also sends individual requests), and his requests get blocked too. I suppose I'd better contact the host to see what the deal is with their rate limiting.
possible problem
I think this is a server-side issue; even with a plain PHP file, the server will open a new thread for each request. Check it with top in the console.
You are also uploading the chunks in a while loop without waiting until the previous chunk's upload has finished.
suggestion
I would create an array of all the chunks and call upload(chunks), managing the concurrency there; onreadystatechange is a good place to kick off the next chunk in the array.
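A minimal sketch of that suggestion (asynchronous requests this time, with onreadystatechange driving the next chunk; the URL and headers are taken from the question's worker):
function uploadQueue(chunks) {
    var i = 0;
    function next() {
        if (i >= chunks.length) {
            self.postMessage('all chunks uploaded');
            return;
        }
        var req = new XMLHttpRequest();
        req.open('POST', '../php/uplift.php', true); // asynchronous
        req.setRequestHeader('Cache-Control', 'no-cache');
        req.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
        req.onreadystatechange = function () {
            // only move on once the previous chunk has completed
            if (req.readyState == 4 && req.status == 200) {
                i++;
                next();
            }
        };
        req.send(chunks[i]);
    }
    next();
}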
