I have this piece of code which checks the internet speed and returns a result. To deal with connectivity loss or a disconnect while the speed check is running, I set a timeout of 10 seconds.
This timeout works fine in a desktop browser (I started the test, pulled the plug, and after 10 seconds the error function kicked in).
But when running the same page in the Android browser, the error function does not fire even after 30-40 seconds.
This makes me believe it has something to do with Android settings, and I was hoping anyone with knowledge in that area could point me in the right direction. Here is the code, by the way:
$.ajax({
    url: 'internet-speed.php',
    timeout: 10000,
    error: function(x, t, m) {
        if (t === "timeout") {
            $('span#wait').html('Timed out');
        } else {
            $('span#wait').html('Something went wrong');
        }
    },
    success: function(data) {
        $('span#wait').html('Your speed is ' + data + 'kbps');
    }
});
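For what it's worth, some older Android stock browsers have been reported to ignore jQuery's `timeout` option. A hedged workaround, reusing the URL, messages, and 10-second value from the snippet above, is to run a watchdog timer of your own and abort the request manually, which forces the `error` callback to run:

```javascript
// Sketch: manual watchdog around $.ajax in case the built-in
// `timeout` option is ignored by the browser. Nothing executes
// until startSpeedCheck() is called from a page with jQuery loaded.

// Pure helper: pick the message to display in #wait.
function speedCheckMessage(timedOut, textStatus) {
    // jQuery reports 'timeout' for its own timer; our manual abort
    // arrives as 'abort', so we track it with the timedOut flag.
    return (timedOut || textStatus === 'timeout')
        ? 'Timed out'
        : 'Something went wrong';
}

function startSpeedCheck() {
    var TIMEOUT_MS = 10000;
    var timedOut = false;

    var xhr = $.ajax({
        url: 'internet-speed.php',
        timeout: TIMEOUT_MS, // harmless to keep where it does work
        success: function (data) {
            $('span#wait').html('Your speed is ' + data + 'kbps');
        },
        error: function (x, t, m) {
            $('span#wait').html(speedCheckMessage(timedOut, t));
        }
    });

    setTimeout(function () {
        if (xhr.readyState !== 4) { // request still pending
            timedOut = true;
            xhr.abort();            // triggers the error callback
        }
    }, TIMEOUT_MS);
}
```

This keeps the jQuery option in place where it works and only falls back to the manual abort where it doesn't.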
I have a Web App that runs in a WebView on both Android and iOS. I understand it is possible to implement network detection within the app itself, but I would prefer to do it with jQuery and an Ajax call.
The behaviour is slightly strange and maybe not possible to rectify, but it's interesting all the same and I'd like to find out why it's happening.
The Ajax call works and applies an overlay to the screen to block any activity while no internet is available, mainly because any request made while offline takes ages to load once the connection returns. It works absolutely fine when data/wifi is manually turned off, but when the phone loses signal there is a definite delay of about a minute before this is detected; there is no delay when data comes back. Here is the script...
var refreshTime = 1000;
window.setInterval(function() {
    $.ajax({
        cache: false,
        type: "GET",
        url: "./offline.php",
        success: function off() {
            document.getElementById("overlay").style.display = "none";
        },
        error: function on() {
            document.getElementById("overlay").style.display = "block";
        }
    });
}, refreshTime);
My question is: what is the reason for this delay? Is it something the mobile OS does while it keeps checking for mobile data, or is it something to do with the way the script works? Currently offline.php is a blank file and the Ajax call just checks that the file is accessible. Is there something I can add to get rid of the delay?
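Not an answer to why the OS takes a minute to notice the dropped signal, but one complementary approach: browsers expose `navigator.onLine` plus `online`/`offline` events, which reflect the OS's own connectivity state and can flip the overlay without waiting for a polled request to fail. How promptly these fire also varies by platform and WebView, so treat this sketch as a supplement to the polling, not a replacement:

```javascript
// Sketch: drive the same #overlay from the browser's connectivity
// events. wireConnectivityEvents() must be called from a real page.

// Pure helper: the display style the overlay should have.
function overlayDisplay(online) {
    return online ? 'none' : 'block';
}

function wireConnectivityEvents() {
    function update() {
        document.getElementById('overlay').style.display =
            overlayDisplay(navigator.onLine);
    }
    window.addEventListener('online', update);
    window.addEventListener('offline', update);
    update(); // set the initial state on load
}
```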
We are building a chatroom with our own notification system, not depending on GCM but on a service worker + SSE.
On desktop it is fine, but in the Android mobile app (which uses cordova-crosswalk, Chromium 53) the long-running notification connection becomes stuck after 20-30 minutes, even while the activity is in the foreground.
It doesn't die with an error; it just stops receiving data. No error at all, which is very weird, and there is no way to know whether to reconnect since we cannot tell if the connection is dead.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
Code:
runEvtSource(url, fn) {
    if (this.get('session.content.isAuthenticated') === true) {
        var evtSource = new EventSource(url, {
            withCredentials: true
        });
        return evtSource;
    }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn);
var evtSourceErrorHandler = (event) => {
    var txt;
    switch (event.target.readyState) {
        case EventSource.CONNECTING:
            txt = 'Reconnecting...';
            evtSource.onerror = evtSourceErrorHandler;
            break;
        case EventSource.CLOSED:
            txt = 'Reinitializing...';
            evtSource = this.runEvtSource(url, fn);
            evtSource.onerror = evtSourceErrorHandler;
            break;
    }
    console.log(txt);
};
evtSource.onerror = evtSourceErrorHandler;
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you don't get an error.
So, one way is, inside your get data function:
if (timer) clearTimeout(timer);
timer = setTimeout(reconnect, 30 * 1000);
// ...process the data
In other words, if it is over 30 seconds since you last got data, reconnect. Choose a value based on the frequency of the data you send: if 10% of the time there is a 60 second gap between data events, but never a 120 second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds, and use a keep-alive time-out of 45 seconds on the client.
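Putting both halves together, a minimal client-side sketch might look like this. The 45-second window and the 'heartbeat' payload are assumptions matching the example numbers above, not part of the original code:

```javascript
// Sketch: EventSource wrapped in a keep-alive watchdog. If nothing
// at all (data or server heartbeat) arrives within KEEPALIVE_MS,
// assume the socket died quietly and reconnect.

var KEEPALIVE_MS = 45 * 1000;

// Pure helper: has the connection been quiet for too long?
function isStale(lastMessageAt, now, keepaliveMs) {
    return (now - lastMessageAt) > keepaliveMs;
}

function openWithWatchdog(url, onData) {
    var es = null;
    var timer = null;

    function resetTimer() {
        if (timer) clearTimeout(timer);
        timer = setTimeout(reconnect, KEEPALIVE_MS);
    }

    function reconnect() {
        if (es) es.close();
        es = new EventSource(url, { withCredentials: true });
        es.onmessage = function (e) {
            resetTimer(); // any traffic counts as proof of life
            if (e.data !== 'heartbeat') onData(e.data);
        };
        resetTimer();
    }

    reconnect();
}
```

Note that the watchdog fires on silence, not on error, which is exactly the failure mode described in the question.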
As an aside, if this is a mobile app, bear in mind if the user will appreciate the benefit of reduced latency of receiving chat messages against the downside of reduced battery life.
I have set up a function which checks the location of the user every X minutes using setInterval(), and it works for the most part as expected. However, after the phone has been inactive for some time the intervals seem to stretch out; if it was set to 5 minutes it could take up to an hour to check again. This is intended to keep running 24/7, so the phone being inactive will be common.
Is this a known problem or is there something I should be doing to prevent this?
var onDeviceReady = function() {
    var preCheckGPSInterval = setInterval(function() {
        var watchID = navigator.geolocation.watchPosition(
            function(position) {
                if (position.coords.accuracy < 100) {
                    navigator.geolocation.clearWatch(watchID);
                    // code to execute
                }
            },
            function(error) {
                console.log("error");
            },
            { enableHighAccuracy: true }
        );
    }, 300000);
};
document.addEventListener("deviceready", onDeviceReady, false);
@Marty,
this is not a problem; it is the way it is supposed to work. If an app is NOT in the foreground, it will have its resources cleared from memory and possibly stored in swap space.
As suggested, you will need to force your app to run in the background. This article will help: 7 things you should know about the Android manifest XML file.
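For a Cordova app specifically, one commonly suggested route is the third-party cordova-plugin-background-mode plugin. The sketch below assumes that plugin is installed and is an addition to the answer above, not part of it:

```javascript
// Sketch: ask the OS not to suspend the app's timers while it is
// backgrounded. Assumes cordova-plugin-background-mode is installed;
// nothing runs until enableBackgroundChecks() is called in the app.

// Pure helper: the question's "every X mins" as milliseconds.
function minutesToMs(minutes) {
    return minutes * 60 * 1000;
}

function enableBackgroundChecks() {
    document.addEventListener('deviceready', function () {
        cordova.plugins.backgroundMode.enable();
        // the original 5-minute interval corresponds to minutesToMs(5)
    }, false);
}
```

Bear in mind that keeping the app alive in the background trades directly against battery life, so the OS throttling you are seeing is deliberate.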
-------------------- UPDATE 2 ------------------------
I see now that what I am trying to accomplish is not possible with Chrome. But I am still curious: why is the policy stricter in Chrome than in, for example, Firefox? Or is it perhaps that Firefox doesn't actually make the call either, but JavaScript-wise deems the call failed instead of blocked altogether?
---------------- UPDATE 1 ----------------------
The issue indeed seems to be about calling HTTP from an HTTPS site; this error is produced in the Chrome console:
Mixed Content: The page at 'https://login.mysite.com/mp/quickstore1' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/biztv_local/video/video_check.php?video=253d01cb490c1cbaaa2b7dc031eaa9f5.mov&fullscreen=on'. This request has been blocked; the content must be served over HTTPS.
Then the question is why Firefox allows it, and whether there is a way to make Chrome allow it. It had indeed worked fine until just a few months ago.
Original question:
I have some jQuery making an Ajax call to HTTP (the site making the call is loaded over HTTPS).
Moreover, the call from my HTTPS site is to a script on localhost on the client's machine, but the file starts with
<?php header('Access-Control-Allow-Origin: *'); ?>
so that should be fine. A peculiar setup, you might say, but the client is actually a media player.
It has always worked fine before, and still works fine in Firefox, but since about two months back it isn't working in Chrome.
Has there been a revision to Chrome's policies regarding this type of call? Or is there an error in my code below that Firefox manages to parse but Chrome doesn't?
The error only occurs when the file is NOT present on localhost (i.e. if a regular web user visits the site with their own browser; naturally they won't have the file on their localhost, and most won't even have a localhost running). So one theory might be that since the file isn't there, the Access-Control-Allow-Origin: * header is never encountered, and Chrome therefore deems the call insecure or disallowed and never completes it.
If so, is there an event handler I can attach to my jQuery.ajax call to catch that outcome instead? As of now, complete is never run if the file on localhost isn't there.
before: function(self) {
    var myself = this;
    var data = self.slides[self.nextSlide - 1].data;
    var html = myself.getHtml(data);
    $('#module_' + self.moduleId + '-slide_' + self.slideToCreate).html(html);

    // This is the fullscreen-always version of the video template
    var fullscreen = 'on';

    var videoCallStringBase = "http://localhost/biztv_local/video/video_check.php?"; // to call the media player's localhost
    var videoContent = 'video=' + data['filename_machine'] + '&fullscreen=' + fullscreen;
    var videoCallString = videoCallStringBase + videoContent;

    // TODO: works when video_check.php is found, but if it isn't, it will wait for a video to play. It should skip then as well...
    // UPDATE: Isn't this fixed already? Debug once env. is set up
    console.log('checking for ' + videoCallString);

    jQuery.ajax({
        url: videoCallString,
        success: function(result) {
            // ...if it isn't 1, we can't play back the video, so skip to the next slide
            if (result != 1) {
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        },
        complete: function(xhr, data) {
            if (xhr.status != 200) {
                // we could not find the check-video file on localhost, so skip the next slide
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        }, // the above may cause a double slide-skip; that should be trapped by the fail clause anyway
        async: true
    });
}
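To answer the "is there an event handler" part: in jQuery, a request the browser blocks (or that finds no server listening) normally does reach the `error` callback, typically with `xhr.status === 0`, so that is a reasonable place to put the skip. A hedged sketch of the relevant part, with `videoCallString` and `self` taken from the snippet above and the wrapper function name invented for illustration:

```javascript
// Sketch: route both outcomes through explicit handlers instead of
// inspecting `complete`. Blocked mixed-content requests and missing
// localhost servers both surface in `error`, usually with status 0.

// Pure helper: should this slide be skipped?
function shouldSkipSlide(httpStatus, result) {
    return httpStatus !== 200 || result != 1;
}

function checkLocalVideo(videoCallString, self) {
    jQuery.ajax({
        url: videoCallString,
        success: function (result) {
            if (shouldSkipSlide(200, result)) self.skip();
            else self.beforeComplete();
        },
        error: function (xhr) {
            // xhr.status is 0 when Chrome blocks the request outright
            self.skip();
        }
    });
}
```

This also avoids the double-skip the comment in the original worries about, since `success` and `error` are mutually exclusive.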
I'm having some trouble with the OpenTok 2 API. When I start to publish a stream and am prompted to allow or deny the website access to my webcam and microphone, allowed() should run if I allow, and denied() should run if I deny.
publisher.addEventListener('accessAllowed', allowed);
publisher.addEventListener('accessDenied', denied);
function allowed() {
console.log('Allowed');
}
function denied() {
console.log('Denied');
}
It works as expected in Firefox. In Chrome, accessAllowed works; however, accessDenied doesn't. Instead I get the following error:
OT.Publisher.onStreamAvailableError PermissionDeniedError:
TB.exception :: title: Internal Error (2000) msg: Publisher failed to access camera/mic:
Any ideas?
This is a bug in the current JS library at OpenTok. I do have a workaround that should get you going and I'll come back with an update when the bug is fixed.
var waiting = false;
publisher.addEventListener('accessAllowed', function() {
    waiting = false;
    allowed();
});
publisher.addEventListener('accessDenied', function() {
    waiting = false;
    denied();
});
publisher.addEventListener('accessDialogOpened', function() {
    waiting = true;
});
publisher.addEventListener('accessDialogClosed', function() {
    setTimeout(function() {
        if (waiting) {
            waiting = false;
            denied();
        }
    }, 0);
});
This workaround is slightly limited because Chrome has some weirdness when it comes to denying access once and then visiting the page again. If the user hasn't changed their preferences regarding media permissions, access will continue to be denied and the accessDialogOpened event won't even fire. I'll inform the team and continue to look into this.