Firefox Addon SDK: Get a header's loading time - javascript

I'm developing an add-on for Firefox that watches and filters headers, displaying the ones I'm interested in.
I need to get each header's loading time, and this is how I do it, which is quite crude:
function getHeaderInformations(httpHeader, count) {
    timer.setTimeout(function() {
        try {
            // this throws until the response headers are fully received
            var httpStatus = httpHeader.responseStatus;
        } catch (errStatus) {
            getHeaderInformations(httpHeader, ++count);
        }
    }, 1 /* millisecond */);
}
Every millisecond I test it; if it fails I retry, otherwise it means we have the header's status, so the headers are complete.
The problem is: I don't think this is a good way to measure a header's loading time. I never get the same value as HTTPFox, Live HTTP Headers, or the console, and it's not reliable.
I've searched for suitable attributes on https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIHttpChannel without success.
I've been through HTTPFox's code, but I couldn't find how it obtains this information.
The question is: how can I get the header's loading time properly?
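For reference, here is a sketch of a polling-free approach worth considering (an editorial assumption, not the asker's code): listen for the "http-on-examine-response" notification, which fires once the response headers have been parsed, and read the per-phase timestamps exposed by nsITimedChannel (values are in microseconds; check the interface docs for the exact attribute set).
// Sketch only: assumes the SDK's system/events module and that the
// channel implements nsITimedChannel (timestamps in microseconds).
const events = require("sdk/system/events");
const { Ci } = require("chrome");

function onExamineResponse(event) {
    // Headers are available at this point, so responseStatus
    // can be read without a try/catch.
    let channel = event.subject.QueryInterface(Ci.nsIHttpChannel);
    let timed = event.subject.QueryInterface(Ci.nsITimedChannel);
    // Time from sending the request to the first byte of the response.
    let headerMs = (timed.responseStartTime - timed.requestStartTime) / 1000;
    console.log(channel.URI.spec + " -> " + channel.responseStatus +
                " (headers after " + headerMs + " ms)");
}

events.on("http-on-examine-response", onExamineResponse, true);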

Related

Issues executing working JS with Scraping API: Not making complete XHR Calls

I'm trying to execute some JavaScript via a scraping API, with the goal of scrolling through dynamically loading pages and then parsing the full response. I tested the script in the console on an auction site (with single page ticked) and it worked as expected.
// Keep scrolling until the previous document height equals the new height
const infinite_scroll = async () => {
    let previous = document.body.scrollHeight;
    while (true) {
        window.scrollTo(0, document.body.scrollHeight);
        // Wait for the page to load more items, then proceed
        await new Promise(r => setTimeout(r, 2000));
        let new_height = document.body.scrollHeight;
        console.log('Previous Height: %s', previous);
        console.log('New Height: %s', new_height);
        if (new_height == previous) {
            break;
        } else {
            previous = new_height;
        }
    }
};
infinite_scroll();
In the network tab it shows 11 successful XHR calls, from
'372610?pn=6&ipp=10' to '372610?pn=16&ipp=10' ('pn' being the equivalent of a page number and 'ipp' the items per page).
However, when I try to pass this script to Scrapfly, the API I'm using, it makes only 3 XHR calls and the last one times out, so instead of getting the entire page I only get an extra 20 items.
Using the API demo (you would need to sign up for a free account!), it can be reproduced by adding the script, setting the rendering time to 10000 ms, and adding the following headers:
emailcta: pagehits%3D1%26userdismissed%3Dfalse
cookie: UseInfiniteScroll=true
Looking at the details of the last XHR call, it times out and has a null response. Its request headers are identical to the two previous successful ones, so I'm not exactly sure what the issue is.
The JS rendering time doesn't seem to affect this, nor does the wait time within the infinite scroll function.
All the docs say is:
"We provide a way to inject your javascript to be executed on the web page. You must base64 your script. Your Javascript will be executed after the rendering delay and before the awaited selector (if defined)."
They encode the script in base64 and then execute it, but the result is drastically different from the one in the console.
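For reference, the encoding step the docs describe might look like this (a sketch; the exact parameter name the API expects isn't shown in the quoted docs, and the file name is hypothetical):
// Node.js: base64-encode the scroll script before sending it to the API.
// In a browser console, btoa() would do the same for an ASCII-only script.
const fs = require('fs');

const script = fs.readFileSync('infinite_scroll.js', 'utf8'); // hypothetical file
const encoded = Buffer.from(script, 'utf8').toString('base64');
console.log(encoded); // paste this into the API's script parameter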

iframe content doesn't always load

So I have a system that essentially enables communication between two computers, and it uses a WebRTC framework to achieve this:
"The host": this is the control computer; clients connect to it, and it controls the clients' windows.
"The client": this is the user on the other end. Their window is controlled by the host.
What I mean by control is that the host can:
change CSS on the client's open window;
control the URL of an iframe on the client's open window.
There are variations on these, but that is essentially the extent of the control.
When the client logs in, the host sends a web address to the client. This web address is then displayed in an iframe, like so:
$('#iframe_id').attr("src", URL);
There is also the ability to send a new web address to the client in the form of a message; the same code as above is used to navigate to that URL.
The problem I am having is that on roughly 1 in 4 computers the iframe doesn't actually load. It either displays a white screen, or it shows the little "page could not be displayed" icon.
I have been unable to reliably duplicate this bug.
I have not seen a clear pattern between computers that can and cannot view the iframe content.
All clients are running Google Chrome, most on an Apple Power Mac. The only semi-link I have made is that Windows computers seem slightly more susceptible to it, but not in a way I can reproduce. Sometimes refreshing the page works...
Are there any known bugs that could possibly cause this? I have read about iframe white flashes, but I am confident it isn't that issue. I am also confident it isn't a problem with jQuery loading, because that would produce issues before this point and would be easy to spot.
Thanks so much.
Alex
edit: OK, so here is the code that collects data from the server. Upon inspection, the data being received is correct.
conn.on('data', function(data) {
    var data_array = JSON.parse(data);
    console.log(data_array);
    // initialisation
    if (data_array.type == 'init' && inititated === false) {
        if (data_array.duration > 0) {
            set_timeleft(data_array.duration); // how long is the exam? (minutes)
        } else {
            $('#connection_remainingtime').html('No limits');
        }
        $('#content_frame').attr("src", data_array.uri); // url to navigate to
        //timestarted = data_array.start.replace(/ /g,''); // start time
        ob = data_array.ob; // is it open book? Doesn't do anything really... why use it if it isn't open book?
        snd = data_array.snd; // is sound allowed?
        inititated = true;
    }
});
It is definitely trying to make the iframe navigate somewhere: when the client launches, the iframe changes, so it is trying to load something but failing.
EDIT: Update on this issue: it does actually work, just not with Google Forms. And again, it isn't everybody's computer, only a few people. If they navigate elsewhere (http://www.bit-tech.net for example) it works just fine.
FURTHER UPDATE: It seems that on the ones that fail there is an 'X-Frame-Options' issue, in that it is set to 'SAMEORIGIN'. I don't understand why some students would get this problem and some wouldn't... surely it depends on the page you are navigating to, and if one person can load it, all should be able to?
So the problem here was that the students were trying to load the page behind a proxy server which has an issue with cookies. Although the site does not use cookies, the proxy does, and when a student had blocked third-party cookies in their settings, the proxy would not allow the site to load.
Simply allowing cookies made it work :)
iframes are among the last things to load in the DOM, so wrap your iframe-dependent code in this:
document.getElementById('content_frame').onload = function() {...}
If that doesn't work, then it's the document within the iframe. If you own the page inside the iframe, you have options. If not... setTimeout? Or window.onload? (see the sketch after the snippet below)
SNIPPET
conn.on('data', function(data) {
    var data_array = JSON.parse(data);
    console.log(data_array);
    // initialisation
    if (data_array.type == 'init' && inititated === false) {
        if (data_array.duration > 0) {
            set_timeleft(data_array.duration); // how long is the exam? (minutes)
        } else {
            $('#connection_remainingtime').html('No limits');
        }
        document.getElementById('content_frame').onload = function() {
            $('#content_frame').attr("src", data_array.uri); // url to navigate to
            //timestarted = data_array.start.replace(/ /g,''); // start time
            ob = data_array.ob; // is it open book?
            snd = data_array.snd; // is sound allowed?
            inititated = true;
        };
    }
});
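Regarding the setTimeout fallback mentioned above, a minimal sketch (the 5-second cutoff and the warning message are illustrative, not from the original code):
// If the iframe hasn't fired onload within 5 seconds, assume the page
// was blocked (e.g. by X-Frame-Options) and warn the user.
var frame = document.getElementById('content_frame');
var frameLoaded = false;
frame.onload = function() { frameLoaded = true; };
setTimeout(function() {
    if (!frameLoaded) {
        console.warn('iframe did not load; the target page may forbid framing');
    }
}, 5000);
Note that browsers differ in whether onload fires for a blocked document, so treat this only as a heuristic.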

What is the best technology for a chat/shoutbox system? Plain JS, jQuery, WebSockets, or something else?

I have an old site running which also has a chat system, which always used to work fine. But recently I picked up the project again and started improving it, and the user base has been increasing a lot (it runs on a VPS).
Now this shoutbox I have (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
This shoutbox system was written years ago by the old me (which, ironically, was the young me), who was way too much into old-school plain old JavaScript (POJS?) and refused to use frameworks like jQuery.
What I do is poll every 3 seconds with AJAX to see if there are new messages, and if there are, load all of them (they are returned as an XML file, which the JS code parses into HTML blocks that are appended to the shoutbox content).
Simplified, the script looks like this:
The AJAX functions
function createRequestObject() {
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    // Return the object
    return xmlhttp;
}

function getXMLObject(XMLUrl, onComplete, onFail) {
    var XMLhttp = createRequestObject();

    // Check to see if the request counter has been initialized
    if (typeof getXMLObject.counter == "undefined") {
        getXMLObject.counter = 0;
    }
    getXMLObject.counter++;

    XMLhttp.onreadystatechange = function() {
        if (XMLhttp.readyState == 4) {
            if (XMLhttp.status == 200) {
                if (onComplete) {
                    onComplete(XMLhttp.responseXML);
                }
            } else {
                if (onFail) {
                    onFail();
                }
            }
        }
    };

    XMLhttp.open("GET", XMLUrl, true);
    XMLhttp.send();

    // Abort and report failure if the request takes longer than 5 seconds
    setTimeout(function() {
        if (typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
            XMLhttp.abort();
            if (onFail) {
                onFail();
            }
        }
    }, 5000);
}
Chat functions
function initShoutBox() {
    // Check for new shouts every 3 seconds
    shoutBoxInterval = setInterval(shoutBoxUpdate, 3000);
}

function shoutBoxUpdate() {
    // Get the XML document
    getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}

function shoutBoxAppend(xmlData) {
    // process all the XML and add it to the content,
    // also remember the timestamp of the newest shout
}
The real script is far more convoluted, with slower polling when the page is blurred, tracking of AJAX calls to avoid duplicate simultaneous calls, the ability to post a shout, loading of settings, etc. None of that is very relevant here.
For those interested, the full code is here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for getting the list of online users every 30 seconds, and for checking for new private messages every 2 minutes.
My main question is: since this old-school JS is slowing down my site, will porting the code to jQuery improve performance and fix this issue? Or should I go for another technology altogether, like Node.js, WebSockets, or something else? Or am I overlooking a fundamental bug in this old code?
Rewriting the entire chat and private message system (which share the same backend) requires a lot of effort, so I'd like to get this right from the start, not rewrite the whole thing in jQuery only to find out it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is no longer exceptional, so it should be a robust system.
Could switching from XML data files to JSON perhaps improve performance as well?
PS: the backend is PHP and MySQL.
I'm biased, as I love Ruby and I prefer plain JS over jQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX, and you should move to WebSockets for your use case.
30 users is not much... With WebSockets, I would assume a single server process could serve thousands of simultaneous updates per second.
The main reason is that WebSocket connections are persistent (no authentication happens on every request), and broadcasting to a multitude of connections takes the same number of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event would broadcast only the latest (posted) shout, not the whole XML, and store it in the XML for persistence (to serve new visitors).
Also, you lose all the authentication overhead and the requests that end up being answered with "No, there aren't any pending updates".
Minimizing the database requests (XML reads) should prove a huge benefit when moving from AJAX to WebSockets.
Another benefit is that enough simultaneous users will make AJAX polling behave like a DoS attack.
Right now, 30 users == 10 requests per second. That isn't much, but it can get heavy if each request takes more than 100 ms - meaning the server would answer fewer requests than it receives.
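To make the broadcast model concrete, here is a minimal Node.js sketch using the ws package (an illustration added here, not part of the original answer; the payload handling is made up). One incoming shout is pushed to every open connection, so there is no polling and no per-client database query:
// Minimal broadcast server with Node.js and the "ws" package.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function(socket) {
    socket.on('message', function(msg) {
        // Persist the shout once (storage step omitted), then
        // push it to every connected client.
        wss.clients.forEach(function(client) {
            if (client.readyState === WebSocket.OPEN) {
                client.send(msg.toString());
            }
        });
    });
});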
The home page of the Plezi Ruby WebSocket framework has this short example of a shout box (I'm Plezi's author, so I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'

class ChatServer
  def index
    render :client
  end

  def on_open
    return close unless params[:id] # authentication demo
    broadcast :print, "#{params[:id]} joined the chat."
    print "Welcome, #{params[:id]}!"
  end

  def on_close
    broadcast :print, "#{params[:id]} left the chat."
  end

  def on_message data
    self.class.broadcast :print, "#{params[:id]}: #{data}"
  end

  protected

  def print data
    write ::ERB::Util.html_escape(data)
  end
end

path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like this (the DOM updates and form-data access ($('#text')[0].value) use jQuery):
ws = NaN
handle = ''

function onsubmit(e) {
    e.preventDefault();
    if ($('#text')[0].value == '') { return false }
    if (ws && ws.readyState == 1) {
        ws.send($('#text')[0].value);
        $('#text')[0].value = '';
    } else {
        handle = $('#text')[0].value
        var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
            '://' + window.document.location.host +
            '/' + $('#text')[0].value
        ws = new WebSocket(url)
        ws.onopen = function(e) {
            output("<b>Connected :-)</b>");
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'your message';
        }
        ws.onclose = function(e) {
            output("<b>Disconnected :-/</b>")
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'nickname';
            $('#text')[0].value = handle
        }
        ws.onmessage = function(e) {
            output(e.data);
        }
    }
    return false;
}

function output(data) {
    $('#output').append("<li>" + data + "</li>")
    $('#output').animate({ scrollTop: $('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you could consider Plezi's auto-dispatch feature, which also provides an easy-to-use, lightweight JavaScript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and on whether you mind using heavier frameworks, you could use the more common socket.io (although it starts with AJAX and only moves to WebSockets after a warm-up period).
EDIT
Changing from XML to JSON will still require parsing. The question is really how XML parsing speed compares to JSON parsing speed.
JSON will be faster in client-side JavaScript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON also seems to be favored on the server side for PHP (though this might be opinion-based rather than tested): PHP: is JSON or XML parser faster?
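To illustrate the client-side difference, a quick sketch (the element and field names are made up):
// JSON: one call yields plain objects, ready to use
var shouts = JSON.parse(responseText);           // e.g. [{user: "...", text: "..."}]

// XML: parse first, then walk the DOM node by node
var doc = new DOMParser().parseFromString(responseText, "text/xml");
var nodes = doc.getElementsByTagName("shout");   // hypothetical element name
for (var i = 0; i < nodes.length; i++) {
    var user = nodes[i].getAttribute("user");
    // ...build HTML from each node
}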
BUT... I really think your bottleneck isn't the JSON or the XML. I believe the bottleneck is the multitude of times the data is accessed, (parsed?) and reviewed by the server when using AJAX.
EDIT 2 (due to a comment about PHP vs. Node.js)
You can add a PHP WebSocket layer using Ratchet... although PHP wasn't designed for long-running processes, so I would consider adding a dedicated WebSocket stack (using a local proxy to route WebSocket connections to a different application).
I love Ruby, since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated WebSocket stack.
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs. WebSockets) and always starts with AJAX before "warming up" to an "upgrade" (WebSockets)... Also, socket.io falls back to long-polling when not using WebSockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that with a Node.js solution he hit a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he states he has no issues with more than 200K concurrent users.
This obviously also depends on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates per second. However, updating all the users only once every 2 seconds comes to 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I never reached those numbers on my apps, so I never got to discover Plezi's limits first-hand... although, during testing, I had no issues sending hundreds of thousands of updates per second (but I did have a connection limit due to available ports and open-file-handle limits on my local machine).
This definitely shows how vast an improvement you can achieve by using WebSockets (especially since you stated that you notice slowdowns with 30 concurrent users).

AJAX call to localhost from a site loaded over HTTPS fails in Chrome

-------------------- UPDATE 2 ------------------------
I see now that what I am trying to accomplish is not possible in Chrome. But I am still curious: why is the policy stricter in Chrome than in, for example, Firefox? Or does Firefox not actually make the call either, but JavaScript-wise deem the call failed rather than blocked altogether?
---------------- UPDATE 1 ----------------------
The issue indeed seems to be about calling http from an https site; this error is produced in the Chrome console:
Mixed Content: The page at 'https://login.mysite.com/mp/quickstore1' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/biztv_local/video/video_check.php?video=253d01cb490c1cbaaa2b7dc031eaa9f5.mov&fullscreen=on'. This request has been blocked; the content must be served over HTTPS.
The question then is why Firefox allows it, and whether there is a way to make Chrome allow it. It did indeed work fine until just a few months ago.
Original question:
I have some jQuery making an AJAX call to http (the site making the call is loaded over https).
Moreover, the call from my https site goes to a script on localhost on the client's machine, but the file starts with
<?php header('Access-Control-Allow-Origin: *'); ?>
so that should be fine. A peculiar setup, you might say, but the client is actually a media player.
It has always worked fine before, and it still works fine in Firefox, but since about two months ago it isn't working in Chrome.
Has there been a revision to Chrome's policies regarding this type of call? Or is there an error in my code below that Firefox manages to parse but Chrome doesn't?
The error only occurs when the file is NOT present on localhost (i.e. when a regular web user visits the site with their own browser; naturally they won't have the file on their localhost, and most won't even have a localhost running). So one theory might be that since the file isn't there, the Access-Control-Allow-Origin: * header is never encountered, and therefore Chrome deems the call in its entirety insecure or not allowed and never completes it.
If so, is there an event handler I can attach to my jQuery.ajax method to catch that outcome instead? As of now, complete is never run if the file on localhost isn't there.
before: function(self) {
    var myself = this;
    var data = self.slides[self.nextSlide - 1].data;
    var html = myself.getHtml(data);
    $('#module_' + self.moduleId + '-slide_' + self.slideToCreate).html(html);

    // This is the fullscreen-always version of the video template
    var fullscreen = 'on';
    //console.log('runnin beforeSlide method for a video template');
    var videoCallStringBase = "http://localhost/biztv_local/video/video_check.php?"; // to call the mediaplayer's localhost
    var videoContent = 'video=' + data['filename_machine'] + '&fullscreen=' + fullscreen;
    var videoCallString = videoCallStringBase + videoContent;

    //TODO: works when file video_check.php is found, but if it isn't, it will wait for a video to play. It should skip then as well...
    //UPDATE: Isn't this fixed already? Debug once env. is set up
    console.log('checking for ' + videoCallString);
    jQuery.ajax({
        url: videoCallString,
        success: function(result) {
            // ...if it isn't, we can't play back the video, so skip to the next slide
            if (result != 1) {
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        },
        complete: function(xhr, data) {
            if (xhr.status != 200) {
                // we could not find the check-video file on localhost, so skip to the next slide
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        }, // the above might cause a double slide-skip, I think. Removed for now; that should be trapped by the fail clause anyway.
        async: true
    });
}
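As for catching the blocked call: jQuery.ajax accepts an error callback that fires for failed or blocked requests (a mixed-content block typically surfaces as status 0). A sketch, with the timeout value being an assumption to fail fast when localhost isn't reachable:
jQuery.ajax({
    url: videoCallString,
    timeout: 3000, // assumption: abort quickly if localhost doesn't answer
    success: function(result) {
        if (result != 1) { self.skip(); }
        else { self.beforeComplete(); }
    },
    error: function(xhr, textStatus) {
        // blocked, unreachable, or timed out: skip the slide
        console.log('video_check unreachable (' + textStatus + ', status ' + xhr.status + ')');
        self.skip();
    }
});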

How to detect that the user opened a page using the BACK button in the browser?

The idea is to display a message informing the user that the AJAX part of the application can work incorrectly when the "back" button is used.
Yes, there are a lot of discussions, but no solutions.
The best I have found: store information about the last page on the server side, and check the current page against the server info via AJAX.
But that way it would be impossible for the same user to use two browser windows.
You might want to develop using the URL #(hash) to store client state.
Take a look at http://www.asual.com/swfaddress/, which is used by Flash and AJAX apps to handle browser history.
Silverlight 3.0 uses a similar technique of keeping state in the URL's #(hash).
The real solution is to let the client maintain state rather than your server. You're breaking the laws of the Internet if you keep so much client state on your server that the back button doesn't work :)
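A minimal sketch of the hash approach (the function names are illustrative; older browsers without the hashchange event would need the polling tricks the libraries above provide):
// Store the AJAX view in location.hash on every update, and restore it
// when the user navigates with back/forward.
window.addEventListener('hashchange', function() {
    var state = location.hash.slice(1);   // e.g. "page=3"
    restoreViewFromState(state);          // hypothetical restore function
});

function goToPage(n) {
    location.hash = 'page=' + n;          // adds a history entry the back button can use
}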
This solution may or may not apply to your case, and it may or may not work in your browser. It seemed to work for me on IE7, where each page had a distinct "widget Id" referenced in the URL query string:
// try to detect bad back-button usage:
// widgetId does not match querystring parameter did=#
var mustReload = false;
if (location.search != null &&
    location.search.indexOf("&did=") > 0) {
    var urlWidgetId = location.search.substring(
        location.search.indexOf("&did=") + 5);
    if (urlWidgetId.indexOf("&") > 0) {
        urlWidgetId = urlWidgetId.substring(
            0, urlWidgetId.indexOf("&"));
    }
    if (currentDashboard != urlWidgetId) {
        mustReload = true;
    }
}
if (mustReload) {
    ... // reload the page to resynch here
}
