How to get the size of a page in KB?

I would like to get the size of the completely loaded page in KB, like what is shown in the browser's console.
I used:
var res = performance.getEntriesByType('resource');
but this does not give me the complete site, only the resources (.css files, images, ...), not the page itself.
var totalSizeTransfer = res.reduce((size, item) => {
    size += item.transferSize;
    return size;
}, 0);
console.log("total size transfer is: ", totalSizeTransfer);
The above code doesn't help.
Any idea?
If it is not possible in JavaScript, is there a PHP solution?

JavaScript doesn't have full access to network traffic, so a script cannot catch every request the browser makes.
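That said, the Resource Timing API also exposes a navigation entry for the HTML document itself, in addition to the resource entries. A minimal sketch that adds the two together; note it only counts what the API records (cross-origin resources served without a Timing-Allow-Origin header report a transferSize of 0):

// transferSize of the HTML document itself
var navSize = performance.getEntriesByType('navigation')
    .reduce((size, entry) => size + entry.transferSize, 0);
// transferSize of every subresource (css, images, scripts, ...)
var resSize = performance.getEntriesByType('resource')
    .reduce((size, entry) => size + entry.transferSize, 0);
console.log('total transfer: ' + ((navSize + resSize) / 1024).toFixed(1) + ' KB');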

Related

HTML imgs failing to load

Just for fun, I'm trying to implement a "15 puzzle", but with 16 images (cut from one music photo) instead.
The thing is split into two scripts/sides: a Python CGI script that performs the Last.FM query and splits the image into Y x Z chunks. When the Python script finishes, it outputs a JSON string that contains the location (on the server), the extension, etc.
{"succes": true, "content": {"nrofpieces": 16, "size": {"width": 1096, "height": 961}, "directoryname": "Mako", "extension": "jpeg"}}
On the other side is an HTML, JS, (CSS) combo that queries the CGI script for the images.
$(document).ready(function () {
    var artiest = $("#artiest")
    var rijen = $("#rijen")
    var kolommen = $("#kolommen")
    var speelveld = $("#speelveld")
    var search = $("#search")
    $("#buttonClick").click(function () {
        var artiestZ = artiest.val()
        var rijenZ = rijen.val()
        var kolommenZ = kolommen.val()
        $.getJSON("http://localhost:8000/cgi-bin/cgiScript.py", "artiest=" + artiestZ + "&rijen=" + rijenZ + "&kolommen=" + kolommenZ, function (JsonSring) {
            console.log("HIIIIII")
            if (JsonSring.succes === true) {
                console.log(JsonSring)
                var baseUrl = "http://localhost:8000/"
                var extension = JsonSring.content.extension
                var url = baseUrl + JsonSring.content.directoryname + "/"
                var amountX = rijenZ
                var amountY = kolommenZ
                for (var i = 0; i < amountX; i += 1) {
                    for (var p = 0; p < amountY; p += 1) {
                        console.log("HI")
                        var doc = new Image
                        doc.setAttribute("src", url + JsonSring.content.directoryname + i + "_" + p + "." + extension)
                        document.getElementById("speelveld").appendChild(doc)
                    }
                }
            } else {
                // Search failed. Deal with it.
            }
        })
    })
})
where the various IDs refer to various HTML elements (text fields, buttons, and divs).
Beneath is a screenshot of the full folder that contains the image files.
Now, coming to the point: all the HTML img tags and their src attributes seem correct, yet some images don't load while others do. I also noticed that the images fail to load at 2-second intervals. Is there some kind of timeout?
All this is being run from a local machine, so disk speed and CPU shouldn't really affect the matter. Also, from what I understand, the code that creates the img tags runs in a callback of getJSON, meaning it only runs once getJSON has finished / had a reply.
Does the great Stack Overflow community have an idea what's happening here?
To share my knowledge/experience with the great Stack Overflow community:
Small backstory
After progressing a bit further into the project, I started to run into various issues, ranging from JSON parsing to missing Access-Control-Allow-Origin: * headers, which made it very hard to get the Ajax request (client ==> Python CGI) done.
In the meantime I also started developing on my main desktop, which either has massive issues with Python versioning or none at all. Since the terminal on my desktop runs Python 3.4+, there was no CGIHTTPServer module. After a small amount of digging, I found that CGIHTTPServer had been moved into http.server; yet when running plain old python -m http.server, the CGI script wouldn't run, it would just be displayed. Of course, I had forgotten the --cgi option.
Main solution
The times I did use CGIHTTPServer successfully, I had trouble: the images wouldn't load, as described above. I suspect the module simply couldn't handle a decent number of requests, so when Y x Z requests suddenly came in, it struggled to deliver all the data ==> connection refused.
Since switching to python -m http.server --cgi: no problems whatsoever. Currently working on a Bootstrap grid for all those images!
Thanks @Lashane and @Ruud.
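For anyone debugging something similar: attaching an error handler to each generated image shows exactly which requests fail. A minimal sketch reusing the names from the question's loop (url, JsonSring, i, p, extension):

var img = new Image()
img.onerror = function () {
    // fires when the request is refused or the file is missing
    console.error("failed to load: " + this.src)
}
img.src = url + JsonSring.content.directoryname + i + "_" + p + "." + extension
document.getElementById("speelveld").appendChild(img)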

How to identify which CDN server a particular image on a website was fetched from?

For example: a website contains a few images, an HTML document, and other content. I want to find out which CDN (Content Delivery Network) server the images displayed on the website were fetched from, and likewise for the HTML documents and all other content on the site. Can anyone help me with this?
You could iterate over all the imgs / other elements that your page references and check the domain names. This gives you a list of hosts, which may or may not be CDNs.
var hosts = {};
$('img').each(function (i, e) {
    // use the resolved .src property so relative URLs come back absolute
    var m = e.src.match(/^https?:\/\/[^/]+/);
    if (!m) return;
    if (hosts[m[0]] === undefined) hosts[m[0]] = 0;
    hosts[m[0]]++;
});
console.log(hosts);
The above code tallies the host of every image on the page.
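To cover stylesheets, scripts, fonts, and XHRs as well, not just imgs, the Resource Timing API exposes the URL and type of every resource the page fetched; a sketch along the same lines:

var hostsByType = {};
performance.getEntriesByType('resource').forEach(function (entry) {
    var host = new URL(entry.name).hostname;      // entry.name is the full URL
    var key = entry.initiatorType + ': ' + host;  // e.g. "img: cdn.example.com"
    hostsByType[key] = (hostsByType[key] || 0) + 1;
});
console.log(hostsByType);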

Why does changing image names for the slideshow result in unlimited 304 (Not Modified) server responses?

I am running a slideshow using the JavaScript code below on my Ubuntu 12.04.1 server. The problem: when I change the names of the images, I get unlimited 304 (Not Modified) server responses in the Chrome network tool. But when I clear the browser history (cache), everything is fine: just 5 network requests served from cache and no more, as in the attached screenshot.
I wonder why this happens and how to solve it. I would greatly appreciate a hint on how to stop these unlimited requests. Thank you!
In the page index.php:
<SCRIPT>
//set image paths
src = ["ic/photo1.jpg", "ic/photo2.jpg", "ic/photo3.jpg", "ic/photo4.jpg", "ic/photo5.jpg"]
//set duration for each image
duration = 4;
ads = []; ct = 0;
function switchAd() {
    var n = (ct + 1) % src.length;
    if (ads[n] && (ads[n].complete || ads[n].complete == null)) {
        document["Ad_Image"].src = ads[ct = n].src;
    }
    ads[n = (ct + 1) % src.length] = new Image;
    ads[n].src = src[n];
    setTimeout("switchAd()", duration * 1000);
}
onload = function () {
    if (document.images)
        switchAd();
}
</SCRIPT>
<IMG NAME="Ad_Image" SRC="ic/photo1.jpg" BORDER=0>
I solved this by requesting each image from the server once instead of on every rotation: I add all the images to the document up front and assign each one its src a single time, then on each repetition I simply show one image and hide the others. If instead a src is assigned on every view, the browser re-asks the server whether the image has been modified (the 304 responses), which delays the display. Thank you :)
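A minimal sketch of that "load once, then toggle" approach (the slideshow container id is hypothetical; the image paths are the ones from the question):

var srcs = ["ic/photo1.jpg", "ic/photo2.jpg", "ic/photo3.jpg", "ic/photo4.jpg", "ic/photo5.jpg"];
var container = document.getElementById("slideshow"); // hypothetical wrapper div
var imgs = srcs.map(function (s) {
    var img = new Image();
    img.src = s;                    // each file is requested exactly once
    img.style.display = "none";
    container.appendChild(img);
    return img;
});
var current = 0;
imgs[current].style.display = "";
setInterval(function () {
    imgs[current].style.display = "none";
    current = (current + 1) % imgs.length;
    imgs[current].style.display = "";   // just a style change, no new request
}, 4000);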

Web page capture and save to image using the PhantomJS lib

I was searching Google for a JS lib that can capture an image of any website or URL, and came to know that the PhantomJS library can do it. Here I found a small piece of code that captures the GitHub home page and converts it to a PNG image.
If anyone is familiar with PhantomJS, please tell me what this line means:
var page = require('webpage').create();
Can I give any name here instead of webpage?
And if I need to capture only a portion of a web page, how can I do that with this library? Can anyone guide me?
var page = require('webpage').create();
page.open('http://github.com/', function () {
    page.render('github.png');
    phantom.exit();
});
https://github.com/ariya/phantomjs/wiki
Thanks.
Here is a simple PhantomJS script for grabbing an image:
var page = require('webpage').create(),
    system = require('system'),
    address, output, size;

address = "http://google.com";
output = "your_image.png";
page.viewportSize = { width: 900, height: 600 };
page.open(address, function (status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit();
    } else {
        window.setTimeout(function () {
            page.render(output);
            console.log('done');
            phantom.exit();
        }, 10000);
    }
})
Where:
'address' is your URL string.
'output' is your filename string.
'width' & 'height' are the dimensions of the area of the site to capture (comment this out if you want the whole page).
To run this from the command line, save the above as 'script_name.js' and fire off phantom with the js file as the first argument, as shown below.
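For example (using the hypothetical file name from above):

phantomjs script_name.js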
Hope this helps :)
The line you ask about:
var page = require('webpage').create();
As far as I can tell, that line does three things: it loads a module, require('webpage'); creates a WebPage object in PhantomJS, .create(); and assigns that object to the variable page.
The name "webpage" tells it which module to load.
http://phantomjs.org/api/webpage/
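So, to the question above: the variable name is yours to choose, but the module name is fixed. For instance, this works just as well:

var p = require('webpage').create(); // any variable name is fine
// ...but require('webpage') must stay as-is; it names a built-in PhantomJS module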
I too need a way to use page.render() to capture just one section of a web page, but I don't see an easy way to do this. It would be nice to select a page element by ID and just render that element at whatever size it is. They should really add that in the next version of PhantomJS.
For now, my only workaround is to add an anchor to my URL, http://example.com/page.html#element, to make the page scroll to the element I want, and then set a width and height that get close to the size I need.
I recently discovered that I can manipulate the page somewhat before rendering, so I want to try using this technique to hide all of the elements except the one I want to capture (see the sketch below). I have not tried this yet, but maybe I will have some success.
See this page and look at how they use querySelector(): https://github.com/ariya/phantomjs/blob/master/examples/technews.js
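For what it's worth, PhantomJS also exposes a page.clipRect property that restricts render() to a rectangle; combining it with getBoundingClientRect() inside page.evaluate() can capture a single element. A rough, untested sketch (the #element selector is hypothetical):

var page = require('webpage').create();
page.open('http://example.com/page.html', function (status) {
    var rect = page.evaluate(function () {
        var r = document.querySelector('#element').getBoundingClientRect();
        // return a plain object; evaluate() can only pass back serializable data
        return { top: r.top, left: r.left, width: r.width, height: r.height };
    });
    page.clipRect = rect;       // restrict rendering to that rectangle
    page.render('element.png');
    phantom.exit();
});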

Firefox extension data transfer to web page

I have a Firefox extension that collaborates with a web page and occasionally needs to inject data into it, which the page then formats and displays.
The way I do it now is:
var element = doc.createElement("MyData");
doc.documentElement.appendChild(element);
for (....)
{
    var x = ....
    var y = ....
    var z = ....
    var row = doc.createElement("row");
    row.setAttribute("x", x);
    row.setAttribute("y", y);
    row.setAttribute("z", z);
    element.appendChild(row);
}
This gets really slow for thousands of items, and the page spends further time parsing the data and displaying it in HTML elements.
Is there a better way?
Would it make sense to dump the entire data set as a single string, for example?
Thanks in advance.
In my experience with regular site scripting, large HTML insertions are faster if you inject raw HTML via the .innerHTML property. Perhaps that's true for extensions as well.
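A sketch of the question's loop rewritten that way (items is a hypothetical stand-in for wherever x, y, and z come from, and attribute values should be escaped if they can contain quotes):

var rows = [];
for (var i = 0; i < items.length; i++) {
    var it = items[i];
    rows.push('<row x="' + it.x + '" y="' + it.y + '" z="' + it.z + '"></row>');
}
var element = doc.createElement("MyData");
element.innerHTML = rows.join("");  // one parse instead of thousands of DOM calls
doc.documentElement.appendChild(element);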
