In an assignment I was asked what the first thing to do is before testing JavaScript changes after deploying an application. My answer was to clear the browser cache, because cached content may hide the new changes. I want to know whether that is a valid and good answer, or whether there are things to do before that. Thank you
A word of advice: not all users know to clear their browser cache to pick up changes. Rather than counting on users clearing their cache, I suggest you version (increment) your JavaScript file's URL so browsers fetch the new copy automatically, as sketched below.
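A minimal sketch of that idea; the file path and version number are placeholders for whatever your build produces (in practice you would usually render the versioned URL straight into the page's script tag on the server):

```js
// Reference the script with a version/build number in the URL and bump it on every
// deploy, so browsers treat it as a brand-new resource instead of serving a stale copy.
// APP_VERSION and '/js/app.js' are made-up placeholders.
var APP_VERSION = '42';

var script = document.createElement('script');
script.src = '/js/app.js?v=' + APP_VERSION;
document.head.appendChild(script);
```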
But before deploying a modification, I advise you to run some tests. Several types of test exist; unit tests are a good example to look at.
It's a vague question to be asked, but I personally think about how a real user would use the system when I test it. Would I trust users to clear the cache/cookies every time they used the system? No. At most I would expect them to close and reopen the browser (or simply refresh the page). As mentioned in another answer, cache busting should be handled by the developer during the build process, for example by hashing the JavaScript bundle.
I used to be in the habit of keeping dev tools open and relying on the 'Disable cache' toggle, but after getting caught out a few times by real users seeing different behaviour than what I was seeing during development, I moved to ensuring the bundles weren't cached in both dev and prod.
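If you use a bundler, the content hashing mentioned above can be a one-line configuration change. A rough sketch, assuming webpack (any bundler with content hashing works the same way); the entry and output paths are placeholders:

```js
// webpack.config.js (sketch): [contenthash] changes whenever the bundle's contents
// change, so every deploy with new code produces a new file name and browsers
// fetch it instead of reusing a cached copy.
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/dist',
    filename: 'bundle.[contenthash].js' // e.g. bundle.8e0d62a03c.js
  }
};
```

Old hashed files can be left on the server, so users holding a cached HTML page still resolve a working bundle.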
I was wondering whether it is possible to write a JavaScript function that checks a condition and, if it is true, denies access to the code. Right now I am checking the user agent, and if it doesn't meet the given criteria I delete the HTML tag. However, anyone who opens the network tab can still see the GET requests and responses for my code.
This is a website running on localhost because it's an Electron app by the way.
I thought maybe I could issue a 403 error but I'm not sure if that's possible via JS.
Thanks
In an Electron app, your code is still JavaScript source code. You can obfuscate it and/or put it in an ASAR archive to make it harder to read and find, but the code is still there and accessible to anyone who wants to go to the effort. (For instance, if you use VS Code, you can see the source in the resources/app/out directory and its subdirectories.)
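As one illustration of the obfuscation step, here is a rough build-script sketch using the third-party javascript-obfuscator package (the package choice and file paths are my own assumptions, not something Electron provides):

```js
// Read the app's entry script, obfuscate it, and write the result back out.
// 'app/main.js' and the output path are placeholders.
const fs = require('fs');
const JavaScriptObfuscator = require('javascript-obfuscator');

const source = fs.readFileSync('app/main.js', 'utf8');
const result = JavaScriptObfuscator.obfuscate(source, {
  compact: true,              // strip whitespace and newlines
  controlFlowFlattening: true // make the control flow much harder to follow
});

fs.writeFileSync('app/main.obfuscated.js', result.getObfuscatedCode());
```

Remember this only raises the effort required; the logic still ships to the user's machine.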
You can make it even harder to find and understand the source if you're willing to put in more work. V8, the JavaScript engine Node.js uses, has a feature called startup snapshots: You run V8 and have it load your script (your obfuscated code), take a heap snapshot (after GC), and write it to a file. Then you specify the heap snapshot when loading V8, and it just hauls in the snapshot instead of reading and parsing code. The Atom team have done this on top of Electron. In their case the motivation was startup performance, not hiding the source code, but it has the effect of making your code even harder to find.
Note I said harder, not impossible. At the end of the day, if you want the code to execute on the end user's computer, that code is going to be accessible to the end user. This is true even of compiled applications, although compilation discards a lot of the organizational information that helps a human understand the code; obfuscation discards less of it (but a good obfuscator still makes code extremely opaque to human beings).
Here is the situation: A complex web app is not working, and it is possible to produce undesired behavior consistently. The cause of the problem is not known.
Proposal: trace the execution paths of all JavaScript code. Essentially, produce two monstrous logs which can then be fed into a diff algorithm to determine where the behavior related to the bug begins to diverge. The cause is not apparent from application behavior, and both comprehending and obtaining a copy of the actual JS code being run are difficult, because of the many pages that must be switched to and copied out of the Web Inspector. Making it harder still, all pages are dynamically spliced together with Perl code, and significant portions of the JS exist only as (dynamically built) Perl strings.
The Web Inspector in Chrome does not have an option that I know about for logging an execution trace. Basically what I would like is a log of every line of JS that is executed, in the order that they are executed. I don't see this as being a difficult thing to obtain given that the JS VM is single-threaded. The problem is simply that the existing user-facing tools are not designed for quite this much hardcore debugging. If we look at the Profiler in the Dev Tools, it's clearly capable of the kind of instrumentation that I need, but it is fundamentally designed to do profiling instead of tracing.
How can I get started with this? Is there some way I can build Chrome from source so that I can:
- switch off the JIT in V8?
- log every single JavaScript expression evaluated by V8 to a file?
I have zero experience with the development side of Chrome. So e.g. links to dev-builds/branches/versions/distros of Chrome/Chromium/Canary (what's the difference?) are welcome.
At this point it appears that instrumenting the browser with powerful js tracing is still likely to be easier than redesigning the buggy app. The architecture of the page is a disaster, but the functionality is complex, and it almost fully works. I just have to find the one missing piece.
Alternatively, if tools of this sort already exist, what are some other keywords I can search for them with? "Code Tracing" is pretty much the only thing I can come up with.
I tested dynaTrace, which was a happy coincidence since our app supports IE (indeed, Chrome support just came out of beta), but it does not produce a text dump; it basically produces a massive Win32 UI expando-tree, which is impossible to diff. This makes me really sad, because I know how much more difficult it was to present the trace that way, and yet it turns out to be almost utterly useless. Who's going to scroll up and down that tree view and see anything really useful in it for anything other than a toy example of a web app?
If you are developing a big web app, it is always good to follow a test-driven strategy for the coding part of it. With just a few tips you can write a simple unit testing script (using QUnit, for example) to test pretty much all aspects of your app. Here are some potential errors and some ways of dealing with them.
Make yourself handlers that register long-lived objects, and a handler to close them safely. If the safe close does not succeed, then the object's own management is failing. One example is Backbone zombie views: either the view has bad code in its close section, the parent's close is not hooked up, or an infinite loop happened. Testing all view events is also good, although tedious.
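For the Backbone case specifically, a commonly used close pattern looks roughly like this (a sketch; onClose is a convention of your own, not part of Backbone itself):

```js
// Give every view a close() method so long-lived views can always be shut down
// the same way, instead of leaving "zombie" views bound to events.
Backbone.View.prototype.close = function () {
  this.remove();         // remove the view's element from the DOM
  this.off();            // unbind events that other objects bound to this view
  this.stopListening();  // stop listening to events this view bound on models/collections
  if (this.onClose) {    // optional hook for the view's own cleanup (timers, child views, ...)
    this.onClose();
  }
};
```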
By putting all of the data-fetching code inside a dedicated module (I often use a set of Backbone.Model objects, one for each table/document in my DB) with a handler for each using a request/response (reqres) pattern, you can test them one by one to check that they all fetch and save correctly.
If complex computation is needed, abstract it into a function or module so that it can be easily tested with known data; a small example follows.
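For example, a tiny QUnit test for such an extracted function might look like this (a sketch; calculateTotal is a made-up example, and it assumes a reasonably recent QUnit that passes an assert object to the callback):

```js
// The computation lives in its own function, so it can be exercised with known data.
function calculateTotal(items) {
  return items.reduce(function (sum, item) {
    return sum + item.price * item.quantity;
  }, 0);
}

QUnit.test('calculateTotal sums price * quantity', function (assert) {
  assert.equal(calculateTotal([]), 0, 'empty cart totals zero');
  assert.equal(
    calculateTotal([{ price: 2, quantity: 3 }, { price: 5, quantity: 1 }]),
    11,
    'mixed cart is summed correctly'
  );
});
```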
If your app uses data binding, a good policy is to have a JSON schema for all the data consumed by views that contain bindings, and to check all of the required data against that schema. The same applies to your Backbone.Model objects.
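A schema check does not need a library to be useful; even a hand-rolled sketch like the following catches a payload that is missing a bound field (the field names are made up for illustration):

```js
// A tiny type-only "schema": each key maps to the typeof it should have.
var userSchema = {
  id: 'number',
  name: 'string',
  email: 'string'
};

function matchesSchema(data, schema) {
  return Object.keys(schema).every(function (key) {
    return typeof data[key] === schema[key];
  });
}

console.log(matchesSchema({ id: 1, name: 'Ann', email: 'a@example.com' }, userSchema)); // true
console.log(matchesSchema({ id: 1, name: 'Ann' }, userSchema));                          // false (missing email)
```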
Using a good IDE also helps. PyCharm (if you use Python for the backend) or WebStorm are extremely good for developing and testing JavaScript/CoffeeScript. You can set breakpoints and inspect your code at specific locations, right inside your browser. The IDE also analyzes your code for auto-completion, so you can spot some errors that way.
I cannot encourage the use of modules in your code enough. Although there is no official JavaScript way of doing it yet (the next ECMAScript draft has one), you can still use good libraries for it. Good ones are RequireJS, CommonJS, or Marionette.Module (if you use Marionette as your framework). I think Ember/AngularJS also offer this kind of functionality, but I haven't worked with them personally, so I am not sure.
This might not give you an immediate solution to your problem, and I don't think (IMO) there is an easy one either. My focus was to show you ways to develop so that errors can be easily spotted and countered, and all of it (depending on your unit testing) during the development phase. Errors will always happen, as much as our programmer egos want us to believe the contrary. Hope I helped :)
I would suggest a divide-and-conquer strategy, first via logging and second via code. Wrap suspect methods with console logging of their entry ("in") and exit ("out") events; when the bug occurs, hopefully it occurs between or after some logged event. If event logging does not shed light on it, pull parts of the code out into a new page/app. Divide and conquer will find where the error starts occurring.
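One cheap way to do the "in and out" logging without editing every method by hand is a small wrapper, roughly like this (a sketch; it only touches an object's own function-valued properties):

```js
// Replace each method on obj with a version that logs entry and exit,
// while still delegating to the original implementation.
function traceMethods(obj, label) {
  Object.keys(obj).forEach(function (name) {
    if (typeof obj[name] !== 'function') return;
    var original = obj[name];
    obj[name] = function () {
      console.log(label + '.' + name + ' in', arguments);
      var result = original.apply(this, arguments);
      console.log(label + '.' + name + ' out', result);
      return result;
    };
  });
}

// Usage sketch: traceMethods(suspectObject, 'cart');
```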
So it seems you're in the realm of weird already, so I'm thinking of a weird solution. I know nothing about the internals of Chrome myself, so we're in the same boat, but if you're feeling bold, here's an idea. Maybe you could find/replace every newline in your JavaScript files with a piece of code that logs, either to a global string or to the console: a) what file you're in, b) the contents of this or something else useful to you, and maybe even c) the time. This would at least get you started. Make sure it's wrapped in something distinct so you can just as easily remove it.
One of the great things about being a web developer in recent history is all the sharing that's going on, especially with JavaScript libraries. There are all these awesome tools to use: jQuery, jQuery-UI, Lightbox, bxslider, underscore.js, Backbone.js; the list goes on. Then there comes a time when one or more of these libraries needs to be updated. But JavaScript runs on the client, it isn't compiled, and it's difficult or impossible to be notified when a problem occurs. What is the best technique right now to ensure that, after you update one or more JavaScript libraries, your web application will not start throwing JavaScript errors?
Surely the best response to this can't just be to test. Especially with a complicated application, it can be too difficult to go through every possibility and make sure no errors are thrown. What are other web application developers out there doing to make sure they don't ship a deployment with an embarrassing and crippling JavaScript bug caused by updating?
The best answer is to "just test". What you are asking, essentially, is "How do I test to make sure my software still works?". You can do all the homework you want to see what changed, but eventually you just need to test your application.
That being said, there are generic testing tools like JSLint and Selenium, but ultimately your application is going to be unique enough that you will need to have unit tests to cover the business logic and standard QA for non-standard processes.
One way to ensure that things still work functionally is to have a suite of automated browser tests (utilizing a tool like Selenium) that you run on your development environment.
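A minimal sketch of such a test using the selenium-webdriver Node bindings (the URL and element id are placeholders, and it assumes a local ChromeDriver is available):

```js
// Open the app, wait for something its JavaScript is responsible for rendering,
// and fail loudly if it never appears. Run a handful of these after a library upgrade.
const { Builder, By, until } = require('selenium-webdriver');

(async function smokeTest() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000/');
    await driver.wait(until.elementLocated(By.id('app-root')), 5000);
    console.log('smoke test passed');
  } finally {
    await driver.quit();
  }
})();
```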
I wouldn't really call this "just testing", but unit testing will help do the job. Write tests for your app once, checking for the expected results (good practice either way); then, when you update the plugins, run those tests again.
http://qunitjs.com/
Nothing beats a good QA and browser testing though.
Plenty of other answers already include "test", so hopefully that's obvious at this point.
The other thing that you should ALWAYS do is read the release notes for every single version up to the version that you're upgrading to. I can't speak intelligently about bxslider or Lightbox, but the other major libraries you referenced are extremely good at releasing detailed changelogs which will notify you of breaking changes. You can decide for yourself whether any of these changes will have a negative effect on your app (in addition to testing, of course!).
I have a problem.
My problem is that every time I make changes to my node.js server code, I have to restart the entire thing to see the results.
Instead of this, I remember seeing something about being able to pipe Chrome directly into the server's source code and "hot edit" it. That is to say, changes to the code take effect immediately and the server keeps running.
I hope that I am being clear.
It would be a real time saver to directly edit code (especially for small things) while the server is actually running and have it instantly take effect.
Does anyone know how to do this?
See my answer to my own question that answers this question: https://stackoverflow.com/a/11157223/813718
In short, there's an npm module named forever which does what you want. It can monitor the source files and restart the node instance when a change is detected.
I do not quite understand the pipe-to-chrome part... but there is a node module which watches user-defined files and restarts the server automatically:
How can I edit on my server files without restarting nodejs when i want to see the changes?
https://github.com/isaacs/node-supervisor
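Under the hood these tools boil down to "watch the files, restart the process". A bare-bones sketch of that idea, not a replacement for supervisor/forever (server.js and the watched directory are placeholders, and fs.watch's recursive option is not available on every platform):

```js
// Restart the server child process whenever a file in the project directory changes.
const { spawn } = require('child_process');
const fs = require('fs');

let server;

function start() {
  server = spawn('node', ['server.js'], { stdio: 'inherit' });
}

fs.watch('./', { recursive: true }, function () {
  console.log('change detected, restarting...');
  server.kill();
  start();
});

start();
```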
Yes, there is such a thing.
Just take advantage of JavaScript's so-called "evil" eval() function.
(You might need something like a WebSocket connection to the server to alert it about the change.)
I am halfway through implementing the same feature, but there are a lot of things to consider if you want to preserve the server's state (the current values of variables, for example).
About the pipe-to-chrome part: maybe this is what you meant?
https://github.com/node-inspector/node-inspector/wiki/LiveEdit
I am looking for a tool that lets you monitor/log page rendering time on client machines. I am not looking for Firebug/YSlow, because I want to know the following kinds of things:
How fast do my pages load when the user is in Russia?
How long does it take for javascript to run on some pages for everyone who accesses those pages?
So I actually care what my site feels like to the people who use it. Do tools that already do this exist?
I should add that my website is a software as a service website, not accessible publicly.
I've never heard of any way to do this. One solution, which may be terrible, might be to log the time yourself. At the top of your page, have an inline script tag with a global variable called start that creates a new Date. Then have an onload listener that calls a function once the page has finished loading. In that function, get the difference between the start time and the current time and send it back to your server. This is by no means accurate, but it might give you some idea. You could also log the user's IP address for geolocation when you send back the data.
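A sketch of that approach (the /log-timing endpoint is made up; in a real page the first line would go in an inline script tag as early in the document as possible):

```js
// Record when the page started executing scripts...
var pageStart = new Date().getTime();

// ...then report how long it took to reach the load event, plus which page it was.
window.addEventListener('load', function () {
  var elapsedMs = new Date().getTime() - pageStart;
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/log-timing', true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ page: location.pathname, loadTimeMs: elapsedMs }));
});
```

On the server side you can bucket these measurements by IP-based geolocation to answer the "how fast is it from Russia" question.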
I recommend https://www.atatus.com/. Atatus helps you to visualise page load time across pages, browsers and countries. It also has AJAX monitoring and Transaction monitoring.
There is not really a super easy way to do this effectively, but you can fake the geolocation aspect by browsing through a proxy (which roughly doubles the round-trip time) and get a pretty good idea of what it's like to use your site from elsewhere.
As for JavaScript, you can profile it with the profiler in Firebug; this will give you an idea of which functions you should refactor and whatnot.
In my opinion, I'd determine what most of your users are using, or what their general demographic makeup is. Are they 75-year-old guys? In that case maybe they aren't up on the newer, faster browsers, or for that matter don't care. If they are cool hipster designers in San Francisco, then it's Safari 4.0... Anyway, this is just a way to work out the bulk of your users. I think the best way is just to grab an older laptop with Windows XP on it and browse your site yourself; you can use Firebug Lite on browsers besides Firefox.
I like to run Dynatrace AJAX edition from UI automation tests. This easily allows you to monitor performance deterioration and improvement over time. There's an article on how to do this on the Dynatrace website.