Debugging Performance with Chrome DevTools - javascript

I have a Nuxt app in production with sourcemaps enabled. When I try to debug it using the Performance tab in Chrome DevTools, all I see is Vue.js.
For instance, I have a task that takes ~255ms. When I look at the Call Tree, I see mostly calls pointing to vue.runtime.esm.js, and even going deep into the nesting doesn't lead me to any of my source code.
Am I missing something, or am I interpreting this incorrectly?
Edit:
Taking @wOxxOm's advice, I ignored some Vue files; it doesn't seem to make much of a difference in leading me to my application code.
Edit 2:
Enabled two more experiments - "Timeline: All events" and "Timeline: V8 runtime" - as suggested by @wOxxOm. Again, the trace points to the modern bundle that Nuxt creates, and the Call Tree points to more library code.

Related

How to detect undefined imports (when using JavaScript ES6 import statements)

I often run into the problem of a JavaScript ES6 import statement (transpiled via Babel) failing to actually find the file the developer intended. This is particularly concerning during refactoring or during auto-fixing/formatting.
Is there a way to automatically flag imports that are bringing in undefined? Edit: at runtime?
It is standard practice to have a developer console of some kind or another (such as Chrome Developer Tools - hit F12 if working on web pages) open during refactoring or development of the JavaScript.
So any imports that are not found would generate errors in that console, and thus be visible to the developer.
Missing imports, however, are not of concern (or should not be) to the user, and as far as I know there is no mechanism to flag this to a user (non-developer) of the page. Good design dictates it is not something they should be concerned with.
So you're probably thinking: okay, during development we detect via the console that some imports have gone missing, we fix them, and we publish the latest code/page. Later the imports go missing again.
Well, that's the thing. If the imports live on external sources and can just randomly go missing, you need to fix that problem. If the imports need to live locally (same location as the JavaScript being served up) so they are available with 100% accuracy, then do that.
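That said, if you still want a runtime safety net, here is a minimal sketch. The assertDefined helper is my own illustration, not a standard API, and it only catches bindings that resolved to undefined (e.g. a missing named export), not modules that failed to load entirely:
// Hypothetical runtime guard: verify that imported bindings are defined.
// formatDate stands in for whatever you actually import.
import { formatDate } from './utils.js';

function assertDefined(bindings) {
  for (const [name, value] of Object.entries(bindings)) {
    if (value === undefined) {
      console.error('Import "' + name + '" is undefined - check its export/path');
    }
  }
}

// Call once at module start-up, right after the imports.
assertDefined({ formatDate });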

How can I tell what javascript has been compiled when running V8

I have heard that V8 compiles "hot code" to optimise JavaScript performance. Is there any way I can tell what code has been compiled and what code has not?
First and foremost, you will need to profile your code in the Profiles tab of the JavaScript console in Chrome to see what is worth testing. If a function, module, or whatever you are trying to test does not take up much time, it won't be worth your effort.
V8's JIT is going to make assumptions about your code; if those assumptions are true, the code will be lightning fast. If not, V8 will deoptimize that code as your program continues. Here is an example from my own tests. In the code below I was testing a merge sort function I had written.
console.time('order');
msort(ob);
console.timeEnd('order');
The first run of 60,000 random numbers completes after 8ms, and all of the following runs jump up to around 16ms. Basically, the JIT had issues with something I wrote, so it recompiled my code. I have seen the exact opposite occur, where code jumps to twice as fast. If you want to look at it, this is not the exact version, but one using ES6 module syntax:
https://github.com/jamesrhaley/es2015-babel-gulp-jasmine/blob/master/src/js/mergeSort/mergeSort.js
Also, if your code is not worth optimizing, then it won't be optimized to begin with. Here are a couple of links that helped me improve my speed when writing JS:
https://www.youtube.com/watch?v=UJPdhx5zTaw
https://www.smashingmagazine.com/2012/11/writing-fast-memory-efficient-javascript/#so-how-does-javascript-work-in-v8
If you are willing to build a standalone version of V8, you can just run the shell as follows: d8 --trace-opt foo.js (you might also want to pass --trace-deopt, since your code might get deoptimized and then reoptimized again).
Another useful option is --print-code, which will let you see all the versions of machine code for all the functions that were compiled, although this one is probably overkill. There is also --print-opt-code.
And lastly, use d8 --help to see what other useful options V8 can take.
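To illustrate, here is a small file you could feed to d8; the function and loop are made up, but running it with --trace-opt (and --trace-deopt) should print lines naming the functions V8 decides to optimize:
// hot.js - run with: d8 --trace-opt --trace-deopt hot.js
function add(a, b) {
  return a + b;
}

var sum = 0;
for (var i = 0; i < 1e6; i++) {
  sum = add(sum, i); // repeated calls with the same types make add() "hot"
}
print(sum); // d8 ships a print() builtin; use console.log elsewhere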

Is there a way to automate the testing of chrome extensions? [duplicate]

I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code will be shared, but I'm not sure about this yet. For sure some of the extensions will use native APIs. I don't have much experience with TDD/BDD, and I thought this project is a good time to start following those ideas.
The problem is, I have no idea how to handle it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple - some data in local storage, refreshing a page, and listening through web sockets.
My observation on why it is hard for me: there is a lot of behaviour and not many models, and the models also depend on the platform.
I practise two different ways of testing my browser extensions:
Unit tests
Integration tests
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
Workflow
During development:
Implement or edit a feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload update to the official extension galleries and my website (Safari and IE extensions have to be hosted by yourself) and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method for each module, just the ones that matter. For instance (a small example test follows this list):
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery) are flawed: Any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses DOM without negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (via InfoProvider) and verify that the results are as expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls inside the expected range.
Whether the UI is not obviously broken, e.g. by clicking on the Close button.
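For a concrete picture, a test in this style might look like the sketch below; parseDOM is a hypothetical stand-in for the DOM parsing module, and mocha + expect.js are assumed to be loaded on the test page:
// Sketch of a browser-run mocha + expect.js unit test.
describe('parseDOM', function () {
  it('parses markup without fetching resources or running script', function () {
    var doc = parseDOM('<img src="http://example.com/x.png" onerror="window.sideEffect = 1">');
    expect(doc.querySelectorAll('img').length).to.be(1); // element was parsed...
    expect(window.sideEffect).to.be(undefined);          // ...without side effects
  });
});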
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
When I pass an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from the normal page. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs; they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that there's no constraint or unexpected condition inside the extension's context which breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests in other formats, including JUnit 4 tests (Java). Unfortunately, the result wasn't satisfying: many commands were not recognized.
So, I abandoned the Selenium IDE and switched to Selenium WebDriver.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The first is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've discovered how the documentation works, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
Automating tests is nice. What's even nicer? Automating the installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
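With the Node bindings for Selenium WebDriver, loading a packed Chrome extension might look like the sketch below; the .crx path and target URL are placeholders:
// Sketch using the selenium-webdriver Node package.
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

async function run() {
  // addExtensions() takes paths to packed (.crx) extensions.
  const options = new chrome.Options().addExtensions('./my-extension.crx');
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  try {
    await driver.get('https://example.com/page-the-extension-modifies');
    // ...locate elements injected by the extension and assert on them...
  } finally {
    await driver.quit();
  }
}

run();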
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, and this one definitely does not allow one to install a custom extension. Internet Explorer doesn't have built-in support for extensions. Extensions are installed through MSI or EXE installers, which are not even integrated in Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension code that can be activated by simply going to a specific URL. When the extension sees that URL, it begins running the tests. (A sketch of this hook is shown after these steps.)
Then, in the page that activates the testing in the extension I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors. So I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically invoke browsers to point at that specific URL and record their performance along with other test information, errors, etc on any given client system using Selenium:
http://docs.seleniumhq.org/
This way I can break down the tests in terms of browser, extension, server, application, and database, and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension testing framework.
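As a rough illustration of the first step, a content script could watch for the test URL; everything here (the URL, runExtensionTests, the report endpoint) is hypothetical:
// content-script.js - activates the extension's built-in tests on a magic URL.
// runExtensionTests() is a hypothetical suite runner returning a Promise.
if (location.href.indexOf('https://example.com/extension-test') === 0) {
  runExtensionTests().then(function (results) {
    // Report results so the server-side logging described above can pick them up.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/extension-test/report', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(results));
  });
}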
Typically, for cross-browser extension development, I use Crossrider in order to maintain a single code-base, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the browser to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is you can use it for live users as well. If you are providing support for your extension, have a user go to your test URL, and immediately you will see the extension and server-side performance. You won't get the Selenium tests of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.

LESS in IE throws exceptions

I am playing around with using LESS along with respond.js to streamline the development of a new site. Both LESS and respond.js are quite simply neat. However, with LESS in IE I have run into many problems.
For starters, in IE8 mode my IE10 reported that it did not understand "map". No problem: I wrote up an Array.prototype.map extension. Then it said that it did not understand isArray, once again in IE8 mode. Prototype extensions to the rescue again. Now it comes back saying something along the lines of: SyntaxError: Invalid operand to 'in': Object expected.
I am not in fact aware of what 'in' might be, but in any case I cannot keep adding ad hoc prototype extensions on the fly in the hope that things will eventually settle down. Either LESS is unusable with IE, or else someone here can point me to all the fixes needed to make it work.
Answer for your question:
First of all, LESS client-side compilation is supported only in IE9+.
You could probably fix this using shims and polyfills for ES5, like these.
But please, don't.
What you should probably do (and forget the first part):
However, despite the really good caching mechanisms provided by the LESS compiler (e.g. using localStorage to preserve generated code), using it in production isn't considered good practice.
It seems much more reasonable to streamline your frontend development workflow using a task runner like GruntJS or Brunch.io, or to install Livereload. These tools will monitor file changes and generate a new CSS file on every save (and also reload your CSS on the fly).
GruntJS and Bower.io work in the console, but are relatively easy to configure. Basically, you set them up once and forget they've ever existed :)
Livereload provides you with a GUI and it's incredibly easy to use.
I used GruntJS for frontend development with backend developers working with PHP (CakePHP, Zend, Laravel) and it made our lives much, much easier :)
You can install GruntJS with the watch and LESS plugins and keep it very simple this way. You could even use the LESS Node.js package installed globally to do the job.
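For instance, a minimal Gruntfile for that setup might look like the sketch below; grunt-contrib-less and grunt-contrib-watch are the standard plugins, but the file paths here are placeholders:
// Gruntfile.js - recompile LESS on every save.
module.exports = function (grunt) {
  grunt.initConfig({
    less: {
      dev: {
        files: { 'dist/styles.css': 'src/styles.less' } // destination: source
      }
    },
    watch: {
      styles: {
        files: ['src/**/*.less'],
        tasks: ['less:dev']
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['less:dev', 'watch']);
};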

Ajax-driven JavaScript runtime assertion framework

While working on a larger web application with an increasing amount of JavaScript code, we did a brainstorming session on how to improve code quality.
One of the first ideas was to introduce unit tests. This will be a long-term goal; it will not, however, fix the most common causes of regression: the changing DOM and browser-specific issues.
Unit tests run in a mocked, DOM-less environment and are not on the page.
What I'm looking for is an assertion framework that can be plugged into the code like this:
var $div = $("div.fooBarClass");
assertNotEmpty($div);
$div.fooBarAction();
I've found assertion frameworks that can do this, but they all either log to the console, log into the DOM, or open a silly pop-up. None of these work together with (thousands of) automated tests.
What I'm looking for is a run-time assertion framework that logs failed assertions via Ajax! Ideally, it should:
Have common assertions built-in.
Integrate with jQuery modules and closures.
Log (via Ajax) the assertion, the file name, the page, the line number, the cause of failure, and some pre-configured variables of the environment (browser, release version, etc.).
Support callbacks in case of failures. (If any assertion framework can do just this, I would gladly write callbacks doing the Ajax part.)
Work well with all browsers.
Be trivial to exclude from the production release.
Have a maintained code base.
We've been using the YUI Test Library. It seems to work fairly well.
Has a variety of assertion methods for different types
Assertions exist for equality, sameness, true, false, object type, and even array item comparison.
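For a taste of the API, a quick sketch inside a standard YUI sandbox (the values are made up):
// Sketch of YUI 3 Test assertions.
YUI().use('test', function (Y) {
  var testCase = new Y.Test.Case({
    name: 'assertion flavours',
    testAssertions: function () {
      Y.Assert.areEqual('2', 2);                   // equality (type-coercing)
      Y.Assert.areSame(2, 2);                      // sameness (strict)
      Y.Assert.isTrue(true);
      Y.Assert.isObject({});
      Y.ArrayAssert.itemsAreEqual([1, 2], [1, 2]); // array item comparison
    }
  });
  Y.Test.Runner.add(testCase);
  Y.Test.Runner.run();
});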
Allows for mock objects to test DOM objects and other functions
Our code makes a lot of AJAX calls, or requires methods/objects that don't need to be tested (as they are tested elsewhere). Using mock objects, we can tell the tests what to expect. For example:
var mockXhr = Y.Mock();
// I expect the open() method to be called with the given arguments
Y.Mock.expect(mockXhr, {
    method: "open",
    args: ["get", "/log.php?msg=hi", true]
});
Works with all browsers
We run our tests in IE, Chrome, and Firefox, and aside from some differences in what the test runner itself looks like, it works!
Trivial to exclude from production release
We have all of our testing code in a separate folder which accesses all the production code. Excluding the tests from production is as easy as excluding a folder.
Maintained codebase
YUI 3 is used on the Yahoo homepage, and seems to be fairly well maintained.
I know that it is not what you asked for, but I highly recommend Selenium for automated testing of web applications.
Common assertions are built in.
It can test any JS framework because it drives the browser where your code runs.
It has robust logging features.
Browser support depends on your OS but all major browsers are supported.
There is nothing to exclude from a production release because the tests are external to the application.
The code base is well maintained and you have full control over your test cases.
It seems there is no existing solution that does what I'm looking for.
I'm going to write my own, overriding console.assert to make an Ajax call when its condition argument evaluates to false.
UPDATE: Here it comes, still under development, https://github.com/gaboom/qassert
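For reference, a minimal sketch of that console.assert override; the /log-assert endpoint and the payload shape are illustrative, not qassert's actual API:
(function () {
  var originalAssert = console.assert;
  console.assert = function (condition, message) {
    if (!condition) {
      // Fire-and-forget report to a hypothetical logging endpoint.
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/log-assert', true);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify({
        message: String(message || 'Assertion failed'),
        page: location.href,
        stack: new Error().stack, // file/line info, where the browser supports it
        userAgent: navigator.userAgent
      }));
    }
    if (originalAssert) {
      originalAssert.apply(console, arguments);
    }
  };
})();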
