Yeoman: LiveReload vs. Yeoman Watch

I'm trying out Yeoman Server for the first time and see that it offers a native watch tool as a fallback to LiveReload. Here's how the docs explain the fallback:
"[Yeoman Server] automatically fires up the yeoman watch process, so changes to any of the application's files cause the browser to refresh via LiveReload. Should you not have
LiveReload installed locally, a fallback reload process will be used instead."
So far the fallback process is working perfectly, and I like that it doesn't require installing anything in the browser/menu bar.
Has anyone tried both watch tools with Yeoman? How is the workflow different and what additional features do you get if you "upgrade" to LiveReload?
UPDATE: A quick inspection of the API revealed that Yeoman's live reload feature is in fact LiveReload. They're one and the same. The reason it works without the browser extensions is that they're using LiveReload's snipvr snippet instead. It's possible there are some additional features accessible via the LiveReload GUI, and perhaps for mobile device testing, but more likely the functionality is identical.

As noted in my update, I checked the Yeoman source and realized that the live reload feature is in fact LiveReload. They're one and the same. It's pretty cool of LR's creator, Andrey Tarantsov, to let his valuable tool be used in a popular, open-source project like this without charging for its use.
The reason Yeoman Watch works without the browser extensions is that it's using LiveReload's snipvr snippet instead.
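For reference, that kind of snippet injection is just a script tag pointing at the live-reload server; a minimal sketch (35729 is LiveReload's default port, and Yeoman's generated markup may differ):

<!-- Sketch only: 35729 is LiveReload's default port; Yeoman may inject this differently -->
<script src="http://localhost:35729/livereload.js"></script>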
As a result, the functionality of LiveReload and running 'yeoman watch' is essentially identical. However, I find that there's still benefit to owning LiveReload. My preferred workflow is to combine LiveReload and CodeKit.
During (pre-build) development, I use CodeKit to compile my Sass/Compass files and Jade templates (another fantastic tool, btw) since CodeKit's config options are a little more extensive than LiveReload's. Since CodeKit doesn't work with Firefox (only Chrome and Safari), I run LiveReload concurrently so that I can see changes live in both browsers.
This workflow also has the added benefit of being able to "fork on the fly" by mixing LiveReload's "custom command" feature with CodeKit's "advanced compiler settings" feature.

EDIT:
What I said below isn't exactly correct after all. I did some more testing and found that editing a .scss file would have the changes show up even without editing the HTML file first, so yeah, at this point I haven't got a scooby as to what the difference between LiveReload and the fallback process is.
I say this with the caveat that I don't have LiveReload installed, but from the testing I've done in Yeoman thus far, what I've seen with the "fallback reload process" is that it doesn't reload the page until the HTML file is saved, so saved CSS changes aren't visible until the HTML file receives a Save event from the system. According to livereload.com, "...when you change a CSS file or an image, the browser is updated instantly without reloading the page", so it appears to be a more robust process.
(Sorry, not a complete answer since I don't have LiveReload available, but this question's been up for a couple of days with no response yet, so I figured any information was better than none.)

ASP.Net Core caching

I have a .js file that is being statically served in my application. This file will continue to change through the course of development.
I made some changes to the file this morning, as I normally do. Debugging the project, I discovered that the changes in the .js were not reflected, i.e. the browser was using an older version of the JavaScript file.
I have been working on this project for a few weeks and the changes in the .js file have always been reflected in the next debug until today. I have tried in both Chrome and Edge.
What gives? I'm puzzled about the change in behavior. I did receive a Windows update overnight; could it be there's a global setting for browser caching which was previously disabled on my system and was enabled by the update? I am aware of cache-busting techniques, but in the past I've always just been able to update a static .js file and the browser has always used the latest version.
Hopefully someone else might find this helpful. I was poking around Visual Studio and, in the "play button" next to IIS Express, I happened upon a menu option called "Script Debugging" that was set to disabled. I enabled the setting and things are back to working as before.
I was able to confirm that by disabling the setting, Chrome and Edge use a cached version of the script file.
My guess is a Windows update disabled the setting. Can someone explain how changing this setting in Visual Studio affects caching in browsers?

Can I load Vue.js from a CDN in production?

I chose Vue.js for a new project because it seems to run natively in the browser, as opposed to something like React that has to be compiled/transpiled via Node. Is there any reason I couldn't just link to a CDN like this in my production code?
<script src="https://unpkg.com/vue#2.2.1"></script>
A co-worker suggested that may be for development only, and that unpkg is simply transpiling on the fly (which doesn't sound good for performance). But other than that it seems to work fine. I could also link to a more robust CDN such as this one, but just want to make sure I'm not violating some sort of best practice by not using a Node build system (e.g. webpack).
Is there any reason I couldn't just link to a CDN like this in my production code?
No, there is no reason not to use a CDN in production. It's even a preferred way of serving content in production, especially for common packages like jQuery, because most people will already have loaded, and therefore cached, the resource.
A co-worker suggested that may be for development only, and that unpkg is simply transpiling on the fly (which doesn't sound good for performance).
This is absolutely not true - that's why it is a CDN! :) It's a matter of choice, but you have to keep in mind that most of the time you should work with the specific version of the library that you use during development. If you just pull in the latest version of any code, you are vulnerable to every change pushed to that repository, and your clients will start receiving updated code that you haven't tested yet.
Therefore, pin to the specific version that you develop with, open a beer and have a good sleep :)
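As an illustration (the version number here is just an example), pinning an exact version and using the minified production build on unpkg looks like this:

<!-- Pin an exact version and use the minified production build -->
<script src="https://unpkg.com/vue@2.2.1/dist/vue.min.js"></script>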
Updated (18.10.2022): Global caching no longer works
Actually, this change has been around for a while, but the answer was never updated. The short story is that browsers now partition the HTTP cache per site, so a CDN resource cached for one site won't be reused by another. A longer version can be found here (thanks to @Baraka's comment).
Either way, using CDN for production deployment is still much preferred!
These may help:
<!-- development version -->
<script src="https://unpkg.com/vue"></script>
<!-- production version -->
<script src="https://unpkg.com/vue/dist/vue.min.js"></script>
These unversioned URLs will always serve the current version of Vue.js automatically.

Angular 2 CLI is compiling but browser not reflecting changes made in WebStorm

I have just started learning Angular 2 through a tutorial.
I made a new project using ng new firstapp. Whenever I make changes to app.component.ts and save the file, the angular-cli always compiles successfully, displaying webpack: Compiled successfully.
However, sometimes it immediately reflects the changes in the browser, whilst most of the time it doesn't show any change. After I searched for the issue, someone suggested disabling the cache using Chrome developer tools, but it didn't help. I am a beginner.
I am using WebStorm as my IDE. However when I made changes using SublimeText, the browser reflected the changes immediately. I guess it has got something to do with WebStorm. I'd like to carry on using WebStorm, as I love its features.
When using webpack-dev-server, it's recommended to disable the IDE's safe write feature ("Use 'safe write' (save changes to a temporary file first)", under Settings | Appearance & Behavior | System Settings); otherwise the app won't be updated on time when files change. This issue is fixed in webpack 2.

Is there a way to automate the testing of chrome extensions? [duplicate]

I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code will be shared, but I'm not sure about this yet. Some of the extensions will certainly use native APIs. I don't have much experience with TDD/BDD, and I thought this project would be a good time to start following these ideas.
The problem is, I have no idea how to handle it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple - some data in local storage, refreshing a page and listening through web sockets.
My observation about why this is hard for me: there is a lot of behaviour and not many models, and the behaviour is also platform-dependent.
I practise two different ways of testing my browser extensions:
Unit tests
Integration test
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
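As a rough sketch of that idea (file names and module IDs here are placeholders, not the extension's real layout), an r.js build profile can map a module ID to a browser-specific implementation per build target:

// build-chrome.js - hypothetical r.js build profile for the Chrome build
({
  baseUrl: 'src',
  name: 'main',
  out: 'dist/chrome/main.js',
  paths: {
    // swap in the Chrome implementations of the platform-specific modules
    'storage': 'platform/storage-chrome',
    'xhr': 'platform/xhr-chrome'
  }
})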
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
Workflow
During development:
Implement / edit feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload update to the official extension galleries and my website (Safari and IE extensions have to be hosted by yourself) and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method for each module, just the ones that matter. For instance (a minimal test sketch follows the list below):
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery's) are flawed: any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses the DOM without those negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (as a black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (via InfoProvider) and verify that the results are as expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls within the expected range.
The UI is not obviously broken (checked, e.g., by clicking on the Close button).
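A minimal sketch of what such a test might look like with mocha + expect.js (the InfoProvider API, module path and thresholds below are placeholders, not the extension's actual code):

// test/infoprovider.test.js - hypothetical module path and API
var expect = require('expect.js');
var InfoProvider = require('../src/infoprovider');

describe('InfoProvider', function () {
  it('returns plausible lyrics for a known query', function (done) {
    InfoProvider.search('some well-known song', function (err, results) {
      expect(err).to.be(null);
      expect(results).to.not.be.empty();
      // e.g. the length of the returned lyrics falls within the expected range
      expect(results[0].lyrics.length).to.be.within(100, 20000);
      done();
    });
  });
});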
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
By passing an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from the normal page. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs; they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that there's no constraint or unexpected condition inside the extension's context that breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests in other formats, including JUnit 4 tests (Java). Unfortunately, the result wasn't satisfactory. Many commands were not recognized.
So, I abandoned the Selenium IDE and switched to Selenium WebDriver.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The former is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've discovered how the documentation works, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
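A sketch of what pointing tests at a Grid hub looks like with the selenium-webdriver Node bindings (not necessarily the language the original tests were written in; the hub URL below is the usual default and just an assumption):

// Sketch: run a test against a Selenium Grid hub instead of a local driver
const { Builder } = require('selenium-webdriver');

async function gridExample() {
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub') // Grid hub address (placeholder/default)
    .forBrowser('internet explorer')
    .build();
  await driver.get('https://example.com/');
  // ...assertions would go here...
  await driver.quit();
}

gridExample();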
Automating tests is nice. What's even nicer? Automating installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
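A sketch of that with the selenium-webdriver Node bindings (the extension path is a placeholder, and the linked example may use another language):

// Sketch: load a packed extension into Chrome via ChromeDriver
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

async function withExtension() {
  const options = new chrome.Options();
  options.addExtensions('/path/to/my-extension.crx'); // packed .crx file (placeholder path)
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  await driver.get('https://www.youtube.com/');
  // ...interact with the page and assert on the extension's injected UI...
  await driver.quit();
}

withExtension();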
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, and this one definitely does not allow one to install a custom extension. Internet Explorer doesn't have built-in support for extensions. Extensions are installed through MSI or EXE installers, which are not even integrated into Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension code that can be activated by simply going to a specific URL (see the sketch after these steps). When the extension sees that URL, it begins running the tests.
Then, in the page that activates the testing in the extension I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors. So I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically invoke browsers to point at that specific URL and record their performance along with other test information, errors, etc on any given client system using Selenium:
http://docs.seleniumhq.org/
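A minimal sketch of that first step (the URL, function name and reporting mechanism are all placeholders, not a real API): a content script that only starts the self-tests when the browser is on the dedicated test page, and exposes the outcome where a Selenium-driven browser can read it.

// content-script sketch (hypothetical names throughout)
var TEST_URL = 'https://example.com/extension-test';

if (location.href.indexOf(TEST_URL) === 0) {
  runExtensionSelfTests().then(function (results) {
    // expose the result so Selenium (or a human) can read it from the page
    document.title = results.failed === 0 ? 'EXTENSION TESTS PASSED' : 'EXTENSION TESTS FAILED';
    console.log('extension self-test results', results);
  });
}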
This way I can break down the tests in terms of browser, extension, server, application, and database and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension testing framework.
Typically, for cross-browser extension development I use Crossrider in order to maintain a single code base, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the browser to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is you can use it for live users as well. If you are providing support for your extension, have a user go to your test url and immediately you will see the extension and server-side performance. You won't get the Selenium tests of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.

Headless node.js javascript browser with screenshot capability?

Are there any headless browsers for node.js that support dumping a rendered page out to a file? I know phantomjs supports rendering to a file, but it doesn't run on node.js. I know zombie.js is a node.js headless browser, but it doesn't support rendering to a file.
I doubt you will find anything that is going to work as well as phantomjs. I would just treat the rendering as an async backend process and execute phantom in a subprocess from your main node.js process and call it a day. Rendering a web page is HARD, and since phantom is based on WebKit, it can actually do it. I don't think there will ever be a node library that can render a web page to a graphic file that isn't built upon an existing browser rendering engine. But maybe one day phantomjs will integrate more seamlessly with node.
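A rough sketch of that subprocess approach (script name, URL and output path are placeholders): a tiny PhantomJS script does the rendering, and Node just shells out to it.

// capture.js - run by PhantomJS (not Node), using PhantomJS's webpage API
var page = require('webpage').create();
var args = require('system').args; // args[0] is the script name
page.open(args[1], function () {
  page.render(args[2]); // write the screenshot file
  phantom.exit();
});

// in the main Node process, treat rendering as an async backend job:
var execFile = require('child_process').execFile;
execFile('phantomjs', ['capture.js', 'https://example.com', 'shot.png'], function (err) {
  if (err) throw err;
  console.log('screenshot written to shot.png');
});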
Try Nightmare: it uses Electron, it is way faster than PhantomJS, and its API is easy to use and based on modern ES6 JavaScript.
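A minimal Nightmare sketch (URL and output file are just examples):

var Nightmare = require('nightmare');

Nightmare({ show: false })       // no visible window
  .goto('https://example.com')
  .screenshot('shot.png')        // write the rendered page to a file
  .end()
  .then(function () { console.log('done'); })
  .catch(function (err) { console.error(err); });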
This might look like a solution with a little bit of overhead...
You can use the Mozilla Firefox with the MozRepl plugin. Basically this plugin gives you a telnet port to your Firefox which allows you to control the browser from the outside. You can open URLs, take screenshots, etc.
Running the Firefox with the Xvfb server will run it in headless mode.
Now you just have to control the browser from the outside with node.js. I've seen a few examples where someone implemented an HTTP-like interface inside Firefox's chrome.js, so you can issue an HTTP command to get a screenshot. You can then make HTTP calls from node.js. This might look strange - it actually is - but it might work well for you.
http://hyperstruct.net/2009/02/05/turning-firefox-into-a-screenshot-server-with-mozrepl/
I'm running a slightly modified version in production with Perl Mojolicious in async mode to trigger the screenshots. However, there is a small problem: plugins do work when required, but Flash usually only gets activated when it's in the visible area, which won't happen here, so movies/Flash content might not get initialized.
You might find this helpful, though it's not javascript specific.
There is a WebKit-based tool called "wkhtmltopdf" that I understand includes JavaScript support via the Qt WebKit widget. It outputs a visual representation ("screenshot", if you will) of the page in PDF format.
FWIW, there are also PHP bindings for it here: php-wkhtmltox
The Chrome dev team has released Puppeteer which can be used in node. It uses Chrome with the headless option.
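A minimal Puppeteer sketch (URL and output path are just examples):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'shot.png', fullPage: true });
  await browser.close();
})();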
There's a project called Node-Chimera. Although it's not as mature as PhantomJS, it has all the features you have mentioned: it runs on native Node.js and allows you to render pages to a file. The repository is here: https://github.com/deanmao/node-chimera. It has examples to do exactly what you need.
