New guy here. I've been looking for a good solution for using Stylus (compiled CSS) client side.
Now, I know the usual advice against using compiled CSS client side:
1. It breaks if JS is disabled.
2. It takes extra time to compile in a live client environment.
3. It needs to be recompiled by every single client, which just isn't green.
However, my environment is an extension for Chrome and Opera. It runs in a JS environment and it works offline, so none of those three objections applies. What I'm really looking for is just a way to write CSS more efficiently and with fewer headaches: variables, nesting and mixins.
I have tried Less, which is the only one of the trio (Less, Sass, Stylus) that currently works nicely client side. So, does anyone know a good solution for Stylus?
CSS preprocessors aren't really meant to be run client side. Some tools (e.g. LESS) provide a development-time client-side (JavaScript) compiler that compiles on the fly; however, this isn't meant for production.
The fact that Stylus and Sass do not provide this by default is actually a good thing, and I personally wish that LESS didn't either. At the same time, I realize that having it gives beginners a set of training wheels that can help them along, and since everyone learns differently, it may be just the feature that gets certain groups of people in the door initially. So for development it may be fine, but as of this writing it is not a performant workflow for production. Hopefully, at some point, most of the useful features in these tools will be added to native CSS, and then this will be a moot point.
Right now, my advice would be to deploy only the compiled CSS, and in development use something like watch, Guard, LiveReload, or CodeKit (or any suitable file watcher) so your Stylus files are recompiled as you code.
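For example, if you have the stylus npm package installed, its CLI can do the watching itself; a minimal sketch of a package.json script (the src/styles and public/css paths are placeholders for your own layout):

```json
{
  "scripts": {
    "css:watch": "stylus --watch src/styles --out public/css"
  }
}
```

Running `npm run css:watch` then recompiles on every save, and only the generated CSS ever ships to users.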
This page likely has the solution: http://learnboost.github.io/stylus/try.html
It seems to compile Stylus on the fly.
Stylus is capable of running in the browser: there's a client branch available in the GitHub repo.
I don't totally understand your question, but I'll offer some of the experience I've had with compiled CSS using LESS.
Earlier implementations needed JavaScript to compile the LESS files into CSS in the browser. I've never worked this way; it didn't seem that great to me, and, as you say, if JS is switched off you're in for a rough time.
More recently I've been using desktop applications to compile the LESS code into valid CSS, which gets around the need for client-side JS to convert the source code.
The first application I used was Crunch (http://crunchapp.net/), which worked quite well but didn't compile the CSS on the fly.
The application I'm using now is called SimpLESS (http://wearekiss.com/simpless), and it creates valid CSS on the fly, so as soon as I've hit save in Sublime Text and refreshed the browser I can see my changes to the CSS.
Using this workflow I'm able to get around the issues you raised above. When I'm done with development I just upload the CSS file output by SimpLESS, which is also heavily minified, saving the time I'd otherwise spend optimising the CSS further.
I hope I have understood the question correctly; if not, apologies.
Cheers,
Stefan
The Chrome DevTools JavaScript and CSS coverage drawer is pretty neat, except for one thing: it would be nice if it could be left on rather than restarting its analysis between pages.
I would like to be able to analyze an entire site, a subset of pages, or a set of websites, and find what is unused amongst them.
I get that it would mean browsing entire projects and using every feature (or setting up tests) to be thorough/precise, but that's not as tedious as what I have to do entirely without such a utility/feature. (And it doesn't seem like you would need to be meticulous to obtain usable or initial observations from a sub-thorough audit. The DevTools utility doesn't facilitate automated tests on its own either.)
The codebase at issue is huge (1.9 MB uncompressed on the front end) and (possibly) contributed to by many developers who may or may not have left relics, legacy code, or abandoned dependencies. There is possibly also code that is only relevant in parts of projects, which could reveal opportunities for reducing dependency scope.
Is there a way to begin to crack into code use without a laborious manual deep dive?
It was one of the first things that came to mind when I learned about Google's coverage utility, and I assumed it would be capable of analyzing multiple pages collectively, but it isn't.
Is there anything else that does this? Does any utility exist that can search an entire site or multiple pages and find unused JS and CSS?
Side note: The CSS is in SASS. For that and other reasons I may need to manually analyze the results and improve the code, which is trivial comparatively, so I'm not looking for a feature to automate code improvements. It's a similar situation with the JS which is minified.
This is not a request for a recommendation on a product/software. It is asking whether task X is possible, which is technically answerable with a yes or no.
UPDATE: It does seem like there are utilities for CSS, but still nothing found for JS.
For Static Analysis:
Try unusedcss.com to analyse unused CSS across an entire website. Another option is Helium CSS.
Try deadfile, a simple utility for finding dead code and unused files in any JavaScript project.
For Dead-code Removal:
Try purgecss to remove unused CSS code from your project.
Try Google's closure compiler which parses your JavaScript, analyzes it, removes dead code and rewrites and minimizes what's left.
That being said, detecting unused code is a hard problem to solve, because there are countless ways to invoke a given piece of code. Hence, use these tools wisely and with your best judgement.
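To illustrate the removal side, purgecss works from a config that lists which content files count as "using" a selector and which stylesheets to strip. A minimal sketch (the glob paths here are placeholders, not from your project):

```javascript
// purgecss.config.js (paths are hypothetical)
module.exports = {
  content: ['src/**/*.html', 'src/**/*.js'], // files scanned for used selectors
  css: ['dist/**/*.css'],                    // stylesheets to purge
};
```

Note that selectors built dynamically at runtime (string concatenation in JS, etc.) can be missed by any content scan, which is exactly the "countless ways to invoke code" caveat above.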
I have been wondering why we need uncompressed files for development and minified ones for production. I mean, what are the benefits of using uncompressed files over minified ones?
Do they give you better error messages, or is it just that if we want to look something up we can go through the code of the uncompressed files?
This might be a dumb question to some of you, but I never had the habit of going through the code of well-known big libraries, and, if I'm not wrong, very few people do.
The main reason for this is readability. When a JS file is minified and you get an error and try to find the place where it occurred, what do you find? Just a minified string like
(function(_){var window=this,document=this.document;var ba,ea,fa,ha,la,na,oa,pa,qa,sa,ra,ta,wa,xa,za,Aa,Da,Ea,Fa,Ga,Ia;ba=function(a){return function(){return _.aa[a].apply(this,arguments)}};ea=function(a){return ca(_.p.top,a)||ca(da(),a)};_.aa=[];fa="function"==typeof Object.create?Object.create:function(a){var b=function(){};...
and so on. Is that readable to you? I don't think so.
For a much better understanding of the code, you need to uncompress it. That restores the spacing and formats the code in a much more readable way, so it would look like:
(function(){
var b = j(),
c = k(b);
})();
It allows you to move from one piece of code to another and discover the code or search your error inside.
Also, for production you want not only to minify your code but to compress it as well, so it would be worth using a library like UglifyJS for that.
It removes unnecessary spaces and renames variables, objects and functions to much shorter names like a or b12, which reduces file size and speeds up downloading.
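To make the idea concrete, here is a toy sketch, NOT a real minifier (UglifyJS additionally renames identifiers, rewrites expressions, and so on): just dropping comments and collapsing whitespace already shrinks a script noticeably.

```javascript
// A toy illustration of minification: strip comments, collapse whitespace.
function naiveMinify(src) {
  return src
    .replace(/\/\/[^\n]*/g, '') // drop line comments
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim();
}

const readable = `
  // add two numbers
  function add(a, b) {
    return a + b;
  }
`;

console.log(naiveMinify(readable)); // "function add(a, b) { return a + b; }"
console.log(readable.length, '->', naiveMinify(readable).length);
```

The output is one dense line: exactly what you do not want to debug against, which is why the readable version stays in development.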
Hope it helps you.
There may be several reasons why one might prefer uncompressed (unminified) files during development. Some reasons I can think of:
Reduced time to view changes while coding, by skipping the minification step. (If minification is part of your build step during development, you may have to wait for it to complete after each change before you can see it in the browser.)
If a code mangler is used, variables get renamed, and it is not apparent what they are actually called in the codebase.
If you are using webpack or some similar module loader, it may include extra code for hot module reloading and dependency injection/error tracking when not minified (which you wouldn't want in production)
It allows debugging to be easier, readable and intuitive.
Minification and code mangling are done mainly to make delivery of those assets more efficient from the server to the end user (client). This ensures that the client can download the script fast, and it also reduces the cost for the website/business of delivering that script. So it can be considered an unnecessary extra step when running the code during development (the assets are already available locally, so the extra payload cost is negligible).
TL;DR: Minification and code mangling can be a time-consuming process (especially when generating map files etc.), which delays the time between making changes and seeing them on the local instance. It can also actively hamper development by making debugging harder and less intuitive.
I am making a web app which, based on a JPEG picture, recognizes characters and renders them in an interactive interface for the user; this includes some async code. There are four JS script files, all of which require npm modules, plus an HTML view.
In order to test the app client-side, I have decided to bundle the scripts together.
It shows the following error message:
Uncaught ReferenceError: require is not defined
List of my npm modules whose code returns this error at run time:
isexe: requires fs
destroy: fs
tesseractocr: child_process, fs
I have tried:
browserifying my scripts into a bundle, but I read that it would not work with async functions;
webpacking the scripts into a bundle, but Node modules like fs and child_process come back as 'undefined';
adding a specific Node module, child-process-ctor, to force child_process to be included.
Alas, the same error message is returned.
Questions:
Is bundling the scripts the right approach?
Is the problem that webpack does not transpile fs and child_process correctly?
Which possible solutions should I consider?
Thanks all. This is my 1st question on SO -- any feedback is much welcome!
PS: This might be redundant with Using module "child_process" without Webpack.
Okay, this answer is a follow-up to my comments, which answer the question more directly. Here I'll go into more detail than is probably necessary, but it will thoroughly answer what you asked. Plus it's educational, and I'd say it's pretty fun once you start really digging into it :D
To start at the beginning. As the internet in its early days became more advanced the need for a type of "front end logic" increased and Netscape's response to this demand was to birth a competitive, cutting edge programming language in record time.
And by record time I mean 10 days, and by competitive I mean barely functional.
That's right: JavaScript was born in 10 days (literally). As you can imagine it was a pretty poor language, but it worked well enough that people started using it.
Because it was the programming language of the internet, and because of how fast the internet grew, enough people came to depend on it that the thought of removing it became scary.
If you changed it you would destroy backward compatibility with millions of websites. The other idea would be to keep it, but also implement a new standard. However it would be hard to justify this because javascript already took a lot of work to upkeep, upkeeping multiple standards would be a nightmare (cough... flash cough).
JavaScript was easy enough for "new" programmers to learn, but the problem was that JavaScript is only one language in a world where PHP, Ruby, MySQL, Mongo, CSS and HTML all rule as dominant kings in their respective kingdoms.
So someone thought it was a good idea to move javascript to the server and thus node.js was born.
However for javascript to mean anything on the server it had to be able to do things that you wouldn't want it to be able to do in your browser. For example, scan your hard drive and edit your files.
If every website you visit could start scanning and uploading everything in your system well....
However if your server software can't edit or read files you need it to well....
You get the idea. It's the same language, but because of security issues node.js has some differences. Mainly the modules it's allowed to use.
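This is also why bundlers refuse to polyfill those modules: there is simply no browser equivalent of fs or child_process. With webpack 5 you can at least declare that explicitly so the build succeeds. A sketch (note this stubs the modules out as empty rather than making them work; any code path that actually calls them still has to run on a server):

```javascript
// webpack.config.js (webpack 5 syntax)
module.exports = {
  resolve: {
    fallback: {
      fs: false,            // resolve to an empty module in the browser
      child_process: false,
    },
  },
};
```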
Now for the fun part. Can you run Node.js client side in a browser? Technically, yes. In fact, now that we're dumping entire operating systems into a subset of JavaScript called asm.js, there really isn't anything JavaScript can't do with enough processing power.
However, even if you dumped the entire Node.js runtime (which is built on V8, the same JavaScript engine that powers Chrome) into asm.js, you would still have the security limitations placed by the "host" browser, and so your modules could only run within the sandbox the browser provides.
So it is technically just a browser within another browser, running at a fraction of the speed, with the same security limitations.
Is it something I would recommend doing? Of course not.
Is it something that people haven't already tried before? Of course not.
So I hope that helps answer your question.
Probably any experienced web developer is familiar with this problem: over time your CSS files can grow pretty huge and ugly because of all the no-longer-used selectors, which can be pretty tricky to find. I'm working on a Rails project where we tend to redesign things quite frequently, which leads to a tonne of deadweight CSS. What's the best way to find and remove it?
Now, I do know that there is a rails plugin called deadweight built specifically for that purpose. However, here's my problem with deadweight: first of all, it completely ignores selectors used in javascript. Next, it scans only those pages that you configure it to scan which means there's a risk of removing something that is used on pages that you didn't scan for some reason. Finally, it finds unused selectors only in compiled css (we use LESS) - matching these against the actual code is a bit too involved.
I have also tried http://unused-css.com/ - they're great, but can't access localhost and, again, can only scan compiled CSS.
I really think there must be a better way of doing this. Actually, some time ago I decided to optimise one particular CSS file by grepping for each selector across the entire project directory (Emacs + rinari mode make it super easy and super fast), and each time I didn't see any HTML or CSS in the results I removed the style. Zero problems; worked like a charm. Obviously, I'm not going to do that for the entire site by hand. However, I really don't believe this couldn't be automated. Now, before I fire up my Python and code this up, can anyone tell me whether I'd be reinventing the wheel?
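The grep approach described above can be sketched mechanically. A toy version (in JavaScript here; it only handles simple `.class` selectors and does naive substring matching, so real stylesheets would need a real CSS parser):

```javascript
// Collect `.class` selectors from a stylesheet, then flag any class name
// that never appears in the given source strings (markup, scripts, etc.).
function findUnusedClasses(css, sources) {
  const classes = new Set(
    (css.match(/\.[A-Za-z][\w-]*/g) || []).map((s) => s.slice(1))
  );
  const haystack = sources.join('\n');
  return [...classes].filter((cls) => !haystack.includes(cls));
}

const css = '.btn { color: red } .old-banner { display: none }';
const sources = [
  '<a class="btn">Go</a>',    // markup usage
  'el.classList.add("btn");', // JS-injected usage also counts
];
console.log(findUnusedClasses(css, sources)); // [ 'old-banner' ]
```

Because the scan includes JS files, selectors added from scripts are not falsely reported as unused, which is the main complaint about deadweight above.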
Check out uCSS library from Opera Software.
It helps you to find unused CSS, as well as duplicate CSS. Also, you can get an overview of how many times each rule has been used in your markup. Several options are available by setting up a config file.
Update:
Another great alternative: csscss.
Written in Ruby and supports SASS, Less.
Update:
Another great alternative: uncss.
It works across multiple files and supports Javascript-injected CSS.
The Dust-Me Selectors and/or CSS Usage Firefox extensions can help you weed out unused CSS.
In Chrome's Developer Tools you can use the Web Page Performance tool to find unused CSS rules.
I have already spent some time developing small projects with GWT, and I recently discovered Script#.
Now I am curious about how mature this toolkit is.
I am especially interested in the opinion of someone who has tried both GWT and Script# and is therefore able to compare the two.
How mature is Script#?
Is it true that it is maintained by only one guy?
Where does it lack functionality when compared to the GWT?
Does it have advantages over the GWT?
Personal opinion on Script#?
Thanks for your time.
While this is a duplicate question, I think the answer to the previous question doesn't touch on some important points with regard to GWT.
GWT is a Java-to-JavaScript compiler with a heavy emphasis on optimizing the generated JavaScript beyond anything that's possible to do by hand. The generated JS is also browser specific, so WebKit browsers don't download IE hacks. The generated files are also cacheable, because each file's name is the MD5 sum of the script contents, so you could cache it forever. This means a user only has to download the code once, until it changes. Script#, from my quick skimming of the website, only seems to translate C# to JavaScript.
GWT offers advanced features like developer guided code splitting, ClientBundle for bundling resources and CssResource for conditional CSS, etc. Combined with UiBinder, developing a site that has 2 round trips for application start up is possible to do and not very hard. I don't think Script# has any of this, and most JS libraries don't either.
GWT has dev mode for a JS-like development environment (change code, refresh the browser, see the changes); I'm not sure whether Script# has something like that.
I could keep going, but I think I'll stop with those. When you combine this with the other answers, GWT is pretty compelling.