write incremental number inside a file with grunt - javascript

I'm trying to figure out a way to make Grunt write an incremental number or a hash to a file.
I have a file named config.yml which contains this:
version: 0.0.0
Every time I change some files that I can specify somewhere (say, all the .js and .css files), Grunt should increment that number in some way.
I've seen some cache-busting plugins, but that's not what I'm looking for, as I don't want to end up with something like config.987234892374982.yml or config.yml?v=1.0.0. I'm looking for a way to have Grunt find that number in that file, change it in a way that makes sense (incrementally ideally, or to a random hash), and then save the file.
Can you help in some way? Thanks a lot!

I would seriously recommend against automatic version bumping for anything beyond the build number.
A version number is more than just an indication of how many times your product has been built. In essence, a version number is a semantic promise to your end user concerning compatibility with earlier releases. Node.js, npm, and other systems that use versioning are built around the core concept that X.Y.Z version numbers encode the following logic:
Software that doesn't have the same X version will have severely breaking changes that require those upgrading (or downgrading) to completely redesign the logic of how they use the software, and in some cases they even have to find alternatives because what they're doing right now doesn't work anymore.
Software within the same X version can relatively easily be swapped out without having to make sweeping changes to your own site or product.
Software within the same Y version can be swapped out without having to change any code, because it's supposed to be only bug fixes and security fixes. Only code that works around the things fixed needs to be changed.
Software within the same Z version has the exact same source, so if you have 2 copies of the same X.Y.Z version, they can be used interchangeably.
This is the core contract that NPM and other package managers ask all their content providers to adhere to. In fact, it has already been shown that in cases where content providers don't adhere to this versioning system, consumers who assume semantic versioning applies have found their builds failing. Many consumers assume that if version 3.5.N works, any version within the 3.5.X family will be interchangeable, and they exploit that assumption by automatically building their code against the most recent version of the 3.5.X family.
Because of this, it's not a good idea to automatically bump versions beyond the build number. Bumping the patch version should only be done when you actually release a new version to the public, not after every build. Bumping the minor version should only be done when you have added new features to the product, but without the need for major changes to software that uses your product. Bumping the major version should only be done when you make drastic and destructive changes to your API, like moving and/or removing functions, function parameters, objects or attributes.
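That said, if all you want to bump automatically is a build number, a custom task is only a few lines. Here is a minimal sketch, assuming the config.yml layout from the question; the task name, regex, and watch globs are illustrative, not prescribed:

```javascript
// Minimal sketch: a custom Grunt task that increments the last number
// in config.yml, re-run on changes via grunt-contrib-watch.
module.exports = function (grunt) {
  grunt.registerTask('bumpbuild', 'Increment the build number in config.yml', function () {
    var contents = grunt.file.read('config.yml');
    var updated = contents.replace(
      /version:\s*(\d+)\.(\d+)\.(\d+)/,
      function (all, major, minor, build) {
        return 'version: ' + major + '.' + minor + '.' + (Number(build) + 1);
      }
    );
    grunt.file.write('config.yml', updated);
  });

  // Watch the files you care about and bump on every change.
  grunt.initConfig({
    watch: {
      assets: {
        files: ['**/*.js', '**/*.css'],
        tasks: ['bumpbuild']
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-watch');
};
```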

Related

Should I publish a new library version when making non functional changes?

I'm maintaining a set of JavaScript libraries, and I often have to update dependencies in a way that doesn't require any functional changes, for instance when my library is not affected by a dependency's breaking change.
Do you usually publish a new version of the library once you've updated its dependencies, or wait until you have to make a functional change before publishing a new version?
Also, do you include which dependencies have been updated in the changelog?
PS: I'm using semantic versioning.
When you are using semver you would have to release a minor update.
From the docs:
Patch version Z (x.y.Z | x > 0) MUST be incremented if only backwards compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior.
Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. It MAY be incremented if substantial new functionality or improvements are introduced within the private code. It MAY include patch level changes. Patch version MUST be reset to 0 when minor version is incremented.
So I would suggest also taking the updates of the third-party libraries into consideration and deciding based on what features they have introduced.
You also have to take into consideration that, depending on what distribution channels are used (and they may change), developers might have extended your library in such a way that they are also using features of the third parties your library depends on, or depend on them directly themselves.
In the end there is no single rule, but in my opinion more information is better than no information, because you don't know what other developers are trying to accomplish.
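To illustrate why the bump matters downstream, here is a small sketch using the npm semver package (the version numbers are made up): consumers pinning your library with a range will pick up a minor release automatically, but never a major one.

```javascript
// Illustration with the npm "semver" package: how consumers' version
// ranges react to your release choice.
var semver = require('semver');

// A consumer depending on "^1.2.0" picks up your minor release
// (the one with the updated dependencies) automatically...
console.log(semver.satisfies('1.3.0', '^1.2.0')); // true

// ...but never a major release, which stays opt-in.
console.log(semver.satisfies('2.0.0', '^1.2.0')); // false
```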

Delivery mechanism for JavaScript bundle consumed by multiple clients

Scenario:
A shared component implemented as a micro-front-end and hosted on S3...
JS bundle containing the whole app (webpacked) hosted on S3
JS bundle contains hash with the latest commit, e.g. component.{hash}.js
Question:
When we ship a new bundle, what's the best strategy for ensuring the new bundle is consumed by all clients after release, taking into account browser/CDN caching? Important note: we would like clients to get updates immediately (internal).
Examples
On release, generate a component.html file that pulls in the bundle (script tag) based on the latest hash. Ship the new component.html to S3. Clients use <link rel='import' href='somedomain.com/component.html'>, always giving them the latest shipped version.
Issue: The bundle can still take advantage of CDN/browser caching, but the HTML file cannot be cached, since we need it to be hot for any release. It also seems odd that we have to make two downloads just to get to a single bundle.
Ship as an NPM module that can be consumed at build time by a client.
Issue: If we have 10 clients, all 10 need to build and ship to release with the new component. Assuming package-lock won't cause issues for wildcards (I don't know it well enough).
Note: Internal component; may undergo frequent changes, e.g. AB testing, etc.
It's generally important that the page/app consuming your component be tested and updated against whatever new version of your component is released. Therefore, your idea of using NPM is a good one. It's generally desirable to have this lag time, so that developers of the pages/apps can verify functionality and also handle any API changes that may have occurred (intentional or otherwise).
For NPM, Semantic Versioning (SemVer) is a de facto standard. The idea is that you number your versions in such a way that bug fixes (no API changes) can be immediately updated in apps. Some developers are okay with installing the latest patch version of your module. Many prefer to lock to a specific version, and will only release after testing it like any other update.
NPM aside, what I've done in the past is use hashed or versioned URLs for the library. I've also kept a latest URL, which redirects to the latest version. For those integrating my library that don't care what version they're on, they'll always get the latest version this way. Additionally, browsers can cache the redirect target and share that cache with other pages that might specify an exact version.
Important note: we would like clients to get updates immediately (internal).
This isn't really possible in all cases. In most cases, using proper response headers for caching will solve it. There are edge cases, though. For example, what will you do if the user loads a page and then goes offline before your JavaScript loads?
It's always tricky when deploying new pages. Some of your clients will be on old versions, some on new. Maintain backwards compatibility as best as you can, and set your caching headers as appropriate for your circumstances.
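To make the caching split concrete, here is a sketch using Express purely for illustration; the paths, port, and the optional "latest" redirect are assumptions, not part of the original setup:

```javascript
// Sketch of the caching strategy: immutable hashed bundles, a hot pointer file.
const express = require('express');
const app = express();

// component.<hash>.js never changes for a given hash, so let
// browsers and the CDN cache it essentially forever.
app.use('/bundles', express.static('bundles', {
  immutable: true,
  maxAge: '1y' // Cache-Control: max-age=31536000, immutable
}));

// The tiny "pointer" resource must always be revalidated, so it can
// go hot on every release.
app.get('/component.html', (req, res) => {
  res.set('Cache-Control', 'no-cache'); // revalidate on each load
  res.sendFile('component.html', { root: __dirname });
});

// Optional "latest" URL that redirects to the current hashed bundle;
// browsers can cache the redirect target itself. The hash is illustrative.
app.get('/component/latest.js', (req, res) => {
  res.redirect(302, '/bundles/component.abc123.js');
});

app.listen(3000);
```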

How to use fonts in the node.js backend?

Background:
I am building a node.js-based Web app that needs to make use of various fonts. But it only needs to do so in the backend since the results will be delivered as an image. Consequently, the client/browser does not need access to the fonts at all in my case.
Question:
I will try to formulate the question as objectively as possible:
What are the typical options to provide a node.js backend with a large collection of fonts?
The options I came up with so far are:
Does one install these hundreds or thousands of fonts in the operating system of the (in my case: Ubuntu) server?
Does one somehow serve the fonts from a cloud storage such as S3 or (online) database such as a Mongo DB server?
Does one use a local file system to store the fonts and retrieve them?
...other options
I am currently leaning towards Option 1 because this is the way a layman like me does it on a local machine.
Without starting a discussion here, where could I find resources discussing the (dis-)advantages of the different options?
EDIT:
Thank you for all the responses.
Thanks to these, I noticed that I need to clarify something. I need the fonts to be used in SVG processing libraries such as p5.js, paper.js, and raphael.js. So I need to make the fonts available to these libraries, which run on node.js.
The key to your question is
hundreds or thousands of fonts
Until I took that in, there was no real difference between your methods. But if that number is correct (kind of mind-boggling, though) I would:
not install them in the OS. What happens if you move servers without an image? Or change OS?
The local file system would be a sane way of doing it, though you would need to manually keep track of all the file names and paths for your code.
MongoDB: store file names + paths in a collection, and store the actual fonts on your file system.
In the event of moving servers, you would pick up the directory where all the actual files are stored and the DB where you hold the file names + paths.
If you want, you can place it all in MongoDB, but then the database would also be huge, I assume; that is up to you.
Choice #3 is probably what I would do in such a case.
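A minimal sketch of choice #3, assuming fonts on the local file system with a manifest built at startup so the file names and paths aren't hard-coded (the directory name and extensions are mine):

```javascript
// Build a name -> path manifest of all fonts in a local directory,
// so application code never hard-codes hundreds of file names.
const fs = require('fs');
const path = require('path');

const FONT_DIR = path.join(__dirname, 'fonts');

// e.g. "OpenSans-Regular" -> "/app/fonts/OpenSans-Regular.ttf"
const manifest = {};
for (const file of fs.readdirSync(FONT_DIR)) {
  const ext = path.extname(file).toLowerCase();
  if (ext === '.ttf' || ext === '.otf') {
    manifest[path.basename(file, ext)] = path.join(FONT_DIR, file);
  }
}

// Whatever SVG/canvas library you use can then be pointed at the file.
function fontPath(name) {
  if (!manifest[name]) throw new Error('Unknown font: ' + name);
  return manifest[name];
}
```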
If you have a decent enough server setup (e.g. a VPS or some other VM solution where you control what's installed) then another option you might want to consider is to do this job "out of node". For instance, in one of my projects where I need to build 175+ as-perfect-as-can-be maths statements, I offload that work to XeLaTeX instead:
I run a node script that takes the input text and builds a small but complete .tex file
I then tell node to call "xelatex theFileIJustMade.tex" which yields a pdf
I then tell node to call "pdfcrop" on that pdf, to remove the margins
I then tell node to call "pdf2svg", which is a free and amazingly effective utility
Then, as a final step mostly to conserve space and bandwidth, I use "svgo", which is a node.js-based SVG optimizer that can run either as normal script code or as a CLI utility.
(some more details on that here, with concrete code here)
Of course, depending on how responsive a system you need, you can do entirely without steps 3 and 5. There is a limit to how fast we can run, but as a server-side task there should never be the expectation of real-time responsiveness.
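Wired together in node, the steps above might look like the following sketch; the file names and the sample TeX content are illustrative:

```javascript
// The xelatex -> pdfcrop -> pdf2svg -> svgo pipeline, driven from node.
const { execFileSync } = require('child_process');
const fs = require('fs');

// 1. Build a small but complete .tex file from the input text.
fs.writeFileSync('statement.tex', [
  '\\documentclass{standalone}',
  '\\begin{document}',
  '$e^{i\\pi} + 1 = 0$',
  '\\end{document}'
].join('\n'));

// 2. xelatex yields statement.pdf ...
execFileSync('xelatex', ['statement.tex']);
// 3. ... pdfcrop removes the margins ...
execFileSync('pdfcrop', ['statement.pdf', 'statement-cropped.pdf']);
// 4. ... and pdf2svg converts the cropped pdf to SVG.
execFileSync('pdf2svg', ['statement-cropped.pdf', 'statement.svg']);
// 5. Optionally run svgo (CLI form) to shrink the output in place.
execFileSync('svgo', ['statement.svg']);
```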
This is a good example of remembering that your server runs inside a larger operating system that might also offer tools that can do the job. While you're using Node, and the obvious choice is a Node solution, Node is also a general purpose programming language and can call anything else through spawn and exec, much like python, php, java, C#, etc. As such, it's sometimes worth thinking about whether there might be another tool that is even better suited for your needs, especially when you're thinking about doing a highly specialized job like typesetting a string to SVG.
In this case, LaTeX was specifically created to typeset text from the command line, and XeLaTeX was created to do that with full Unicode awareness and clean, easy access to fonts both from file and from the system, with full OpenType feature control, so would certainly qualify as just as worthwhile a candidate as any node-specific solution might be.
As for the tools used: XeLaTeX and pdfcrop come with TeX Live (installed using whatever package manager your OS uses, or through MiKTeX on Windows, though I suspect your server doesn't run on Windows); pdf2svg is freely available on GitHub, and svgo is available from npm.

Drop asm into an Existing JS Application

I got emscripten working, but it generates huge self-executing files. Is it possible to make emscripten generate small functions that I want to optimize, so I can copy-paste them into my existing application easily?
Thanks!
I would advise against copy/pasting a generated function from the inside of the Emscripten-generated output unless you have identified that bandwidth or compiling of the asm.js/JavaScript in the browser is a limiting factor for the performance of the application. Going down that route would, I suspect, make updates painful, so I would avoid it unless necessary.
What I think is better is to use the techniques in the Code Size section of the Emscripten docs.
Some of the fairly straightforward ways are:
Using NO_FILESYSTEM to prod Emscripten not to include some standard libraries (assuming you don't need them).
Using NO_BROWSER if you can
Using NO_EXIT_RUNTIME to not include some functions required when exiting.
Tinkering with the optimization flags, though according to the docs -O2 offers the smallest and fastest output.
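Put together on the command line, those settings might look like this (the source and output file names are illustrative; the -s KEY=1 syntax is how Emscripten settings are passed):

```
emcc module.c -O2 -s NO_FILESYSTEM=1 -s NO_EXIT_RUNTIME=1 -o module.js
```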
It is possible, but not well documented yet: you can use the --separate-asm flag. See
https://gist.github.com/wycats/4845049dcf0f6571387a
and
https://gist.github.com/kripken/910bfe8524bdaeb7df9a
for examples.

Practical approach to keeping jQuery up to date?

Some of the projects we're working on have strong roots in jQuery 1.4.2 or earlier, and somewhere between lacking the performance edge (or syntactic sugar) of the latest releases, the humiliation of using now-deprecated methods, and the discomfort of deploying a 3+ year old version of an actively maintained library, an upgrade is now imminent.
What are some practices popular in the community that we could adopt/re-visit to ensure a smooth rollout (i.e. focus on obscure compatibility issues, picking up global regressions, re-factoring some of the older code...)? How would they be best integrated into the SDLC for future upgrades? What is a reasonable upgrade schedule for a library such as jQuery (I don't anticipate significant gains or justifiable costs to do so with every point release, but once every 6-12 months may very well be reasonable)?
To actually answer your three questions, here are some things I've done or at least recommend:
Best practices for a smooth upgrade rollout
Have tests. These can be unit tests for your JS and/or browser tests. These should cover at least the most typical and the most complex functionality used within your projects. If you don't have tests, write them. If you don't want to write tests, reconsider. If you reeeeally don't want to write tests, at least have a list of use cases that someone will be able to execute manually. (A minimal example of such a test follows this list.)
Make sure all your tests pass before the upgrade.
Read the release notes for every (major) version between the version you use now and the most current release. See also the Removed and Deprecated categories in the API docs. If any of your code uses jQuery UI, also look at those release notes and upgrade guides for the interstitial versions. As you do this, make note of the things you would likely have to change in your codebase (possibly making heavy use of grep).
If your project's current jQuery version is >= 1.6.4, also consider using the jQuery Migrate plugin to further assess the work required.
Decide on which version you want to be your upgrade target, based on the work required to get there, whether your project is using any third-party libraries that require a certain version of jQuery, other factors only you can consider, etc.
Meet with your team to go over the list of changes to be done to your codebase, and divide/assign the work accordingly. Maybe write some scripts or other tools to help out. If you have one, your team's coding style guide / best practices document may also need to be updated. Decide on one-shot (recommended) or rolling update releases, if that's possible+desirable. Come up with a suitable release strategy. (I recommend against releasing the upgrade as part of another unrelated large change to your codebase, so it's easy to roll back if you need to.)
Throughout the upgrade process, continually run your tests. When testing manually, always monitor the browser console for new errors. Write new tests that cover unexpected errors.
When all tests pass, decide on how you want to roll out--if it's a site, all users at once or a percentage at a time, etc. For a library or other project, maybe you'd release a beta/bleeding edge version that you can let your more ambitious users test out for you in the wild.
Document everything you just did so it will be easier for the next time.
[Profit.]
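As promised in step 1, a minimal example of the kind of test worth having before the upgrade; QUnit and the menu markup are illustrative choices, not requirements:

```javascript
// Pin down typical jQuery-dependent behavior so an upgrade regression
// surfaces immediately. Runs inside QUnit's standard #qunit-fixture.
QUnit.test('menu toggles on click', function (assert) {
  var $fixture = $('#qunit-fixture');
  $fixture.append('<button id="menu-button"></button><ul id="menu" style="display:none"></ul>');

  // The binding under test: the sort of handler your app registers.
  $('#menu-button').on('click', function () {
    $('#menu').toggle();
  });

  $('#menu-button').trigger('click');
  assert.ok($('#menu').is(':visible'), 'menu is shown after click');
});
```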
How to integrate upgrades into normal workflow
Again, tests. Make sure you have them. Make sure they're good, maintained, and cover a large portion of your codebase and use cases. A continuous integration setup to automate the running of these tests is highly recommended.
Consider getting your team to create and agree to follow a coding style guide or standard. This will make it easier in the future to search for deprecated function calls or constructs, since everyone would be following similar coding patterns. Tools such as scripts, commit hooks, static analysis utils, etc. to enforce good or sniff out bad coding style might be useful (depending on the team).
Investigate and maybe decide to use a package manager like npm or Bower to manage jQuery versions and other third-party libraries you might use that depend on it. (You'll still need to maintain your own JS code and go through pretty much the same process as above; see the manifest sketch after this list.)
Again, once you're past version 1.6.4, make sure the Migrate plugin is part of your upgrade workflow.
Assess what worked from the initial big upgrade process, what didn't work, and extract a general process from this that works best with your current workflow. Whether or not you plan to upgrade every time there's a new version, there will be ongoing maintenance tasks and habits that you'd probably want to keep as general development best practices.
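For the package-manager route mentioned above, a hypothetical manifest pinning jQuery: with npm-style ranges, ~1.9.1 accepts patch releases only, while ^1.9.1 would also accept later minor versions within 1.x.

```json
{
  "dependencies": {
    "jquery": "~1.9.1"
  }
}
```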
Reasonable upgrade schedule
That's essentially a CBA/risk management question. You'll have to weigh some things:
There should be no breaking API changes within the same major version, so you should generally be able to upgrade to the most recent minor version with minimal effort, no refactoring required. This assumes you have and maintain good tests, which you can run on your projects before you decide that all is well enough for a rollout.
Major version upgrades require more research, more refactoring, and more testing. After the research step, you should do a cost-benefit analysis of upgrading.
This might not matter much, but if any of your projects is a website that has many users, what would be the cost of making all of your users have to download essentially all of the changed JS files on your site the next time they visit it (instead of sticking with the older versions that are probably still cached in their browsers)?
The decision to upgrade should always be a subjective one. Minor or major, you'll still have to justify each time whether any upgrade would be worth it. Always read the release notes. Does it fix a security vulnerability or a bug related to issues that you or your users are currently experiencing? Would it significantly improve the performance of your project (be sure to have benchmarks to verify it later)? Does it greatly simplify a coding pattern you've been using, allowing your code to be written more cleanly and easily? Is there a third-party library you want to use that is dependent on this newer version? Are there third-party libraries you already use which are dependent upon the older version? (If so, are these libraries likely to be upgraded anytime soon to work with the newer version?) Are you that confident in your tests and QA process that an upgrade will take a reasonable amount of development resources and cause no major regressions? Were you thinking of eventually switching out jQuery with something else anyway? Etc.
Of course this is just my advice. There are a few recurring themes in it, and I hope they are clear. In any case, I hope someone finds this helpful!
You will always be outdated. Once you are done updating to the latest version, a newer one will come out a few months later.
Unless you are willing to put in hours/days/weeks of development, testing and bugfixing, with the possibility of breaking user-facing functionality, you shouldn't be updating just to use the newest way of declaring event handlers. Staying put won't hurt you, and normally updating is a risky thing to do, which translates into dev team costs. You already know this. Refactoring, especially when there is no evident risk for the project, is in general hard to justify to managers. And you should double-check your reasoning to be sure whether having the new jQuery in code that is already working will make any difference.
Now, if you are working on creating new pages in an existing site, you could include a new version in those areas. But this will have a consequence: let's assume that you and your team, apart from developing the new part of the site, also have to maintain the part that is using the old one. Everybody will need to be aware of the specific version of jQuery they are writing their code against.
So, to close, I would say something like this: unless there is a real, justifiable risk of the project being delayed or technically blocked because of an older jQuery version, you are likely to get into trouble by breaking something that already works, and you will need to put in extra hours just to make everything work as well as it did before.
That said, this doesn't mean that you couldn't start separating the 'new sections' from the old ones and using the newest libraries in the new areas.
This is worth looking into: https://github.com/jquery/jquery-migrate/#readme
This plugin can be used to detect and restore APIs or features that have been deprecated in jQuery and removed as of version 1.9. See the warnings page for more information regarding messages the plugin generates. For more information about the changes made in jQuery 1.9, see the upgrade guide and blog post.
The Twitter guys have solved this problem quite nicely.
http://github.com/twitter/bower
It does what it says on the tin: it is a package manager for the web (and that includes keeping JS files like jQuery up to date).
In order to keep up to date in your development tree, I recommend using src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.js" (the full un-minified version which allows for easier debugging)
Then when you go to publish, just replace it with the specific minified version that is in the header comment (currently http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js) This has the bonus of allowing better client side caching and using someone else's bandwidth.
If caching is less of a concern than ensuring that you automatically get bugfixes for that minor version, you can use just the major and minor version, such as: http://ajax.googleapis.com/ajax/libs/jquery/1.9/jquery.min.js. (Note: Google doesn't yet have the 1.9 series up; however, the 1.8 series is up to 1.8.3.) Since these get updated periodically for bug-fix releases, they don't get cached like the version-specific releases.
In this era we cannot count on the stability of any software version. A few years ago, new versions of software and services were released every year or two; now versions are updated rapidly and frequently.
So if your service depends on other services, you have to develop in an agile way. That makes it easier to absorb new requirements and change the affected methods as needed.
And please don't use deprecated methods, because they won't survive across long-lived service versions. Also, wrap the calls your service makes to other services in library functions of your own, so that you can easily change them when a new version arrives.
For example: say you have a method named update_var(); it calls another method of another service, like $a = newlib::check_update();. Then, having created such wrapper libraries, you only have to change your own wrapper library and the core library of the service involved.
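The same wrapper idea in the jQuery context might look like this sketch (the dom object and its method names are made up for illustration):

```javascript
// A thin adapter layer: application code calls dom.*, never $ directly,
// so a jQuery upgrade only touches this one file.
var dom = {
  show: function (selector) {
    return $(selector).show();
  },
  onClick: function (selector, handler) {
    // .on() is the modern delegated form; older code used the
    // since-deprecated .live().
    return $(document).on('click', selector, handler);
  }
};

// Usage elsewhere in the app:
dom.onClick('#menu-button', function () {
  dom.show('#menu');
});
```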
