One i18n approach across js and java

I realize this may not be the right place to ask this question, so feel free to send me elsewhere. I am looking to internationalize a set of apps that have strings in a combination of Java, JSP, and JavaScript files. We have some use of resource bundles in a few of the Java apps, but I am looking for a unified approach for all of them, both to reduce the number of translation files and to provide a single point of reference for them. I have yet to stumble upon a solution that is not specific to one or the other.
What I have thought of so far:
Database-driven - this would achieve the two stated objectives but, unless I am missing something, would result in a lot of db calls and likely performance degradation.
External files - this is the most feasible approach as I can read from a shared location. The only part I am struggling with is how to organize them so as to make it possible to load all the tags for, say, a single page together.

I would recommend the "External files" approach... decouple your translation files from your code... this would help your localization process a lot.
For i18n you might read this article (focus JavaScript but not only) ...
I would recommend looking into a i18n lib that is ready to be used in different frameworks, i.e. i18next
There is some java based lib too: i.e. i18next-android
As said in the beginning, you should not only consider that you have to instrument your code (i18n) to get your app/website translated. You should think about the process too - how will you solve continuous localization, how you keep track of progress, etc...
For a translation management+ system you might eg. have a look at locize it plays well with all json based i18n frameworks and in the core has a very simpe api... and provides a lot more than traditional systems...
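To make the external-files idea concrete, here is a minimal sketch of a browser-side i18next setup; it assumes the i18next-http-backend plugin and a shared /i18n/ folder with one JSON file per language and per namespace (all paths and key names here are invented for the example):

// Minimal sketch, assuming i18next plus the i18next-http-backend plugin are
// already loaded on the page, and translations live as one JSON file per
// namespace in a shared /i18n/ folder (hypothetical paths).
i18next
  .use(i18nextHttpBackend)
  .init({
    lng: 'de',
    fallbackLng: 'en',
    ns: ['common', 'checkout'],        // shared strings plus this page's namespace
    defaultNS: 'checkout',
    backend: {
      // e.g. /i18n/de/checkout.json - the same files the Java side could read
      loadPath: '/i18n/{{lng}}/{{ns}}.json'
    }
  }, function (err, t) {
    if (err) { console.error(err); return; }
    document.title = t('pageTitle');   // look up a key from the page namespace
  });

With one namespace per page plus a shared "common" namespace, each page loads only the strings it needs, and the same JSON files could be converted to resource bundles for the Java apps in a build step.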

Related

Humble Haxe: approaches to automating merging Haxe within a non-Haxe-friendly target project?

Often with Haxe we need to use existing non-Haxe code, so instead of writing externs we may want our Haxe code to absorb parts of a system or add to it, a system where we can't assume a nice Haxe setup.
For instance, taking the JS target: suppose we want to add functionality to some existing JavaScript code. We can't easily control the entry point of Haxe, so we have to inject functionality or classes into the current JS code, code which may be too complex to rearrange into a really Haxe-friendly format. So one approach would be to mock up a class with the stuff we need and then write some Neko to automatically insert and convert it, merging it into the current codebase... but this is quite an open-ended problem, and it would differ with other targets.
So my question is: what approaches have you developed for mixing Haxe target code into existing target code, for instance adding a Haxe class within a JS source, maybe using some Neko to automate the insertion and rearrange the boot code needed for the Haxe class? I am also interested in how you might approach this with other targets; I probably have ideas for Haxe/Flash but not, say, PHP or C++. Let's assume you can't set up the standard main boot structure, and that on every publish you really would like your Haxe code to be merged correctly back into the main non-Haxe project code when you hit the build button.
Tricky one, but very important, as solutions make it much easier to use Haxe within more projects.
I've had only small amounts of experience with what you say, but here goes:
JS - I've used a custom Markdown library (a variant of mdown), which was written in Haxe, in a predominantly non-Haxe JavaScript environment. I tried to make it as "black box" as possible: the Haxe library exposed a static method using @:expose metadata, so I could call Markdown.convert(str); from anywhere in my JavaScript. We found keeping it as "black box" as possible was beneficial, so the non-Haxe JavaScript knew what input to provide and what output to expect, but everything else was opaque.
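As a small illustration of what that looks like from the consuming side (the Markdown.convert name follows the answer above; the rest of the snippet is assumed), the plain JavaScript just sees an ordinary global:

// Consuming the Haxe-compiled library from ordinary JavaScript.
// @:expose on the Haxe class makes it reachable as a global, so the calling
// code never needs to know it was written in Haxe.
var source = '# Hello\n\nSome *markdown* text.';
if (typeof Markdown !== 'undefined' && typeof Markdown.convert === 'function') {
  var html = Markdown.convert(source);   // string in, HTML string out
  document.getElementById('output').innerHTML = html;
} else {
  console.error('Markdown library not loaded');
}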
PHP - I've done one or two projects where I did some work in Haxe and had to include it on an existing PHP website. I found I could piggy-back off the existing website's sessions to check that the user was authenticated, and I set up a way for the existing site to provide a "base template" for the Haxe portion of the app, which Haxe then rendered into. Pretty hacky, but it did the trick and meant the template for both the Haxe section and the non-Haxe section stayed in sync.
Another approach for the server side could be to separate things into user-facing code and an API. So perhaps Haxe sets up a JSON API, and the PHP talks to it. Or perhaps you have a Haxe website, and it talks to a Ruby/Python API, etc.
So as you can see, I've tried to keep things pretty distinct. If Haxe can function in a relatively standalone manner, and interacts with other code by taking a particular input and providing a particular output, then things can behave relatively predictably. I haven't tried much further integration than that; I think the way Haxe works (using its own class system and data structures, etc.) is different enough that tight integration could prove problematic.
I rarely mix code, but here goes:
For Flash you don't need anything: if you add your SWF lib made in another program, you can access its classes from Haxe.
For JS it is not possible without externs, unless you want to use untyped. Or maybe you understood what Jason made; I didn't :)
For C++ it is even worse: you have to use CFFI, which will end up in a mess of code; for samples, inspect how NME extensions work.
I have never used Java, but I think it is simpler there.

Good Practice with External JavaScript files

I'm new to JavaScript.
How should I split my functions across external scripts? What is considered good practice? Should all my functions be crammed into one external .js file, or should I group like functions together?
I would guess more files mean more HTTP requests to obtain the scripts, and that could slow down performance. However, more files keep things organized: for example, onload.js initializes things on load, data.js retrieves data from the server, ui.js holds the UI handlers...
What's the pros' advice on this?
Thanks!
As Pointy mentioned, you should try a tool. Try Grunt or Brunch; both are meant to help you in the build process, and you can configure them to combine all your files when you are ready for prod (also minify, etc.), while keeping separate files when you are developing.
When releasing a product, you generally want as few HTTP requests as possible to load a page (hence image sprites, for example).
So I'd suggest concatenating your .js files for release, but keeping them separated, in whatever way works well for you, during development.
Keep in mind that if you "use strict", concatenating scripts might be a source of errors.
(more info on js strict mode here)
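As a rough sketch of that Grunt setup (the plugin names are real, the src/dist layout is an assumption for the example), a build task can concatenate your sources and then minify the result for release:

// Gruntfile.js - a minimal sketch using grunt-contrib-concat and
// grunt-contrib-uglify; the src/ file names and dist/ output are assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/onload.js', 'src/data.js', 'src/ui.js'],
        dest: 'dist/app.js'
      }
    },
    uglify: {
      dist: {
        files: { 'dist/app.min.js': ['dist/app.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // `grunt build` concatenates, then minifies, for the production release.
  grunt.registerTask('build', ['concat', 'uglify']);
};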
It depends on the size and count of your scripts, and on how many of them you use at any time.
Many performance best practices claim (and there's good logic in this) that it's good to inline your JavaScript if it's small enough. This leads to a lower count of HTTP requests, but it also prevents the browser from caching the JavaScript, so you should be very careful. That's why there are even practices to inline your images (using base64 encoding) in some special cases (for example, look at Bing.com: all their JavaScript is inline).
If you have a lot of JavaScript files and you're using just a small part of them at any time (not only by count but by size), you can load them asynchronously (for example using require.js). But this will require a lot of changes in your application design if you haven't considered it from the beginning (and it will also increase your application's complexity).
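For a feel of that asynchronous approach, here is a minimal RequireJS sketch; the 'data' and 'ui' module names and the js/ base path are invented for the example:

// main.js - a minimal RequireJS sketch; 'data' and 'ui' are hypothetical
// modules, each living in its own file under js/.
requirejs.config({
  baseUrl: 'js'
});

// Only the modules this page actually needs are fetched, asynchronously.
require(['data', 'ui'], function (data, ui) {
  data.load().then(function (items) {
    ui.render(items);
  });
});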
There are practices even to cache your CSS/JavaScript in localStorage. For further information you can read the Web Performance Daybook.
So let's do a short recap.
If you have your JavaScript inline, this will reduce the time for the first load of the page. The inline JavaScript won't be cached by the browser, so every subsequent load of your page will be slower than if you had used external files.
If you are using different external files, make sure that you're using all of them, or at least a big part of them, because otherwise you can have redundant HTTP requests for files which are loaded unnecessarily. This will lead to better organization of your code but probably a greater load time (still, don't forget the browser cache, which will help you).
Putting everything into a single file will reduce your HTTP requests, but you'll have one big file which will block your page loading (if you're using synchronous loading of the JS file) until the file has been loaded completely. In such a case I can recommend putting this big file at the end of the body.
For performance tracking you can use tools like YSlow.
When I think about good practice, I think of MVC patterns. One might argue whether this is the way to go in development, but many people use it to structure what they want to achieve. Usually it is not advisable to use MVC at all if the project is just too small, just like creating a full C++ Windows app when you just needed a simple C program with a for loop.
In any case, MVC or MV* in JavaScript will help you structure your code to the extent that all the actions are part of the controllers, while object properties are just stored in the model. The views are then just for presentation purposes and are rendered for the user via special requests or rendering engines. When I started using MV*, I started with BackboneJS and the guide "Developing Backbone.js Applications" by Addy Osmani. Of course there are a multitude of other frameworks that you can use to structure your code. They can be found on the TodoMVC website.
What you can also do is derive your own structure from their apps and then use that directory structure for your development (but without the MV* framework).
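To make the split concrete, here is a tiny hedged sketch in Backbone (the model and view names are made up): properties live in the model, actions live in the view.

// A tiny MV* sketch with Backbone: data lives in the model,
// behaviour lives in the view. The Counter example is hypothetical.
var Counter = Backbone.Model.extend({
  defaults: { count: 0 }
});

var CounterView = Backbone.View.extend({
  events: { 'click button': 'increment' },       // actions belong to the view

  initialize: function () {
    this.listenTo(this.model, 'change:count', this.render);
  },

  increment: function () {
    this.model.set('count', this.model.get('count') + 1);
  },

  render: function () {
    this.$el.find('span').text(this.model.get('count'));
    return this;
  }
});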
I do not agree with your concern that using such a structure leads to more files, which means more HTTP requests. Of course this is true during development, BUT remember, the user should get a performance-enhanced (i.e. compiled) and minified version as a script. Therefore, even if you are developing in such an organized way, it makes more sense to minify/uglify and compile your scripts with Google's Closure Compiler.

A good way to separate client side templates from the main layout

What is the best way to deal with multiple client side templates?
I noticed that if I keep them in my "mother" HTML file, it soon gets bloated with stuff, so I thought that maybe it would be better if I just put them in separate JS files and loaded them one by one.
Another idea of mine was to avoid putting them in separately as templates, but rather to write them as strings and couple them with the Backbone.js views that are going to use them. I know that this would draw a lot of negative reactions from designers, web developers, and software engineers in general, but for the projects I am working on this seems like a very speedy way to develop, because I have logic and layout in the same place. Plus, by reverse engineering, I found that a bunch of prominent web services are doing the same, so...
One option is to use RequireJS, which includes a 'text' plugin for templates.
You can then use the r.js optimizer to combine all of these (plus JS modules, if you go that route) into a single file.
The optimizer can be run either as part of your build process, or in-process if you're using node.js.
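A hedged sketch of how that looks in a view module (the template path and view are invented for the example); the text plugin fetches the template file as a plain string, which the r.js build can later inline:

// views/person.js - loading an Underscore template via the RequireJS text plugin.
// 'templates/person.html' is a hypothetical external template file.
define(['backbone', 'underscore', 'text!templates/person.html'],
  function (Backbone, _, personTemplate) {

    return Backbone.View.extend({
      template: _.template(personTemplate),   // compiled once per view class

      render: function () {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });
  });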
You can have them in separate files, but combine them into one file on the server side.
And take a lot of negative feedback from me for your idea to keep templates in strings :). It might work while they are simple, but when they get more complex it gets bad, because the HTML structure is not so obvious, so it is harder to write CSS and so on.
As @stusmith said, require.js is a good option.
Also, take a look at the boilerplate examples:
http://backboneboilerplate.com/
https://github.com/thomasdavis/backboneboilerplate/blob/gh-pages/js/views/backbone/page.js
cheers

What is the proper way to structure/organize javascript for a large application?

Let's say you are building a large application, and you expect to have tons of javascript on the site. Even if you separated the javascript into 1 file per page where javascript is used, you'd still have about 100 javascript files.
What is the best way to keep the file system organized, include these files on the page, and keep the code structure itself organized as well? All the while, having the option to keep things minified for production is important too.
Personally I prefer the module pattern for structuring the code, and I think this article gives a great introduction:
http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth
It keeps my global scope clean, enables namespacing, and provides a nice way to separate public and private methods and fields.
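For reference, a minimal sketch of that module pattern (the MYAPP namespace and the cart example are just placeholders):

// Module pattern: one global namespace entry, private state in the closure,
// a small public API returned from the IIFE.
var MYAPP = MYAPP || {};

MYAPP.cart = (function () {
  var items = [];                     // private field

  function total() {                  // private helper
    return items.reduce(function (sum, item) { return sum + item.price; }, 0);
  }

  return {                            // public API
    add: function (item) { items.push(item); },
    getTotal: function () { return total(); }
  };
}());

MYAPP.cart.add({ name: 'book', price: 10 });
console.log(MYAPP.cart.getTotal());   // 10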
I'd structure the code in separate files, and seek to keep coupling low and cohesion high. Although multiple files are bad for client performance (you should minimize the number of requests), you'll easily get the best from both worlds using a compression utility.
I've some experience with YUI Compressor (for which there is a Maven plugin: http://alchim.sourceforge.net/yuicompressor-maven-plugin/compress-mojo.html - I haven't used this myself). If you need to order the JavaScript files in a certain way (this also applies to CSS), you could make a shell script, concatenating the files in a given order in advance (tip: YUI Compressor defaults to STDIN).
Besides, whichever way you decide to minify your files will probably do just fine. Your focus should instead be on how you structure your code, so that it performs well on the client (this should be the highest priority, in order to deliver a good UX) and is maintainable in a way that works for you.
You mention namespaces, which is a good idea, but I'd be careful about adopting too many concepts from the traditional object-oriented domain; instead, learn to utilize the many excellent features of the JavaScript language :)
If the overall size is not too big, I'd use a script to compile all files into a single file and minify that file. Then I'd use aggressive caching plus the mtime in the URL, so people load the file only once but get the new one as soon as it is available.
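A small sketch of that idea, assuming a Node.js build step and invented file names: concatenate once, then stamp the script URL with the bundle's mtime so browsers can cache aggressively but still pick up new builds immediately (the minification step is omitted here).

// build.js - concatenate sources into one bundle and emit a script tag
// whose query string changes whenever the bundle changes.
const fs = require('fs');

const parts = ['onload.js', 'data.js', 'ui.js'];           // hypothetical sources
const bundle = parts.map(f => fs.readFileSync(f, 'utf8')).join('\n');
fs.writeFileSync('app.js', bundle);

// Use the bundle's mtime as a cache-buster.
const mtime = fs.statSync('app.js').mtimeMs;
console.log(`<script src="/js/app.js?v=${mtime}"></script>`);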
Use:
jslint - to keep checks on code quality.
require.js - to get just the code you need for each page (or equivalent).
Organise your JS as you would any other software project: break it up by roles and responsibilities, separating concerns.

Dynamic JavaScript loading with compression

Although I am using the Zend framework and the MooTools JS library, and my question revolves around them, this is a more generic question.
I am working on a web app, in which I am using many elements which are sometimes useful on other pages (for example OpenLayers related MooTools classes).
MooTools already allows this "segmentation" by "classing" (creating a "Class"), so I feel that the next thing to do is to have a separate JS file for every class, then send a request to a PHP page with the classes I want and get back a JS file with what I need.
At the same time this mechanism would minify, gzip, and cache locally on the server (for future requests), and send me back this one file.
I didn't go into design yet and was wondering if such a solution (or a similar one) is out there.
Alternatively, I see libraries like LABjs that speed things up by loading scripts in parallel; this, however, does not complete the solution with minification and gzip (I have to take care of those server-side with another complementary solution).
Is anyone using a similar dynamic JS "Class" loading solution?
Cheers,
Roman
I think the tool you want to look at is the minify project. Here's a description:
Minify is a PHP5 app that helps you follow several of Yahoo!'s Rules for High Performance Web Sites. It combines multiple CSS or JavaScript files, removes unnecessary whitespace and comments, and serves them with gzip encoding and optimal client-side cache headers.
It's very straightforward to use. Say you have it installed at http://example.com/min/ and you have two files to compress:
/resources/js/lib/library.js
/resources/js/main.js
You would call this URL:
http://example.com/min/b=resources/js&f=lib/library.js,main.js
And the code looks like:
<script type="text/javascript"
src="/min/b=resources/js&f=lib/library.js,main.js"></script>
So basically there is a consistent, URI-based way to merge files together.
I'm not familiar with zend and only somewhat familiar with mootools so take this with a larger-than-average grain of salt.
Wouldn't it be easier to have a process that generates/minifies all the possible combinations of the classes you have and then just request a particular combination? You could automate this with a script pretty easily.
Also, I've used the HyperCache and WP_Minify plugins for WordPress and they do pretty much everything you are talking about, so you might want to look there for inspiration.
What you describe is pretty much what the YUI PHP Loader does, although it is tailored towards the YUI JavaScript library. However, the project is open-source under the BSD license so it is a good place to get some ideas to implement your own.
Do also have a look at Lissa (which is based on the YUI PHP Loader); it is a generic JavaScript (and CSS) loader.
