I am hoping to upgrade an old project's i18next implementation. The i18n-related code is v1.11, which predates even the breaking changes mentioned for v2.x+
Once upon a time when jQuery was the norm, i18next.js would search the document for translations provided via data-i18n="key" attributes and fill them in. If you changed languages, you simply called $(parent).i18n() and it would rescan the document and update them.
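Roughly, the old pattern looked like this (from memory, so the exact init options may be off):

<a data-i18n="nav.home"></a>

i18n.init({ lng: 'en' }, function (t) {
  // scan the document and fill in every element with a data-i18n attribute
  $('body').i18n();
});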
I just read the most recent documentation, and they seem to be sticking strictly to the one-key-at-a-time paradigm. No data-attributes, just i18next.t('key').
Have I missed something, or do I need to write my own iterator to go through the document and make the updates? Their code samples seem to indicate "yes", but I have a hard time imagining that anybody sane would want to go through their document and update entries by selector, one-by-one.
To be clear: I'm not asking for a full migration guide. I have headaches ahead of me, this much I know. But I'm hoping I've missed something about using data-i18n attributes, which no longer appear to have built-in support.
In Ember.js, we use Ember's own object variants which recommend/necessitate using this.get and this.set in order to access object attributes.
I (mostly) understand why this is done, and appreciate the consistency and flexibility it adds to the Ember programming experience, but I do feel that I lose out on my IDE's sanity checking.
With JetBrains products (or any good IDE with deep analysis and completion) I can usually rely on symbol suggestions to make sure I'm choosing the correct variable name. Entering strings with Ember relies on me getting the name right, and I'm a fallible human.
I have a few thoughts regarding possible solutions.
Some IDE plugin which does static analysis to suggest the correct string to use
An ES6 or alternative transpiled language which accesses members the Ember way by default
Some way of automatically establishing string constants where I need them
Some ember debugging setting which at least throws warnings if I try to get a variable which hasn't been defined.
I would also find it useful to throw warnings if ember catches me trying to set an attribute to undefined.
Hopefully, someone will tell me that one of these solutions exists, something better has been thought of, or my problem isn't really a problem.
(An example to illustrate my problem:)
In the following snippet
const email = this.get('email');
const newInvitation = this.store.createRecord('invitation', {email: email});
I am trying to get the attribute email but the real attribute I meant to get was called emailAddress. When I create the record, I do so with an undefined email field which isn't caught until later in the code.
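For contrast, what I meant to write was:

const email = this.get('emailAddress');
const newInvitation = this.store.createRecord('invitation', {email: email});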
It wasn't terrible to debug, but if I have to manually sift through every line of code every time I misspell something, I'm going to waste a lot of time and be a sad debugging-boy. Help!
Currently we don't have a good solution for this. However, the future looks bright!
There is a lot of work going on in the ember-typings repository to build a TypeScript definition that will allow the TypeScript language server to give you that completion. In the near future this will give you completions for things like this.get('foo'), but not for things like this.get('foo.bar').
Also, I've built this, which allows you to omit .get and .set in browsers that support the Proxy object. However, this is more a proof of concept than something you should use in production!
If you just want a debug message when you access a property that hasn't been defined, you can use unknownProperty:
unknownProperty(key) {
  console.log('access to unknown property: ' + key);
}
However, sometimes it's legitimate to access an unknown property, for code like this:
if (!obj.get('foo')) {
  obj.set('foo', 'bar');
}
So overall I would recommend you try out TypeScript, because that's probably the solution that will soon give you a good developer experience, with good support from the community as well. Also interesting is the ES classes RFC, which shows that Ember is moving toward standard ES classes; at some point we won't need .set and .get at all.
Also, Glimmer integration is moving forward, and inside a Glimmer component you won't need .set and .get.
Finally, I don't recommend EmberScript. I tried it out, but basically no one is using it and there is no support at all.
I am working on an assignment for a course in "Coding the Humanities" which involves writing a custom web component. This means I am required to use Polymer even though as far as I can see there is absolutely no added value to doing so.
I want to create a literal chat "room" in which users input a character to identify themselves and can walk around the room bumping into one another after the fashion of robotfindskitten.
My idea was to write each character and its position to a Firebase location, updating everyone's positions in real time, so I need the Firebase JS client; using core-ajax for REST requests isn't fast enough.
The GitHub readme for the core-firebase element consists of a link to a less than helpful component page.
Looking at the core-firebase element itself, I don't see anything that corresponds to the 'value' event; locationChanged has a 'child-added' event handler, but that's it.
Am I crazy for thinking the core-firebase element is just very incomplete? Should I try to write my own 'value' handler? If so, do I just add it to the locationChanged property of the object passed to Polymer()? I'm very confused - I know enough JS that what's happening in the core-firebase code is straddling the limits of my comprehension. (Which might have to do with the this keyword, I don't know.) Any input here would be appreciated. (And yes, I've already remarked to the instructor that I could have handled this using plain old jQuery and Firebase if I didn't have to use Polymer. No word as yet on that.)
Looking at the commits for core-firebase, it looks like it has had about two days' work on it plus some maintenance, so it wouldn't be surprising if there are missing features.
One nice part about Polymer is that it interoperates very well with other ways of writing apps. It's totally reasonable and supported to use jQuery and Firebase directly to read from Firebase and react to changes. You can do this within an element of your own and still make good use of Polymer's data binding, templating, and plain old DOM events to propagate those changes throughout your app and render them onto the page.
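A rough sketch of that approach, using Polymer 0.5-era syntax and the legacy Firebase SDK from memory (the element name and Firebase URL are placeholders):

<polymer-element name="chat-room">
  <template>
    <template repeat="{{p in positions}}">
      <div>{{p.character}} is at {{p.x}}, {{p.y}}</div>
    </template>
  </template>
  <script>
    Polymer('chat-room', {
      created: function () {
        this.positions = [];
      },
      ready: function () {
        // 'value' fires with a full snapshot whenever anything at this location changes
        var ref = new Firebase('https://your-app.firebaseio.com/positions');
        ref.on('value', function (snapshot) {
          var val = snapshot.val() || {};
          this.positions = Object.keys(val).map(function (k) { return val[k]; });
        }.bind(this));
      }
    });
  </script>
</polymer-element>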
Let's say you get a bunch of .js files and now it is your job to sort them into groups like:
requires at least JavaScript 1.8.5
requires at least E4X (ECMAScript for XML)
requires at least ECMAScript 5
or something like this.
I am interested in any solution, but especially in ones that work using JavaScript or PHP. This is for creating automated specifications, but that shouldn't matter; it seems like a nice task that should be easy to solve, yet I have no idea how, and it is not easy for me. So if this is easy for you, please share any hints.
I would expect something like this - http://kangax.github.com/es5-compat-table/# - just not for browsers, but rather for a given file to be checked against different implementations of JavaScript.
My guess is that each version must have some specifics which can be tested for. However, all I can find is material about "what version does this browser support".
PS: Don't take "now it is your job" literally; I used it to demonstrate the task, not to imply that I expect the work done for me. While in the process of solving this, it would just be nice to have some help or direction.
EDIT: I took the easy way out by requiring ECMAScript 5 to be supported at least as well as in the current Firefox for my project to work as intended and expected.
However, I am still interested in any solution attempts, or at least a definite answer of "is possible (with XY)" or "is not possible, because ..."; XY can be just some keyword, like FrameworkXY or DesignPatternXY or whatever, or of course a more detailed solution.
Essentially you are looking to find the minimum requirements for some JavaScript file. I'd say that isn't possible until run time. JavaScript is a dynamic language, so you don't have compile-time errors. As a result, you can't tell that something doesn't work until you are inside some closure, and even then the result could be misleading: your dependencies could in fact fix many compatibility issues.
Example:
JS File A uses some ES5 feature
JS File B provides a shim for ES5 deficient browsers or at least mimics it in some way.
JS File A and B are always loaded together, but independently A looks like it won't work.
Example 2:
Object.create is what you want to test
Some guy named Crockford adds a create shim to Object
Object.create now works in less capable browsers, and nothing is broken.
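For illustration, the classic shim looks roughly like this (simplified; it ignores the second property-descriptor argument):

if (typeof Object.create !== 'function') {
  Object.create = function (proto) {
    // Create a new object whose prototype is proto
    function F() {}
    F.prototype = proto;
    return new F();
  };
}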
Solution 1:
Build or find a dependency map. You definitely already have one, either explicitly or implicitly; you could generate it by iterating over your HTML files.
Run all relevant code paths in environments with decreasing functionality (e.g. ES5, then E4X, then JS 1.x, and so forth).
Once a bundle of JS files fails for some code path, you know its minimum requirement.
Perhaps you could iterate over the public functions in your objects and use dependency injection to fill in constructors and methods. This sounds really hard though.
Solution 2:
Use WebDriver to visit your pages in various environments.
Map window.onerror to a function that tells you if your current page broke while performing some actions (see the sketch after this list).
On error you will know that there is a problem with the bundle on the current page so save that data.
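A minimal sketch of that hook (where you send the report is up to you; console.log is just a stand-in):

window.onerror = function (message, source, lineno) {
  // Record which page/bundle failed so you can flag its requirements
  console.log('Error on ' + location.pathname + ': ' + message +
              ' (' + source + ':' + lineno + ')');
  return false; // let the default error handling run as well
};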
Both of these solutions assume that you always write perfect JS that never has errors, which is something you should strive for but isn't realistic. They might, however, provide you with some basic "smoke testing".
This is not possible in an exact way, and it also is not a great way of looking at things for this type of issue.
Why it's not possible
JavaScript doesn't have static typing, and properties are resolved through the prototype chain. This means that for any piece of code you would have to infer the type of an object and walk its prototype chain before determining which function a given call would actually invoke.
You would, for instance, have to be able to tell that $(x).bind() or $(x).map() are not calls to the ECMAScript 5 bind or map functions but to the jQuery ones. This means you would really have to parse the whole code base and make inferences about types; if you didn't have the whole code base, it would be impossible. If you had a function that took an object and you called bind on it, you would have no idea whether that was supposed to be Function.prototype.bind or jQuery's bind, because that isn't decided until runtime. In fact it's possible (though not good coding practice) that it could be both, and that what runs depends on the input to a function, or even on user input. So you might be able to make a guess, but you couldn't do it exactly.
Making all of this even more impossible, the eval function, combined with the ability to take user input or Ajax data, means that you don't even know what types some objects are or could be, quite apart from the fact that eval could attempt to run code targeting any specification.
Here's an example of a piece of code that you couldn't parse:
var userInput = $("#input").val();
var objectThatCouldBeAnything = eval(userInput);
objectThatCouldBeAnything.map(function(x){
  return !!x;
});
There's no way to tell whether this code is producing a jQuery object in the eval and running jQuery's map, or producing an array and running Array.prototype.map. And that's the strength and weakness of a dynamically typed language like JavaScript: it provides tremendous flexibility, but limits what you can tell about the code before run time.
Why it's not a good strategy
ECMAScript specifications are a standard, but in practice they are never implemented perfectly or consistently. Different environments implement different parts of the standard. Having an "ECMAScript 5" piece of code does not guarantee that any particular browser will implement all of its properties perfectly. You really have to determine that on a property-by-property basis.
What you're much better off doing is finding a list of functions or properties that are used by the code. You can then compare that against the supported properties for a particular environment.
This is still a difficult-to-impossible problem for the reasons mentioned above, but it's at least a useful one. And you could gain value from doing this even with a loose approximation (assuming that bind really is the ECMAScript 5 one unless it's called on a $() wrapper; that's not going to be perfect, but it still might be useful).
Trying to figure out which standard a file targets just isn't practical for deciding whether to use it in a particular environment. It's much better to know which functions or properties it uses, so that you can compare those against the environment and add polyfills if necessary.
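As a loose illustration of that property-by-property approach (the feature list here is just an example):

// Compare the properties a script is known to use against what the
// current environment actually provides, and report the gaps.
var usedFeatures = {
  'Object.create': typeof Object.create === 'function',
  'Function.prototype.bind': typeof Function.prototype.bind === 'function',
  'Array.prototype.map': typeof Array.prototype.map === 'function'
};

for (var name in usedFeatures) {
  if (!usedFeatures[name]) {
    console.log(name + ' is missing in this environment; a polyfill may be needed');
  }
}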
Is it possible to easily detect DOM manipulation by the user?
When a user uses the console in any modern browser, he/she can manipulate the DOM in ways the developer did not intend.
I have a web app that is very much tied to the DOM being in certain states and should the user do anything to the DOM via a console, I'd like to be notified.
The answer:
Doesn't need to be browser agnostic
Doesn't need to be perfect. I fully understand that most, if not all, methods could be circumvented, but I'd like a good general solution.
Can't be too convoluted. I'm not interested in registering an event handler for all DOM events that checks some flag set whenever my code performs a DOM manipulation
Edit:
There appears to be some confusion in the answers I've received thus far. As pointed out in #2 above, I understand that most, if not all, methods can be circumvented.
In addition, this is an internal tool and thus is protected behind a VPN. Furthermore, there is server-side checking. However, there are reasons, which I cannot elaborate on, why I want to know when one of the users (who are few in number) manipulates the DOM.
To be clear, this isn't for security reasons. I'm not trying to stop malicious users here. Think of this more as out of curiosity.
Don't do that. Code your web site to not trust user input and then don't care what the user does. If invalid input is submitted then reject it. Everyone is happy.
It's easy to think that you own the user's browser. You don't. It's serving you but only at the whim of the user.
If you really must know when the DOM is modified (and this seems a really fragile design), then just do what amounts to calculating checksums. After each legitimate step of the site's approved function, traverse the DOM elements you care about and record their positions, values, or whatever you are concerned with. At intervals, at validation time, or on the next UI interaction, compare. This is the only comprehensive, cross-browser (including old browsers) way to detect DOM changes. Modern browsers offer DOM mutation events (see Tim Down's answer for more detail), but these have limited support and will apparently be replaced with yet another new thing anyway.
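A bare-bones version of that checksum idea (the container id is just a placeholder):

var lastSnapshot;

function snapshotDom() {
  // Serialize only the part of the page you actually care about
  var container = document.getElementById('app');
  return container ? container.innerHTML : '';
}

function domHasChanged() {
  return snapshotDom() !== lastSnapshot;
}

// After each legitimate step:
lastSnapshot = snapshotDom();

// At validation time or on the next interaction:
if (domHasChanged()) {
  console.log('DOM was modified outside of approved code paths');
}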
Ultimately, nothing you do can stop someone determined to defeat your scheme. If anything, the user can copy the browser's POST request using Firebug, tweak it, and write a tiny program to submit his own malicious POST request. It is more important to protect your server from malicious input than it is to make your web page supposedly bullet-proof (because it won't be).
DOM mutation events work in current versions of all major browsers and do what you want. The following will cover common DOM modifications within the whole document:
function handleDomChange(evt) {
console.log("DOM changed via event of type " + evt.type);
}
document.addEventListener("DOMNodeInserted", handleDomChange, false);
document.addEventListener("DOMNodeRemoved", handleDomChange, false);
document.addEventListener("DOMCharacterDataModified", handleDomChange, false);
DOM mutation events will eventually be replaced by mutation observers, which are implemented in recent Mozilla and WebKit browsers.
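For what it's worth, the mutation observer equivalent looks something like this in browsers that support it (WebKit used a prefixed constructor at the time):

var Observer = window.MutationObserver || window.WebKitMutationObserver;

var observer = new Observer(function (mutations) {
  console.log("DOM changed: " + mutations.length + " mutation record(s)");
});

observer.observe(document.documentElement, {
  childList: true,
  subtree: true,
  attributes: true,
  characterData: true
});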
Relying on a script to prevent or counteract malicious edits to the DOM is not the right approach. What exactly are you doing that depends on the DOM not being touched? Seems like that's a huge red flag in and of itself.
This is a pretty interesting question, and I think DOM mutation events may be the best solution. One thing I was initially thinking I might do is run a timed function that checks the DOM for specific modules, based on data- attributes or IDs. If I were building my page entirely client-side through JS, I would have a build configuration object for each module (DOM element), like:
<div id='weather-widget' data-module-type='widget'>
  <h1 data-module-name='weather'>Weather</h1>
  <!-- etc etc -->
</div>
Anyhow, my config object would contain all of these things like module type, module name, etc, etc:
//Widget configuration object
var weatherWidgetConfig = {
  type: 'widget',
  name: 'weather'
};
and I would inspect the DOM element and all of its children to make sure the data- attributes still match the configuration object, that they exist, and that they have not been changed. If they have, I would call module.destroy() and then module.build() again with the correct configuration.
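Something along these lines (the module object with destroy() and build() is hypothetical):

function matchesConfig(config) {
  var el = document.getElementById(config.name + '-widget');
  return el &&
         el.getAttribute('data-module-type') === config.type &&
         el.querySelector('[data-module-name="' + config.name + '"]') !== null;
}

// Timed check: rebuild the module if its markup no longer matches the config
setInterval(function () {
  if (!matchesConfig(weatherWidgetConfig)) {
    module.destroy();
    module.build(weatherWidgetConfig);
  }
}, 5000);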
I've received a lot of answers in which the respondent delivers advice about how to build a web app. While that may be useful to some readers, it isn't answering the question. Some, however, have attempted to answer. The closest I've seen to a complete answer was given by @Keith. The only problem is that it fails the 'easy' test.
It appears that the correct answer, as some have said, is NO - it isn't possible to easily detect DOM manipulation by a user.
I recently discovered "Selector Listener", a technique that relies on CSS to detect DOM changes. It doesn't work in IE 9 and below. Applying it to the whole DOM doesn't sound like a good idea; the intent is rather to work with specific selectors.
More details can be found in this blog post.
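The gist of the technique, if I remember the post correctly, is to attach a near-invisible CSS animation to the selector you care about and listen for the resulting animation events (older WebKit needs prefixed keyframes and event names; the .watched selector is just an example):

// Inject a keyframe animation that is applied to the watched selector
var style = document.createElement('style');
style.textContent =
  '@keyframes selector-inserted { from { outline-color: #fff; } to { outline-color: #000; } }' +
  '.watched { animation: selector-inserted 0.001s; }';
document.head.appendChild(style);

// When an element matching .watched shows up in the DOM, its animation
// runs and we get notified here
document.addEventListener('animationstart', function (e) {
  if (e.animationName === 'selector-inserted') {
    console.log('A .watched element was inserted:', e.target);
  }
}, false);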
I am trying to figure out any and all ways to prevent CSS modification and DOM modification of specific elements. I understand this might not be completely possible, or that a talented developer could get around it; however, I am not so concerned about people potentially getting around it, I just want to stop newbies, in particular those using jQuery. An example would be deleting certain properties on prototype objects, etc.
But why do you need/want this? If you want to "protect" your code, you can use a JavaScript minifier such as Google Closure Compiler or YUI Compressor. They will rewrite your script so that it is difficult for a human to read. Nowadays, with tools like Firebug and Greasemonkey, it's almost impossible to do what you want.
Don't use CSS or JavaScript :p Depend completely on server side checks etc.
You cannot stop anyone from messing with your javascript or your objects in the page. The way the browser is designed, your code and objects in your page are simply not protected. Everything from bookmarklets to javascript entered at a console to browser plug-ins can mess with your page and code and variables. That is the architecture of a browser.
What you can do is make things a little more difficult such that a little more work is required for some things. Here are a couple of things you could do:
Obfuscating/compressing/minimizing your code will do things like remove comments, remove whitespace, remove some linebreaks, shorten variable names, etc... That does not prevent anyone from modifying things, but does make it more work to understand and figure out.
Putting variables inside closures and not using globals. This makes it harder to directly modify variables from outside of your scripts (see the sketch after this list).
Keep all important data and secrets on your server. Use ajax calls to ask the server to carry out operations using that data or secrets such that the important information is never available in the browser client.
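To illustrate the closure point:

// State kept inside an immediately-invoked function is not reachable from
// the console as window.secretCounter, unlike a top-level var would be.
(function () {
  var secretCounter = 0;

  document.addEventListener('click', function () {
    secretCounter += 1;
  }, false);
}());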
You cannot keep anyone from modifying the DOM. There simply are no protections against that. Your code can check the DOM and refuse to operate if the DOM has been messed with in non-standard ways. But, of course, the code would then be modified to remove that check too.
If you are looking for a jQuery-specific solution, a crude approach would involve altering the jQuery ($) function and replacing it with a custom one that delegates to the original function only if the provided selector does not match the element you want to secure.
(function(){
  var jQueryOrig = jQuery;
  window.jQuery = window.$ = function() {
    if (jQueryOrig("#secure").is(arguments[0])) {
      throw new Error("Security breach");
    } else {
      return jQueryOrig.apply(this, arguments);
    }
  };
}());
Of course people using direct DOM manipulation would not be affected.
Also, if you are actually including arbitrary third-party code in your production code, you should take a look at Caja (http://code.google.com/p/google-caja/), which limits users to a subset of JavaScript capabilities. There is a good explanation regarding Caja here: http://due-diligence.typepad.com/blog/2008/04/web-20-investor.html
This is possible, but requires that the JS file always be loaded from your server. Using observers you can lock CSS properties, and using DOM add/remove listeners you can lock an element to its parent. This will be enough to discourage most modification.
You can actually go a step further and modify core JavaScript functions, making it nearly impossible to modify the DOM without loading the JS file locally or through a proxy. Further security can be added with additional domain checks to make sure the JS file is loaded from where it is supposed to be.
You can build everything in Flash. In Chrome, there's even a bug that prevents users from opening a console if the Flash element has focus (not sure how exactly this works, but you can see an example at http://www.twist-cube.com or http://www.gotmilk.com). Even if users do manage to get a console open (which isn't that hard...), about all they can do is change the shape of the element.