I don't have a particular use case for this, so I'm open to anyone proving that there is none. However, I've been wondering whether the event model of JavaScript in the browser allows code to insert an event into the event queue, so that it will be processed after all the other events already present have been handled.
I found EventTarget.dispatchEvent but that does not fit what I think I might want for two reasons:
It seems to be synchronous, so it will be processed before all the other events already in the queue, which "feels wrong" (demonstrated in the sketch after this list).
I have to have a specific EventTarget instance to send it to, which also feels like a limitation. If I want the loosest coupling I should not have to know the target (indeed, I believe I might want more than one target to receive this event, and these targets need not necessarily be in a containment hierarchy in the way that DOM Elements/Nodes are.)
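For example, the synchronous behaviour in the first point is easy to observe; here is a minimal sketch (the event name is just illustrative, and it assumes a browser with the CustomEvent constructor):

var target = document.createElement("div");
target.addEventListener("my-event", function () {
    console.log("listener ran");
});

console.log("before dispatch");
target.dispatchEvent(new CustomEvent("my-event")); // listeners run synchronously, right here
console.log("after dispatch");
// Logged order: "before dispatch", "listener ran", "after dispatch"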
Is there some part of the API that I've failed to find?
Is there perhaps provably no value in this?
Or is there perhaps a reliable platform-independent way to achieve everything this concept might achieve?
Conceptually just having one queue for jobs seems to be sufficient for most use-cases.
What are the reasons for having multiple queues and distinguishing those into "microtasks" and (macro)"tasks"?
Having multiple (macro) task queues allows for task prioritization.
For instance, a User Agent (UA) can choose to prioritize a user event (e.g. a click event) over a network event, even if the latter was actually registered by the system first, because the former is probably more visible to the user and requires lower latency.
In HTML this is allowed by the first step of the Event Loop's Processing model, which states that the UA must choose the next task to execute from one of its task queues.
(note: HTML specs do not require that there are multiple queues, but they do define multiple task sources to guarantee the execution order of similar tasks).
Now, why have yet another beast called the microtask queue?
We can find a few early discussions about "how" it should be implemented, but I didn't dig far enough to find out who first proposed this idea, or for what use case.
However, from the discussions I found, we can see that a few proposals needing such a mechanism were on the way:
Mutation Observers
Now-deprecated ES Object.observe()
At-that-time-incoming ES Promises,
Also-incoming-at-that-time-and-I-don't-know-why-it's-cited ES WeakRefs
HTML Custom Elements callback
Since the first two were also the first to be implemented in browsers, we can probably say that their use case was the main reason for implementing this new kind of queue.
Both actually did similar things: they listened for changes on an object (or the DOM tree), and coalesced all the changes that occurred during a job into a single event (not to be read as an Event).
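As a rough sketch of that coalescing behaviour with a MutationObserver (the element and attribute names are just illustrative):

var target = document.createElement("div");

new MutationObserver(function (records) {
    // Called once, after the current job, with every change below batched together.
    console.log(records.length + " mutations delivered in one callback");
}).observe(target, { attributes: true });

// Three synchronous changes made in the same job...
target.setAttribute("a", "1");
target.setAttribute("b", "2");
target.setAttribute("c", "3");
// ...produce a single callback with three records, not three separate callbacks.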
One could argue that this event could have been queued in the next (macro) task, with the highest priority, except that the Event Loop was already a bit complex, and not every job necessarily came from a task.
For instance, the rendering is actually a part of every Event Loop iteration, except that most of the time it exits early because it isn't time to render.
So if you do your DOM modifications during a rendering frame, you could have the modifications rendered, and only after the whole rendering took place would you get the callback.
Since the main use case for observers is to act on the observed changes before they trigger performance-heavy side effects, I guess you can see how it was necessary to have a means of inserting our callback right after the job that made the modifications.
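That ordering is easy to observe; a minimal sketch, using a Promise callback as the microtask and setTimeout as the (macro) task:

setTimeout(function () {
    console.log("task (macrotask) queue");   // runs last, in a later Event Loop iteration
}, 0);

Promise.resolve().then(function () {
    console.log("microtask queue");          // runs right after the current job, before rendering
});

console.log("current job");                  // runs first
// Logged order: "current job", "microtask queue", "task (macrotask) queue"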
PS: At the time I didn't know much about the Event Loop, I was far removed from spec matters, and I may have introduced some anachronisms into this answer.
Is it possible to easily detect DOM manipulation by the user?
When a user uses the console in any modern browser, he/she can manipulate the DOM in ways the developer did not intend.
I have a web app that is very much tied to the DOM being in certain states and should the user do anything to the DOM via a console, I'd like to be notified.
The answer:
Doesn't need to be browser agnostic
Doesn't need to be perfect. I fully understand that most, if not all, methods could be circumvented, but I'd like a good general solution.
Can't be too convoluted. I'm not interested in registering an event handler for all DOM events that checks some flag set when my code performs a DOM manipulation.
Edit:
There appears to be some confusion in the answers I've received thus far. As pointed out in #2 above, I understand that most, if not all, methods can be circumvented.
In addition, this is an internal tool and thus is protected by a VPN. Furthermore, there is server-side checking. However, there are reasons, which I cannot elaborate upon, why I want to know when a user (of whom there are few) manipulates the DOM.
To be clear, this isn't for security reasons. I'm not trying to stop malicious users here. Think of this more as out of curiosity.
Don't do that. Code your web site to not trust user input and then don't care what the user does. If invalid input is submitted then reject it. Everyone is happy.
It's easy to think that you own the user's browser. You don't. It's serving you but only at the whim of the user.
If you really must know when the DOM is modified--and this seems a really fragile design--then just do what amounts to calculating checksums. After each legitimate step of the site's approved function, traverse the DOM elements you care about and record their positions, values, or whatever you are concerned with. At intervals, at validation time, or on the next UI interaction, compare. This is the only comprehensive, cross-browser (including old browsers) way to detect DOM changes. Modern browsers offer DOM mutation events (see Tim Down's answer for more detail), but these have limited support and will apparently be replaced with yet another new thing anyway.
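A minimal sketch of that checksum idea, assuming a hypothetical container selector ("#app") and using outerHTML as the recorded fingerprint:

// Record a snapshot of the parts of the DOM you care about.
function snapshotDom(selector) {
    return Array.prototype.map.call(
        document.querySelectorAll(selector),
        function (el) { return el.outerHTML; }
    ).join("\n");
}

var lastSnapshot = snapshotDom("#app *");   // take this after each legitimate step

// Later, at validation time or on the next UI interaction:
function domWasModified() {
    return snapshotDom("#app *") !== lastSnapshot;
}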
Ultimately, nothing you do can stop someone determined to defeat your scheme. If anything, the user can copy the browser's POST request using Firebug, tweak it, and write a tiny program to submit his own malicious POST request. It is more important to protect your server from malicious input than it is to make your web page supposedly bullet-proof (because it won't be).
DOM mutation events work in current versions of all major browsers and do what you want. The following will cover common DOM modifications within the whole document:
function handleDomChange(evt) {
    console.log("DOM changed via event of type " + evt.type);
}

document.addEventListener("DOMNodeInserted", handleDomChange, false);
document.addEventListener("DOMNodeRemoved", handleDomChange, false);
document.addEventListener("DOMCharacterDataModified", handleDomChange, false);
DOM mutation events will eventually be replaced by mutation observers, which are implemented in recent Mozilla and WebKit browsers.
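For reference, a rough sketch of the equivalent using a MutationObserver in browsers that support it (the option set here is an assumption intended to mirror the three events above):

var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
        console.log("DOM changed via mutation of type " + mutation.type);
    });
});

observer.observe(document, {
    childList: true,       // node insertions and removals
    characterData: true,   // text changes
    subtree: true          // watch the whole document
});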
Relying on a script to prevent or counteract malicious edits to the DOM is not the right approach. What exactly are you doing that depends on the DOM not being touched? Seems like that's a huge red flag in and of itself.
This is a pretty interesting question, and I think DOM mutation events may be the best solution. One thing I was initially thinking I might do is run a timed function that checks the DOM for specific modules, based on data- attributes or IDs. If I were building my page entirely client-side through JS, I would have a build configuration object for each module (DOM element), like:
<div id='weather-widget' data-module-type='widget'>
<h1 data-module-name='weather'>Weather</h1>
<!-- etc etc -->
</div>
Anyhow, my config object would contain all of these things (module type, module name, etc.):
// Widget configuration object
var weatherWidgetConfig = {
    type: 'widget',
    name: 'weather'
};
and I would inspect the DOM element and all of its children to make sure the data- attributes still matched the configuration object, that they existed, and that they have not been changed. If they have, I would call a module.destroy() and module.build() again with the correct configuration.
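A rough sketch of what that check might look like; the element lookup, the data- attributes, and the destroy()/build() methods are all assumptions based on the description above:

function verifyModule(module, config) {
    // e.g. "weather-widget" for the markup shown above
    var root = document.getElementById(config.name + '-' + config.type);

    var intact = root &&
        root.getAttribute('data-module-type') === config.type &&
        root.querySelector('[data-module-name="' + config.name + '"]') !== null;

    if (!intact) {
        module.destroy();        // tear down whatever is left
        module.build(config);    // rebuild from the configuration object
    }
}

// Run it on a timer against a hypothetical weatherWidget module:
setInterval(function () {
    verifyModule(weatherWidget, weatherWidgetConfig);
}, 5000);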
I've received a lot of answers in which the respondent delivers advice about how to build a web app. While that may be useful to some readers, that isn't answering the question. Some, however, have attempted to answer. The closest I've seen to a complete answer was given by #Keith. The only problem is that it fails the 'easy' test.
It appears that the correct answer, as some have said, is NO - it isn't possible to easily detect DOM manipulation by a user.
I recently discovered "Selector Listener", a technique that relies on CSS to detect DOM changes. It doesn't work in IE 9 and below. Applying it to the whole DOM doesn't sound like a good idea; the intent is rather to work with specific selectors.
More details can be found in this blog post.
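A rough sketch of how the technique works: attach a near-instant CSS animation to the selector you care about and listen for animationstart, which fires whenever an element starts matching. The selector and animation name here are just illustrative, and older WebKit browsers may need the prefixed webkitAnimationStart event instead:

var style = document.createElement("style");
style.textContent =
    "@keyframes selector-listener { from { outline-color: #fff; } to { outline-color: #000; } }" +
    ".widget { animation-duration: 0.001s; animation-name: selector-listener; }";
document.head.appendChild(style);

document.addEventListener("animationstart", function (event) {
    if (event.animationName === "selector-listener") {
        console.log("An element matching .widget appeared:", event.target);
    }
}, false);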
Am I right in thinking that spam bots can't simulate the 'keypress' event, and thus I can't get spammed if I require a keypress for each field in my contact form before being able to submit it?
Is this a good alternative to captcha, etc. if I don't care whether or not my viewers have JavaScript enabled?
Wizards, set me right.
I'm unsure if they can generate the keypress event "natively" (I think you might be right that they can't, but it wouldn't surprise me to learn that there's some edge case whereby this is possible).
However, I don't think they would have a problem merely executing element.onkeypress() directly. If the bot can determine that it needs to press a key to advance, then what that actually boils down to is that a particular event handler method needs to be invoked - and the bot can do the latter. It can create its own fake Event object too containing the keycode, and then pass this in and/or set it on window.event.
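For example, a script can fake the whole thing in a couple of lines; a rough sketch (the field selector is hypothetical, and the exact event constructors available vary by browser and era):

var field = document.querySelector("#comment"); // hypothetical form field

// Option 1: call the handler directly with a hand-made event-like object.
if (typeof field.onkeypress === "function") {
    field.onkeypress({ type: "keypress", keyCode: 65, which: 65 });
}

// Option 2: dispatch a synthetic KeyboardEvent (newer browsers).
field.dispatchEvent(new KeyboardEvent("keypress", { key: "a", bubbles: true }));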
In theory you might be able to detect this by being very strict about introspecting the event object in your handler. I don't think that the bot would easily be able to create a native-equivalent event object, so perhaps by inspecting the prototype chain you could distinguish between the two. However, this would almost certainly be too fragile for general use, and is not going to work reliably across different browsers/environments/plugins/etc.
Thus I don't think this is a fruitful path, because you can't tell in an event handler whether the event is "real" or not. Browser-native code is different, since bots cannot actually trigger a click event, but within Javascript I don't see a simple way to prevent your method from simply being called.
The current implementations of spambots might not be able to do that. But it's not that hard to simulate keypresses. If you're only a small website, the bot author might not do the work to circumvent your system, but if it is large enough for the author to care, your system will be broken very quickly.
You know what I liked best about obtrusive javascript? You always knew what it was going to do when you triggered an event.
<a onclick="thisHappens()" />
Now that everybody's drinking the unobtrusive kool-aid it's not so obvious. Calls to bind events can happen on any line of any number of javascript files that get included on your page. This might not be a problem if you're the only developer, or if your team has some kind of convention for binding event handlers, like always using a certain format of CSS class. In the real world, though, it makes it hard to understand your code.
DOM browsers like Firebug seem like they could help, but it's still time consuming to browse all of an element's event handler properties just to find one that executes the code you're looking for. Even then it usually just tells you it's an anonymous function() with no line number.
The technique I've found for discovering what JS code gets executed when events are triggered is to use Safari's Profiling tool which can tell you what JS gets executed in a certain period of time, but that can sometimes be a lot of JS to hunt through.
There's got to be a faster way to find out what's happening when I click an element. Can someone please enlighten me?
Check out Visual Event... it's a bookmarklet you can use to expose events on a page.
If you're using jQuery you can take advantage of its advanced event system and inspect the function bodies of event handlers attached:
$('body').click(function () { alert('test'); });
var foo = $('body').data('events');
// you can query $.data( object, 'events' ) and get an object back, then see what events are attached to it.
$.each(foo.click, function (i, o) {
    alert(i); // guid of the event
    alert(o); // the function definition of the event handler
});
Or you could implement your own event system.
To answer your question, try using the Firebug command line. This will let you use JavaScript to quickly grab an element by an ID, and then iterate through its listeners. Often, if used with console.log, you'll even be able to get the function definitions.
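For instance, from the console you could do something along these lines; a sketch that only finds DOM0-style handlers assigned as element properties (the element id is hypothetical):

var el = document.getElementById("save-button"); // hypothetical id

for (var prop in el) {
    if (prop.indexOf("on") === 0 && typeof el[prop] === "function") {
        console.log(prop, el[prop]); // logs the handler source where available
    }
}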
Now, to defend the unobtrusive:
The benefit I find in unobtrusive JavaScript is that it is a lot easier for me to see the DOM for what it is. That said, I feel that it is generally bad practice to create anonymous functions (with only a few exceptions). (The biggest fault I find with jQuery is actually in their documentation. Anonymous functions can exist in a nether-world where failure does not lead to useful output, yet jQuery has made them standard.) I generally have the policy of only using anonymous functions if I need to use something like bindAsListener from Prototype.
Further, if the JS files are properly divided, they will only be addressing one sub-set of the DOM at a time. I have an "ordered checkbox" library, it is in only one JS file which then gets re-used in other projects. I'll also generally have all of the methods of a given sub-library as member methods of either a JSON object or a class and I have one object/class per js file (just as if I were doing everything in a more formalized language). If I have a question about my "form validation code", I will look at the formValidation object in formvalidation.js.
At the same time, I'll agree that sometimes things can become obtuse this way, especially when dealing with others. But disorganized code is disorganized code, and it is impossible to avoid unless you are working by yourself and are a good programmer.
In the end, though, I would rather deal with using /* */ to comment out most of two or three js files to find misbehaving code than go through the HTML and remove the onclick attributes.
Calling it "kool-aid" seems unfair. DOM Level 2 events solve specific problems with inline event handling, like the conflicts that always result. I don't look back to writing code to use window.onload that has to check whether someone else has assigned it before, and sometimes having it overriden by accident or out of sloppiness. It also ensures a better separation of the structure (HTML) and behaviour (JS) layers. All in all, it's a good thing.
Regarding debugging, I don't think there's any way to solve the event handlers being anonymous functions, other than nagging the authors to use named functions where possible. If you can, tell them that it produces more meaningful call stacks and makes the code more maintainable.
One thing: you shouldn't be able to see what will happen in JavaScript by looking at the HTML code. What nuisance is that? HTML is for structure.
If you want to check what events are bound to certain elements, there's a bookmarklet called Visual Event for now, and Firebug 1.6 (IIRC) will have some sort of event inspector.
I have seen this link: Implementing Mutual Exclusion in JavaScript.
On the other hand, I have read that there are no threads in javascript, but what exactly does that mean?
When events occur, where in the code can they interrupt?
And if there are no threads in JS, do I need to use mutexes in JS or not?
Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest's onreadystatechange on globally accessible variables.
JavaScript is defined as a reentrant language, which means there is no threading exposed to the user, though there may be threads in the implementation. Functions scheduled with setTimeout() and asynchronous callbacks need to wait for the script engine to sleep before they're able to run.
That means that everything that happens in an event must be finished before the next event will be processed.
That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
For example, if you have a data structure where clicking one button sends an XmlHttpRequest whose callback changes that data structure in a destructive way, and another button changes the same data structure directly, then between when the event was fired and when the callback was executed the user could have clicked the second button and updated the data structure before the callback, which could then lose that update.
While you could create a race condition like that, it's very easy to prevent in your code, since each function will run atomically. In fact, it would take a lot of work and some odd coding patterns to create the race condition.
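A sketch of the kind of race being described; the endpoint, element ids, and data structure are all illustrative:

var sharedList = [];

// Button A: fetch the canonical list and replace the local copy.
document.getElementById("refresh").onclick = function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/list");   // hypothetical endpoint
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            sharedList = JSON.parse(xhr.responseText);   // clobbers anything added meanwhile
        }
    };
    xhr.send();
};

// Button B: modify the same structure directly.
document.getElementById("add").onclick = function () {
    sharedList.push("new item");   // lost if the callback above runs afterwards
};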
The answers to this question are a bit outdated, though they were correct at the time they were given. They are still correct if you are looking at a client-side JavaScript application that does NOT use web workers.
Articles on web-workers:
multithreading in javascript using webworkers
Mozilla on webworkers
This clearly shows that JavaScript, via web workers, has multithreading capabilities. As to the question of whether mutexes are needed in JavaScript, I am unsure. But this Stack Overflow post seems relevant:
Mutual Exclusion for N Asynchronous Threads
Yes, mutexes can be required in Javascript when accessing resources that are shared between tabs/windows, like localStorage.
For example, if a user has two tabs open, simple code like the following is unsafe:
function appendToList(item) {
    var list = localStorage["myKey"];
    if (list) {
        list += "," + item;
    } else {
        list = item;
    }
    localStorage["myKey"] = list;
}
Between the time that the localStorage item is 'got' and 'set', another tab could have modified the value. It's generally unlikely, but possible - you'd need to judge for yourself the likelihood and risk associated with any contention in your particular circumstances.
See the following articles for more detail:
Wait, Don't Touch That: Mutual Exclusion Locks & JavaScript - Medium Engineering
JavaScript concurrency and locking the HTML5 localStorage - Benjamin Dumke-von der Eh, Stackoverflow
As #william points out,
you may need a mutex if your code does something where it expects a
value not to change between when the asynchronous event was fired and
when the callback was called.
This can be generalised further - if your code does something where it expects exclusive control of a resource until an asynchronous request resolves, you may need a mutex.
A simple example is where you have a button that fires an ajax call to create a record in the back end. You might need a bit of code to protect you from trigger-happy users clicking away and thereby creating multiple records. There are a number of approaches to this problem (e.g. disable the button, re-enable on ajax success). You could also use a simple lock:
var save_lock = false;
$('#save_button').click(function () {
    if (!save_lock) {
        // lock
        save_lock = true;
        $.ajax({
            success: function () {
                // unlock
                save_lock = false;
            }
        });
    }
});
I'm not sure if that's the best approach, and I would be interested to see how others handle mutual exclusion in javascript, but as far as I'm aware that's a simple mutex and it is handy.
JavaScript is single threaded... though Chrome may be a new beast (I think it is also single threaded, but each tab has its own JavaScript thread... I haven't looked into it in detail, so don't quote me there).
However, one thing you DO need to worry about is how your JavaScript will handle multiple ajax requests coming back in a different order than you sent them. So, all you really need to worry about is making sure your ajax calls are handled in a way that they won't step on each other's feet if the results come back out of order.
This goes for timeouts too...
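One common way to guard against that is to tag each request with a sequence number and ignore stale responses; a rough sketch using jQuery (the URL and the rendering function are hypothetical):

var latestRequest = 0;

function search(term) {
    var requestId = ++latestRequest;   // tag this request

    $.get("/search", { q: term }, function (results) {
        if (requestId !== latestRequest) {
            return;                    // a newer request was issued; drop this stale response
        }
        renderResults(results);        // hypothetical rendering function
    });
}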
When JavaScript grows multithreading, then maybe worry about mutexes and the like....
JavaScript, the language, can be as multithreaded as you want, but browser embeddings of the javascript engine only run one callback (onload, onfocus, <script>, etc...) at a time (per tab, presumably). William's suggestion of using a Mutex for changes between registering and receiving a callback should not be taken too literally because of this, as you wouldn't want to block in the intervening callback, since the callback that will unlock it will be blocked behind the current callback! (Wow, English sucks for talking about threading.) In this case, you probably want to do something along the lines of redispatching the current event if a flag is set, either literally or with the likes of setTimeout().
If you are using a different embedding of JS, one that executes multiple threads at once, it can get a bit more dicey, but due to the way JS can use callbacks so easily and locks objects on property access, explicit locking is not nearly as necessary. However, I would be surprised if an embedding designed for general code (e.g., game scripting) that used multithreading didn't also provide some explicit locking primitives as well.
Sorry for the wall of text!
Events are signaled, but JavaScript execution is still single-threaded.
My understanding is that when an event is signaled, the engine stops what it is executing at the moment to run the event handler. After the handler is finished, script execution is resumed. If the event handler changed some shared variables, the resumed code will see these changes appearing "out of the blue".
If you want to "protect" shared data, simple boolean flag should be sufficient.