Detect JS events (like Facebook message) fast - javascript

What kind of handler/hook do I need to set from a Greasemonkey script to capture small page changes, such as new elements being added (I'm thinking of FB messages)?
Can I change the style and innerHTML before the element is drawn?

You can override many native functions. If it were element creation, you'd override document.createElement:
// Remember the old function reference
var old_createElement = document.createElement;
// Override the native function
document.createElement = function(tagName) {
    // Our own script for the function
    if (!confirm("Element " + tagName + " is being created. Allow?"))
        throw new Error("User denied creation of an element.");
    else
        // And eventually call the original function (bound to document,
        // otherwise the browser throws an "Illegal invocation" error)
        return old_createElement.call(document, tagName);
};
Regarding DOM elements, there seems to be no way of capturing element creation done by the DOM parser (i.e., creation from an HTML string).
Similarly, you can override AJAX methods; in fact, I have done this on Facebook to see how messages are sent, and I noticed they're sent along with tons of other data.
Here's a part of my greasemonkey script for this purpose:
function addXMLRequestCallback(event, callback) {
    var oldSend;
    if (XMLHttpRequest.callbacks != null) {
        // we've already overridden send(), so just add the callback
        if (XMLHttpRequest.callbacks[event] != null)
            XMLHttpRequest.callbacks[event].push(callback);
    } else {
        // create a callback queue
        XMLHttpRequest.callbacks = { send: [], readystatechange: [] };
        if (XMLHttpRequest.callbacks[event] != null)
            XMLHttpRequest.callbacks[event].push(callback);
        // store the native send()
        oldSend = XMLHttpRequest.prototype.send;
        // override the native send()
        XMLHttpRequest.prototype.send = function() {
            // process the callback queue
            // the xhr instance is passed into each callback but seems pretty useless:
            // you can't tell what its destination is or call abort() without an error,
            // so it's only really good for logging that a request has happened
            // I could be wrong, I hope so...
            // EDIT: I suppose you could override the onreadystatechange handler though
            var i;
            for (i = 0; i < XMLHttpRequest.callbacks.send.length; i++) {
                XMLHttpRequest.callbacks.send[i].apply(this, arguments);
            }
            /* if (typeof this.onreadystatechange == "function")
                   callbacks.readystatechange.push(this.onreadystatechange); */
            var old_onreadystatechange = this.onreadystatechange;
            this.onreadystatechange = function(event) {
                var j;
                for (j = 0; j < XMLHttpRequest.callbacks.readystatechange.length; j++) {
                    try {
                        XMLHttpRequest.callbacks.readystatechange[j].apply(this, arguments);
                    } catch (error) {
                        console.error(error);
                    }
                }
                if (typeof old_onreadystatechange == "function") {
                    old_onreadystatechange.apply(this, arguments);
                }
            };
            // call the native send()
            oldSend.apply(this, arguments);
        };
    }
}
// Usage (note the queue key is "readystatechange", not "onreadystatechange")
addXMLRequestCallback("send", function(data) {
    console.log(data);
});
addXMLRequestCallback("readystatechange", ...);
I also often use MutationObserver in userscripts. It allows you to watch either attributes or child nodes, and it calls a callback for every added/removed node.
I'm not sure how good the performance is, or how easy it will be to hook up the correct node.
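As a minimal sketch of that approach (the "message" class name is made up for illustration; Facebook's real markup differs): a pure helper decides which added nodes look like messages, and a MutationObserver feeds it every DOM insertion.

```javascript
// Pure helper: pick out added nodes that look like message containers.
// The "message" class is an assumption, not Facebook's actual markup.
function pickMessageNodes(addedNodes) {
    var picked = [];
    for (var i = 0; i < addedNodes.length; i++) {
        var node = addedNodes[i];
        if (node.className && /\bmessage\b/.test(node.className)) {
            picked.push(node);
        }
    }
    return picked;
}

// Browser-only wiring; guarded so the helper is reusable elsewhere.
if (typeof MutationObserver !== "undefined" && typeof document !== "undefined") {
    var observer = new MutationObserver(function(mutations) {
        mutations.forEach(function(mutation) {
            pickMessageNodes(mutation.addedNodes).forEach(function(node) {
                console.log("new message node:", node);
            });
        });
    });
    observer.observe(document.body, { childList: true, subtree: true });
}
```

Keeping the filtering logic in a separate function makes it easy to test outside the browser and to tune the selector without touching the observer wiring.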
If you eventually succeeded in capturing the creation of Facebook chat message containers and/or wall posts, I'd really love to see how you did it.
For a long time I've been thinking of adding Markdown to Facebook. Many friends share source code there, but it's barely readable.

Related

Is it necessary to delete a callback function after it has been called/executed in JavaScript?

I have a web-app that polls for data periodically to 3rd party services (say Facebook, Twitter, and so on).
This poll/request is made via JSONP (to avoid cross-domain issue).
For example, a simple request would be something like this:
function jsonp_callback() {
// Do something
}
var url = 'http://some.service.com/getresult?callback=jsonp_callback';
$http.jsonp(url);
However since there can be another type of request that can be made at any given time (for example: to send or post an update), I created a wrapper to handle the callbacks.
The implementation is something like this:
// Callback handler
var myCb = (function() {
    var F = function() {};
    F.prototype.fn = {};
    F.prototype.create = function(fn, scope) {
        var self = this;
        var id = new Date().getTime();
        self.fn[id] = function() {
            if (typeof fn === 'function') {
                fn.call(scope);
            }
        };
        return 'myCb.fn.' + id;
    };
    return new F();
})();
// JSONP request
var cb = myCb.create(function() {
// Do something
});
var url = 'http://some.service.com/getresult?callback=' + cb;
$http.jsonp(url);
If you notice, after some time myCb.fn will be bloated with callbacks that are old or have already executed.
My question is: do I need to create a mechanism to garbage-collect those that have been executed and are no longer needed?
You don't necessarily need to remove old callbacks if you will only make a few calls per page, but if your page is long-running and makes calls repeatedly, it could be a good idea to delete them.
The "mechanism" could be as simple as
delete self.fn[id];
after calling the function.
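A sketch of that cleanup folded into the create wrapper from the question (the id-collision guard for two callbacks created in the same millisecond is my addition):

```javascript
var myCb = (function() {
    var F = function() {};
    F.prototype.fn = {};
    F.prototype.create = function(fn, scope) {
        var self = this;
        var id = new Date().getTime();
        // avoid collisions when two callbacks are created in the same ms
        while (self.fn[id]) { id++; }
        self.fn[id] = function() {
            if (typeof fn === 'function') {
                fn.call(scope);
            }
            // a JSONP callback fires at most once, so free it afterwards
            delete self.fn[id];
        };
        return 'myCb.fn.' + id;
    };
    return new F();
})();
```

After the response arrives and the callback runs, the entry is gone and the closure it held can be garbage-collected.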

Singleton Websockets object with different callback handlers

I have a JavaScript WebSockets implementation where I would like to use a singleton model: one WebSocket connection for multiple calls to the server, but with different callback event handlers. I have the implementation working just fine, but have noticed some strange behavior with messages being directed to the wrong callback handler. Here is some code:
Connection.js file
var connection = function() {
    var _socket = null;
    return {
        socket: function() {
            if (_socket == null) {
                _socket = new WebSocket("ws://localhost:8081/index.ashx");
                _socket.onclose = function(evt) { alert('Closed'); };
                _socket.extraParameter = null;
            }
            return _socket;
        },
        send: function(data, callback) {
            var localSocket = connection.socket();
            localSocket.extraParameter = new Date().toString();
            localSocket.onmessage = callback;
            localSocket.originalDataSent = data;
            localSocket.send(data);
        }
    };
}();
App.js file
var App = function() {
    return {
        cpuUtilization: function(evt) {
            var localSocket = this;
            var dateTimeOfRequest = localSocket.extraParameter;
            var originalDataSent = localSocket.originalDataSent;
            var jsonData = $.parseJSON(evt.data);
            if ($.parseJSON(originalDataSent).type == "cpu") {
                $("#dateTimeContainer").html();
                $("#cpuContainer").html(jsonData.value);
            }
        }
    };
}();
Third Party Signal.js file
var Signal = function() {
    return {
        handlerProcess: function(evt) {
            // Does some third party stuff...
        }
    };
}();
usage
connection.send("{type:'process'}", Signal.handlerProcess);
connection.send("{type:'cpu'}", App.cpuUtilization);
connection.send("{type:'memory'}", Signal.handlerMemory);
connection.send("{type:'harddrive'}", Signal.handlerHardDrive);
Now, here is where I think I see the problem: when multiple requests are made through the same WebSocket and the messages come back. Since this is asynchronous, I have no way of tying a request to its event callback. My solution stores values on the socket for reference, but depending on how long the WebSocket request takes, the wrong callback handler gets called and the process fails. I think it is failing because I am accessing properties of the WebSocket instance that may be changing between calls.
Is there a way to pass a reference or additional parameters along with the evt parameter? Maybe by wrapping this somehow?
I think it is failing because I am accessing properties from the websocket instance that may be changing between calls.
Yes.
Since this is asynchronous, I have no way of tieing the request to the event callback.
No. You can create a closure for the callback function instead of calling using callback directly:
...
send: function(data, callback) {
    var localSocket = connection.socket();
    var extraParameter = new Date().toString();
    localSocket.onmessage = function(evt) {
        callback(evt.data, /* original- */ data, extraParameter);
    };
    localSocket.send(data);
}
But you still have a changing onmessage callback handler. That means an event may be delivered to a handler that does not deserve it. In an asynchronous system, you will need to add a piece of information to the server response that indicates which request the data belongs to. One universal message handler could then resolve that and call the right callback.
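A sketch of such a universal handler (all names are illustrative, and it assumes the server echoes the id field back in its response): each request is tagged with an id, and a single onmessage handler routes responses to the right callback.

```javascript
// Wrap a socket so every request carries an id and one onmessage
// handler dispatches responses by that id. Assumes the server echoes
// the id back in its JSON response.
function makeDispatcher(socket) {
    var pending = {};   // id -> callback
    var nextId = 0;
    socket.onmessage = function(evt) {
        var msg = JSON.parse(evt.data);
        var cb = pending[msg.id];
        if (cb) {
            delete pending[msg.id];  // each request is answered once
            cb(msg);
        }
    };
    return {
        send: function(payload, callback) {
            var id = nextId++;
            pending[id] = callback;
            socket.send(JSON.stringify({ id: id, payload: payload }));
        }
    };
}
```

With this, connection.send("{type:'cpu'}", App.cpuUtilization) style calls can share one socket safely, because no state on the socket itself changes between requests.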

Usage of Observable pattern in JavaScript

function Observer() {
    this.fns = [];
}
Observer.prototype = {
    subscribe: function(fn) {
        this.fns.push(fn);
    },
    unsubscribe: function(fn) {
        this.fns = this.fns.filter(function(el) {
            return el !== fn;
        });
    },
    fire: function(o, thisObj) {
        var scope = thisObj || window;
        this.fns.forEach(function(el) {
            el.call(scope, o);
        });
    }
};
var fn = function() {};
var o = new Observer;
o.subscribe(fn);
o.fire('here is my data');
o.unsubscribe(fn);
I am not able to understand the whole concept behind this. I want to implement this pattern in my project. I have a view where a form gets submitted; it calls a web service and returns a response.
If I have to implement this in my project, where there is a simple request and response, how would I go about it? I understand that you notify your observers when there is a change. Let's say I make a request to my API and get the response back; now I want my view to be notified of it through the observer pattern.
Observer is a constructor that you call with var o = new Observer(); then o is an object holding a list of functions. You add functions to the list via subscribe and remove them via unsubscribe.
The whole point of it all is the fire method, which loops through the function list and calls each of the functions one by one. The "observer pattern" is essentially a publish/notify list of callbacks.
Are you familiar with the watch method in JavaScript? It's a method supported by Firefox that you can use on any object:
document.myform.myfield.watch('value', function(v) {
    alert(v);
    return v;
});
Then whenever the value of the object changes, the watch function is called. So basically, the concept behind the observer pattern is to simulate Firefox's watch method in a cross-browser fashion:
you toss references to a bunch of functions or objects into a subscribed list, then have Observer.fire call a callback on each of the watched objects or functions. That way, if the user performs some action such as clicking, the whole list of functions gets notified via a callback.
I hope this helps.
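To connect this to the request/response scenario from the question, here is a sketch (the service call is faked; in real code you would call fire from your AJAX success handler):

```javascript
// Observer as in the question (the default scope is null here instead of
// window so the sketch also runs outside a browser).
function Observer() {
    this.fns = [];
}
Observer.prototype = {
    subscribe: function(fn) { this.fns.push(fn); },
    unsubscribe: function(fn) {
        this.fns = this.fns.filter(function(el) { return el !== fn; });
    },
    fire: function(o, thisObj) {
        var scope = thisObj || null;
        this.fns.forEach(function(el) { el.call(scope, o); });
    }
};

// The view subscribes to be notified when a response arrives.
var responseObserver = new Observer();
responseObserver.subscribe(function(data) {
    console.log('view got: ' + data);  // update the DOM here in real code
});

// Wherever your web-service success handler runs, fire the observer:
function onServiceResponse(data) {
    responseObserver.fire(data);       // every subscriber is notified
}
```

The view never needs to know how the request was made; it only subscribes once and gets called for every response that is fired.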
If you only want to do a simple request, then in jQuery (with $.ajax(...) or $.get(...)) it would look like this:
var requestUrl = "text.html";
// Callback is defined here
var viewCallback = function(data) {
// this will be called when the request is done
console.log('view is notified');
console.log('data looks like this:');
console.log(data);
// you could chain method calls to other callbacks here if you'd like
};
// Request is done here
$.ajax({
url: requestUrl,
}).done(viewCallback);
Most of the time you only want to do one thing when making a request, for which the above is enough code. JavaScript libraries such as jQuery or MooTools abstract away the oddities of the XMLHttpRequest object.
However, if you want to do something much more advanced, I'd recommend you look at libraries built for this sort of thing, such as Radio.js.

jQuery - set Ajax handler priority

Actually, the question is as simple as the topic says: is there any way to give different ajax handlers a higher/lower priority (meaning, here, that they will fire earlier)?
What do I mean? Well, I have to deal with a fairly huge web app. Tons of Ajax requests are fired from different modules. Now my goal is to implement a simple session-timeout mechanism. Each request sends the current session id as a parameter; if the session id is no longer valid, my backend script returns the response with a custom response header set (its value is a URI).
So I'm basically going like this
window.jQuery && jQuery( document ).ajaxComplete(function( event, xhr, settings ) {
    var redirectto = xhr.getResponseHeader( 'new_ajax_location' );
    if( redirectto ) {
        location.href = redirectto;
    }
});
This works like a charm, of course, but my problem is that this global ajax event actually needs to fire first 100% of the time, which is not the case. Some of the original ajax-request handlers will throw an error because of missing or unexpected data, and in that case the global handler never gets executed.
Now, I'd rather not go through every single request handler and make it failsafe against invalid-session responses. I'd much prefer to do the job in one particular place. But how can I make sure my global handler gets executed first?
If I understand you correctly, you want to add your own callback to ajax requests everywhere. You could use the ajaxPrefilter hook.
$.ajaxPrefilter(function( options, originalOptions, jqXHR ) {
    var oldComplete = options.complete;
    options.complete = function (jqXHR2, settings) {
        var redirectto = jqXHR2.getResponseHeader( 'new_ajax_location' );
        if( redirectto ) {
            location.href = redirectto;
        }
        // don't clobber a complete handler the request already had
        if ( typeof oldComplete === 'function' ) {
            oldComplete.apply( this, arguments );
        }
    };
});
You can also change the options there to add the session-id to the request params.
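For example (a sketch; the session_id parameter name and a global sessionId variable are assumptions about your backend, not part of the question):

```javascript
// Pure helper: append the session id to whatever data the request
// already carries. "session_id" as the parameter name is an assumption.
function withSessionId(data, sessionId) {
    var extra = 'session_id=' + encodeURIComponent(sessionId);
    return data ? data + '&' + extra : extra;
}

// jQuery wiring (browser only): run the helper in the prefilter.
if (typeof jQuery !== 'undefined') {
    jQuery.ajaxPrefilter(function(options) {
        // only touch requests whose data is a query/body string (or absent)
        if (typeof options.data === 'string' || options.data === undefined) {
            options.data = withSessionId(options.data, window.sessionId);
        }
    });
}
```

Requests that pass data as an object are skipped here; jQuery serializes those itself, so you would instead set the property on the object directly.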
jQuery allows you to "patch" its methods, such as .post/.ajax.
You might be able to patch the appropriate method(s) so your special AJAX request is always triggered before the one your code wants to make.
This is some "pseudo-jQuery-code" which should help you get started. I doubt it works as-is, but it demonstrates the basic concept.
(function( $ ){
    var originalPost = $.post;
    var calledSpecialOne = false;
    $.post = function(url, data, success, dataType) {
        if (!calledSpecialOne) {
            originalPost( ... your special AJAX query ... ).then( function() {
                originalPost(url, data, success, dataType);
            });
            calledSpecialOne = true;
        } else {
            originalPost(url, data, success, dataType);
        }
    };
})( jQuery );
The above is based on some other, unrelated code I have tested, which makes $.each() work for undefined/null arrays:
(function($){
    var originalEach = $.each;
    $.each = function(collection, callback) {
        return collection ? originalEach(collection, callback) : [];
    };
})(jQuery);
Note that many exposed functions are called internally by jQuery too, so be VERY careful using this trick; you might break a different part of jQuery or cause other trouble.
In the case of the AJAX functions, you should probably patch the innermost function only, to prevent infinite recursion.
(FWIW, I haven't found any side-effects for the $.each() patch so far).
I'm not sure if this is even what you meant, but I figured I might need it at some point anyway.
It patches jQuery's ajax function to accept an object with the property priority. If there isn't a priority set, the priority becomes 1 (the highest priority is 0). If there is a priority, it does one of two things: if the priority is, say, 5, it checks whether the ajax call with the previous priority (in this case 4) has been made. If not, it adds the call to an array of outgoing calls with that priority (in this case 5). If the previous priority has been called, it sends the request normally. When a request with a priority is sent, it also sends any queued requests with the same or the next priority.
In other words, if there's no priority, it waits for priority 0; if there is a priority, it waits for the priority below it to get called. If you want calls with no priority to be sent immediately, comment out the line with the comment I put saying "default action?".
(function( $ ){
    var oldAjax = $.ajax;
    var outgoing = [];
    var sent = [];
    $.ajax = function(url, settings) {
        var pr = (typeof url === 'object' ? url : settings).priority;
        pr = pr === undefined ? 1 : pr; // default action?
        if (pr !== undefined) {
            if (pr == 0 || sent[pr - 1] !== undefined) {
                oldAjax(url, settings);
                sent[pr] = true;
                if (outgoing[pr]) {
                    var rq = outgoing[pr].splice(0, 1)[0];
                    $.ajax(rq[0], rq[1]);
                }
                if (outgoing[pr + 1]) {
                    var rq2 = outgoing[pr + 1].splice(0, 1)[0];
                    $.ajax(rq2[0], rq2[1]);
                }
            } else {
                if (outgoing[pr] !== undefined) {
                    outgoing[pr].push([url, settings]);
                } else {
                    outgoing[pr] = [[url, settings]];
                }
            }
        } else {
            oldAjax(url, settings);
        }
    };
})( jQuery );
Here are patches for get and post. You will have to include the extra argument, or change this yourself, to get the priority working with post and get requests. This is straight from jQuery 1.7, except for the extra argument and the lines I added comments to.
jQuery.each( [ "get", "post" ], function( i, method ) {
    jQuery[ method ] = function( url, data, callback, type, priority ) {
        // shift arguments if data argument was omitted
        if ( jQuery.isFunction( data ) ) {
            priority = type; // my change
            type = callback; // my change: with the extra argument, each parameter shifts one slot left
            callback = data;
            data = undefined;
        }
        return jQuery.ajax({
            type: method,
            url: url,
            data: data,
            success: callback,
            dataType: type,
            priority: priority // my change
        });
    };
});
I setup an example here: http://jsfiddle.net/tk39z/
I actually created some ajax requests targeting google, so you can check the network panel in Chrome to see the requests.
I recently implemented something very similar to the functionality I believe you're looking for.
I set up a filter on the server side to test for a valid session and, if the session is invalid, return a 401 error with custom status text for more specific error detail.
The ajax calls to these pages were written in the normal convention, using the 'success' attribute.
For situations where there was an error, I added a global callback to the error function that tests for the specific error and redirects accordingly, such as:
$( document ).ajaxError(function(e, xhr) {
    var redirectto = xhr.getResponseHeader( 'new_ajax_location' );
    if( redirectto ) {
        location.href = redirectto;
    }
});

How to detect whether an object is ready in JS?

My web app includes an ActiveX control. However, when I run the app I get an "object expected" error intermittently. It seems that sometimes the control is not ready when I call its properties/methods. Is there a way to detect whether an object is ready using JS?
Thanks a lot.
If it's not your own app, see if you can identify some harmless property or method, and then design a wrapper method around the call that uses try/catch to test whether it can access the object. If yes, call the next method in the chain (maybe using a delegate to include arguments); if the object is not ready, use setTimeout to call the wrapper again in, say, 100 ms.
You might want to include a retry counter to bail out after a few tries, so that it doesn't loop forever if the object is broken.
Example:
function TryCallObject(delegate, maxtries, timebetweencalls, failCallback, retrycount)
{
    if (typeof retrycount == "undefined")
        retrycount = 0;
    if (typeof failCallback == "undefined")
        failCallback = null;
    try {
        // code to do something harmless to detect whether the object is ready
        delegate(); // If we get here, the object is alive
    } catch (ex) {
        if (retrycount >= maxtries)
        {
            if (failCallback != null)
                failCallback();
            return;
        }
        setTimeout(function () {
            TryCallObject(delegate, maxtries, timebetweencalls, failCallback, retrycount + 1);
        }, timebetweencalls);
    }
}
And it's called like this:
TryCallObject(function() { /* your code here */ }, 5, 100);
or
TryCallObject(function() { /* your code here */ }, 5, 100, function() {alert("Failed to access ActiveX");});
If it is your own app, include a readystate event
http://msdn.microsoft.com/en-us/library/aa751970%28VS.85%29.aspx
The way we do this in FireBreath (http://firebreath.org) is to fire an event to JavaScript. The plugin does this by accepting a function name in a <param> tag, getting a reference to the browser window's IDispatch pointer, and doing a PROPERTYGET for the function named in the param tag.
We then call that method when the plugin is ready to go. This has the advantage of working pretty much the same way in all browsers, since FireBreath plugins work both as ActiveX controls and NPAPI plugins.
