Consider this sample (say this is a module):
function Calculator(value){
return {
add: function(value2){
return {
value: function(){
return value + value2;
}
}
}
}
}
This is a class, and it requires an argument at initialization. Sample usage:
var Calculator = require('calculator_app_module');
var myCalc = new Calculator(1); // initialized with 1
myCalc.add(2).value(); // === 3;
Which is obviously expected. What I want is to execute the add function asynchronously, like this:
var Calculator = require('calculator_app_module');
var myCalc = new Calculator(1); // initialized with 1
myCalc.add(2).value() === 3 // this executes in 2 secs (async)
// and then returns result
I would like to patch the Calculator.add method so that it can work asynchronously:
function patch(module){ // module is the Calculator class
var oldAdd = module.add;
module.add = function(){
// some magic
// trigger event or whatever
oldAdd.apply(module, arguments);
};
}
INDEX.JS
var Calculator = require('calculator_app_module');
var calc = new Calculator(1);
calc.add(2).value() === 3; // equalize within 2 seconds
// after async call is done
calc.add(2).value().equal(3); // also legit
The problem is that calc.add(n) returns a new function whose value is undefined in the async case. Is there a way to get the calling function of add and call it back when the result arrives?
update
Regarding @Zohaib Ijaz's answer: you cannot modify the content/logic of the package, only extend/patch it. The package must expose the same API, just in a promise-based way, with no breaking changes.
calc.add(2).value() === 3; // sync code
calc.add(2).value() === 3; // async code
calc.add(2).value().equal(3); // async code
How can this be achieved?
update
According to @Zohaib Ijaz's comment, this is also legit:
myCalc.add(2).value().equal(3); //async
The point is to convert sync to async without breaking the package, only extending the outcome.
If you request a result by calling a chain of methods, like this:
a = myCalc.add(2).value();
or this:
myCalc.add(2).value().equal(3);
then there is no way to retrieve and use results that become available only asynchronously (i.e. later, after the statement has been evaluated). Note that anything asynchronous involves an event being put in the event queue. The currently executing code must finish first (i.e. the call stack must be empty) before that event can be processed.
The above syntax is useful for immediate evaluation only. In order to process asynchronous results you need to provide a callback function somewhere, so that you can be informed about those results.
So with an asynchronous dependency in the add method, your code could provide a callback to the add method, which it would call when it has received the asynchronous result:
myAsyncCalc.add(2, function (added) {
a = added.value();
});
Or, when using promises (which are really nice to work with), the add method would return an object to which you can assign the same callback:
myAsyncCalc.add(2).then(function (added) {
a = added.value();
});
Note that the callback function is not part of the currently executing code. It is just a function reference that can be used later, when an asynchronous event calls you back. That call will be part of a separate execution sequence, one that only starts after the current code has finished and the triggering event has been taken from the event queue.
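For illustration, here is a minimal sketch of that ordering (nothing calculator-specific; even a zero-delay timeout only runs after the current code has finished):
console.log('first');                 // part of the currently executing code
setTimeout(function () {
    console.log('third');             // queued; runs only once the call stack is empty
}, 0);
console.log('second');                // still part of the current execution sequence
// logs: first, second, third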
If this is not an acceptable solution, and you really need the former syntax to somehow take an asynchronous produced result into account, then you are without hope: it is not possible, because that really represents synchronous code execution.
Wrapping your Object
You write that you cannot modify the content of the package, but can only extend it.
One way to do that is to make use of proxies.
The idea is that you trap a reference to the add method, and return
your own adapted version of the method, which can optionally still call the original method.
See the MDN documentation on Proxy for examples.
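For instance, here is a minimal sketch of that idea, using a plain 2-second setTimeout delay in place of a real asynchronous dependency (the proxy variable name is just illustrative):
var myCalc = new Calculator(1);
var myCalcAsync = new Proxy(myCalc, {
    get: function(target, name) {
        if (name !== 'add') return target[name]; // pass-through for everything else
        return function(value2) {
            return new Promise(function(resolve) {
                // simulate a 2 second asynchronous task, then call the original "add"
                setTimeout(function() {
                    resolve(target.add(value2));
                }, 2000);
            });
        };
    }
});
myCalcAsync.add(2).then(function(added) {
    console.log(added.value()); // 3, roughly 2 seconds later
});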
Calling HTTP request Synchronously
If you really want to write code like this:
a = myCalc.add(2).value();
even when the implementation of add performs an HTTP request, then you could have a look at making the HTTP request synchronously. But it should be noted that this is considered bad practice.
Code Example
Here is code that performs the addition in three ways:
unmodified (synchronous)
with an asynchronous HTTP call
with a synchronous HTTP call
For the two modified versions, a proxy is used. For the asynchronous example, a callback is attached using the Promise pattern.
Code:
// code in module is not modified
function Calculator(value){
return {
add: function(value2){
return {
value: function(){
return value + value2;
}
}
}
}
}
// standard object creation
var myCalc = new Calculator(1); // initialized with 1
// Create a proxy for the above object, which will expose
// an asynchronous version of the "add" method. Note that the
// "myCalc" object is not modified.
var myCalcHttpAsync = new Proxy(myCalc, {
get: function(myCalc, name) {
if (name !== 'add') return myCalc[name]; // pass-through
return function(value2) {
return new Promise(function(resolve, reject) {
// Define some url
var url = 'http://api.stackexchange.com/2.2';
// Perform HTTP request
var request = new XMLHttpRequest();
// Define call back for when response becomes available
request.onload = function() {
if (request.readyState !== 4) return;
// When async task notifies it has finished:
// call the original "add" method and notify those
// waiting for the promise to get resolved
resolve(myCalc.add(value2));
};
// `true` as third argument makes the request asynchronous
request.open('GET', url, true);
request.send(null);
});
};
}
});
// Create another, alternative proxy for demonstrating
// synchronous HTTP call:
var myCalcHttpSync = new Proxy(myCalc, {
get: function(myCalc, name) {
if (name !== 'add') return myCalc[name]; // pass-through
return function(value2) {
// Define some url
var url = 'http://api.stackexchange.com/2.2';
// Perform HTTP request
var request = new XMLHttpRequest();
// `false` as third argument makes the request synchronous
request.open('GET', url, false);
// code execution "hangs" here until response arrives
request.send(null);
// process response...
var data = request.responseText;
// .. and return the value
return myCalc.add(value2);
};
}
});
// I/O
var std = document.getElementById('std');
var async = document.getElementById('async');
var sync = document.getElementById('sync');
// 1. Standard
std.textContent = myCalc.add(2).value();
// 2. Asynchronous HTTP
myCalcHttpAsync.add(2).then(function (added) {
// This needs to happen in a callback, otherwise it would be synchronous.
async.textContent = added.value();
});
// 3. Synchronous HTTP
sync.textContent = myCalcHttpSync.add(2).value();
Unmodified result: <span id="std">waiting...</span><br>
Result after asynchronous HTTP call: <span id="async">waiting...</span><br>
Result after synchronous HTTP call: <span id="sync">waiting...</span><br>
Here is my solution using a promise.
Here is a link to jsbin where you can execute the code.
http://jsbin.com/qadobor/edit?html,js,console,output
function Calculator(value) {
return {
add: function(value2) {
return new Promise(function(resolve, reject) {
setTimeout(
function() {
resolve(value + value2);
}, 2000);
});
}
};
}
var myCalc = new Calculator(1); // initialized with 1
myCalc.add(2).then(function(ans){
// this callback will be called about 2 seconds later, once the promise resolves.
console.log(ans);
});
Related
I have a button that saves data.
I am trying to execute some lines once the button's work is done.
function executionRequest(action, acting, data) {
document.getElementById('vpl_ide_save').click();
if (!data) {
data = {};
}
if (!lastConsole.isConnected()) {
VPL_Util.requestAction(action, '', data, options.ajaxurl)
.done(function (response) {
VPL_Util.webSocketMonitor(
response,
action,
acting,
executionActions
);
})
.fail(showErrorMessage);
}
}
Can I attach a callback to
document.getElementById('vpl_ide_save').click()
so that the rest of the lines run only after it has finished?
I know your question mentions callbacks, but promises might provide a better way to handle this asynchronous operation. This snippet has pretty comprehensive comments that should explain how you can implement this.
There are three main functions in the snippet. Execution requests are simulated by simulateExecutionRequest, which is responsible for calling asyncSave and then (assuming the save was successful) doExecutionRequest.
The magic happens in asyncSave, which returns a promise. This lets simulateExecutionRequest wait until asyncSave is done and then call doExecutionRequest if the save was successful. Note that asyncSave is also the listener on the Save button (and since it is called directly by simulateExecutionRequest, there's no need to call the button's .click method programmatically).
The doExecutionRequest function is really just a placeholder for whatever you need to do after the save completes. (Right now, it just logs some hard-coded information.)
const
// Identifies some DOM elements
input = document.getElementById("myInput"),
saveBtn = document.getElementById("vpl_ide_save"),
simulateRequestBtn = document.getElementById("simulateRequestBtn"),
// Simulates saving some data (but is synchronous)
save = function(){
const text = input.value;
if(text){
console.log(`saved: '${text}'`);
return text; // So we can know if it succeeded
}
};
// Will call `asyncSave` when saveBtn is clicked
saveBtn.addEventListener("click", asyncSave);
// (IRL, execution requests probably happen programmatically)
simulateRequestBtn.addEventListener("click", simulateExecutionRequest);
// Defines `asyncSave`, which creates and returns a promise.
// The promise calls `save` asynchronously using setTimeout.
// Because `asyncSave` returns a promise, our other code
// can be delayed until after `save` has finished.
// (This is called programmatically as well as by user interaction)
function asyncSave(event){
const promisedSave = new Promise(
// Promise constructor takes an executor function
// (which takes two functions that are used to provide
// output once the promise is resolved or rejected)
(resolve, reject) => {
// Simulates a delay (because I/O is asynchronous)
setTimeout(saveAndSettlePromise, 1500);
// Calls `save`, then resolves or rejects promise
function saveAndSettlePromise(){
if( save() ){ resolve("saved"); }
else{
if(event){ console.log("save what?"); }
reject("nothing to save");
}
}
}
);
return promisedSave;
}
// This is what happens in response to an execution request
function simulateExecutionRequest(){
// Calls `asyncSave`
asyncSave().then(
// If async save resolves, then this anonymous function runs
(resolvedValue) => {
// (The resolved value is available here if we want it)
//Defines some variables
const
action_val = "log",
acting_val = true,
lastConsoleIsConnected = false;
// Calls `doExecutionRequest`
doExecutionRequest(action_val, acting_val);
// Defines `doExecutionRequest` function (Demo version)
function doExecutionRequest(action, acting, data = {}){
// `data = {}` makes `data` an optional parameter
// Checks connection, and does something with data
if (!lastConsoleIsConnected){
console.log(`execution request: 'data' has type '${typeof data}'`);
}
}
}
).catch(
// If `asyncSave` rejects, this runs instead
(reason)=> console.log("request canceled: " + reason)
);
}
<label>
Enter something to save: <input id="myInput" value="" />
</label>
<div>
<button id="vpl_ide_save">Save</button>
</div>
<hr />
<div>
<button id="simulateRequestBtn">Simulate Request</button>
</div>
I have an Angular service where I'm using the $q service in combination with web workers. In my original function, before using web workers, my completeClass function would return an object.
I replaced the code to post a message to my new web worker script.
The callback of the web worker is in my initWorkers function, where I add the event listener.
My goal is that the completeClass function returns the result of the webworker. How can I make this happen?
this.classWorker = new Worker('app/shared/autocomplete/autocomplete-class-worker.js');
this.completeClass = function(text){
var self = this;
var defer = $q.defer();
classWorker.postMessage([text, this.oldText, this.oldProposals, this.aliases, this.entityClasses])
};
this.initWorkers = function(){
var self = this;
worker.addEventListener('message', function(e) {
defer.resolve(e.data);
self.oldProposals = e.data[1];
self.oldText = text;
return e.data[0];
}, false);
};
If you are going to call the worker while the previous call is still running, then you need something to either queue or keep track of in-progress requests. My suspicion is that unless you need control over the queue of requests, it's simpler for the UI thread to fire off the requests to the worker, so the browser essentially queues the requests for you.
But you would still need to keep track of the requests sent somehow, so that when you get a message back from the worker, you know which request it is responding to. You can do this by doing the following in the main thread:
Generating a unique ID for each request. An ever-increasing integer can be enough for a lot of cases.
Creating a deferred object and storing it, associated with the ID
Firing off the request to the worker, passing the ID
Passing the promise of the deferred object back to the caller
The worker then
Receives the message, with the ID
Does its work
Posts the result back, along with the ID
The main thread then
Receives the message, with the ID and result of the work.
Retrieves the deferred object by the ID, and resolves it with the results of the work.
To do this, you can use some code like below. I've slightly simplified your case by passing just text to the worker, getting completeText back. You can add more information going either way in a real case.
app.service('AutoComplete', function($q) {
var id = 0;
var worker = new Worker('app/shared/autocomplete/autocomplete-class-worker.js');
var inProgress = {};
this.completeClass = function(text) {
var deferred = $q.defer();
id++;
var request = {
id: id,
text: text
};
inProgress[id] = deferred;
worker.postMessage(request);
return deferred.promise;
};
worker.onmessage = function(e) {
var response = e.data;
var id = response.id;
var type = response.type; // resolve, reject, or notify
var completeText = response.completeText;
inProgress[id][type](completeText);
if (type === 'resolve' || type === 'reject') {
delete inProgress[id];
}
};
});
Then in the worker you can have code like:
self.onmessage = function(e) {
var request = e.data;
var text = request.text;
var id = request.id;
// Do the work here
var completeText = ...
var response = {
id: id,
type: 'resolve', // Can reject the promise in the main thread if desired
completeText: completeText
};
self.postMessage(response);
};
My goal is that the completeClass function returns the result of the webworker. How can I make this happen?
To clarify, it can't directly return the result, because the result is calculated asynchronously, and all function calls must return synchronously. But it can return a promise that resolves to the result later, just like $http does for example. To use the above, you can do something like
app.controller('MyController', function($scope, AutoComplete) {
$scope.complete = function(text) {
AutoComplete.completeClass(text).then(function(result) {
// Do something with result
});
};
});
Note: technically, passing an ID along with each request isn't necessary if the worker does all its work synchronously on one request, and so responds to each call in the order received. In the above example, the main thread can always assume the calls to the worker form a first-in-first-out queue. However, passing an ID gives the flexibility of the worker not finishing the work in the order received. Say, in a later version, it needs to do something asynchronous, like call another worker or make an AJAX request; this method will allow that.
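For example, here is a sketch of a worker variant that finishes its work asynchronously (someAsyncLookup is a made-up stand-in for whatever callback-based work the worker might do); because the id is echoed back, replies may arrive out of order and the main thread can still match them up:
self.onmessage = function(e) {
    var request = e.data;
    // someAsyncLookup is hypothetical: any callback-based asynchronous work
    someAsyncLookup(request.text, function(completeText) {
        self.postMessage({
            id: request.id,          // echo the id so the main thread can find the right deferred
            type: 'resolve',
            completeText: completeText
        });
    });
};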
Please forgive me if this is a stupid question. I have been trying for hours and my brain has just stopped working.
I have a system that consists of three AJAX calls. The server response to the first call is usually a 200 Success; but the second and third queries are fragile because they upload images, and on the server side I have so many validation rules that clients' images mostly fail.
window.AjaxCall = function () {
// to pass to $.ajax call later
this.args = arguments;
// xhr status
this.status = null;
// xhr results (jqXHR object and response)
this.xhrResponse = {};
this.dfr = new $.Deferred();
// to provide an easier interface
this.done = this.dfr.done;
this.fail = this.dfr.fail;
this.then = this.dfr.then;
};
AjaxCall.prototype.resetDfr = function () {
this.dfr = new $.Deferred();
};
AjaxCall.prototype.resolve = function () {
this.dfr.resolve(
this.xhrResponse.result,
this.xhrResponse.jqXHR
);
this.resetDfr();
};
AjaxCall.prototype.reject = function () {
this.dfr.reject(
this.xhrResponse.jqXHR
);
this.resetDfr();
};
AjaxCall.prototype.query = function () {
var _this = this;
// if query hasn't run yet, or didn't return success, run it again
if (_this.status != 'OK') {
$.ajax.apply(_this, _this.args)
.done(function (result, textStatus, jqXHR) {
_this.xhrResponse.result = result;
_this.xhrResponse.jqXHR = jqXHR;
_this.resolve();
})
.fail(function (jqXHR) {
_this.xhrResponse.jqXHR = jqXHR;
_this.reject();
})
.always(function (a, b, c) {
var statusCode = (typeof c !== 'string'
? c
: a).status;
if (statusCode == 200) {
_this.status = 'OK';
}
});
}
// if query has been run successfully before, just skip to next
else {
_this.resolve();
}
return _this.dfr.promise();
};
The AjaxCall class is provided above, and I make the three consecutive calls like this:
var First = new AjaxCall('/'),
Second = new AjaxCall('/asd'),
Third = new AjaxCall('/qqq');
First.then(function () {
console.log('#1 done');
}, function() {
console.error('#1 fail');
});
Second.then(function () {
console.log('#2 done');
}, function() {
console.error('#2 fail');
});
Third.then(function () {
console.log('#3 done');
}, function() {
console.error('#3 fail');
});
var toRun = function () {
First.query()
.then(function () {
return Second.query();
})
.then(function () {
return Third.query()
});
};
$('button').click(function () {
toRun();
});
That code is in a testing environment. And by testing environment, I mean a simple HTML page and basic server support for debugging.
Home page (/) always returns 200 Success.
/asd returns 404 Not Found for the first 3 times and 200 Success once as a pattern (i.e. three 404s -> one 200 -> three 404s -> one 200 -> three 404s -> ... ).
/qqq returns 404 Not Found all the time.
When I click the only button on the page, the first query returns success and the second fails, as expected. When I click the button a second time, the first query is skipped because it was successful last time, and the second fails again, also as expected.
The problem here is:
Before I added the resetDfr method, once the dfr had already been resolved or rejected, it didn't react to the resolve and reject methods anymore.
When I call the resetDfr method in the way shown in the example, the dfr can be resolved or rejected again, but the callbacks of the old dfr are not bound to the new dfr object, and I couldn't find a way to clone the old callbacks into the new dfr.
What would be your suggestion to accomplish what I'm trying to do here?
Promises represent a single value bound by time. You can't conceptually "reuse" a deferred or reset it - once it transitions it sticks. There are constructs that generalize promises to multiple values (like observables) but those are more complicated in this case - it's probably better to just use one deferred per request.
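A quick sketch (plain jQuery Deferred assumed) of what "once it transitions it sticks" means in practice:
var dfr = $.Deferred();
dfr.done(function(v) { console.log('resolved with ' + v); });
dfr.resolve(1);       // logs "resolved with 1"
dfr.resolve(2);       // ignored: the deferred has already settled
dfr.reject('nope');   // also ignored
console.log(dfr.state()); // "resolved"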
jQuery's AJAX already provides a promise interface. Your code is mostly redundant - you can and should consider using the existing tooling.
Let's look at $.get:
It already returns a promise so you don't need to create your own deferred.
It already uses the browser cache. Unless your server prohibits HTTP caching or the browser refuses it, only one request will be made to the server after a correct response has arrived (assuming you did not explicitly pass {cache: false} in its parameters).
If making post requests you can use $.post or more generally $.ajax for arbitrary options.
This is roughly how your code would look:
$("button").click(function(){
var first = $.get("/");
var second = first.then(function(){
return $.get("/asd");
});
var third = second.then(function(){
return $.get("/qqq");
});
});
The reason I put them in variables is so that you will be able to unwrap the result yourself later by doing first.then etc. It's quite possible to do this in a single chain too (but you lose access to previous values if you don't explicitly save them).
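For completeness, a sketch of the same flow as a single chain; each step only sees what the previous callback returned, so earlier responses have to be saved in outer variables if you still need them:
$("button").click(function(){
    $.get("/")
        .then(function(firstResult){
            return $.get("/asd");
        })
        .then(function(secondResult){
            return $.get("/qqq");
        });
});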
For the record - it wasn't a stupid question at all :)
I have the following code, which causes the two calls to Webtrends to be cancelled (i.e. these two calls did not give an HTTP 200 but a cancelled message in the network tab of the browser) when I call it:
mercury.Tracking.logUsage("export", GetSelectedExportType(form));
mercury.Tracking.logUsage('exportchart', mercury.ChartContainer.currentChartUri(), path);
form[0].submit();
I rewrote it as follows to avoid this issue, as it seemed to me that the calls to Webtrends were being cancelled because of the form submit, so before calling submit on the form I wait two seconds.
mercury.Tracking.logUsage("export", GetSelectedExportType(form));
mercury.Tracking.logUsage('exportchart', mercury.ChartContainer.currentChartUri(), path);
var submit = function () {
setTimeout(function() {
form[0].submit();
}, 2000);
};
submit();
The question is: is there a better way to do this, using promises, callbacks, or whatever?
The logUsage code is:
(function ($, window) {
function Tracking() {
}
Tracking.prototype.chartTitle = function () {
return $('#chartNameInfo').text();
};
Tracking.prototype.hostName = function () {
return $('#trackingVars').data('host-name');
};
Tracking.prototype.page = function () {
return $('#trackingVars').data('page');
};
Tracking.prototype.currentUser = function () {
return window.config.userId;
};
Tracking.prototype.logUsage = function (action, resourceUri, actionTargetUri, additionalTags) {
// action: action performed - e.g. create, delete, export
// resourceUri: URI of API resource *on* which action is being performed (required), e.g. /users/current/annotations/{annotation-id}
// actionTargetUri: URI of API resource *to* which action is being performed (optional), e.g. /charts/{chart-id}
if (action.indexOf("DCSext.") < 0) {
action = "DCSext." + action;
}
var jsonString = '{"' + action + '"' + ':"1"}';
var jsonObj = JSON.parse(jsonString);
if (additionalTags == null) {
additionalTags = jsonObj;
}
else {
additionalTags = $.extend({}, additionalTags, jsonObj); //Append two JSON objects
}
var trackingargs = $.extend({
'DCSext.resource-uri': resourceUri,
'DCSext.action-target-uri': actionTargetUri,
'WT.ti': this.chartTitle(),
'DCSext.dcssip': this.hostName(),
'DCSext.em-user-id': this.currentUser(),
dsci_uri: this.page()
}, additionalTags);
try {
WebTrends.multiTrack({ args: trackingargs });
} catch (e) {
console.log(e);
}
};
window.Tracking = new Tracking();
$(function() {
$('body').on('click', 'a[data-tracking-action]', function() {
window.Tracking.logUsage($(this).data('tracking-action'), $(this).data('tracking-resource'));
});
$(document).on('attempted-access-to-restricted-resource', function(event, href) {
window.Tracking.logUsage('unauthorisedResourceAccessUpsell', href.url);
});
});
})(jQuery, window);
With the extra information provided, I think I can now answer your question.
From the WebTrends docs, you can add a finish callback to your WebTrends.multiTrack call.
What you could do:
Tracking.prototype.logUsage = function (action, resourceUri, actionTargetUri, additionalTags) {
...
var finished = $.Deferred();
...
try {
WebTrends.multiTrack({ args: trackingargs, finish: function(){finished.resolve();}});
}
...
return finished;
}
and then in your code:
$.when(mercury.Tracking.logUsage("export", GetSelectedExportType(form)),
mercury.Tracking.logUsage('exportchart', mercury.ChartContainer.currentChartUri(), path))
.done(function(){
form[0].submit();
});
I have not tested this, but I think it should work. Hope it helps.
Explanations:
jQuery.when()
Description: Provides a way to execute callback functions based on one
or more objects, usually Deferred objects that represent asynchronous
events.
Basically, jQuery.when() will take one or more deferreds (which build promises) or promises and will return one promise that fulfills when they all fulfill. From there, we can choose to add handlers to our promise using the .done() or .then() method, which will be called once our promise is fulfilled. (A promise represents the result of an asynchronous operation.)
So, in the code above, I created a new deferred object in your logUsage method, and that method returns the deferred. You can pass those deferreds to the jQuery.when method, and when they are fulfilled (this is why I added the finish callback in your WebTrends.multiTrack call), the handler passed to deferred.done() will be executed.
I hope this is not too confusing, I'm not sure I'm explaining it correctly.
Not trying to steal Antoine's rep. His answer is essentially fine, but the ... sections can be fleshed out far more efficiently than in the question, plus a few other points for consideration.
Tracking.prototype.logUsage = function (action, resourceUri, actionTargetUri, additionalTags) {
// action: action performed - e.g. create, delete, export
// resourceUri: URI of API resource *on* which action is being performed (required), e.g. /users/current/annotations/{annotation-id}
// actionTargetUri: URI of API resource *to* which action is being performed (optional), e.g. /charts/{chart-id}
try {
// you might as well wrap all the preamble in the try{}, just in case it is error-prone
if (action.indexOf("DCSext.") < 0) {
action = "DCSext." + action;
}
//trackingargs can be defined efficiently as follows, avoiding the need for the variable `jsonObj` and the ugly JSON.parse().
var trackingargs = $.extend({
'DCSext.resource-uri': resourceUri,
'DCSext.action-target-uri': actionTargetUri,
'WT.ti': this.chartTitle(),
'DCSext.dcssip': this.hostName(),
'DCSext.em-user-id': this.currentUser(),
'dsci_uri': this.page()
}, additionalTags || {}); // `additionalTags || {}` caters for missing or null additionalTags
trackingargs[action] = 1;//associative syntax gets around the limitation of object literals (and avoids the need for JSON.parse()!!!).
//to keep things tidy, return $.Deferred(fn).promise()
return $.Deferred(function(dfrd) {
WebTrends.multiTrack({
args: trackingargs,
finish: dfrd.resolve //no need for another function wrapper. `$.Deferred().resolve` and `$.Deferred().reject` are "detachable"
});
}).promise();//be sure to return a promise, not the entire Deferred.
} catch (e) {
console.log(e);
//Now, you should really ensure that a rejected promise is always returned.
return $.Deferred().reject(e).promise(); // Surrogate re-throw.
}
};
see comments in code
As Tracking.prototype.logUsage can now return a rejected promise, and as you probably don't want a .logUsage() failure to inhibit your form submission, you will want to convert rejected promises to fulfilled ones.
$.when(
mercury.Tracking.logUsage("export", GetSelectedExportType(form)).then(null, function() {
return $.when();//resolved promise
}),
mercury.Tracking.logUsage('exportchart', mercury.ChartContainer.currentChartUri(), path).then(null, function() {
return $.when();//resolved promise
})
).done(function() {
form[0].submit();
});
It may seem an unnecessary complication to return a rejected promise and then convert it to success, however:
it is good practice to report asynchronous failure in the form of a rejected promise, not simply log the error and return undefined.
window.Tracking.logUsage() may be called elsewhere in your code, where it is necessary to handle an error as an error (see the sketch below).
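For instance, a hypothetical caller elsewhere in the code base that does care about the failure could handle it directly (the resource URI here is made up):
window.Tracking.logUsage('export', '/some/resource-uri')
    .fail(function(e) {
        // react to the tracking failure here, e.g. retry or surface the error
    });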
I'm writing a Chrome extension with the socket API (though this doc is out of date; the latest version of the API is here), and I find the code really hard to organize:
All the methods are under the namespace chrome.experimental.socket; I will just use socket below for simplicity.
socket.create("tcp", {}, function(socketInfo){
var socketId = socketInfo.socketId;
socket.connect(socketId, IP, PORT, function(result){
if(!result) throw "Connect Error";
socket.write(socketId, data, function(writeInfo){
if(writeInfo.bytesWritten < 0) throw "Send Data Error";
socket.read(socketId, function(readInfo){
if(readInfo.resultCode < 0) throw "Read Error";
var data = readInfo.data; // play with the data
// then send the next request
socket.write(socketId, data, function(writeInfo){
socket.read(socketId, function(readInfo){
// ............
});
});
});
})
});
})
Because both socket.write and socket.read are asynchronous, I have to nest the callbacks to make sure that the next request is sent after the previous request has received the correct response.
It's really hard to manage these nested functions. How can I improve this?
UPDATE
I'd like to have a method send which I can use as:
send(socketId, data, function(response){
// play with response
});
// block here until the previous send gets the response
send(socketId, data, function(response){
// play with response
});
How about (something like) this?
var MySocket = {
obj: null,
data: null,
start: function() { ... some code initializing obj data, ending with this.create() call },
create: function() { ... some code initializing obj data, ending with this.connect() call },
connect: function() { ... some connection code, ending with this.write() call },
write: function() { ... some writing code that updates this.data, ending with this.read() call },
read: function() { ... you probably get the idea at this point )) ... },
};
This object could be used with MySocket.start() or something. The idea is to encapsulate all data (and nested calls) within a single (yet more or less globally usable) object.
Going even further, one can create two objects: one purely for writing and another purely for reading, each operating on its own data, and then wrap them (and their inter-calls, so to speak) into a single SocketManager object; see the sketch below.
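A very rough sketch of that split (all names and bodies are illustrative only, and error handling is omitted):
var SocketReader = {
    data: null,
    read: function(socketId, onDone) {
        socket.read(socketId, function(readInfo) {
            SocketReader.data = readInfo.data;
            onDone(readInfo);
        });
    }
};
var SocketWriter = {
    write: function(socketId, data, onDone) {
        socket.write(socketId, data, onDone);
    }
};
var SocketManager = {
    start: function(socketId, data) {
        // orchestrate the inter-calls: write, then read, then write again, ...
        SocketWriter.write(socketId, data, function() {
            SocketReader.read(socketId, function(readInfo) {
                // play with readInfo.data, then call SocketWriter.write again if needed
            });
        });
    }
};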
Consider using an asynchronous continuation-passing style, where functions end with a setInterval call with the function they were passed. Then we construct a function that entwines two functions so they call each other using this mechanism. The guts of it would be like this:
var handle;
// pairs two functions
function pair(firstfunc, secondfunc, startarg) {
var callbackToFirst = function(valuetofill) {
handle = setInterval(firstfunc(valuetofill,callbackToSecond));
};
var callbackToSecond = function(valuetofill) {
handle = setInterval(secondfunc(valuetofill,callbackToFirst));
};
callbackToFirst(startarg);
}
What we are doing here is constructing a pair of mutually-calling callbacks, each taking a single argument and each holding references to the two inter-calling functions. We then kick off the process by calling the first callback.
Construct the pair for an example pair of read and write functions (assuming you've set the socketId in the enclosing object definition):
// starts read/write pair, sets internal variable 'handle' to
// interval handle for control
function startReadWrite(initialarg, myDataFunc) {
var readcall = function(value, func) {
readSocket(getData(myDataFunc(func)));
};
var writecall = function(value, func) {
writeSocket(checkBytesWritten(func));
};
handle = pair(readcall, writecall, initialarg);
}
The rest of the object is like this:
function myIO() {
var socketInfo, socketId, handle;
function create(func) {
socket.create('tcp',{},function(thisSocketInfo) {
socketInfo = thisSocketInfo;
});
setInterval(func(this),0);
}
function connect(IP, PORT, func) {
socket.connect(p_socketId, IP, PORT, function(result) {
if(!result) throw "Connect Error";
setInterval(func(result),0);
});
}
function readSocket(func) {
socket.read(p_socketId, function(readInfo){
setInterval(func(readInfo),0);
});
}
function writeSocket(data, func) {
socket.write(p_socketId, data, function(writeInfo){
setInterval(func(writeInfo),0)
});
}
function checkBytesWritten(writeInfo, func) {
if(writeInfo.bytesWritten < 0) throw "Send Data Error";
setInterval(func(writeInfo),0);
}
function getData(readInfo, func) {
if(readInfo.resultCode < 0) throw "Read Error";
var data = readInfo.data;
setInterval(func(data),0);
}
//** pair and startReadWrite go here **//
}
Finally the call to set the whole thing going:
var myIOobj = new myIO();
myIOobj.create(startReadWrite(myDataFunc));
Notes:
This is meant to demonstrate a style, not be ready code! Don't just copy and paste it.
No, I haven't tested this; I do javascript but not Chrome API stuff yet. I'm focussing on the callback mechanisms etc.
Be careful with the different classes of callback; single argument callbacks (like the read and write callbacks) which take a single value (as presumably defined by the API), and 2 argument callbacks (like most of the methods) which take an argument and a function to call at the end.
The getData method takes a callback and passes data to it; this callback (myDataFunc) is the function that actually gets to use the data. It needs to take a callback as a second argument and call it synchronously or asynchronously.
TLDR: Consider using asynchronous calls to avoid the nesting. I've given a vague example of a mechanism to have two functions call each other continuously using this style as seems to be needed.
Although I call it asynchronous, the setInterval calls will execute serially, but the key is that the stack is cleared after the parent call is done, rather than adding endless layers with nesting.