I have a scenario where users can save their map styling. When a user navigates back to the page with the map, it should display the map the way it was previously.
The user can change the colors of geographies and borders, as well as apply thematic styles based on demographic information.
With the release of the new loadGeoJson I have had great success speeding up my application. Initially I was drawing all of the geographies myself via JS, which was causing memory issues at around 1500 geographies. The new functionality decreases that significantly. I am choosing to generate the Feature objects on the server side and just pass a URL to the map for rendering. This URL is passed in along with all of the other style information to rebuild the map as it was previously.
The problem I'm encountering is that when I attempt to apply the stylings, the map hasn't yet retrieved its list of features to display.
Looking through the documentation I don't see any events specifically for the loadGeoJson method. I did see addfeature and setgeometry, but neither of these events gets raised when using the loadGeoJson method.
This has all been written in TypeScript and tries to adhere to a fairly strict MVVM approach, using Knockout to handle all of the UI changes for me via binding.
Is there another event I can tie into to see when this method has completed so I can apply my styles appropriately?
Is there another approach where I can bake in a 'wait' somewhere?
The addfeature event is definitely invoked by loadGeoJson; if it isn't for you, there is some setup error.
loadGeoJson has an optional callback parameter which is invoked after all features have been loaded, and which receives the features as its parameter.
https://developers.google.com/maps/documentation/javascript/3.exp/reference
loadGeoJson(url:string, options?:Data.GeoJsonOptions,
callback?:function(Array))
You can signal your code from this callback that it can continue processing.
map.data.loadGeoJson('google.json', null, function (features) {
    map.fitBounds(bounds); // or do other stuff that requires all features to be loaded
});
You could also wrap loadGeoJson into a promise and resolve the promise in loadGeoJson's callback.
function loadGeoJson(url, options) {
    var promise = new Promise(function (resolve, reject) {
        try {
            map.data.loadGeoJson(url, options, function (features) {
                resolve(features);
            });
        } catch (e) {
            reject(e);
        }
    });
    return promise;
}

// Somewhere else in your code:
var promise = loadGeoJson('studs.json');
promise.then(function (features) {
    // Do stuff
});
promise.catch(function (error) {
    // Handle error
});
You can listen to the addfeature event, which will be fired for each feature in your JSON. E.g., if you use https://storage.googleapis.com/maps-devrel/google.json, six events are raised after the JSON request is completed, i.e. when the Google API starts adding the features from the JSON file:
You can see the demo here : http://jsbin.com/ofutUbA/4/edit
Since you are using Knockout, one approach is to bind onto some listeners, or attach listeners to an event.
Most asynchronous operations in JavaScript take a callback function; this is good practice so that once a function has finished some long I/O or database query, it immediately executes that callback. Naturally, that callback is defined by you.
someLongTimeFunction(function (data) {
    // usually this callback will be fired when the function has finished
});
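As a concrete, runnable sketch of that pattern, the `someLongTimeFunction` below fakes its long-running work with `setTimeout`; all names here are illustrative:

```javascript
// Sketch of the callback pattern: the "long" work is simulated with
// setTimeout, and the callback fires only once that work has finished.
function someLongTimeFunction(callback) {
  setTimeout(function () {
    var data = { status: 'done' };
    callback(data); // fired when the work has finished
  }, 0);
}

someLongTimeFunction(function (data) {
  console.log(data.status); // runs after the simulated work completes
});
```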
If you are familiar with Knockout, then you would know that once the data has been retrieved and pushed into an observable, that observable updates its bindings.
Here is the link for extending observables: knockout js
As far as I can understand from your question, is this loadGeoJson server side? If so, you can do a long poll from the client side and tie into that:
function doRequest() {
    var request = $.ajax({
        dataType: "json",
        url: 'loadGeoJson'
    });
    request.done(function (data, textStatus) {
        if (textStatus !== "success") {
            setTimeout(doRequest, 10000); // try again in 10 seconds
        }
    });
}
Related
I have a third-party library whose events I'm listening to. I get a chance to modify data which that library is going to append to the UI. This all works fine as long as the data modification is synchronous. As soon as I involve Ajax callbacks/promises, it fails to work. Let me give an example to showcase the problem.
Below is how I'm listening to an event:
d.on('gotResults', function (data) {
    // If I alter data directly it works fine.
    data.title = 'newTitle';
    // Above code alters the title correctly.

    // I want some properties to be grabbed from elsewhere, so I make an Ajax call.
    $.ajax('http://someurl...', { id: data.id }, function (res) {
        data.someProperty = res.thatProperty;
    });
    // Above code doesn't wait for the ajax call to complete; it just goes away
    // and renders the page without the data change.

    // Yes, I tried promises, but it doesn't help:
    return fetch('http://someurl...').then(function (res) {
        data.someProperty = res.thatProperty;
        return true;
    });
    // Above code also triggers the url and goes away. It doesn't wait for then to complete.
});
I cannot change/alter the third-party library. All I can do is listen to the event and alter that data.
Any better solutions? Note that I can't use async/await or generators, because I need to support ES5 browsers.
You cannot make a synchronous function wait for an asynchronous response, it's simply not possible by definition. Your options pretty much are:
BAD IDEA: Make a synchronous AJAX request. Again: BAD IDEA. Not only will this block the entire browser, it is also a deprecated practice and should not be used in new code, or indeed ever.
Fetch the asynchronous data first and store it locally, so it's available synchronously when needed. That obviously only works if you have an idea what data you'll be needing ahead of time.
Alter the 3rd party library to add support for asynchronous callbacks, or request that of the vendor.
Find some workaround where you let the library work with incomplete data first and then update it when the asynchronous data is available. That obviously depends a lot on the specifics of the library and the task being done.
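The "fetch ahead of time and store locally" option can be sketched as a small synchronous cache that is filled before the event handler runs; all names here are illustrative, and the prefetch is simulated with a plain assignment where real code would do an async fetch on page load:

```javascript
// Cache that is populated before the synchronous event handler runs.
var propertyCache = {};

// In real code this would be an async fetch done ahead of time,
// e.g. on page load; here it is simulated with a plain assignment.
function prefetch(id, value) {
  propertyCache[id] = value;
}

// Inside the synchronous handler the data is then available immediately.
function getCached(id) {
  return propertyCache.hasOwnProperty(id) ? propertyCache[id] : null;
}

prefetch(42, { thatProperty: 'value' });
var hit = getCached(42);   // available synchronously
var miss = getCached(99);  // null: was never prefetched
```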
Does the gotResults callback function really need to return anything other than true? If not, then you could just write regular asynchronous code without this library knowing about it. Let me explain by rewriting your pseudocode:
d.on('gotResults', function (data) {
    // Altering data directly still works fine.
    data.title = 'newTitle';

    // Grab the extra properties asynchronously; the library doesn't need
    // to know about it.
    fetch('http://someurl...').then(function (res) {
        data.someProperty = res.thatProperty;
        // maybe render again here?
    }).catch(function (err) {
        handleError(err); // handle errors so they don't disappear silently
    });

    return true; // this line runs before any of the above asynchronous code, but do we care?
});
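A runnable illustration of that ordering, with the Ajax call replaced by an already-resolved Promise (no jQuery or third-party library needed; names are illustrative):

```javascript
var log = [];

function onGotResults(data) {
  data.title = 'newTitle'; // synchronous change: visible immediately

  // Fake async fetch of the extra property.
  Promise.resolve({ thatProperty: 'fetched' }).then(function (res) {
    data.someProperty = res.thatProperty; // happens after the handler returns
    log.push('async update');
  });

  log.push('handler returned'); // runs before the .then callback
  return true;
}

var payload = { title: 'old' };
onGotResults(payload);
```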
I'm using angular, and in an angularUI modal window I want to show the Drop In form from Braintree to get a payment method. Thus, I create the usual form (partial.html):
<form id="creditCard" >
<div id="dropin"></div>
<button type="submit" id="btnPay" >Pay</button>
</form>
and then I show the modal with this:
var modalInstance = $modal.open({
    templateUrl: 'partial.html',
    controller: 'ModalController'
});
Where ModalController contains the call to the Braintree setup:
braintree.setup($scope.clientToken, 'dropin', {
    container: 'dropin',
    onPaymentMethodReceived: function (result) {
        $scope.$apply(function () {
            $scope.success = true;
            // Do something else with result
        });
    }
});
This will show the Drop In form from braintree nicely (the setup generates the form) and accept the credit card and expiration date, all working fine so far.
The problem is, each time I open the modal, the ModalController is executed, and thus braintree.setup() is also executed. Then, when I enter the credit card number and the expiration date and hit pay, the onPaymentMethodReceived() event is triggered once per setup execution. That is, the first time I open the modal it triggers the event once, the second time it triggers it twice, and so on. It is as if each time I call setup, a new hook to the event is created.
Any idea on how to avoid this? Is there a way to "unbind" the onPaymentMethodReceived() event handler? I do need to call the setup several times since each time I call the modal, the clientToken may have changed.
Thanks for any help or pointers.
Calling braintree.setup multiple times in angular seems unavoidable, either for the asker's reasons, or simply because setup is called in a controller that may be instantiated multiple times in a browsing session – like a cart or checkout controller.
You can do something like this:
$rootScope.success = false;
braintree.setup($scope.clientToken, 'dropin', {
    container: 'dropin',
    onPaymentMethodReceived: function (result) {
        if (!$rootScope.success) {
            $scope.$apply(function () {
                $rootScope.success = true;
                // Do something else with result
            });
        }
    }
});
I found I wasn't able to avoid having the callback fire multiple times (the number of times seems to explode each time I revisit the view - yikes), but I could test whether I had already performed my actions in response to the callback.

Because each new controller gets its own $scope, setting a success flag on the $scope only halts additional executions on that $scope (which seems to still be available to the callback, even after the controller has been "destroyed"). Since the $scope is destroyed when I leave the view, $scope.success is effectively reset when I need it to be, yet the stale callbacks keep firing. Using $rootScope instead meant only one execution total, even if I re-instantiated the controller multiple times. Setting $rootScope.success = false in the controller means that once the controller is loaded, the callback will succeed anew - once.
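The same guard can also be written as a generic, framework-free helper (a sketch for illustration; this `once` helper is not part of Angular or Braintree):

```javascript
// Wraps a callback so that, no matter how many duplicate hooks fire it,
// the body runs only once.
function once(fn) {
  var called = false;
  return function () {
    if (called) return;
    called = true;
    return fn.apply(this, arguments);
  };
}

var paymentsHandled = 0;
var onPayment = once(function () { paymentsHandled++; });

// Simulate the callback firing three times due to stacked setup calls:
onPayment(); onPayment(); onPayment();
```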
I think this has since been handled by the API with teardown:
In certain scenarios you may need to remove your braintree.js integration. This is common in single page applications, modal flows, and other situations where state management is a key factor. [...]
Invoking teardown will clean up any DOM nodes, event handlers, popups and/or iframes that have been created by the integration.
https://developers.braintreepayments.com/guides/client-sdk/javascript/v2#teardown
(I haven't tried it yet)
The link given by Arpad Tamas does not contain the info anymore, so I am posting the info given by Braintree for posterity ;) Especially since it took me a few tries to find it with a Google search.
In certain scenarios you may need to remove your Braintree.js integration. This is common in single page applications, modal flows, and other situations where state management is a key factor. When calling braintree.setup, you can attach a callback to onReady which will provide an object containing a teardown method.
Invoking teardown will clean up any DOM nodes, event handlers, popups and/or iframes that have been created by the integration. Additionally, teardown accepts a callback which you can use to know when it is safe to proceed.
var checkout;
braintree.setup('CLIENT_TOKEN_FROM_SERVER', 'dropin', {
    onReady: function (integration) {
        checkout = integration;
    }
});

// When you are ready to tear down your integration
checkout.teardown(function () {
    checkout = null; // braintree.setup can safely be run again!
});
You can only invoke teardown once per .setup call. If you happen to call this method while another teardown is in progress, you'll receive an error stating Cannot call teardown while in progress. Once completed, subsequent calls to teardown will throw an error with this message: Cannot teardown integration more than once.
I've wrapped this code in a function that I call each time the related checkout ionic view is entered.
$scope.$on('$ionicView.enter', function () {
    ctrl.setBraintree(CLIENT_TOKEN_FROM_SERVER);
});

var checkout;
ctrl.setBraintree = function (token) {
    braintree.setup(token, "dropin", {
        container: "dropin-container",
        onReady: function (integration) {
            checkout = integration;
            $scope.$emit('BTReady');
        },
        onPaymentMethodReceived: function (result) {
            ...
        },
        onError: function (type) {
            ...
        }
    });
    // Prevents a call to teardown when entering the view for the first time (not initialized yet).
    if (checkout) {
        // When you are ready to tear down your integration
        checkout.teardown(function () {
            checkout = null; // braintree.setup can safely be run again!
        });
    }
};
I have a function that is bound to mouse click events on a Google Map. Due to the nature of the function it can take a few moments for processing to complete (0.1s - 2s depending on connection speed). In itself this is not much of a problem; however, if the user gets click happy this can cause problems, and later calls depend somewhat on the previous one.
What would be the best way to have the later calls wait for previous ones to complete? Or even the best way to handle failures of previous calls?
I have looked at doing the following:
Using a custom .addEventListener (Link)
Using a while loop that waits until the previous call has processed
Using a simple if statement that checks if previous one needs to be re-run
Using other forms of callbacks
Now for some sample code for context:
this.createPath = function () {
    // if previous path segment has no length
    if (pathSegment[this.index - 1].getPath().length === 0) {
        // we need the previous path segment recreated using this same function
        pathSegment[this.index - 1].createPath();
        // now we can retry this path segment again
        this.createPath();
    }
    // all is well, create this path segment using the Google Maps directions service
    else {
        child.createPathLine(pathSegment[this.index - 1].getEndPoint(), this.clickCoords);
    }
};
Naturally this code as it is would loop like crazy and create many requests.
This is a good use case for promises.
They work like this (example using jQuery promises, but there are other APIs for promises if you don't want to use jQuery):
function doCallToGoogle() {
    var defer = $.Deferred();
    callToGoogleServicesThatTakesLong({
        callback: function (data) {
            defer.resolve(data);
        }
    });
    return defer.promise();
}
/* ... */
var responsePromise = doCallToGoogle();
/* later in your code, when you need to wait for the result */
responsePromise.done(function (data) {
    /* do something with the response */
});
The good thing is that you can chain promises:
var newPathPromise = previousPathPromise.then(
function (previousPath) { /* build new path */ });
Take a look at:
http://documentup.com/kriskowal/q/
http://api.jquery.com/promise/
To summarize: promises are an object abstraction over the use of callbacks that is very useful for control flow (chaining, waiting for all the callbacks, avoiding deeply nested callbacks).
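With native Promises (which jQuery's Deferred predates) the same chaining looks like this; the fake `doCallToGoogle` below stands in for the real service call:

```javascript
// Simulated async call: resolves on the next timer tick.
function doCallToGoogle() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('path-data'); }, 0);
  });
}

var previousPathPromise = doCallToGoogle();

// Chaining: the next step runs only once the previous path is available,
// and the chained promise resolves with the new value.
var newPathPromise = previousPathPromise.then(function (previousPath) {
  return previousPath + ' -> extended';
});
```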
As an example, suppose I want to fetch a list of files from somewhere, then load the contents of these files and finally display them to the user. In a synchronous model, it would be something like this (pseudocode):
var file_list = fetchFiles(source);
if (!file_list) {
    display('failed to fetch list');
} else {
    for (file in file_list) { // iteration, not enumeration
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}
This provides decent feedback to the user and I can move pieces of code into functions if I so deem necessary. Life is simple.
Now, to crush my dreams: fetchFiles() and loadFile() are actually asynchronous. The easy way out is to transform them into synchronous functions, but this is no good because the browser locks up waiting for calls to complete.
How can I handle multiple interdependent and/or layered asynchronous calls without delving deeper and deeper into an endless chain of callbacks, in classic reductio ad spaghettum fashion? Is there a proven paradigm to cleanly handle these while keeping code loosely coupled?
Deferreds are really the way to go here. They capture exactly what you (and a whole lot of async code) want: "go away and do this potentially expensive thing, don't bother me in the meantime, and then do this when you get back."
And you don't need jQuery to use them. An enterprising individual has ported Deferred to underscore, and claims you don't even need underscore to use it.
So your code can look like this:
function fetchFiles(source) {
    var dfd = _.Deferred();
    // do some kind of thing that takes a long time
    doExpensiveThingOne({
        source: source,
        complete: function (files) {
            // this informs the Deferred that it succeeded, and passes
            // `files` to all its success ("done") handlers
            dfd.resolve(files);
            // if you know how to capture an error condition, you can also
            // indicate that with dfd.reject(...)
        }
    });
    return dfd;
}
function loadFile(file) {
    // same thing!
    var dfd = _.Deferred();
    doExpensiveThingTwo({
        file: file,
        complete: function (data) {
            dfd.resolve(data);
        }
    });
    return dfd;
}
// and now glue it together
_.when(fetchFiles(source))
    .done(function (files) {
        files.forEach(function (file) { // forEach keeps `file` scoped per iteration
            _.when(loadFile(file))
                .done(function (data) {
                    display(data);
                })
                .fail(function () {
                    display('failed to load: ' + file);
                });
        });
    })
    .fail(function () {
        display('failed to fetch list');
    });
The setup is a little wordier, but once you've written the code to handle the Deferred's state and stuffed it off in a function somewhere you won't have to worry about it again, you can play around with the actual flow of events very easily. For example:
var file_dfds = [];
for (var i = 0; i < files.length; i++) {
    file_dfds.push(loadFile(files[i]));
}

_.when(file_dfds)
    .done(function (datas) {
        // this will only run if and when ALL the files have successfully
        // loaded!
    });
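Native `Promise.all` gives you the same "wait for all of them" fan-out without any library; the fake `loadFile` below just resolves with placeholder contents:

```javascript
// Simulated async file load.
function loadFile(file) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('contents of ' + file); }, 0);
  });
}

var files = ['a.txt', 'b.txt'];

// Resolves only when ALL the files have successfully loaded;
// rejects as soon as any one of them fails.
var allLoaded = Promise.all(files.map(loadFile));

allLoaded.then(function (datas) {
  datas.forEach(function (d) { console.log(d); });
});
```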
Events
Maybe using events is a good idea. It keeps you from creating code trees and decouples your code.
I've used bean as the framework for events.
Example pseudo code:
// async request for files
function fetchFiles(source) {
    IO.get(..., function (data, status) {
        if (data) {
            bean.fire(window, 'fetched_files', data);
        } else {
            bean.fire(window, 'fetched_files_fail', data, status);
        }
    });
}
// handler for when we get data
function onFetchedFiles(event, files) {
    for (file in files) {
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}

// handler for failures
function onFetchedFilesFail(event, status) {
    display('Failed to fetch list. Reason: ' + status);
}

// subscribe the window to these events
bean.on(window, 'fetched_files', onFetchedFiles);
bean.on(window, 'fetched_files_fail', onFetchedFilesFail);

fetchFiles(source);
Custom events and this kind of event handling are implemented in virtually all popular JS frameworks.
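A minimal stand-in for such an event bus is only a few lines (a sketch for illustration; this is not bean's implementation, although its `on`/`fire` API looks similar):

```javascript
// Tiny publish/subscribe bus: handlers are registered per event name
// and invoked with the arguments passed to fire().
var bus = {
  handlers: {},
  on: function (name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
  },
  fire: function (name) {
    var args = Array.prototype.slice.call(arguments, 1);
    (this.handlers[name] || []).forEach(function (fn) {
      fn.apply(null, args);
    });
  }
};

var received = [];
bus.on('fetched_files', function (files) { received = files; });
bus.fire('fetched_files', ['a.txt', 'b.txt']);
```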
Sounds like you need jQuery Deferred. Here is some untested code that might help point you in the right direction:
$.when(fetchFiles(source)).then(function (file_list) {
    if (!file_list) {
        display('failed to fetch list');
    } else {
        for (file in file_list) {
            $.when(loadFile(file)).then(function (data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
            });
        }
    }
});
I also found another decent post which gives a few use cases for the Deferred object.
If you do not want to use jQuery, what you could use instead are web workers in combination with synchronous requests. Web workers are supported across every major browser with the exception of any Internet Explorer version before 10.
Web Worker browser compatibility
Basically, if you're not entirely certain what a web worker is, think of it as a way for browsers to execute specialized JavaScript on a separate thread without impacting the main thread. (Caveat: on a single-core CPU, both threads will run in an alternating fashion. Luckily, most computers nowadays come equipped with dual-core CPUs.) Usually, web workers are reserved for complex computations or some intense processing task.

Just keep in mind that any code within the web worker CANNOT reference the DOM, nor can it reference any global data structures that have not been passed to it. Essentially, web workers run independently of the main thread. Any code that the worker executes should be kept separate from the rest of your JavaScript code base, within its own JS file. Furthermore, if the web workers need specific data in order to work properly, you need to pass that data to them upon starting them up.
Yet another important thing worth noting is that any JS libraries you need in order to load the files will have to be copied directly into the JavaScript file that the worker will execute. That means these libraries should first be minified (if they haven't been already), then copied and pasted into the top of the file.
Anyway, I decided to write up a basic template to show you how to approach this. Check it out below. Feel free to ask questions/criticize/etc.
On the JS file that you want to keep executing on the main thread, you want something like the following code below in order to invoke the worker.
function startWorker(dataObj)
{
    var message = {},
        worker;
    try
    {
        worker = new Worker('workers/getFileData.js');
    }
    catch (error)
    {
        // Throw error
    }
    message.data = dataObj;
    // all data is communicated to the worker in JSON format
    message = JSON.stringify(message);
    // This is the function that will handle all data returned by the worker
    worker.onmessage = function (e)
    {
        display(JSON.parse(e.data));
    };
    worker.postMessage(message);
}
Then, in a separate file meant for the worker (as you can see in the code above, I named my file getFileData.js), write something like the following...
function fetchFiles(source)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous, as this should not
    // impact the main thread
}

function loadFile(file)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous, as this should not
    // impact the main thread
}
onmessage = function (e)
{
    var response = [],
        data = JSON.parse(e.data),
        file_list = fetchFiles(data.source),
        file, fileData;
    if (!file_list)
    {
        response.push('failed to fetch list');
    }
    else
    {
        for (file in file_list)
        { // iteration, not enumeration
            fileData = loadFile(file);
            if (!fileData)
            {
                response.push('failed to load: ' + file);
            }
            else
            {
                response.push(fileData);
            }
        }
    }
    response = JSON.stringify(response);
    postMessage(response);
    close();
};
PS: Also, I dug up another thread which would better help you understand the pros and cons of using synchronous requests in combination with web workers.
Stack Overflow - Web Workers and Synchronous Requests
async is a popular asynchronous flow control library often used with node.js. I've never personally used it in the browser, but apparently it works there as well.
This example would (theoretically) run your two functions, returning an object of all the filenames and their load status. async.map runs in parallel, while waterfall is a series, passing the results of each step on to the next.
I am assuming here that your two async functions accept callbacks. If they do not, I'd need more info as to how they're intended to be used (do they fire off events on completion? etc.).
async.waterfall([
    function (done) {
        fetchFiles(source, function (list) {
            if (!list) done('failed to fetch file list');
            else done(null, list);
        });
        // alternatively you could simply fetchFiles(source, done) here, and handle
        // the null result in the next function.
    },
    function (file_list, done) {
        var loadHandler = function (file, cb) {
            loadFile(file, function (data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
                // if any of the callbacks to `map` returned an error, it would halt
                // execution and pass that error to the final callback. So we don't pass
                // an error here, but rather a tuple of the file and load result.
                cb(null, [file, !!data]);
            });
        };
        async.map(file_list, loadHandler, done);
    }
], function (err, result) {
    if (err) return display(err);
    // All files loaded! (or failed to load)
    // result would be an array of tuples like [[file, bool file loaded?], ...]
});
waterfall accepts an array of functions and executes them in order, passing the result of each along as the arguments to the next, along with a callback function as the last argument, which you call with either an error, or the resulting data from the function.
You could of course add any number of different async callbacks between or around those two, without having to change the structure of the code at all. waterfall is actually only 1 of 10 different flow control structures, so you have a lot of options (although I almost invariably end up using auto, which allows you to mix parallel and series execution in the same function via a Makefile-like requirements syntax).
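To make the control flow concrete, here is a hand-rolled miniature of `waterfall` (a sketch for illustration, not the async library's actual implementation): each task receives the previous task's results plus a `next` callback, and the first error short-circuits to the final callback.

```javascript
// Runs tasks in series, threading each task's results to the next.
function miniWaterfall(tasks, finalCb) {
  var i = 0;
  function next(err) {
    var args = Array.prototype.slice.call(arguments, 1);
    if (err || i === tasks.length) {
      return finalCb.apply(null, [err || null].concat(args));
    }
    var task = tasks[i++];
    task.apply(null, args.concat([next]));
  }
  next(null);
}

var outcome;
miniWaterfall([
  function (done) { done(null, 2); },            // produce a value
  function (two, done) { done(null, two * 3); }  // transform it
], function (err, six) { outcome = err ? err : six; });
```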
I had this issue with a webapp I'm working on and here's how I solved it (with no libraries).
Step 1: Wrote a very lightweight pubsub implementation. Nothing fancy: Subscribe, Unsubscribe, Publish and Log. Everything (with comments) adds up to 93 lines of JavaScript; 2.7kb before gzip.
Step 2: Decoupled the process you were trying to accomplish by letting the pubsub implementation do the heavy lifting. Here's an example:
// listen for when files have been fetched and set up what to do when the data comes in
pubsub.notification.subscribe(
    "processFetchedResults", // notification to subscribe to
    "fetchedFilesProcesser", // subscriber
    /* what to do when files have been fetched */
    function (params) {
        var file_list = params.notificationParams.file_list;
        for (file in file_list) { // iteration, not enumeration
            var data = loadFile(file);
            if (!data) {
                display('failed to load: ' + file);
            } else {
                display(data);
            }
        }
    }
);

// trigger fetch files
function fetchFiles(source) {
    // ajax call to source
    // on response code 200 publish "processFetchedResults"
    // set publish parameters as ajax call response
    pubsub.notification.publish("processFetchedResults", ajaxResponse, "fetchFilesFunction");
}
Of course this is very verbose in the setup and scarce on the magic behind the scenes.
Here's some technical details:
I'm using setTimeout to handle triggering subscriptions. This way they run in a non-blocking fashion.
The call is effectively decoupled from the processing. You can write a different subscription to the notification "processFetchedResults" and do multiple things once the response comes through (for example logging and processing) while keeping them in very separate, tiny and easily-managed code blocks.
The above code sample doesn't address fallbacks or run proper checks. I'm sure it will require a bit of tooling to get to production standards. Just wanted to show you how possible it is and how library-independent your solution can be.
Cheers!
The Node.js approach is event driven, and I was wondering how you would tackle the problem of when to fire off an event.
Lets say that we have some actions on a web application: create some data, serve pages, receive data etc.
How would you lay out these events? In a threaded system the design is rather "simple": you dedicate threads to specific sets of tasks and go down the road of thread synchronization. While the tasks are in low demand the threads sit idle and do nothing; when they are needed, they run their code. While this road has issues, it's well documented and more or less solved.
I find it hard to wrap my head around the node.js event way of doing things.
I have 10 requests coming in, but I haven't created any data yet, so I can't serve anything; creating data is a long action, and another 5 clients want to send data. What now?
I've written the following untested code, which is basically a pile of callbacks that get registered and should be executed. There will be some kind of pile manager that runs and decides which callback to execute next. All the callbacks created by that callback can be added naturally to the event loop. It should also register itself so the event loop can give control back to it. Other things like static content and whatnot can be bound differently.
How can I register a call back to be the last call in the current event loop state?
Is this a good way to solve this issue?
The most important thing to remember when coming from a threaded environment is that in node you don't wait for an action to finish happening, instead you tell it what to do when it is done. To do this you use a callback, this is a variable which contains a function to execute, or a pointer to a function if you like.
For example:
app.get('/details/:id?', function (req, res) {
    var id = req.params.id,
        publish = function (data) {
            res.send(data);
        };
    service.getDetails(id, publish);
});
You can then invoke the publish method from within your getDetails method once you have created the required data.
getDetail : function (id, callback) {
    var data = makeMyData(id);
    callback(data);
}
This publishes your data back to the response object. Because of the event loop, node will continue to serve requests to this URL without the data generation for the first request blocking them.
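A framework-free, runnable sketch of the same flow (Express and a real database are replaced with plain functions here; all names are illustrative):

```javascript
// Stand-in for the data layer.
function makeMyData(id) {
  return { id: id, detail: 'detail-' + id };
}

var service = {
  getDetails: function (id, callback) {
    // In real code this might be an async DB query; the shape is the same:
    // invoke the callback once the data exists.
    callback(makeMyData(id));
  }
};

var sent;
function handleRequest(id) {
  // `publish` plays the role of res.send in the Express example above.
  var publish = function (data) { sent = data; };
  service.getDetails(id, publish);
}

handleRequest(7);
```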
The chosen answer is the most correct; there is but one minor code change, which is:
Change this function from this:
getDetail : function (id, callback) {
    var data = makeMyData(id);
    callback(data);
}
To this:
getDetail : function (id, callback) {
    var data = makeMyData(id);
    setTimeout(callback, 0, data);
}
Update 2019:
In order to comply with community standards, I've broken off an update to a new answer.
I've used setTimeout because I wanted to defer the callback to the back of the event loop. Another option is process.nextTick(), which defers the callback until the current operation completes, before the event loop continues.
For example:
getDetail : function (id, callback) {
    var data = makeMyData(id);
    process.nextTick(() => callback(data));
}