Some time ago I had a coding question on a take-home test. It was as follows:
Database Throttling
You are given an array userInfo of user data and a function updateDB that takes a single user-data argument. updateDB makes an asynchronous call that parses the user data and inserts the parsed data into a database. The database throttles requests, so to make sure all user data is added to the database we need a function addAllUserData that calls updateDB on each entry in userInfo, never exceeding 7 calls per second, to avoid being throttled.
var userInfo = [{'name':'antonio', 'username':'antonio_pavicevac_ortiz'}], dataBase = [];
function updateDB(singleUserDataArgument, callback){
dataBase.push(callback(singleUserDataArgument));
}
function addAllUserInfo(data) {
var eachUserData;
setInterval(function(){
eachUserData = data.map(data)
}, 7000);
}
As you can see from my attempt, I am having a hard time wrapping my head around this exercise. Could anyone also explain what is meant by throttling in regard to async calls?
Thanks in advance!
// contains times at which requests were made
var callTimes = [];
function doThrottle(){
// get the current time
var time = new Date().getTime();
// filter callTimes to only include requests this second
callTimes = callTimes.filter(function(t){
return t > time-1000;
});
// if 7 calls have already been made this second, do not make another one
if(callTimes.length >= 7) return true;
else{
// safe, do not throttle
callTimes.push(time);
return false;
}
}
// use like this
function makeRequest(){
if(doThrottle()){ /* too many requests, throttle */ }
else{ /* it's safe, make the ajax call*/ }
}
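To tie this back to the original exercise, here is a minimal sketch of the requested addAllUserData built on the same rate-limiting idea. It assumes updateDB accepts a single user-data argument, as the problem statement says (not the two-argument version in the attempt above).
// Minimal sketch: drain userInfo at most 7 entries per second.
function addAllUserData(userInfo) {
    var queue = userInfo.slice(); // copy so the original array is untouched
    var timer = setInterval(function () {
        // send up to 7 entries, then wait for the next 1-second tick
        for (var i = 0; i < 7 && queue.length > 0; i++) {
            updateDB(queue.shift());
        }
        if (queue.length === 0) clearInterval(timer);
    }, 1000);
}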
I have the function below, called every 5 seconds to get data from the server, which is Flask/Python. My question is: how can I adapt the getJSON call to have a callback when the data is successfully retrieved?
I know there's .done, .fail and so on, but I was wondering if I can keep this structure and just add to it below; I don't know the syntax in this particular case. Hope this isn't too confusing; thanks for reading. Here's the code.
// get data from the server every getDataFromServerInterval milliseconds
var getDataFromServerInterval = 5000;
function getData(){
// request timesince table entries from server for user...
$.getJSON($SCRIPT_ROOT + '/_database', {
action: "getUserTable_timesince",
username: $('input[name="username"]').val()
}, function(data) { // do something with the response data
timesince_dataBuffer = data;
});
return false; // prevent get
}
// get data from the server every getDataFromServerInterval milliseconds
setInterval(getData, getDataFromServerInterval);
You could do something like this. Instead of processing the data in getData or using a callback, take advantage of the promise that $.getJSON returns. Have a separate function, called on the interval, that requests the data and then processes it. It neatly separates your code into more manageable functions.
var getDataFromServerInterval = 5000;
function getData() {
return $.getJSON($SCRIPT_ROOT + '/_database', {
action: "getUserTable_timesince",
username: $('input[name="username"]').val()
});
}
function wrangleData() {
getData().then(function (data) {
console.log(data);
});
}
setInterval(wrangleData, getDataFromServerInterval);
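A small side benefit of returning the promise, sketched below under the assumption that the jqXHR object jQuery returns is what comes back from getData: the caller can also attach a failure handler, so a dropped request doesn't fail silently.
function wrangleData() {
    var request = getData();
    request.then(function (data) {
        console.log(data);
    });
    request.fail(function (jqXHR, textStatus) {
        // runs if the request errors out or times out
        console.log('getJSON request failed: ' + textStatus);
    });
}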
I found a partial solution. I realized that I can add a callback at the end of the function that handles the data received, which is somewhat equivalent to .done in a different getJSON call structure. I'm not sure yet if the function gets called before or after the data is received.
// global timesince buffer, holds the data received from the server
var timesince_dataBuffer;
// get data from the server every getDataFromServerInterval milliseconds
var getDataFromServerInterval = 5000;
function getData(){
// request timesince table entries from server for user
$.getJSON($SCRIPT_ROOT + '/_database', {
action: "getUserTable_timesince",
username: $('input[name="username"]').val()
}, function(data) { // do something with the response data
timesince_dataBuffer = data;
updateEntryStruct(); // the hope is to call this when data is received
});
return false; // prevent get
}
// get data from the server every getDataFromServerInterval milliseconds
setInterval(getData, getDataFromServerInterval);
This is the solution I came up with.
var timesince_dataBuffer;
function getData(){
// gets user's entries from sql table
$.getJSON($SCRIPT_ROOT + '/_database', { // $SCRIPT_ROOT, root to the application
action: "getUserTable_timesince",
username: $('input[name="username"]').val()
}, function(data) { // if a response is sent, this function is called
timesince_dataBuffer = data;
updateEntryStruct(); // recreate the structure of each content, buttons etc
});
return false;
}
I get the data, put it in a global variable, and call another function which takes that data and re-creates a structure for each object received. This way I don't recreate the parts of the structure that are static, most importantly the buttons.
Another function is called every 1 second, which updates the dynamic parts.
(formatted time) passed since (event name)
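Purely as an illustration of that 1-second loop (the selectors, data attribute, and element names here are hypothetical, not taken from the actual project), the updater could look roughly like this:
// Hypothetical sketch of the 1-second display updater.
setInterval(function () {
    if (!timesince_dataBuffer) return; // nothing received from the server yet
    $('.timesince-entry').each(function () {
        var start = Number($(this).data('timestamp')); // stored when the entry was built
        var seconds = Math.floor((Date.now() - start) / 1000);
        $(this).find('.elapsed').text(seconds + 's'); // only the text changes, buttons stay intact
    });
}, 1000);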
Anyway, this is actually my final project in CS50. I started by communicating with the server via form submissions, refreshing the page each time the user pressed a button. Then I did it with Ajax, but I was sending requests to the server every 2 seconds, and the buttons were unresponsive because I kept re-creating them on a time interval.
Now the page feels responsive and efficient; it's been a great learning experience.
If anyone wants to check out the code, everything is here.
https://github.com/silvermirai/cs50-final-project
It's basically a bunch of random functionality that came to mind.
The application can be found here as of now.
http://ide502-silvermirai.cs50.io:8080/
For my website I am using an API from which I need to load several variables. Each variable depends on the return value of the previous call (I use the value returned from call 1 to make call 2, etc.).
Example:
Say that I need to make 5 different API calls to gather all of my data, and each is dependent on the return value of the previous call. In my case I am doing it like this: I pass a callback function to the first function that loads the data. That function makes the first API call. When that call finishes, it passes the callback function to the next function, which makes the second API call, and so on. When the last API call finishes, the callback function gets called, and then I know that all the data has been loaded. In code it would look something like this (I am using the Trello API in my application, so I will use it in the example below):
function loadData(cb){
//Make the first API call
Trello.get('/member/me/boards', function(boards){
myBoards = boards;
for(var i = 0; i < boards.length; i++){
//Make the second API call
Trello.get('/boards/' + boards[i].id + '/lists', function(lists){
boards[i].lists = lists;
//Then make the third and fourth and so on
.....
//When all calls are made call the callback function
cb();
});
}
});
}
As you can see, the callback function is passed a long way down the call stack. I was wondering if there is a better way to load the data and to store it (as of now I just store everything in a large array). And what are some best practices for loading large amounts of data from an API?
P.S. In my original code each of the API calls are in separate functions, but I simplified it here to reduce the amount of code in the example.
I don't know if this is an option for you, but using TypeScript makes solving this kind of JavaScript problem much simpler:
async function loadData() {
const boards = await Trello.get('/member/me/boards');
return Promise.all(boards.map(async (board) => { // wait for every board's sub-requests
const lists = await Trello.get('/boards/' + board.id + '/lists');
const something = await Trello.get('/...');
const somethingElse = await Trello.get('/...');
// ...more calls
return {
...board,
lists: lists,
something: something,
somethingElse: somethingElse
// ... more attributes
};
}));
}
loadData().then((data) => console.log(data));
Without fully understanding your problem this may not be a valid solution, but a quick glance at the Trello API docs shows a batch call you could use to avoid looping at each level. Batching these requests would mean far fewer API calls at each level and would be considered a best practice:
function loadData(cb){
//Make the first API call
Trello.get('/member/me/boards', function(boards){
myBoards = boards;
var boardAPIs = [];
var boardResponses = [];
for(var i = 0; i < boards.length; i++){
boardAPIs.push('/boards/' + boards[i].id + '/lists');
//max of 10 at a time per documentation
if (boardAPIs.length == 10 || i >= (boards.length - 1)) {
//Make the second level API call
Trello.get('/batch/?urls=' + boardAPIs.join(','), function(batchResponse){
// collect response information on all boards, then continue with third request
boardResponses.push(...);
if (i >= (boards.length - 1)) {
// all board requests have been made, continue execution at third level
// if this were the last level of calls, you could call cb() here
for(var j = 0; j < boardResponses.length; j++){
// loop inside responses to get individual board responses, build up next set of batch requests
}
}
});
boardAPIs= [];
}
}
});
}
One thing to note here: the docs mention that you can only batch 10 requests at a time, so I added some code in there to check for that.
This post provides more information on how to consume the batch service:
This means you get only a single response back, and it looks a little different from a normal response. The response is an array of objects, but not of the normal response objects you might expect. Instead, it's an object with a single property, with a name set to the HTTP response code of the request.
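Going only by the response shape described in that quote (so treat this as a sketch rather than something verified against the live API), unpacking a batch response could look like this:
// Sketch: collect the successful sub-responses from a Trello batch call,
// assuming each element is keyed by the HTTP status code of its sub-request.
function unpackBatch(batchResponse) {
    var results = [];
    batchResponse.forEach(function (item) {
        if (item['200']) {
            results.push(item['200']);                // successful sub-request
        } else {
            console.log('sub-request failed:', item); // e.g. an object keyed "404"
        }
    });
    return results;
}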
You could take a depth-first approach, so that the first data arrives at the client quickly:
function loadData(showChunk){
//Make the first API call
Trello.get('/member/me/boards', function(boards){
myBoards = boards;
(function getboard(i){
//Make the second API call
Trello.get('/boards/' + boards[i].id + '/lists', function(lists){
boards[i].lists = lists;
//Then make the third and fourth and so on
.....
//When all calls are made for the first board call the callback function, and also continue with the next board
showChunk();
if(i+1<boards.length) setTimeout(getboard, 1, i+1);
});
})(0);
});
}
I have a problem with a PhantomJS script. The script gets a JSON encoded string from a web page and does other things with it. The script:
var address = address;
var amount = 0;
function changeAmount()
{
var page=require('webpage').create();
page.open (address, function(){
//parse json, set amount to something (usually 4)
amount = 4;
});
}
changeAmount();
console.log(amount); //prints 0
//Do stuff with amount
phantom.exit(); //amount not changed yet.
How can I check that the changeAmount function has finished before going forward? A timeout is not possible, since I don't know how long changeAmount takes to complete.
page.open() is an inherently asynchronous function. The only reliable way to do this is to use callbacks in the PhantomJS script:
var address = address;
function changeAmount(callback)
{
var page = require('webpage').create();
page.open (address, function(){
//parse json, set amount to something (usually 4)
var amount = 4;
callback(amount);
});
}
You can even go as far as passing amount into that callback to remove the global variable.
After that, you will need to write your script using that callback pattern.
changeAmount(function(amount){
console.log(amount);
//Do stuff with amount
phantom.exit();
});
Furthermore, you probably shouldn't create a new page every time you call changeAmount() (if you do this repeatedly). You can reuse the same page. If you think that creating a new page gives you a fresh environment to work in, then you're mistaken. It is just like a new tab. It will use the same session as all the other pages that you have created.
If you do this often, this will lead to a memory leak, because you're not closing the previously opened pages.
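A minimal sketch of that reuse, keeping the callback pattern from above: create the page once and open the same URL on each call, so nothing has to be created or closed between requests.
// Create the page once and reuse it for every call.
var page = require('webpage').create();

function changeAmount(callback) {
    page.open(address, function (status) {
        //parse json, set amount to something (usually 4)
        var amount = 4;
        callback(amount);
    });
}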
You can use a callback, like so:
function changeAmount(callback) {
var page=require('webpage').create();
page.open (address, function () {
//parse json, set amount to something (usually 4)
amount = 4;
callback();
});
}
changeAmount(function () {
// This function runs when callback() (above) is reached
console.log(amount);
//Do stuff with amount
phantom.exit();
});
And if you're not using the amount variable elsewhere, you could eliminate it by passing it as an argument to the callback:
changeAmount(function (amount) {
and then
callback(amount); // or callback(4);
Consider the following setup, regarding asynchronous functions:
Client.prototype.auth = function(callback) {
//authenticate the client
//run callback
};
Client.prototype.get = function() {
this.auth(function(){
//run the rest of this `get` function
});
};
The get function is called numerous times through an event listener, and this event fires only once
The very first get should start the authentication, which stays valid for all subsequent calls
The authentication function takes a couple of seconds to complete
Subsequent get calls do not need to re-authenticate, because the authentication from the first call is still valid
Every subsequent get call should only run after the client is authenticated; if it is not yet authenticated, it should wait for the authentication to finish
The point is to prevent 10 get calls from firing 10 auth calls. When the 1st auth call is in flight, the other 9 get calls should wait for it to finish and then carry on with the rest of the get function (now authenticated)
I can't get my head around this. I tried to keep this example as simple as possible.
I think the solution for you is caching. Keep flags such as isAuthenticated and isAuthenticationProcess; when you need to call auth, first check whether the user is already authenticated and only call it if not. Inside auth, check whether an authentication is already in progress; if not, start it, and when it completes set the flags and call all registered callbacks. A global list is not the cleanest way to implement the observer pattern, so you can do it another way.
Here is my idea:
var isAuthenticated = false;
var isAuthenticationProcess = false;
var authenticationCallbacks = [];
Client.prototype.auth = function(callback) {
authenticationCallbacks.push(callback);
if (isAuthenticationProcess) {
return;
}
isAuthenticationProcess = true;
//authenticate; when the asynchronous authentication completes:
isAuthenticated = true;
isAuthenticationProcess = false;
authenticationCallbacks.forEach(function(call) {
call();
});
authenticationCallbacks = [];
};
Client.prototype.get = function() {
var runRest = function(){
//run the rest of this `get` function
};
if (isAuthenticated) {
runRest();
} else {
this.auth(runRest);
}
};
If you can use Async.js, look at this answer.
I am developing a web application that gets user updates from a web service (which is in another domain). I want to get the updates every 10 seconds.
To call the service the first time, I dynamically insert a script into the page. The service is JSONP. Then I would like to define a trigger that inserts a script every 10 seconds to get the updates. Is this correct? Can I do that without affecting the user experience on the website? I mean the site performance... it would be great if I could make the call asynchronously and update the status when I have the results.
Is there any better solution for accessing the remote service? Is there an efficient way of dynamically reusing the same script with a trigger? I am pretty new to JavaScript. Can you give me a short sample of how to define a trigger that calls a remote web service, or a better solution if there is one?
I suggest that in your AJAX callback, when you get the result, you schedule a timer (window.setTimeout(ex, t)) so that your updating script is called again.
The reason to set the timer in the AJAX callback is that you don't know exactly how long the AJAX call will take to complete. This way, you ensure a smooth 10-second delay between successive updates.
As for performance, you will have to check that yourself. It depends on the amount of data and the kind of processing you do with it (and on the rest of your page), but you can try it and check processor usage.
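A minimal sketch of that pattern, using jQuery's JSONP support for brevity (the URL and the data handling are placeholders, not part of the original question):
function getUpdates() {
    // "callback=?" tells jQuery to perform a JSONP request across domains
    $.getJSON('https://example.com/updates?callback=?', function (data) {
        // ...update the page with the new data here...
        window.setTimeout(getUpdates, 10000); // schedule the next poll 10 s after this one completed
    });
}
getUpdates();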
The following will dynamically create the script tags, and delete them (and the global it requires) after being finished with a given call.
You could also use CORS to allow requests besides GET ones, though as you may be aware, that is not supported in older browsers.
To avoid race conditions or performance problems on a slow network, you could let the JSONP callback recursively call the function, so a new call is only made after the previous callback has returned, with an optional setTimeout call to ensure there is at least a minimum delay.
The following uses Wikipedia's API to grab a specific page revision and its user.
<script>
var JSONP = function(global){
// (C) WebReflection Essential - Mit Style ( http://webreflection.blogspot.com/2011/02/all-you-need-for-jsonp.html )
// 202 bytes minified + gzipped via Google Closure Compiler
function JSONP(uri, callback) {
function JSONPResponse() {
try { delete global[src] } catch(e) { global[src] = null }
documentElement.removeChild(script);
callback.apply(this, arguments);
}
var
src = prefix + id++,
script = document.createElement("script")
;
global[src] = JSONPResponse;
documentElement.insertBefore(
script,
documentElement.lastChild
).src = uri + "=" + src;
}
var
id = 0,
prefix = "__JSONP__",
document = global.document,
documentElement = document.documentElement
;
return JSONP;
}(this);
// Be sure to include the callback parameter at the end
function startAPI (start) {
start = start || new Date();
var url = 'http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main%20Page&rvprop=timestamp|user|comment|content&format=json&callback';
var minimum = 10000;
function execute (str) {
alert(str);
}
JSONP(url, function (obj) {
for (var pageNo in obj.query.pages) {
var page = obj.query.pages[pageNo];
var str = 'The user ' + page.revisions[0]['user'] + ' left the page with this code ' + page.revisions[0]['*'];
execute(str);
var elapsed = (new Date().getTime()) - start;
setTimeout(startAPI, (elapsed < minimum) ? (minimum - elapsed) : 0);
break;
}
});
}
startAPI();
</script>
I would make use of JavaScript's setInterval method
function getUpdate () {
//AJAX goes here
}
var myInterval = setInterval(getUpdate,10000);
This way you'll need to inject the script tag only once.