Greasemonkey JavaScript: speed up XMLHttpRequest

I want to know the methods to speed up an XHR (XMLHttpRequest) GET.
I have an application that keeps sending a GM_xmlhttpRequest every X seconds.
The essential idea is: the faster I get a response that satisfies a condition, the faster I can take an action.
What I've done so far is to set a timeout, like 2000ms, and track the number of timeouts and successes.
Depending on these numbers I increase or decrease the X value.
Example:
sent 10 requests, timeout_count = 8, success_count = 2.
if (timeout_count == 8) xSeconds = xSeconds + 100;
if (success_count == 10) xSeconds = xSeconds - 10;
Something like this.
Of course, this isn't real performance tuning; it's only a way I found to adapt the speed of the calls to the internet-speed factor.
What I need to know is...
Are there other methods to increase the speed of the requests in a "keep-sending" XHR application?
Methods like setting headers with specific values, or maybe some API, or maybe some programming approach? Something that makes the requests lighter...
Thank you very much.
EDIT: As the user Bergi requested, I'm adding this explanation:
somesite.com is a feed site. The idea is that if I run a regexp and find a certain string in the response, I send myself an e-mail. Something like that.
My code is something like this:
var count_success = 0;
var count_timeout = 0;
var requestTime = 1000;           // current polling interval in ms

// schedules the next poll and fires the request
var httpRequest = function() {
    setTimeout(function() {
        httpRequest();
        httpRequest_get();
    }, requestTime);
};

var httpRequest_get = function() {
    GM_xmlhttpRequest({
        method: 'GET',
        url: "http://www.somesite.com/",
        headers: {
            "User-Agent": "Mozilla/5.0",
            "Accept": "text/xml"
        },
        timeout: 2000,
        ontimeout: function() {
            // back off: after 5 timeouts in a row, poll 100ms more slowly
            count_timeout++;
            if (count_timeout == 5) {
                requestTime = requestTime + 100;
                count_timeout = 0;
            }
        },
        onload: function(response) {
            // do something with the response
            // speed up: after 5 successes in a row, poll 50ms faster
            count_success++;
            if (count_success == 5) {
                requestTime = requestTime - 50;
                count_success = 0;
            }
        }
    });
};

httpRequest(); // start the polling loop
Answer to the comment made by the user Bergi:
It's not under my control. It's a feed site, as I mentioned. I want to monitor the offers shown by this feed; each offer has a price, and when an item under a certain price is matched, I send myself an e-mail. Then, seeing this e-mail pop up on my phone, I can decide whether or not to buy.
There is no documentation for their API; all I have is the code I can study on their page. I started by making requests to the feed site itself, just www.somesite.com, but the requests were taking too long: the length of the string was greater than 80k. Then I monitored their feed to see what their site does to update it, found the XHR it fires every X time, and replicated that request; the string went down to 10k, or sometimes less.
Well, you get the idea: the faster I do it, the better. As a matter of fact, I want to get it under 300ms; I have already achieved an average of 800ms. You may be asking yourself: why so fast? What's the difference between 800ms and 300ms? The answer is that there are other people scripting the same thing, and their scripts are faster than mine. I'm looking for some way to reduce the size of the request, maybe by changing the headers, but I'm not having much success with it. The internet factor is also important, I know that for sure, but I can't afford to improve it at the moment. I just hope there are ways of optimizing this that don't depend only on the connection-speed factor. Thanks.
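One thing worth experimenting with on the headers front (a sketch of my own, not something from the question: it assumes the feed server honors conditional requests, which you would need to verify against its actual responses) is a conditional GET. If the server sends a Last-Modified header, echoing it back lets the server answer an unchanged feed with an empty 304 instead of the full 10k body:

var lastModified = null;

var httpRequest_get = function() {
    GM_xmlhttpRequest({
        method: 'GET',
        url: "http://www.somesite.com/",
        headers: lastModified
            ? { "Accept": "text/xml", "If-Modified-Since": lastModified }
            : { "Accept": "text/xml" },
        timeout: 2000,
        onload: function(response) {
            if (response.status == 304) return; // feed unchanged, nothing to scan
            // remember the server's timestamp for the next poll
            var m = /Last-Modified:\s*(.+)/i.exec(response.responseHeaders);
            if (m) lastModified = m[1];
            // ... run the regexp over response.responseText as before
        }
    });
};

If the server ignores If-Modified-Since, this costs nothing; if it honors it, most polls become near-empty responses.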

Related

Fetching data from multiple pages of an API

I'm fetching data from a PS4 games API, but it's split into 400+ pages. I wanted to get the data from all the pages, but the solution I came up with did not work very well. It gives me the error 'JSON Value of type NSNull cannot be converted to a valid URL'. Also, I don't think the for loop works well either: it shows the results in my list while it's still going through all the pages.
Additionally, this API is dynamic, because new games keep getting released. So how could I get data up to the latest page without manually changing my last page number every time? I looked at some questions here but couldn't fit them into my code.
My code is rather long, so I'm just going to post the part that matters:
componentDidMount() {
    var i;
    for (i = 0; i < 400; i++) {
        fetch(`https://api.rawg.io/api/games?page=${i + 1}&platforms=18`, {
            "method": "GET",
            "headers": {
                "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
                "x-rapidapi-key": "495a18eab9msh50938d62f12fc40p1a3b83jsnac8ffeb4469f"
            }
        })
            .then(res => res.json())
            .then(json => {
                const { results: games } = json;
                // setting the data in the games state
                this.setState({ games });
            });
    }
}
The API also has a field that gives me the link of the next page; I think there is a way to use 'next' and fetch data from that URL.
If anyone could help, that would be AWESOME. Thank you in advance.
It's my first answer on this forum, so I hope it's helpful.
As I see it, you have two options. Each response includes two fields, count and next: either you make a for loop with count/20 as the limit (20 is the number of items given per response), or you make a while loop that runs until the next field comes back null (currently at page 249).
What is currently happening is that you are making 400 requests, and as each one comes in it overwrites the component state with the response it received. It does not care about what is already there or about any of the other requests.
An approach you could try instead is, as the responses come in, to append the results to a running list and update the state with that list, as in the sketch below.
Going forward, and for your other question about handling new releases: instead of running 400 queries every time the application is used, look into caching the results. When the app loads, you can check whether a cache exists, and load from it or query if it does not. The rawg.io /games endpoint has a parameter for ordering by release date. When the application loads in future, you can loop until you reach a game that is already in the cache, at which point you terminate.
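To make that concrete, here is a minimal sketch (mine, not part of the original answer) of both ideas at once: it follows the next link instead of hard-coding 400 pages, and it appends each page's results rather than overwriting state. It assumes the response shape described in the question, i.e. results and next fields:

componentDidMount() {
    const fetchPage = url => {
        fetch(url, {
            headers: {
                "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
                "x-rapidapi-key": "YOUR_KEY_HERE" // substitute your own key
            }
        })
            .then(res => res.json())
            .then(json => {
                // append this page's results instead of replacing them
                this.setState(prev => ({
                    games: [...(prev.games || []), ...json.results]
                }));
                // keep going until the API says there is no next page
                if (json.next) fetchPage(json.next);
            });
    };
    fetchPage("https://api.rawg.io/api/games?platforms=18");
}

Fetching pages sequentially like this is slower than firing 400 requests at once, but it removes both the hard-coded page count and the state-overwrite problem.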

Elegant way of harnessing Facebook ads data without an API

I was helping one of my relatives with a Facebook campaign for their store. The campaign was a success: we gathered more than 1000 new likes and a lot of queries. They were happy, but I really wanted to do more with that data, like tagging the people who liked the page (if their settings allowed it) or sending a message on Messenger when new items arrive. In short, keeping track of everything that was happening. The idea is to harness the data so that the maximum can be achieved next time on similar campaigns.
I wanted something simple, so that the guys at the store can do it themselves without any fiddling with an API. After some trial and error I came up with this JS code, which can be pasted into the console after opening the window that appears when you click the link beside the like button.
/* A script to get all the people who liked the page after a Facebook
campaign. A successful campaign will get thousands of likes, so it is
impossible to load all the names in one go; the list loads
progressively with each scroll. The code therefore adds a last element
to the JSON, which you put in place of the starting value of "i" on
subsequent runs, after pressing the "see more" button. On my fairly
powerful laptop and decent internet I was not able to get more than
350 people without serious lag.
The starting value of i was found by trial and error, as the data
attributes before that hold something else (not required), not the
names. I hope it will be more or less similar for everyone.
This code is to be pasted into the console once the window with all
the likes is opened. */
var arrayName = document.querySelectorAll('[data-gt]');
var PersonObject = {};
try {
    // the first ~55 [data-gt] elements hold other data, not names
    for (var i = 55; i < arrayName.length; i++) {
        var element = arrayName[i];
        console.log(element);
        var name = element.innerHTML;
        PersonObject["name" + i] = name;
    }
}
catch (error) {
    console.log("error occurred at " + i);
}
finally {
    // record where we stopped so the next run can resume from here
    PersonObject["lastElement"] = i;
    var NamesJson = JSON.stringify(PersonObject);
    console.log(NamesJson);
}
I tried to explain the gist of the code in the comments.
Now my real question: this all seems hacky and patched together, not elegant. Isn't there a way for business owners to actually harness this data in a more systematic way, without the need for any APIs or programming knowledge?

Preventing client side abuse/cheating of repeating Ajax call that rewards users

I am working on a coin program to reward the members for being on my site. The program I have makes two random numbers and compares them; if they are the same, you get a coin. The problem is that someone could go into the console and get "free" coins. They could also cheat by opening more tabs, or by writing a program to generate more coins, which is what I am trying to stop. I am thinking about moving this from JS over to PHP to stop the cheating (for the most part), but I don't know how to do it. The code in question is:
$.ajax({
    type: 'post',
    url: '/version2.0/coin/coins.php',
    data: { Cid: cs, mode: 'updateCoins' },
    success: function (msg) {
        window.msg = msg;
    }
});
The code used in the console is the same, with a loop around it. In the code above, "cs" is the id of the member, so replacing it with their own id would let someone get all the coins they want.
Should I just have an include with the variable above it? But then how would I display the success message, which holds the current number of coins? Also, this code is in a setInterval function that repeats every 15 milliseconds.
There are multiple ways you could do this, but perhaps the simplest would be to handle it in your server-side code: when a request comes in, check the time of the last coin update. If there isn't one, run your coin code and save the time of this operation in the user's session. If there is a stored time, ensure that the required interval has passed. If it has, continue to the coin update. If it hasn't, simply respond with a 403 or another failure code.
In pseudo code:
if (!$userSession['lastCoinTime'] || $currentTime - $userSession['lastCoinTime'] >= $delay) {
    // coin stuff
    $userSession['lastCoinTime'] = // new time
} else {
    // don't give them a chance at a coin; respond however you want
}
However, since you're talking about doing this check every 15ms, I would use websockets so that the connection to the server stays open. Either way, the logic can be comparable.
Just in case there's any uncertainty about this: definitely do ALL of the coin logic on the server. You can never trust the user for valid data coming in. The most you can trust, depending on how your authentication is set up, is some kind of secret code only they would have that lets you know who they are, which is a technique used in place of persistent sessions. Unless you're doing that, rely on the session to know who the user is; definitely don't let them tell you that either!
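For the websocket route, a minimal client-side sketch (my own illustration; the endpoint URL and message shape are hypothetical) would look something like this, with the server deciding when a coin is awarded and pushing the new total:

// hypothetical endpoint; the server owns the award logic and the timer
var socket = new WebSocket('wss://example.com/coins');

socket.onmessage = function (event) {
    var msg = JSON.parse(event.data); // e.g. { coins: 42 }; the shape is up to you
    window.msg = msg;                 // mirrors the original success handler
};

// note there is no client-side loop at all: the client can't ask for
// coins faster, because it never asks for them in the first place

This removes the cheating surface entirely: there is no request the console user can replay.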

Check/Log how much bandwidth PhantomJS/CasperJS used

Is it possible to check/log how much data has been transferred during each run of PhantomJS/CasperJS?
Each instance of Phantom/Casper has an instance_id assigned to it (by the PHP function that spun up the instance). After the run has finished, the amount of data transferred, together with the instance_id, has to make its way into a MySQL database, possibly via the PHP function that spawned the instance. This way the bandwidth utilization of individual PhantomJS runs can be logged.
There can be many phantom/casper instances running, each lasting a minute or two.
The easiest and most accurate approach when trying to capture data is to get the collector and the emitter as close together as possible. In this case it would be ideal if PhantomJS could capture the data you need and send it back to your PHP function, which would associate it with the instance_id and handle the database interaction. It turns out it can (at least partially).
Here is one approach:
var page = require('webpage').create();
var bytesReceived = 0;

// sum the body size of every resource the page loads
page.onResourceReceived = function (res) {
    if (res.bodySize) {
        bytesReceived += res.bodySize;
    }
};

page.open("http://www.google.com", function (status) {
    console.log(bytesReceived); // stdout, where the spawning PHP can read it
    phantom.exit();
});
This captures the size of all resources retrieved, adds them up, and writes the result to standard output, where your PHP code can pick it up. It does not include the size of headers or any POST activity. Depending upon your application, this might be enough. If not, then hopefully this gives you a good jumping-off point.
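If the header overhead matters for your accounting, a rough estimate can be bolted on. This is a sketch of mine, not part of the original answer: it approximates header size from the name/value pairs PhantomJS exposes on the resource object, so it slightly undercounts the status line and separators:

page.onResourceReceived = function (res) {
    if (res.bodySize) {
        bytesReceived += res.bodySize;
    }
    // approximate header bytes as "Name: value\r\n" per header;
    // count only at stage 'end' so headers aren't tallied twice
    if (res.stage === 'end' && res.headers) {
        res.headers.forEach(function (header) {
            bytesReceived += header.name.length + header.value.length + 4;
        });
    }
};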

setInterval alternative

In my app I poll the web server for messages every second and display them in the frontend.
I use setInterval to achieve this. However, as long as the user stays on that page, the client keeps polling the server with requests even if there is no data. The server does give an indication, by setting a variable, when no more messages are being generated.
I thought of using this variable to clearInterval and stop the timer, but that didn't work. What else can I use in this situation?
I am using jQuery and Django. Here is my code:
jquery:
var refresh = setInterval(function () {
    var toLoad = '/myMonitor' + ' #content';
    $('#content').load(toLoad).show();
}, 1000); // refresh every 1000 milliseconds
html:
<div id="content"></div>
I can access the Django variable for completion in the HTML with each refresh. How can I set clearInterval, if at all?
Thanks
Updated 03/16/2010
I must be doing something wrong but cannot figure out what. Here is my script with clearInterval, and it does not work.
var timer = null;
$(function () {
    if ("{{status}}" == "False") {
        clearInterval(timer);
    }
    else {
        timer = setInterval(function () {
            var toLoad = '/myMonitor' + ' #content';
            $('#content').load(toLoad).show();
        }, 1000); // refresh every 1000 milliseconds
    }
});
status is a boolean set in "views.py" (Django).
Thanks a bunch.
A couple of people have already answered with specific solutions to your problem, so I thought I would provide a bit of background.
In short, you want the server to push data to the browser to avoid extensive client-side polling. There isn't a good cross-browser way to support server push, so a common solution that requires much less polling is the Comet (another cleaning product, like Ajax) long-poll technique.
With Comet, the browser makes a request, and the server keeps the connection open without responding until new data is available. When the server does have new data, it sends it over the open connection and the browser receives it right away. If the connection times out, the browser opens a new one. This lets the server send data to the client as soon as it becomes available. As others have indicated, this approach requires special configuration of your web server: you need a script on the server that checks for data at an interval and responds to the client when it exists.
Something to keep in mind with this approach is that most web servers are built to take a request from a client and respond as quickly as possible; they're not intended to hold connections open for a long period of time. With Comet you'll have far more open connections than normal, probably consuming more resources than you expect.
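As an illustration only (this sketch is mine, and it assumes a hypothetical /messages endpoint that holds the request open until data arrives), the client side of a Comet long-poll loop can be as small as this:

function longPoll() {
    $.ajax({
        url: '/messages',   // hypothetical endpoint that blocks until new data
        timeout: 30000,     // allow the server to hold the connection open
        success: function (data) {
            $('#content').html(data).show();
        },
        complete: function () {
            longPoll();     // reopen the connection whether we got data or timed out
        }
    });
}
longPoll();

The browser always has exactly one request in flight, instead of one per second.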
Your clearInterval check only runs once, when the document ready event fires.
If the code you gave is exactly what's in the browser, then you're comparing the string "{{status}}" to the string "False". I'd rather watch paint dry than wait for that to evaluate as true.
Also, what if your requests take longer than 1 second to complete? You'll flood your server with requests.
function update() {
    $('#content').show().load('/myMonitor' + ' #content', function (response, status) {
        var done = false; // whatever you're trying to check
        if (!done) {
            setTimeout(update, 1000);
        }
    });
}

$(document).ready(function () {
    update();
});
This is closer than where you were, but you still need to work out how you're going to decide when to stop polling.
