I have a text field in which I respond to typing by checking the database for the number of matches, but I don't want to pummel the database with too many queries for no reason if the user types several characters quickly.
Searching the web, the advice seems to be to wrap it all in setTimeout(), but I apparently don't understand how to use it properly. Here is my code at the moment:
$(".qs-text").keyup(function() {
    if ($(this).val().length > 2) {
        setTimeout(function() {
            $.get("ajax_request.php?req=Quicksearch&qs=" + $(".qs-text").val(), function(data) {
                $('.qs-hits').text(data);
            });
        }, 500);
    } else {
        $('.qs-hits').text('-');
    }
});
It does wait 500ms, and at the end of the timeout period, it does use the final state of the field in the request - good.
However then it sends multiple identical requests (one for every character I typed) instead of just one. That defeats the purpose of having the timeout in the first place. I can almost see why the code would do that, since the keyup event fires every time, but I have no idea how to solve it. There are other questions whose titles sound like what I'm asking, but every one I've read is different enough that I can't quite apply any of them to my case.
You need to cancel the pending timeout whenever you create a new one.
var timeout = null;
$(".qs-text").keyup(function() {
    if (timeout != null) clearTimeout(timeout);
    if ($(this).val().length > 2) {
        timeout = setTimeout(function() {
            $.get("ajax_request.php?req=Quicksearch&qs=" + $(".qs-text").val(), function(data) {
                $('.qs-hits').text(data);
            });
        }, 500);
    } else {
        $('.qs-hits').text('-');
    }
});
We usually store the timeout ID in a variable and clear it before scheduling a new one. For a large web application, however, I'd suggest using WebSockets for these kinds of repeated calls to get a real-time experience.
var timer;
$(".qs-text").keyup(function() {
    if ($(this).val().length > 2) {
        clearTimeout(timer);
        timer = setTimeout(function() {
            $.get("ajax_request.php?req=Quicksearch&qs=" + $(".qs-text").val(), function(data) {
                $('.qs-hits').text(data);
            });
        }, 500);
    } else {
        $('.qs-hits').text('-');
    }
});
I would recommend using something like lodash for debouncing. Note that the debounced function must be created once, up front, and then passed as the event handler itself; creating it inside the handler on every keystroke would never actually invoke it:
$(".qs-text").keyup(_.debounce(function() {
    $.get("ajax_request.php?req=Quicksearch&qs=" + $(".qs-text").val(), function(data) {
        $('.qs-hits').text(data);
    });
}, 500));
For more info, see https://lodash.com/docs/4.17.15#debounce
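If you'd rather not pull in lodash just for this, a minimal debounce helper is only a few lines (a sketch; the name debounce is our own, not part of jQuery):

```javascript
// Returns a wrapper that postpones calls to fn until `wait` ms have
// passed with no new invocations; only the last call in a burst runs.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var ctx = this, args = arguments;
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(ctx, args);
        }, wait);
    };
}
```

It is used the same way as _.debounce, e.g. $(".qs-text").keyup(debounce(function() { /* $.get(...) */ }, 500));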
I have the following JSON data in an API:
[
    {
        "notification": "'James' has created a new user. This requires approval",
        "read": null
    }
]
I have the following jQuery:
$.ajax({
    dataType: "json",
    url: "/api/notifications",
    success: function(data) {
        counterText = 0;
        $.each(data, function(index, value) {
            if (value.read == 0 || value.read == null) {
                var theCounter = parseInt($('.counter').text());
                counterText += theCounter += 1;
            }
        });
        $('.counter').text(counterText);
    }
});
The problem is that this only works when someone refreshes the browser. I am using Socket.io in order to do real-time notifications, however, instead of each notification coming in, I just ideally need to update this code each time a socket comes in. For example, an event called "Ping"
socket.on("test-channel:App\\Events\\Ping", function(message) {
    $.toast({
        heading: 'Information',
        text: message.data.message,
        icon: 'info',
        loader: false, // Change it to false to disable loader
        loaderBg: '#9EC600' // To change the background
    });
});
There are, however, a lot of events, so I don't really want to have to subscribe to each one individually. Ideally, I would like instant polling on this file so that notifications can be updated immediately without the need to refresh the browser.
To paraphrase, it sounds like you'd like to add a polling mechanism to your AJAX approach. (Sockets can have advantages such as reduced latency, but they can be slightly more complex, so I understand why you may not want to try them at this time.) All your current AJAX code lacks is really just a mechanism to make continual requests on an interval. You can do that with:
var polling = true;
var period = 60 * 1000; // every 60 seconds
var interval = polling && setInterval(function() {
    if (polling) {
        $.ajax({
            ... // existing ajax call here
        });
    } else {
        if (interval) {
            clearInterval(interval);
        }
    }
}, period);

// Later, if you want to stop polling, you can:
polling = false;
// ...or even just:
if (interval) {
    clearInterval(interval);
}
I have an application in AngularJS and I want to implement an autocomplete input for a certain list of programs.
My problem is that I have lots of programs in my database and I don't want to load them all when the page loads. Instead I load pages and have a button that loads the next page when clicked.
scope.loadPrograms = function() {
    Programs.getPage($scope.page)
        .success(function(data) {
            $scope.allprograms.push.apply($scope.allprograms, data.campaigns);
            $scope.page++;
            if (data.pagination.pages < $scope.page) {
                $scope.page = -1;
            }
        })
        .error(function(data) {
            alert('There has been an error. Please try again later!');
        });
}
and the button
<md-button ng-click="loadPrograms()" ng-show="page != -1">Load more data</md-button>
So this approach makes me do a request every time I type or delete a letter in the autocomplete input, given that I don't have all the programs loaded on $scope. Is it OK to make so many requests? Is there another approach?
Thanks.
EDIT
OK, so now I've put a delay on the autocomplete, but the method doesn't work anymore.
// Search for programs
scope.querySearch = function(query) {
    if (typeof pauseMonitor !== 'undefined') {
        $timeout.cancel(pauseMonitor);
    }
    pauseMonitor = $timeout(function() {
        var results = query ? scope.allprograms.filter(createFilterFor(query)) : [];
        return results;
    }, 250);
};

// Create filter function for a query string
function createFilterFor(query) {
    var lowercaseQuery = angular.lowercase(query);
    return function filterFn(programs) {
        return (programs.name.toLowerCase().indexOf(lowercaseQuery) != -1);
    };
};
It enters the createFilterFor method and finds a good match, but doesn't show it anymore.
If you need to retrieve a set of words for auto-completion from a large database, one simple trick is to use $timeout with a time threshold that detects pauses in the user's typing.
The idea is to prevent a request from being generated for every keystroke. You look for a pause in the user's typing pattern and make your request there for the letters typed so far. Here is a simple implementation of this idea in your key handler:
function processInput(input) {
    if (typeof pauseMonitor !== 'undefined') {
        $timeout.cancel(pauseMonitor);
    }
    pauseMonitor = $timeout(function() {
        // make your request here
    }, 250);
}
Take a look at ng-model-options; you can set a debounce time and some other interesting things.
ng-model-options="{ debounce: '1000' }"
The line above means the input value is only written to the model once the user has stopped typing for 1 second.
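Applied to the autocomplete above, it could look something like this (a sketch; the model name searchText and the 250ms value are just placeholders):

```html
<input ng-model="searchText"
       ng-model-options="{ debounce: 250 }"
       ng-change="querySearch(searchText)">
```

Because ng-change fires only after the debounced model update, querySearch runs once per typing pause instead of once per keystroke, with no manual $timeout bookkeeping.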
I am streaming in data from a mongodb collection, doing some calculations with the data at hand and then storing it back in mongo. The process runs fine through the first 50k or so records and then after that it gets bogged down. The first 50k records it seems to store 2-3k records per second, then closer to 2 per second.
var stream = Schema.find().stream();

stream.on('data', function (doc) {
    pauseStream(this);
    total++;
    OtherSchema.find().exec(function(err, others) {
        doc.total = others.data + doc.data;
        doc.save(function(err) {
            written++;
        });
    });
});

function pauseStream(stream) {
    if ((total > (written + 50)) && !timedout) {
        timedout = true;
        stream.pause();
        setTimeout(function() {
            timedout = false;
            pauseStream(stream);
        }, 100);
    } else {
        stream.resume();
    }
}
I am trying to limit the flow to only 50 outstanding updates at a time; I have changed this number up and down with no change in where it all gets hung up. What am I doing wrong? It seems like some sort of memory leak. When I use memwatch, the stats at 50k look like:
{ num_full_gc: 2368,
  num_inc_gc: 55680,
  heap_compactions: 2368,
  usage_trend: 4177.7,
  estimated_base: 89033445,
  current_base: 121087440,
  min: 15957344,
  max: 366396904 }
Try setting a batchSize instead of pausing the stream yourself.
var stream = Schema.find().batchSize(50).stream();
I suspect the leak may be in your code: you use doc inside the nested callback closure.
Try telling the GC that doc is no longer needed once it has been saved.
OtherSchema.find().exec(function(err, others) {
    doc.total = others.data + doc.data;
    doc.save(function(err) {
        written++;
    });
    doc = null; // tell GC to free doc
});
I have a requirement where I need to poll the database via ajax from js to check for a status. If the status is "active" then the polling should stop and an alert should popup "case is now active". The js should check the db every 2 seconds until the db status returns "active." Can you provide an elegant js routine for this? Here's some general js to show what I want to do:
function ReportAsActivePoll()
{
    for (var i = 0; i < 10; i++)
    {
        setTimeout(StatusIsActive, (i * 2000));
        if (statusIsActive)
        {
            ReportAsActive();
            break;
        }
    }
}

var statusIsActive = false;
function StatusIsActive(case)
{
    statusIsActive = GetStatusFromDB(case) == "active";
}

function ReportAsActive()
{
    alert("case is now active")
}
A few notes:
I know the code above is not correct. It's just for illustrative purposes.
The code above will call StatusIsActive 10 times. I would like the calls to stop/break/discontinue after the status is active. However, I think polling requires queuing up all the calls ahead of time, so I'm not sure how to achieve this.
Use setInterval() and clearInterval() for simplicity. Like so:
<script type="text/javascript">
    function checkStatus(theCase) {
        var intervalId = window.setInterval(function() {
            if (getStatusFromDb(theCase) == 'active') {
                clearInterval(intervalId)
                reportAsActive()
            }
        }, 2000)
    }

    function reportAsActive() {
        alert("case is now active")
    }

    var tmpCounter = 0
    function getStatusFromDb(theCase) {
        if (tmpCounter++ == 4) return "active"
    }

    checkStatus('case 123')
</script>
You should also consider making functions start with a lowercase letter, because that is the normal JS convention. By choosing another style, you risk having case-sensitive errors that are annoying to track down.
You need to use setInterval instead of your setTimeout, and when you receive a valid response, remove the interval with clearInterval.
So you need to do something like this
var intervalID = window.setInterval(function() {
    var resFromYourDB = ...; // get your result via ajax
    if (resFromYourDB['active']) {
        window.clearInterval(intervalID);
        // do your alert
    }
}, 2000)
This way it will poll your server until it gets "active" as a response, rather than for a predefined number of attempts as with setTimeout. And once it gets that response, it stops properly.
This LOOP queries the Parse.com server & then plays with the results if any. The problem is that when nArray is greater than 100, the function exceeds the query/burst limit of Parse.com CloudCode & it fails.
One idea would be to delay the LOOP for a second after every 100 LOOPS, but I'm not sure how to do that. Any other solutions would be greatly appreciated.
Thanks in Advance,
for (var k = 1; k < nArray.length; k++) {
    (function (k, mArray) { // <-- define an inline function
        query2.equalTo("username", nArray[k]); // BURST LIMIT EXCEEDS
        query2.find({
            success: function (results) {
                if (results.length !== 0) {
                    var object = results[0];
                    var compareUserEmail = object.get('email');
                    if (compareUserEmail !== userEmail) {
                        // alert("The result is equal to" + object.get('Name'));
                        mArray.push({
                            name: object.get('Name'),
                            email: object.get('email'),
                            bloxID: object.get('bloxID')
                        });
                        gameScore.set("filtered", mArray);
                        gameScore.save(null, {
                            success: function (gameScore) {
                                response.success("Success!");
                                alert('New object created with objectId: ' + gameScore.id);
                            },
                            error: function (gameScore, error) {
                                alert('Failed to create new object, with error code: ' + error.description);
                            }
                        });
                    }
                }
            },
            error: function () {}
        });
    })(k, mArray); // <-- call it after definition using (k)
}
You've got a couple of issues to deal with.
The reason Parse.com doesn't support setInterval is because that would be inviting disaster. It terminates your Cloud Code if it takes too long, so letting you add delays would just increase the chance your code is terminated before completion.
The reason Parse.com has a burst limit is that hitting it usually suggests "you are doing it wrong (tm)". In your case you are looping through an array and running a query for each item in the array. Instead you should be using the containedIn method to get all records for the array in one go. If you are getting more than 100 items in your array, you can choose to increase the record limit to 1000, but first consider carefully whether this is really what you need.
Given that you are modifying a lot of objects and saving them all, consider using the saveAll method to save them all in one hit too.
You might want to consider batching these operations, but be aware of the restrictions on overall duration for Cloud Code.
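Putting those two suggestions together, a rough sketch (assuming the Parse Cloud Code JavaScript SDK's callback style, with query2, nArray, gameScore, userEmail and response as in the question):

```javascript
query2.containedIn("username", nArray); // one query for the whole array
query2.limit(1000);                     // default limit is 100 records
query2.find({
    success: function (results) {
        var mArray = [];
        for (var i = 0; i < results.length; i++) {
            var object = results[i];
            if (object.get('email') !== userEmail) {
                mArray.push({
                    name: object.get('Name'),
                    email: object.get('email'),
                    bloxID: object.get('bloxID')
                });
            }
        }
        gameScore.set("filtered", mArray);
        gameScore.save(null, { // one save instead of one per match
            success: function () { response.success("Success!"); },
            error: function (gameScore, error) { response.error(error.description); }
        });
    },
    error: function (error) { response.error(error); }
});
```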
You can use a setInterval:
var i = 0;
var intervalId = setInterval(function() {
    if (i < nArray.length) {
        ... your code ...
        i++;
    } else {
        clearInterval(intervalId);
    }
}, 100); // every 100ms; change it to what you need