I am porting an old game from C to JavaScript. I have run into an issue with the display code: I would like the main game code to call display methods without having to worry about how those status messages are displayed.
In the original code, if the message is too long, the program simply waits for the player to toggle through the messages with the spacebar and then continues. This doesn't work in JavaScript, because while I wait for an event, all of the other program code keeps running. I had thought to use a callback so that further code can execute when the player hits the designated key, but I can't see how that stays viable with a lot of calls to display.update(msg) scattered throughout the code.
Can I architect things differently so the event-based, asynchronous model works, or is there some other solution that would allow me to implement a more traditional event loop?
Am I making sense?
Example:
// this is what the original code does, but obviously doesn't work in JavaScript
display = {
    update: function(msg) {
        // if msg is too long
        // wait for user input
        // ok, we've got input, continue
    }
};
// this is more JavaScript-y...
display = {
    update: function(msg, when_finished) {
        // show part of the message
        $(document).addEvent('keydown', function(e) {
            // display the rest of the message
            when_finished();
        });
    }
};
// but makes for amazingly nasty game code
do_something(param, function() {
    // in case do_something calls display I have to
    // provide a callback for everything afterwards
    // this happens next, but what if do_the_next_thing needs to call display?
    // I have to wait again
    do_the_next_thing(param, function() {
        // now I have to do this again, ad infinitum
    });
});
The short answer is "no."
The longer answer is that, with "web workers" (part of HTML5), you may be able to do it, because they let you put the game logic on a separate thread and use messaging to push keys from the user input into the game thread. However, you'd then need to use messaging the other way, too, in order to actually display the output, which probably won't perform all that well.
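A very rough sketch of what that messaging bridge could look like (the Worker and postMessage APIs are real; display.update comes from your question, while handleKey and currentStatusText are hypothetical game-logic functions):

// main page: forward key presses to the worker, render whatever it sends back
var game = new Worker('game.js');
document.addEventListener('keydown', function(e) {
    game.postMessage({ type: 'key', key: e.key });
});
game.onmessage = function(e) {
    display.update(e.data.msg);
};

// game.js (the worker): run the game logic and report status text to the page
onmessage = function(e) {
    if (e.data.type === 'key') {
        handleKey(e.data.key);                      // hypothetical game logic
        postMessage({ msg: currentStatusText() });  // hypothetical status text
    }
};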
Have a flag that indicates you are waiting for user input:
var isWaiting = false;
and then check the value of that flag in do_something (obviously set it where necessary as well :) ).
if (isWaiting) return;
You might want to implement this higher up the call stack (what calls do_something()?), but this is the approach you need.
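A fuller sketch of that flag approach, reusing display.update and do_something from the question (msgIsTooLong, showFirstPage, showNextPage and showMessage are hypothetical helpers):

var isWaiting = false;

display.update = function(msg) {
    if (msgIsTooLong(msg)) {
        isWaiting = true;     // freeze game input until the player pages through
        showFirstPage(msg);
    } else {
        showMessage(msg);
    }
};

document.addEventListener('keydown', function(e) {
    if (!isWaiting) return;   // no long message pending, let the game handle it
    if (!showNextPage()) {    // returns false once the whole message is shown
        isWaiting = false;    // game input works again
    }
});

function do_something(param) {
    if (isWaiting) return;    // ignore game actions while a message is pending
    // ... normal game logic ...
}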
Related
I'm working on a closed-system web application to aid companies in their everyday online commerce chores. That means, on the one hand, that it won't be open to the public; on the other, it will have to deal with large amounts of data while maintaining a smooth working experience.
This is why I turned to web workers in JS to run all sorts of database access and data loading in the background.
My understanding is that not only does the main UI/main JS remain uninterrupted, but the different web workers also run without hindering each other.
I now have the following setup:
mainJS: function statusCheck which runs on pageload:
function statusCheck() {
    if (typeof(w__statusCheck) == "undefined") {
        var w__statusCheck = new Worker("...statusCheck.js");
        w__statusCheck.postMessage("go");
        w__statusCheck.onmessage = function(e) {
            var message = JSON.parse(e.data);
            if (message.text != undefined) displayMessage(message.text);
        }
    }
statusCheck.js, which is the worker, simply goes like this:
function checkStatus() {
    console.log("statusCheck started");
    // I will leave standard parts out:
    // creating and testing the ajax variable against different browsers
    ajaxRequest.onreadystatechange = function() {
        if (ajaxRequest.readyState == 4) {
            self.postMessage(ajaxRequest.responseText);
            var timer;
            timer = self.setTimeout(function() {
                checkStatus();
            }, 1000);
        }
    }
    ajaxRequest.open("GET", "...worker_statusCheck.php", true);
    ajaxRequest.send(null);
}

this.onmessage = function(e) {
    checkStatus();
};
As you can see, this restarts itself every second (for now). The interval might be longer in production.
worker_statusCheck.php simply gets different things from the database and knits them into a JSON object which gives me the system status.
This works beautifully.
Now I have another worker, which is supposed to be initiated by a click on a link and effectively calls some PHP to perform actions:
mainJS loadWorker
function loadWorker(url = "") {
    console.log("loadWorker started");
    if (url != "") {
        var uniqueID = "XXX"; // creating a random ID based on timestamp and Math.random()
        if (typeof(window[uniqueID]) == "undefined") {
            var variables = { ajaxURL: url };
            window[uniqueID] = new Worker("....loadWorker.js");
            window[uniqueID].postMessage(JSON.stringify(variables));
            window[uniqueID].onmessage = function(e) {
                var message = JSON.parse(e.data);
                if (message["success"] != undefined) {
                    variables["close"] = "yes";
                    window[uniqueID].postMessage(JSON.stringify(variables));
                }
            }
        }
With every click on a certain link this gets called, creates a uniquely named worker, runs it, receives the data and tells the worker to close().
The PHP again does its thing and writes a progress update to the DB after each step of the lengthy procedure. I fetch these progress updates from the DB with the repeating statusCheck above.
Now, I can see the entries in the DB with timestamp, so I know they get written each at their time.
So, both workers do their job and run reliably. But I have noticed, that whenever I initiate the manual (randomly named) worker the statusCheck actually stops performing. It just gets stuck... I was able to confirm this with console output from both workers. So it's not the main JS that seems stuck, but the statusCheck actually pauses... and resumes when loadWorker is done.
Am I missing something fundamental here? Any insight would be appreciated since I'm new to this concept of web workers.
Thanx :)
Your question lacks the resources to truly figure out what exactly goes wrong. I can confirm that two web workers can operate at the same time, even with synchronous operations; I tested this with both for loops and synchronous XHR requests.
There are multiple things I would recommend though.
First - unless you're processing the data with some CPU-heavy algorithm, web workers are a waste of time. XHR requests do not block the main thread (unless you explicitly ask them to).
In statusCheck() you declare var w__statusCheck, which makes it a local variable. Therefore it will always be undefined as seen from the outer scope, and the worker might get garbage-collected once no code is running in it.
Do not use XMLHttpRequest.onreadystatechange. Use onload and onerror.
Random unique IDs for variables are almost always wrong. If you need to store the worker reference at all, either give it a reasonable name (e.g. the URL it's supposed to load) or use an incremental id.
Do NOT stringify data that you post to a web worker. It's already done for you by the browser, possibly in a more optimal manner. Converting the data to something else first is the single most common mistake people make with web workers.
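A hedged sketch of those last two points, assuming the worker script is still your loadWorker.js: keep the references in a normal outer-scope object keyed by URL, and post a plain object; the browser structured-clones it, so the worker can read e.data.ajaxURL directly without JSON.parse.

var loadWorkers = {}; // worker references keyed by the URL they are loading

function loadWorker(url) {
    if (!url || loadWorkers[url]) return;           // already running for this URL
    var worker = new Worker("loadWorker.js");
    loadWorkers[url] = worker;
    worker.postMessage({ ajaxURL: url });           // no JSON.stringify needed
    worker.onmessage = function(e) {
        if (e.data.success !== undefined) {         // e.data is already an object
            worker.terminate();                     // or post a close message as before
            delete loadWorkers[url];
        }
    };
}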
Also, when posting a question, at least make sure the code makes some sense. In your post the curly braces do not match.
Alright.. I figured it out:
I was looking in all the wrong places. It turns out I had initialized my PHP session in all the PHP scripts called by the workers, and my two parallel workers each called one of them. So the session file was locked by the first PHP script, and the second had to wait until it was released again. It was not the workers or the JS being hindered; it was the PHP.
I have now taken the session initialization out of my statusCheck.php and it works like a charm. I will keep it in the others that handle the user input responses, because there it actually makes sense: the user clicks the button "compile data XY", which is run by the worker and takes a while. Impatient as he is, he already clicks the next button "show this data"... and due to the locked session file I get a neat little queue for those actions. :)
I still will take above recommendations to heart and see to it to improve my code. :)
Currently I am using ProtractorJS to access a page that has an unspecified number of popup dialog boxes that are not crucial to the operation of the webpage, but block further interaction with it.
For example, when I open the login screen, several pop-ups (of unknown quantity) appear, one after another (they are not present at the same time). I am currently handling this in a sloppy way (I check whether the object is present and click the existing button to dismiss it) and I believe there has to be a better way of handling this... Essentially, I would like to "loop" through these actions until they are finished, in a promise-like manner if possible.
Also, as a caveat, I would like to be able to handle messages that appear at random, without disrupting the flow of my tests. I understand the latter may be a little too good to be true, but I figured I would ask.
This is one way.
boolean popupFound = true;
while (popupFound) {
    try {
        // Temporarily lower the implicit wait to avoid slowing things down
        driver.manage().timeouts().implicitlyWait(6, TimeUnit.SECONDS);
        driver.switchTo().alert().accept(); // accepts the alert; use dismiss() for cancel
    } catch (NoAlertPresentException ex) {
        // No alert found, so exit; assumes no delay between popups
        popupFound = false;
    } finally {
        driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS); // or whatever you had originally
    }
}
Better make sure that implicit wait is only as large as it needs to be.
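If you are driving everything from Protractor itself, a JavaScript flavour of the same loop might look like this (only a sketch, assuming the popups really are browser alerts and that the selenium promise manager is disabled so async/await can be used):

async function dismissAllAlerts() {
    let popupFound = true;
    while (popupFound) {
        try {
            // switchTo().alert() fails if no alert is currently open
            await browser.switchTo().alert().accept();
        } catch (e) {
            popupFound = false; // nothing left to dismiss
        }
    }
}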
I want to create a function that, when called, displays a "Loading..." message and then displays the results as soon as it finishes. When I do it like this:
function load() {
    $('#status').html("loading...");
    /* do something */
    ...
    $('#status').html("done");
    $('#results').html(result);
}
The "loading" message never appears, after a while what a user sees is just the "done" message and the results. How can I make the "loading" text appear just the moment I want it to?
If "do something" is synchronous, the browser never gets a chance to update the UI between the two content changes.
To have the changes appear you need to do something like:
$('#status').html('loading...');
setTimeout(function() {
    // do something
    $('#status').html('done');
}, 0);
The setTimeout call gives the UI a chance to update the display before jumping into your "something".
Note that if possible you should try to ensure that "something" doesn't tie up your browser for a long time. Try to break the task up into multiple chunks, where each chunk is also dispatched using setTimeout().
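If "something" can be split into pieces, a minimal sketch of that chunking idea looks like this (items, processItem and chunkSize are placeholders for whatever your task actually iterates over):

function processInChunks(items, processItem, chunkSize) {
    var i = 0;
    $('#status').html('loading...');
    function doChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]);
        }
        if (i < items.length) {
            setTimeout(doChunk, 0); // yield so the browser can repaint, then continue
        } else {
            $('#status').html('done');
        }
    }
    setTimeout(doChunk, 0);
}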
Hard to tell without seeing the "stuff" part, but I hope this helps a little:
function load() {
    $('#status').html("loading...");

    function onLoaded(result) {
        $('#status').html("done");
        $('#results').html(result);
    }

    // do your stuff here
    // Not being able to see the "stuff", I guess you do some AJAX or something
    // else which is asynchronous? If you do, at the end of your callback, add
    // onLoaded(result)
}
The key is in the "do something". It depends on what you want to do but I would expect that you want to use jQuery's load() function.
Many jQuery functions accept 'callback functions' which are executed after the task is complete. The callback function section of the load() documentation should explain everything.
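For instance (the URL and element ids here are just placeholders), load() fetches the HTML asynchronously and runs the callback once it has been inserted:

$('#status').html('loading...');
$('#results').load('/path/to/results.html', function() {
    $('#status').html('done');
});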
This is a really basic JavaScript question and probably duplicate, but I don't know the answer!
I have code as follows:
function userlist_change(myval, function_type) {
    // relatively slow code involving Ajax call
    // based on Ajax results, change some client-side stuff
}

$("#subjectlist").change(function() {
    userlist_change($("#subjectlist").val(), 'change');
}).change();

$("#subjectlist").keypress(function() {
    userlist_change($("#subjectlist").val(), 'keypress');
});
I have the problem that if the .change() event is called, the userlist_change function kicks off, and it's relatively slow. If the user changes the list again (e.g. by typing), my code waits for userlist_change to complete before restarting it with the new value.
This looks quite odd in the UI, as it can take a few seconds for anything to change client-side - and sometimes the results of the first call only appear after the user has already made a second call.
Is there any way I can interrupt any existing userlist_change process when the .change() or keypress() event is fired?
[EDIT] What would be ideal is a simple 'kill any running functions with this name' command - is this possible? Or do I really have to fiddle around with timers?!
You can store the last request time in a global variable and pass a request time into each Ajax request. When you are about to show the result of a request, only show it if the global last request time is not newer than that request's time; otherwise discard it. For example:
var lastRequestTime;

function userlist_change(myval, function_type, requestTime) {
    // relatively slow code involving Ajax call
    // based on Ajax results, change some client-side stuff
    if (lastRequestTime <= requestTime) {
        // show
    }
}

$("#subjectlist").change(function() {
    lastRequestTime = new Date();
    userlist_change($("#subjectlist").val(), 'change', lastRequestTime);
}).change();

$("#subjectlist").keypress(function() {
    lastRequestTime = new Date();
    userlist_change($("#subjectlist").val(), 'keypress', lastRequestTime);
});
You should throttle the event. This is quite easily done with RX for JavaScript, but the library is quite complicated. You can also filter the values with a timer yourself.
Here is a useful plugin for throttling: http://benalman.com/projects/jquery-throttle-debounce-plugin/
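If you don't want another dependency, a minimal hand-rolled debounce (assuming a 300 ms quiet period is acceptable for your UI) looks like this:

var debounceTimer;
$("#subjectlist").on("change keypress", function(e) {
    clearTimeout(debounceTimer);
    var val = $("#subjectlist").val();
    var type = e.type;
    debounceTimer = setTimeout(function() {
        userlist_change(val, type); // only fires after the user stops for 300 ms
    }, 300);
});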
My users are presented with basically a stripped-down version of a spreadsheet. There are textboxes in each row in the grid. When they change a value in a textbox, I'm performing validation on their input, updating the collection that's driving the grid, and redrawing the subtotals on the page. This is all handled by the OnChange event of each textbox.
When they click the Save button, I'm using the button's OnClick event to perform some final validation on the amounts, and then send their entire input to a web service, saving it.
At least, that's what happens if they tab through the form to the Submit button.
The problem is, if they enter a value, then immediately click the save button, SaveForm() starts executing before UserInputChanged() completes -- a race condition. My code does not use setTimeout, but I'm using it to simulate the sluggish UserInputChanged validation code:
<script>
    var amount = null;
    var currentControl = null;

    function UserInputChanged(control) {
        currentControl = control;
        // use setTimeout to simulate slow validation code
        setTimeout(ValidateAmount, 100);
    }

    function SaveForm() {
        // call web service to save value
        document.getElementById("SavedAmount").innerHTML = amount;
    }

    function ValidateAmount() {
        // various validationey functions here
        amount = currentControl.value; // save value to collection
        document.getElementById("Subtotal").innerHTML = amount;
    }
</script>
Amount: <input type="text" onchange="UserInputChanged(this)">
Subtotal: <span id="Subtotal"></span>
<button onclick="SaveForm()">Save</button>
Saved amount: <span id="SavedAmount"></span>
I don't think I can speed up the validation code -- it's pretty lightweight, but apparently, slow enough that code tries to call the web service before the validation is complete.
On my machine, ~95ms is the magic number between whether the validation code executes before the save code begins. This may be higher or lower depending on the users' computer speed.
Does anyone have any ideas how to handle this condition? A coworker suggested using a semaphore while the validation code is running and a busy loop in the save code to wait until the semaphore unlocks - but I'd like to avoid using any sort of busy loop in my code.
Use the semaphore (let's call it StillNeedsValidating). If the SaveForm function sees that the StillNeedsValidating semaphore is up, have it raise a second semaphore of its own (which I'll call FormNeedsSaving here) and return. When the validation function finishes, if the FormNeedsSaving semaphore is up, it calls the SaveForm function on its own.
In jankcode:
function UserInputChanged(control) {
    StillNeedsValidating = true;
    // do validation
    StillNeedsValidating = false;
    if (FormNeedsSaving) SaveForm();
}

function SaveForm() {
    if (StillNeedsValidating) { FormNeedsSaving = true; return; }
    // call web service to save value
    FormNeedsSaving = false;
}
Disable the save button during validation.
Set it to disabled as the first thing validation does, and re-enable it as it finishes.
e.g.
function UserInputChanged(control) {
    // --> disable button here --<
    currentControl = control;
    // use setTimeout to simulate slow validation code (production code does not use setTimeout)
    setTimeout(ValidateAmount, 100);
}
and
function ValidateAmount() {
    // various validationey functions here
    amount = currentControl.value; // save value to collection
    document.getElementById("Subtotal").innerHTML = amount; // update subtotals
    // --> enable button here if validation passes --<
}
You'll have to adjust when you remove the setTimeout and make the validation one function, but unless your users have superhuman reflexes, you should be good to go.
I think the timeout is causing your problem... if it were plain code (no asynchronous AJAX calls, timeouts, etc.) then I don't think SaveForm would be executed before UserInputChanged completes.
A semaphore or mutex is probably the best way to go, but instead of a busy loop, just use a setTimeout() to simulate a thread sleep. Like this:
var busy = false;

function UserInputChanged(control) {
    busy = true;
    currentControl = control;
    // use setTimeout to simulate slow validation code (production code does not use setTimeout)
    setTimeout(ValidateAmount, 100);
}

function SaveForm() {
    if (busy) {
        setTimeout(SaveForm, 10);
        return;
    }

    // call web service to save value
    document.getElementById("SavedAmount").innerHTML = amount;
}

function ValidateAmount() {
    // various validationey functions here
    amount = currentControl.value; // save value to collection
    document.getElementById("Subtotal").innerHTML = amount; // update subtotals
    busy = false;
}
You could set up a recurring function that monitors the state of the entire grid and raises an event that indicates whether the entire grid is valid or not.
Your 'submit form' button would then enable or disable itself based on that status.
Oh I see a similar response now - that works too, of course.
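A rough sketch of that polling idea, assuming a hypothetical isGridValid() helper that checks every row and an id="saveButton" on the save button:

setInterval(function() {
    // keep the save button usable only while every row validates
    document.getElementById("saveButton").disabled = !isGridValid();
}, 250);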
When working with async data sources you can certainly have race conditions, because the JavaScript thread continues to execute statements that may depend on data which has not yet returned from the remote data source. That's why we have callback functions.
In your example, the call to the validation code needs to have a callback function that can do something when validation returns.
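A minimal sketch of that, reusing the functions from the question: let ValidateAmount accept a continuation and invoke it when it is done, so SaveForm never has to guess when validation has finished.

function ValidateAmount(callback) {
    // various validation functions here
    amount = currentControl.value;
    document.getElementById("Subtotal").innerHTML = amount;
    if (typeof callback === "function") callback();
}

function SaveForm() {
    ValidateAmount(function() {
        // call web service to save value, then reflect it in the page
        document.getElementById("SavedAmount").innerHTML = amount;
    });
}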
However, when making something with complicated logic or trying to troubleshoot or enhance an existing series of callbacks, you can go nuts.
That's the reason I created the proto-q library: http://code.google.com/p/proto-q/
Check it out if you do a lot of this type of work.
You don't have a race condition; race conditions cannot happen in JavaScript, since JavaScript is single-threaded, so two threads cannot interfere with each other.
The example that you give is not a very good example. The setTimeout call will put the called function in a queue in the JavaScript engine and run it later. If at that point you click the save button, the setTimeout function will not be called until AFTER the save is completely finished.
What is probably happening in your JavaScript is that the onClick event is called by the JavaScript engine before the onChange event is called.
As a hint, keep in mind that JavaScript is single-threaded, unless you use a JavaScript debugger (Firebug, Microsoft Script Debugger). Those programs intercept the thread and pause it. From that point on, other threads (either via events, setTimeout calls or XMLHttp handlers) can then run, making it seem that JavaScript can run multiple threads at the same time.
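A tiny illustration of that queueing behaviour: a timer callback can never interrupt code that is already running; it only fires once the current task has finished.

setTimeout(function() { console.log("timeout fired"); }, 0);
for (var i = 0; i < 3; i++) {
    console.log("still inside the current task", i);
}
// logs the three loop lines first, then "timeout fired"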