Inline editing with AJAX (insert value into database) - JavaScript

This may be a UX question, but it can also be a technical one. I'm building a list with AJAX and PHP. I doubt I should manipulate the DOM before the backend has finished. What would be the problem if I don't use a loader to indicate the backend activity?

Well, in general, as a matter of usability, you have to inform your users about the status of their requests. If an action takes too long and there is no indication that the request is being processed, the user might think that something doesn't work or got stuck. As a result, the user will repeat the request a number of times. This won't give your site/app/program a chance to complete its task, resulting in frustration and even a belief that your product does not work.
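For illustration, a minimal jQuery sketch of that idea (the #loader element and the /save endpoint are hypothetical):
var newValue = 'edited text';    // the value the user just typed inline
$('#loader').show();             // tell the user the request is in flight
$.ajax({
    url: '/save',                // hypothetical endpoint storing the edit
    type: 'POST',
    data: { value: newValue },
    complete: function () {
        $('#loader').hide();     // runs on success and error alike
    }
});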

Related

CRUD strategies, do you use one or two calls per CRUD operation?

When developing web applications I generally see two ways people perform CRUD and sync the view.
Here are high-level examples which use AJAX.
Method one
A create operation could involve a POST request; on success of that, just do another GET, fetch all the data, and re-render your view.
A delete operation is similar: just do another GET on delete success.
Method two
A create operation could involve a POST request which returns just the inserted id; on success of that, do NOT do another GET request, but rather append the data that was just sent to the current list of items in your view.
A delete operation would return the id; on success, search for the element that has that id and delete it from the DOM, the array of items, etc.
I am interested to know which is usually preferable. Method two, for example, saves a GET request, but it comes at the cost of a lot of complexity in the front-end code, as you now have to write the code that figures out which item needs to be deleted, updated, etc., and if the server needs to add more data to the item that was created before it is displayed, method two becomes harder. On the other hand, performance will be better if the GET request takes a long time to load. (A short sketch of method one follows below.)
In my projects I may use either method depending on the complexity of the situation, but I do believe it's better to stick with one approach.
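For concreteness, here is a hedged jQuery sketch of method one; the /items endpoint and the renderList() helper are hypothetical:
// Method one: create on the server, then refetch the whole list and re-render.
$.post('/items', { name: 'New item' }, function () {
    $.get('/items', function (items) {
        renderList(items); // hypothetical function that re-renders the view
    });
});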
Go with method 2. Manipulate your HTML as soon as the user requests the action. Your users expect whatever action they take on your page to have immediate feedback. You can't afford to wait a quarter of a second to a full second for your back-end to respond before you provide this feedback. If you do, the user will most likely try the action again; this is just a natural instinct users have today.
This brings up the question: what if my operation doesn't succeed and is rejected by the back-end? You should write your code so that, on an error response, you undo the view changes you made when the initial action was detected, and you provide a message to the user (whether a bright error box or an alert pop-up) saying that their action did not complete successfully.
This gives you the best of both worlds. Your user gets a smooth UI experience, and you provide a way of informing your users if their requests are rejected.
Here's an example of how you could write this using jQuery, but the technique applies to any JavaScript framework (Backbone, Angular, etc.):
$('#button').on('click', function () {
    // Optimistically update the DOM so the user gets immediate feedback
    // (illustrative: #list is a hypothetical container for the new item).
    var $item = $('<li>').text('New item').appendTo('#list');

    $.ajax({
        url: 'http://myendpoint.com',
        type: 'POST',
        data: { key: 'value' },
        error: function () {
            // The request failed: undo the DOM change and inform the user.
            $item.remove();
            alert('Your action could not be completed. Please try again.');
        }
    });
});

Way to determine/circumvent if an AJAX request timed out?

I have a simple web page (PHP, JS and HTML) that is displayed to indicate that a computation is in progress. This computation is triggered by a pure JavaScript AJAX request to a PHP script doing the actual computations.
For details, please see here.
What the actual computation is does not play a role, so for simplicity it is just a sleep() command.
When I execute the same code locally (the browser calls the website under localhost: Linux, Apache, PHP module) it works fine, independent of the sleep time.
However, when I let it run on a different machine (not localhost, but also Linux, Apache, PHP module), the PHP script does run through (results are created), but the AJAX request does not get any response, so there is no "onreadystatechange", if the sleep time is >= 900 seconds. When the sleep time is < 900 seconds it works nicely and the AJAX request is correctly terminated (readyState == 4 and status == 200).
The Apache and PHP configurations are more or less default, and I have already verified the crucial options there (max_execution_time etc.), but none seems to apply here, as they are either shorter (< 1 min.) or much longer, e.g. for the garbage collector (24 min.).
So I am absolutely confused about what may cause this. I am thinking it might be network-related, although I didn't find any appropriate option in my router.
Also, no error is reported in the Apache logs or in PHP (error logging to file).
When I let the JavaScript display request.status upon successful return, then surprisingly, when I hit "Esc" in the browser window after the sleep is over, I get the status "200" displayed, but not automatically, as it should.
In any case, I am hoping that you may have an idea of how to circumvent this problem.
Maybe some dummy communication between client and server every 10 minutes or so might do the trick, but I have no idea how best to do something like this, especially in a way that is transparent to the user and does not interfere with the actual work of the computation/sleep.
Best,
Shadow
P.S. The post that I am referencing was written by me, but it seems to transmit the idea that the problem might be related to some config option, which appears not to be the case. This is why I am writing this post here, basically asking for a way to circumvent such an issue regardless of its origin.
I'm from the other post you mentioned!
Now that I know more about what you are trying to do (monitor a possibly long-running server job), I can recommend something which should turn out a lot better. It's not a direct answer to your question, but it's a design consideration which, by its nature, includes a more suitable solution.
Basically, unlink the action of "starting" the server-side task from monitoring its progress:
execute.php kicks off your background job on the server and returns immediately.
Another script/URL (let's call it status.php) is available to check the progress of the task execute.php is performing.
When status.php is requested, it won't return until it has something to report, UNLESS 30 seconds (or some other fixed amount of) time passes, at which point it returns a value that you know means "check again". Do this in a loop, and you can be notified immediately when the background task has completed.
More details on a similar approach: http://billhiggins.us/blog/2011/04/27/resty-long-ops
I hope this helps give you some design ideas to address your problem!
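To make the loop concrete, here is a minimal sketch in plain JavaScript (matching the pure-JS XHR in the question). The JSON shape returned by status.php and the showResult() helper are assumptions for illustration:
// Kick off the long-running job; execute.php returns immediately.
var start = new XMLHttpRequest();
start.open('POST', 'execute.php', true);
start.onreadystatechange = function () {
    if (start.readyState === 4) {
        pollStatus(); // job has been started; begin monitoring
    }
};
start.send();

// Long-poll status.php: it blocks server-side until there is news,
// or returns a "check again" value after ~30 seconds.
function pollStatus() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'status.php', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.status === 200) {
            var response = JSON.parse(xhr.responseText); // assumed JSON, e.g. {"state": "pending"}
            if (response.state === 'done') {
                showResult(response.result); // hypothetical UI update
            } else {
                pollStatus(); // "check again": re-issue the request right away
            }
        } else {
            setTimeout(pollStatus, 5000); // network error: retry after a pause
        }
    };
    xhr.send();
}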

Is it possible to complete the loop from browser -> Java -> C++ -> Java -> browser?

I've got a question about data flow that is best summarized by the image below:
I've got the data path from the UI (WaveMaker) down to the hardware working perfectly. The question I have is whether I'm missing something in the connection from the Java service back to WaveMaker.
I'm trying to provide information back to WaveMaker from the HW. The specifics of shared memory and semaphore signaling are worked out already. Where I'm running into a problem is how to get the data from the Java service back to WaveMaker when WaveMaker hasn't specifically requested it. My plan was to generate events when the Java service returned, but another engineer here insists that it won't work, since there's no direct call from WaveMaker and we don't want to poll.
What I proposed was to call the function after the page loaded, allow the blocking to occur at the .so level, as shown below, and then handle the return string when the call returns. We would then call the function again. That has the serious flaw of blocking interaction with the user interface.
Another option put forth was to use a hidden control, somehow pass it into Java, and invoke an event on it from Java, which could then execute a script to update the UI with the HW response. That keeps the option of using threads alive and possibly resolves the issue. Is there some more elementary way of getting information from Java -> JavaScript -> UI without it having been asked for?

How can live search / search suggestions be implemented using Dojo?

I want to implement a 'live search' or 'search suggestions' feature in a web application that uses the Dojo Framework. It would be similar to the way Google and Bing searches display matches as you type: when you type in the search box, a list of potential matches appears below. Searches would be performed server side, with the results sent back to the browser using AJAX.
Does anyone know of a good way to implement this using Dojo?
Here are some potential options:
The built-in widget dijit.form.ComboBox
This has very similar functionality, but I've only seen it used with limited data sets. The examples always use small lists (such as the 50 states of the USA) and preload the entire data set for client-side filtering. However, I presume I could hook it up to a dojox.data.JsonQueryRestStore for server-side search — can anyone confirm whether that works? (A hedged sketch of this wiring appears after this list.)
QueryBox http://marumushi.com/code/querybox/
This implementation mostly does the job, but it has some minor bugs and doesn't look like it's being maintained. I'd have to fix some bugs in the code before using it.
Medryx http://blog.medryx.org/2008/09/10/dijitsearch-part-2/
This also looks like it does the job, but it is described as 'alpha-level' code and the link to the code seems to be broken...
I could probably make one of the above work, but I'd like to know if there are any better alternatives out there.
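For what it's worth, here is a hedged sketch of wiring the built-in widget to a server-side store. It uses dojox.data.QueryReadStore, a store commonly paired with ComboBox for server-side filtering, rather than JsonQueryRestStore, and the /search URL, the name attribute, and the searchBox node id are all assumptions:
dojo.require("dijit.form.ComboBox");
dojo.require("dojox.data.QueryReadStore");

// The server behind /search is expected to filter by the typed prefix and
// return items in QueryReadStore's JSON format ({"numRows": ..., "items": [...]}).
var store = new dojox.data.QueryReadStore({ url: "/search" });

new dijit.form.ComboBox({
    store: store,
    searchAttr: "name",  // assumed attribute holding the display text
    pageSize: 10,        // ask the server for at most 10 suggestions at a time
    hasDownArrow: false  // behave like a search box rather than a dropdown
}, "searchBox");         // id of an existing <input> node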
I implemented it 5 years ago when Dojo was at 0.2:
http://www.lazutkin.com/blog/2005/12/23/live-filtering/
While the code is ancient, it is trivial, and hopefully it'll give you ideas on how to attack it. The rough sketch:
Attach an event handler to your input box that is triggered on changes — use "onkeyup" to detect a change in the input box.
Wait until the user has stopped typing by setting a timer in your event handler, if it is not set yet. 200-500 ms are good waiting times.
The timeout plays a dual role:
It throttles our requests to a server to prevent overloading.
It plays on our perception of time and our typing habits.
If our timeout is up and we are not waiting on the server ⇒ send the server the string we have so far.
If we are still waiting on the server, cancel the request and ask again.
This part is app-specific: we don't want to overload the server, and some servers cannot handle broken connections well.
In the example I don't cancel the XHR call, but wait for it to finish before submitting a new request.
Server responds with relevant results, which are promptly shown.
In the blog post I implemented it as a widget. Obviously the exact packaging is up to you.
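As a rough, framework-free sketch of the same pattern (the /search endpoint, the 300 ms delay, and the showSuggestions() renderer are assumptions; here the stale request is aborted rather than awaited):
var timer = null;    // debounce timer
var inFlight = null; // the XHR currently awaiting a response, if any
var input = document.getElementById('search'); // hypothetical search box

input.onkeyup = function () {
    clearTimeout(timer); // restart the wait on every keystroke
    var query = input.value;
    timer = setTimeout(function () {
        if (inFlight) {
            inFlight.abort(); // drop the stale request before asking again
        }
        var xhr = new XMLHttpRequest();
        inFlight = xhr;
        xhr.open('GET', '/search?q=' + encodeURIComponent(query), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) return;
            if (xhr === inFlight) inFlight = null;
            if (xhr.status === 200) {
                showSuggestions(JSON.parse(xhr.responseText)); // hypothetical renderer
            }
        };
        xhr.send();
    }, 300); // within the 200-500 ms range suggested above
};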

A reasonable number of simultaneous, asynchronous ajax requests

I'm wondering what the consensus is on how many simultaneous asynchronous AJAX requests are generally allowable.
The reason I ask is that I'm working on a personal web app. For the most part I keep my requests down to one, but there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now, and it'll be a while before I have to worry about scalability, if ever. But I am trying to adhere to best practices as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type about:config into the address bar and look for network.http.max-connections-per-server; that will tell you your maximum. I'm almost positive this limit applies to AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference named network.http.max-connections-per-server no longer exists, but there is network.http.max-persistent-connections-per-server, whose default value is 6.
That really depends on whether it works properly that way. If the logic of the application is built such that 4 simultaneous requests make sense, do it like that. If the logic isn't disturbed by packing multiple requests into one request, you can do that, but only if it won't make the code more complicated. Keep it as simple and straightforward as possible until you have problems; then you can start to optimize.
But you can also ask yourself whether the design of the app can be improved so that there is no need for multiple requests.
Also, test it on a really slow connection. Simultaneous HTTP requests are not necessarily executed on the server in the order they were sent, and they might also return in a different order. That might cause problems you'll experience only on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point: check it on a slow connection. Fiddler can help with that, as it allows you to simulate different connection speeds (56K and up).
You could also consider firing a single async request containing one or more messages to a controlling service, which then hands the messages off to the appropriate services, collects the results, and returns them to the client. Having multiple async requests firing and returning at different times can present a choppy experience for the user, as each response is rendered on the page at a different time. (A sketch of this follows below.)
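As an illustration of that batching idea, here is a hedged sketch; the /batch endpoint, the message shape, and the handleResult() dispatcher are all assumptions:
// One transport object carrying several messages for the controlling service.
var transport = {
    messages: [
        { service: 'users',  action: 'list' },
        { service: 'orders', action: 'list' }
    ]
};

var xhr = new XMLHttpRequest();
xhr.open('POST', '/batch', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Assume the server returns {"results": [...]}, one entry per message;
        // delegate each result to the code that asked for it.
        JSON.parse(xhr.responseText).results.forEach(function (result, i) {
            handleResult(transport.messages[i], result); // hypothetical dispatcher
        });
    }
};
xhr.send(JSON.stringify(transport));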
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall correctly, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged-down JavaScript performance), I would create a transport object that contains the data for all requests, and a returning transport object that can be parsed and delegated on return.
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application; however, the larger it gets, the higher the server load, which could eventually cause problems. This really doesn't answer your question, but here is a suggestion: if delay is not a problem, why don't you use a queue?
var queue = []; // a queue of the requests to be sent to the server

// Add a request to the queue; if it is the only one, start sending right away.
function enqueue(data) {
    queue.push(data);
    if (queue.length === 1) {
        send();
    }
}

// The AJAX call to the server using the first request in the queue
// (the /endpoint URL is hypothetical).
function send() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/endpoint', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            process(xhr.responseText);
        }
    };
    xhr.send(JSON.stringify(queue[0]));
}

// Handle the response, then check whether another AJAX call is needed.
function process(data) {
    queue.shift(); // the first request is done; remove it
    if (queue.length > 0) {
        send();
    }
    // ...process data here...
}

enqueue({ key: 'value' }); // whatever you want to send to the server
This probably isn't the best way to do it, but that's the idea; you could modify it to do 2 requests at a time instead of just one. Also, you could modify it to send as many requests as there are in the queue as one combined request. Then the server splits them up, processes each one, and sends the data back, all at once or even piece by piece, since the server can flush the data several times. You just have to make sure you are parsing the response text correctly.
