I have a service method written in ASP.NET Web API: http://diningphilospher.azurewebsites.net/api/dining?i=12
and a JavaScript client gets the response and visualizes it here.
But the nature of the Dining Philosophers problem is that I never know when deadlock or starvation will happen. So instead of a request/response, I would like to stream the data from the service method and have the client-side JavaScript read the data (JSON, I assume) asynchronously. Currently, several posts direct me towards changing the default buffer limit in Web API to get streaming-like behavior.
What other (easy or efficient) ways exist to achieve this behavior?
You can return PushStreamContent from ASP.NET Web API and use the Server-Sent Events (SSE) JavaScript API on the client side. Check out the Push Content section in Henrik's blog. Also, see Strathweb. One thing I'm not sure about in the latter implementation is the use of ConcurrentQueue. Henrik's implementation uses ConcurrentDictionary, which allows you to remove the StreamWriter objects corresponding to clients who drop out; that would be difficult to implement with ConcurrentQueue, in my opinion.
Also, the Strathweb implementation uses KO. If you don't want to use KO, you don't have to; the SSE JavaScript APIs have nothing to do with KO.
BTW, SSE is not supported in IE 9 or earlier.
Another thing to consider is the scale-out option. Load balancing will be problematic, in the sense that there is a chance the load will not be uniformly distributed, since clients are tied to the server (or web role) they hit first.
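For the client side, consuming the SSE stream is only a few lines of JavaScript. A minimal sketch, assuming a hypothetical /api/dining/stream endpoint that pushes one JSON object per event (updateVisualization stands in for your existing rendering code):

    // Minimal SSE client sketch; '/api/dining/stream' is a hypothetical
    // endpoint that a PushStreamContent-based action could expose.
    var source = new EventSource('/api/dining/stream');

    source.onmessage = function (event) {
        // each SSE message carries one JSON payload with the philosophers' state
        var state = JSON.parse(event.data);
        updateVisualization(state); // your existing rendering function
    };

    source.onerror = function () {
        // EventSource reconnects automatically; close it if you want to stop
        source.close();
    };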
I have a C++ project for Windows, using MiniBlink as an embedded browser. (MiniBlink is a lighter-weight Blink, which is close to Chromium.) I use this embedded browser to show responsive and nice-looking dialogs with Quasar.js (a wrapper for Vue.js).
Problem:
Usually the browser side is just a passive frontend. In my case, both the backend (the project with the embedded browser) and the frontend (the dialog) are active, and thus I need some communication. At the moment I use a local server to catch HTTP requests sent from the frontend to the backend.
But is there a way to communicate from the backend to the frontend? At the moment I can only think of setting cookies or using a permanent loop in JS that sends HTTP requests to check for a possible response.
And is there no other way to send information to the backend? Everything is local; I don't need nor really want to send it over the network.
Thanks!
Idea 1: Use a local temp file: save on one side and read on the other (this can also be used both ways).
Idea 2 (similar to the question author's solution): a local server with two-way communication (GET/POST requests in one direction, text/JSON the other way around).
Idea 3: Use a launch parameter to pass data directly in the link, for example: instead of browserprocess.exe file.html, use browserprocess.exe file.html#showsomething (see the sketch below).
There are also other ways based on watching for something, for example: one side checks the window title of the process with a certain binary name in the list of running tasks. We didn't get enough information about your setup, because you could be doing this in the same process or in a separate process; if it is the same process, you could also just share variables directly in the MiniBlink code, in both directions, and perform an action when an if statement sees them change.
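For Idea 3, the frontend can read whatever the backend put after the # straight from the URL fragment. A minimal sketch (showSomething and handleCommand are hypothetical dialog functions):

    // Idea 3 sketch: read data passed via the URL fragment,
    // e.g. browserprocess.exe file.html#showsomething
    var command = decodeURIComponent(window.location.hash.slice(1));
    if (command === 'showsomething') {
        showSomething(); // hypothetical dialog function
    }

    // the backend can also re-navigate to a new fragment while the page is open
    window.addEventListener('hashchange', function () {
        handleCommand(window.location.hash.slice(1)); // hypothetical dispatcher
    });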
As CertainPerformance added as a comment, WebSockets might be the best way to go.
If one does not want to implement a WebSocket server, because an HTTP server is already running, long-polling requests might be the best workaround to simulate this behaviour.
Long polling: the client sends a request which stays open as long as possible. If the server needs to communicate, it can use the open request to send its own "request" via the response. It is a bit hacky, but it is essentially the idea behind WebSockets.
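A minimal long-polling loop on the client could look like this (a sketch; /wait-for-event is a hypothetical endpoint that blocks until the server has something to say, and you could swap fetch for XHR or jQuery if your embedded browser lacks it):

    // Long-polling sketch: the request stays open until the server answers,
    // then we reconnect immediately. '/wait-for-event' is a hypothetical endpoint.
    function poll() {
        fetch('/wait-for-event')
            .then(function (response) { return response.json(); })
            .then(function (message) {
                handleBackendMessage(message); // hypothetical handler
                poll();                        // reconnect right away
            })
            .catch(function () {
                setTimeout(poll, 1000);        // back off briefly on errors
            });
    }
    poll();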
Mozilla has a nice article to help with writing WebSocket servers:
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers
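The browser side of a WebSocket connection is just a few lines (a sketch; ws://localhost:9090 is a placeholder address):

    // Browser-side WebSocket sketch; the address is a placeholder.
    var socket = new WebSocket('ws://localhost:9090');

    socket.onopen = function () {
        socket.send(JSON.stringify({ type: 'hello', from: 'frontend' }));
    };

    socket.onmessage = function (event) {
        // the backend can now push to the frontend whenever it wants
        var message = JSON.parse(event.data);
        console.log('backend says:', message);
    };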
If you (like me) use Vue.js or Quasar, you might want to have a look at vue-native-websocket:
https://github.com/nathantsoi/vue-native-websocket
Good luck
I have a client web application. The user clicks buttons, gives inputs, makes selections, etc. This is all done in Java using the Spring framework.
The client talks to another service that I have built in Java (with Vert.x). This service talks to the database, handles caching, and its endpoints return values.
user navigates to web page--->client controller handles request mapping--->
--->request mapping uses controller method to return view--->
--->view's JS makes requests to service--->service returns model data
Now, I like the idea of control that Spring controllers offer. My client's pages use the Spring controllers to return views, and a small amount of model data.
However, what I am doing to call my service is: in my view's JS, I make AJAX calls directly to the service. I mean, it works, and that is what was suggested to me, but I'm not sure it is what I should be doing.
The alternative would be for my client to make JS calls to the client app's controller, and let the controller from my client app make requests to and receive responses from my service, then pass those responses back to my JS. I feel like this is probably the "cleaner" or "better" way to do it, but I am only about a year into programming with Java and don't know what the best way is. Essentially,
user navigates to web page--->client controller handles request mapping--->
--->request mapping uses controller method to return view--->client view's
JS makes requests to client controller--->client controller makes request
to service--->service returns data to client controller--->
client controller handles data and returns data to client view's JS
My gripe is that the JS exposes more than I would like it to in terms of the service's endpoints. Furthermore, using my client's controller to call the endpoints just seems... right.
I'd appreciate the input of experienced developers on what is right and/or wrong about these designs.
I've seen it successfully implemented both ways. Using a client controller is a fairly easy way of narrowing the exposed surface of the API (provided you lock down access to the back-end services, otherwise you're just wasting your time). This method also allows you to turn the client controller into an adaptor that adapts the return values from the API into something more UI-centric (e.g. mapping codified values into text via an i18n file) and pares down any surplus data before it goes to the client.
However, there's an element of extra work or duplication, as well as a performance overhead (hops, marshalling and unmarshalling).
If you ever suspect you're going to expose the underlying API, or the client's usage of it is going to grow to the point where you've effectively created a shadow copy of your API, you should just put a robust authentication and authorization system in place to only allow calls from the appropriate places. This is not a trivial task, but it would allow you to expose the API to other consumers.
The current and expected future usage of your API (but don't go all YAGNI on this!) should shape your decision. It may be appropriate to have the client controller layer do filtering and shaping to avoid excessive client payloads, or not, if you want the client to have a more transparent representation of your resources.
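For illustration, with the client-controller approach the view's JS only ever talks to its own origin, so the service's endpoints never appear in the page source. A sketch (the /proxy/orders mapping and renderOrder are made up for the example):

    // Sketch: the view's JS calls the client app's controller, which forwards
    // the request to the Vert.x service server-side.
    fetch('/proxy/orders/' + orderId)
        .then(function (response) { return response.json(); })
        .then(function (order) {
            renderOrder(order); // hypothetical view update
        });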
I'm building a Tornado-based server that basically allows the user to upload an image, does some processing on the backend, and returns some updates during and after the processing.
I've implemented a basic server using handlers, which works nicely.
The problem is that the handler interface doesn't allow me to communicate with the client, only to re-render the entire page.
I've considered using WebSockets, but from what I see they shouldn't be used for image uploading, which kind of kills this option.
Is there any other way to communicate with a specific client from a handler (i.e. render only part of the page, trigger some JS event, and so on)?
Thanks :)
Are you using POST and GET methods in your handlers?
If you're using a GET method to receive the image from your client, you can communicate with the client by returning data using the self.write(json_data) method (http://tornado.readthedocs.org/en/latest/guide/structure.html). However, once the GET method returns, the request is considered finished, so you might not be able to send multiple updates.
Also, can you configure the client side as well? I'm assuming you're making a JSON GET call to the Tornado server, and in that case you can just link certain responses to different JS functions in the client-side code.
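Since a single GET can only return once, one client-side workaround is to poll a status endpoint while the image is being processed. A sketch (the /status endpoint, job_id parameter, and showProgress are assumptions, not part of your server):

    // Sketch: poll a hypothetical '/status' endpoint for progress updates
    // until the server reports that processing is done.
    function pollStatus(jobId) {
        $.getJSON('/status', { job_id: jobId }, function (data) {
            showProgress(data); // hypothetical partial page update
            if (!data.done) {
                setTimeout(function () { pollStatus(jobId); }, 1000);
            }
        });
    }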
I am new to the REST world. I am writing an ASP.NET MVC application, and my requirement is to make a few REST calls from the client. I can either make these REST calls from JavaScript or in the C# code in the controller. Which method is recommended? As I understand it, the controller runs on the web server and the JavaScript runs in the browser, so is there any performance degradation if the REST calls are made from the web server?
Can someone suggest the general practice around this? Are there any security gotchas?
Thanks
Let us consider the pros and cons of doing this server-side.
PROS:
You can do other processing on the data using the power of the server
You are not subject to cross-domain limitations like you would be with AJAX
Generally you do not have to worry about your server being able to access the resource, whereas on the client you are at the mercy of the user's network restrictions, firewalls, etc.
More control over your HTTP request/response lifecycle
CONS:
You will have to consume more bandwidth sending the resulting data down to the client
You may have to do more work to leverage good caching practices
You depend on having certain server-side libraries/framework elements
Now, although we have a much bigger list of pros than cons... in the majority of cases you will still want to do this on the client, because the issue of double-handling the data is actually a very big one and will cost you both time and money.
The only reasons to actually do it server-side are if you need to do extensive processing on the data, or if you cannot circumvent CORS (cross-domain) restrictions.
If you are just doing something simple like displaying the information on a webpage, then client side is preferable.
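For that simple display case, the client-side version is trivial. A sketch (the endpoint URL is a placeholder, and this only works if the API sends the appropriate CORS headers):

    // Client-side sketch: call the REST API directly from the browser.
    fetch('https://api.example.com/items')
        .then(function (response) { return response.json(); })
        .then(function (items) {
            displayItems(items); // hypothetical rendering function
        });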
It strongly depends on your situation. If you simply display this data in your page without any actions, you can get it from JavaScript. If you want to work with this data, transform it, or join it with other data, I recommend doing these operations on the server, and therefore getting the data on the server too.
I have an API (1) on which I have built a web application with its own AJAX API (2). The reason for this is to avoid exposing the source API.
However, the web application uses AJAX (through jQuery) to get new data from its AJAX API, and the data retrieved is currently XML.
Lately I have secured the main API (1) with an authorization algorithm. However, I would like to secure the web application as well so it cannot be parsed. Currently it is being parsed to get the hash used to call the AJAX API, which returns XML.
My question: how can I improve security and decrease the chance of others being able to parse my web application?
The only ideas I have are: stop sending XML and send HTML instead, or use Flash (yet this is not an option).
I understand that since the site is public, and no login can be implemented, it can be hard to refuse access to bots (non-legitimate users). Also, Flash is not an option... it never is ;)
Edit:
The web application I am referring to: https://bikemap.appified.net/
This is somewhat of an odd request; you want to lock down a system that your own web application depends on to work. This is almost always a recipe for disaster.
Web applications should always expect to be bypassed, so the real security must come from the server: tarpitting, session tokens, throttling, etc.
If that's already in place, I don't see any reason why you should jump through hoops in your own web application to give robots a tougher time... unless you really want to separate humans from robots ;-)
One way to reduce the refactoring pain on your side is to wrap the $.ajax function in a piece of code that signs the outgoing requests (or somehow adds fields to them)... then minify/obfuscate that code and hope it won't get decoded too quickly.
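A sketch of that wrapper (signRequest is a hypothetical client-side signing function you would supply; remember this only raises the bar for scrapers, it is not real security):

    // Sketch: wrap jQuery's $.ajax so every outgoing request carries a
    // signature header. Handles the common $.ajax(options) call form;
    // 'signRequest' is a hypothetical function.
    (function () {
        var originalAjax = $.ajax;
        $.ajax = function (options) {
            options = options || {};
            options.headers = $.extend({}, options.headers, {
                'X-Signature': signRequest(options.url, options.data)
            });
            return originalAjax.call($, options);
        };
    })();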