Fastest way to send data from C# to JavaScript? [closed]

I have two separate apps. One of them is a UI web application written in pure JS; the other one is a console application written in C#.
Currently I'm calculating some variables (which cannot be done in JS because of browser limitations) with the C# console app, and it writes the results to a txt file.
Then I read the file with the JS application to bring the results to the UI. But the variables often change within milliseconds, and writing the results to disk and retrieving them again is pretty slow.
What can I do? Any suggestions?

The console application is effectively a server. Communicating between a web app and a server by means of a local text file is, well, unconventional! If this is not just for your own use on one machine, it will be very difficult to deploy for another user. Write a small server application and communicate with it the usual way, i.e., by posting the data to the server's IP address and receiving the server's response. You can remove any connection latency (after the initial connection) by communicating over a WebSocket.
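On the browser side, the built-in WebSocket API is all you need. A minimal sketch (the port 8181 and the JSON payload shape are assumptions, not anything from the question):
// Receive fast-changing values from a local WebSocket server.
// The address ws://localhost:8181 and the message format are hypothetical.
const socket = new WebSocket("ws://localhost:8181");

socket.addEventListener("open", () => {
  console.log("Connected to the C# process");
});

socket.addEventListener("message", (event) => {
  // The C# side would push each new result as it is computed.
  const result = JSON.parse(event.data);
  document.getElementById("result").textContent = result.value;
});

socket.addEventListener("close", () => {
  console.log("Connection closed; consider reconnecting with a backoff");
});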

As others have pointed out, WebSockets seem to be a great choice for this task.
Mozilla has a mini tutorial that seems perfect for this task: Writing a WebSocket server in C#.
RE: Comment:
Good point! There is also a Microsoft guide for SignalR: Tutorial: Get started with ASP.NET Core SignalR

It's open to some debate what you mean by "fastest": fastest for you to write, or fastest in terms of the performance of the app.
It's relatively simple to turn your C# code into an API - Visual Studio has templates for API-type projects. Your logic will then get a URL and can be triggered simply by visiting it in any browser, or by having JavaScript do a fetch of the URL. The URL itself can be used to pass variable data; C# knows how to parse it and present it in code, so a method like this (attributes etc. removed for clarity):
public class CalcController : ApiController {
    public int Add(int a, int b) {
        return a + b;
    }
}
can be triggered by visiting a URL like `http://host/api/add/1/2`, and you get JSON back, which JS understands out of the box. If your web app serves up your JS, the JS can automatically talk to the web app without any CORS issues, because it was served by the same server.
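On the JS side, calling that endpoint is a single fetch. A hedged sketch (it assumes routing is configured so the URL above maps to the Add method):
// Call the hypothetical Add endpoint from the browser and use the JSON result.
fetch("/api/add/1/2")
  .then((response) => response.json())
  .then((sum) => {
    console.log("Server calculated:", sum); // 3
  })
  .catch((error) => console.error("Request failed:", error));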
See https://learn.microsoft.com/en-us/aspnet/core/tutorials/web-api-javascript?view=aspnetcore-5.0 for a full tutorial. It's pretty involved (it uses databases and everything), but you can boil it down to something a lot simpler; probably even just creating a web app project from a SPA template in VS will give you everything you need to have front-end JS and back-end C#.
Another option to look at is Blazor: you can drop the JS entirely and put C# in the browser (or leave C# on the server and let Blazor handle transiting the UI changes between client and server), or you can interoperate with JS.
Finally, SignalR has been mentioned in the comments: a technology from Microsoft that builds on top of WebSockets and handles the connection management and the finding and calling of code on either end. It helps create event-driven apps where the events can happen at either end (like a chat app: one person speaks, which causes the JS in their browser to call a method on the server and transmit what they typed; the server then pushes that message out to a group of other connected clients). You set up events in your JS like "onChatReceived", so when the server pushes data, you respond to it. SignalR deals with firewalls automatically; it uses either WebSockets (a long-lived bidirectional data flow), long polling (make a request and the server doesn't answer until a message is available to send), or repeated polling ("any update? any update? any update?"), chosen automatically, so it can make your app very portable - especially if one day you want to host your calculations on a server you control, to protect your intellectual property.
There is a full Microsoft tutorial on SignalR here: https://learn.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-5.0&tabs=visual-studio - you're essentially setting up a group of eventing mechanisms: your JS can trigger your server to do something, and later your server can trigger your JS to do something.
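To make the event-driven flow concrete, here is a hedged sketch of the browser side using the official @microsoft/signalr JavaScript client. The hub path /calcHub and the method names are made up for illustration; the tutorial above shows the real setup:
// Assumes the @microsoft/signalr client script is loaded on the page and the
// server exposes a hub at /calcHub. Hub and method names are hypothetical.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("/calcHub")
  .withAutomaticReconnect()
  .build();

// The server pushes each new result to connected clients.
connection.on("resultUpdated", (value) => {
  document.getElementById("result").textContent = value;
});

connection.start()
  .then(() => connection.invoke("startCalculation")) // ask the server to begin
  .catch((error) => console.error("SignalR connection failed:", error));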
This is slightly different from your JS requesting that the server do something and waiting for the response. That model might work fine and be simpler for you to implement, as it's closest to what you already have: you're just swapping out the file reading for a network call, which JS is allowed to do autonomously.

Related

I am curious as to how python is connected to websites [closed]

I am a new programmer and I saw that Google is written in Python. I know that HTML, CSS, and JS are used to make websites, so how is Python "linked" to this? This is probably a very basic question, but I am new to all this.
So your code in the browser is called the front end (FE). Sometimes it's all you need. However, sometimes you need to store some data on the server and/or retrieve it from there. That is where the back end (BE) comes into play.
The BE is basically an app on some computer (maybe a server, maybe a Raspberry Pi, anything really) that listens for requests from the network. Let's say your code needs some data from the server. Your code on the front end makes an AJAX request to the network address of this server on some specific port. The BE, which may be written in Python or any other language, receives the request and does something with it.
It can fetch data from the DB or anything really. Then it sends a response back to your FE, containing some data, a confirmation that everything was done successfully, or an error if something went wrong.
Python is used for backend development. The backend is the part of your website that runs on your server, not in the browser. The backend is used for authentication, communicating with the database, and much more. There are some popular frameworks in Python, like Django and Flask.
The Google page in front of you is the front end, which is written in HTML, CSS, and JS and usually interpreted by browsers. In the end, HTML, CSS, and JS are all code, and thus strings (or binary).
Python is used to generate those strings, the code, on the back end.
According to the Mozilla Developer Network (MDN),
HTML — Structuring the web
HTML is the language that we use to structure the different parts of our content and define what their meaning or purpose is. This topic teaches HTML in detail.
CSS — Styling the web
CSS is the language that we can use to style and layout our web content, as well as adding behavior like animation. This topic provides comprehensive coverage of CSS.
JavaScript — Dynamic client-side scripting
JavaScript is the scripting language used to add dynamic functionality to web pages. This topic teaches all the essentials needed to become comfortable with writing and understanding JavaScript.
Below is where you will find how Python is linked to HTML, CSS, and JS.
Server-side website programming
Even if you are concentrating on client-side web development, it is still useful to know how servers and server-side code features work. This topic provides a general introduction to how the server-side works and detailed tutorials showing how to build up a server-side app using two popular frameworks: Django (Python) and Express (Node.js).
(Ref.: https://developer.mozilla.org/en-US/docs/Learn)
Below, you can read more about
what clients and servers are,
how they are linked, and
how and what the clients request from the servers, and how the servers respond to the clients
Some of the useful keywords are HTTP verbs (HTTP request methods), Uniform Resource Identifier (URI), and HTTP status code.
An article about Back-End Web Architecture from Codecademy
Note: As someone who has just started programming, you may find it really overwhelming to look for satisfying/precise answers across the web. You could also start from some reliable learning sources and search for the keywords to get specific results. Happy learning!
The old way to do this is with cgi-bin, which is an interface between the web server and a program installed on the same machine.
When a user requests a static page, the web server returns the contents of a file, with some headers prefixed. cgi-bin allows for dynamic pages. Here the web server runs a local program, passing it the URL and any headers from the client (web browser). The program then generates the headers and body of the reply, and the web server passes them back to the client. The program then exits.
The program can be written in any language. Perl was traditional, however Python or a compiled program is frequently used nowadays.
It was common to have cgi-bin at the start of the URL to denote this to the server, but it isn't really needed - the server can be told that any specific (or all) URLs are to be fetched via cgi-bin.
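As a rough illustration of the contract: a CGI program just reads the request details from environment variables and writes headers plus a body to stdout. A hedged sketch, written here for Node.js to stay consistent with the other examples on this page (a Perl or Python CGI script follows exactly the same pattern):
#!/usr/bin/env node
// Minimal CGI program: the web server sets environment variables such as
// QUERY_STRING, runs this script, and relays whatever it prints to the client.
const query = process.env.QUERY_STRING || "";

// Headers first, then a blank line, then the body.
// (A real program would HTML-escape the query before echoing it.)
process.stdout.write("Content-Type: text/html\r\n\r\n");
process.stdout.write(`<html><body><p>You asked for: ${query}</p></body></html>\n`);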

Processing file on front-end vs back-end

I am developing a web application with AngularJS as the front end and a CRUD service at the back end. One of the requirements is to allow the user to upload a CSV file containing a list of items to be created. This can be implemented on the front end by parsing the file in JavaScript and making a create API call to the server for each item. However, I am not sure if this approach is better than passing the file to the server and doing all the processing there. What are the advantages/disadvantages of both these approaches? What is the common practice in such a scenario?
There are 4 things that I would use to make this decision:
Do you have very high load? If you parse it on the client, you are using the client's CPU. Parsing it on the server could cost you by needing more CPUs.
Access to developer talent: is your team more productive programming it on the client or the server side?
If the above does not give a clear answer, then I would put it on the server side, as it would be easier to test.
Will the "upload TSV" functionality be used by other parties/apps who consume your API, or is only the frontend using this functionality?
Since I have implemented this scenario, I couldn't resist responding. I believe the following things should be considered (in addition to the points mentioned above):
The size of the file (huge files freeze the UI, no brainer); it can even crash some not-so-modern browsers.
Does the file need parsing/sanitizing of its contents? (You would not want garbage to make its way to your server.)
Does the user need feedback with load summary details after the upload? (Async vs. sync) - this ties back to #1.
Regardless, you'll end up using some variation of bulk copy at the backend.
Well, I think it's advisable to parse files at the backend. You get so many options, like:
saving the file for reference
reducing the load on your user's resources (RAM and CPU, depending on the size of the file and the operations being done on it before pushing to the backend)
being able to re-initiate activity on the file if there is an error during the batch (if the error is in code, you can reproduce it and help out the client, because you've got the file 😉)
Unless the files are always, say, some <1 MB CSV or TXT, just do the work on the backend.
I hope this helps 😏.

Browser-based app needing IO control

This is a question about the best way to structure an app that has both server-side and client-side needs. Forgive the length -- I am trying to be as clear as possible with my vague question.
For a standalone non-web-connected art project, I'm creating a simple browser-based app. It could best be compared to a showy semi-complicated calculator.
I want the app to take advantage of the browser's presentation abilities and run in a single non-reloading page. While I have lots of experience writing server-side apps in Perl, PHP, and Python, I am newer to client-side programming and a neophyte at JavaScript.
The app will be doing a fair bit of math, a fair bit of I/O control on the Raspberry Pi, and lots of display control.
My original thought (and comfort zone) was to write it in Python with some JS hooks, but I might need to rethink that. I'd prefer to separate the logic layer from the presentation layer, but given the whole thing happens on a single non-reloading HTML page, it seems like JavaScript is my most reasonable choice.
I'll be running this on a Raspberry Pi and I need to access the GPIO ports for both input and output. I understand that JavaScript will not be able to do I/O directly, and so I need to turn to something that will be doing AJAX-ish type calls to receive and send I/O, something like Node.js or socket.io.
My principal question is this: is there a clear best practice in choosing between these two approaches:
Writing the main logic of the app in client-side JavaScript and using server-side scripting to do I/O, or
Writing the logic of the app in a server-side language such as Python with calls to client-side Javascript to manage the presentation layer?
Both approaches require an intermediary between the client-side and server-side scripting. What would be the simplest platform or library to do this that will serve without being either total overkill or totally overwhelming for a learner?
I have never developed for the Raspberry Pi or had to access GPIO ports. But I have developed stand-alone web apps that acted like showy semi-complicated calculators.
One rather direct approach for your consideration:
Create the app as a single page HTML5 stand-alone web app that uses AJAX to access the GPIO ports via Node.JS or Python. Some thoughts on this approach based on my experience:
jQuery is a wonderful tool for keeping DOM access and manipulation readable and manageable. It streamlines JavaScript for working with the HTML page elements.
Keep your state in the browser's local storage: using JavaScript objects and JSON makes this process amazingly simple and powerful. (One line of code can write your whole global state object to local storage as a JSON string; see the sketch after this list.) Always transfer any persistent application state changes from local variables to local storage, and have a page init routine that pulls local storage into local variables upon any browser refresh or system reboot. Constantly refresh your app as part of your testing as you develop, to make sure state is managed the way you desire. This trick will keep things stable as you progress.
Using AJAX via jQuery for any I/O is very readable and reliable. Its asynchronous approach also keeps the app responsive as you perform any I/O. Error trapping and time-out handling are also easily accomplished.
For a back end, if the platform supports it, do consider Node.JS. It looks like there is at least one module for your specific I/O needs: https://github.com/EnotionZ/GpiO
I have found Node to be very well supported and very easy to get started with. Also, it will keep you using JavaScript on both the front and back ends. Where this becomes most powerful is when you rely on JavaScript object literals and JSON: the two become almost interchangeable and allow you to pass complicated data structures to/from the back end via a few (or even just one!) object variables.
You can also keep your options open now on where you want to execute your math functions - since you can execute the exact same JavaScript functions in the browser or in the node back end.
If you do go the route of JavaScript and an HTML5 approach - do invest time in using the browser "developer tools" that offer very powerful debugging tools and dashboards to see exactly what is going on. You can even browse all the local storage key/value pairs with ease. It's quite a nice development platform.
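To illustrate the local-storage point above, here is a minimal sketch. The key name "appState" and the state fields are made up for illustration:
// Keep the whole application state in one object and mirror it to localStorage.
let state = { lastResult: 0, history: [] };

function saveState() {
  localStorage.setItem("appState", JSON.stringify(state)); // one line, whole state
}

function loadState() {
  const stored = localStorage.getItem("appState");
  if (stored) {
    state = JSON.parse(stored); // survives refreshes and reboots
  }
}

// Call loadState() in your page init routine, and saveState() after every
// persistent change to the state object.
loadState();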
After some consideration, I see the following options for your situation:
Disable browser security and communicate with the GPIO directly; there are no standard libraries for this?
Use a JavaScript server environment with GPIO access and AJAX (sketched below); some complexity is introduced by AJAX.
Use the familiar Python with an embedded web browser; if libraries are around, this is easy.
Don't add too much complexity if you're not familiar with the tooling and language.
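A hedged sketch of the second option: a small Node.js server that exposes a GPIO pin over HTTP so the browser can read or set it with AJAX. The "onoff" module, pin 17, and the URL paths are assumptions (the gpio module linked in the answer above works along the same lines):
// node gpio-server.js
const http = require("http");
const Gpio = require("onoff").Gpio;

const led = new Gpio(17, "out"); // pin number is hypothetical

http.createServer((req, res) => {
  if (req.url === "/led/on") {
    led.writeSync(1);
  } else if (req.url === "/led/off") {
    led.writeSync(0);
  }
  // Always answer with the current pin state as JSON for the browser to consume.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ led: led.readSync() }));
}).listen(3000);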
Oh, what a nice question! I'm thinking about it right now. My approach is a little different:
In the old MVC fashion, you consider the V(iew) layer to be the rendered HTML page with JavaScript, CSS and many other things, while M and C run on the server. And then one day I met Mr. AngularJS, and I realized: wow, some basic things may change:
AngularJS considers that the View (or the thing I believed was the view) is not actually the view. AngularJS gave me controllers, data resources and even view templates in that "View"; in other words, the client side itself can be a real application. So now my approach is:
The server does the "server job": read and write data, send data to the client, receive data from the client, etc.
And the client does the "client job": interact with the user, and do the logical processing of data BEFORE IT IS SENT, such as validation or formatting the information collected from the user, etc.
Maybe you can rethink your approach: ask yourself what logic should run on the client and what should run on the server. The client, with JavaScript, does its I/O; the server, with a server-side script, does its I/O. The server will provide the needed resources for the client, and JavaScript uses those resources as the M(odel) of its MVC. I hope you understand; excuse my bad English :D
Well... it sounds like you've mostly settled on:
Python Server. (Python must manage the GPIO.)
HTML/JavaScript client, to create a beautiful UI. (HTML must present the UI.)
That seems great!
You're just wondering how much work to do on each side of the client/server divide... should be functionally equivalent.
In short: Do most of the work in whichever language you are more productive in.
Other notes come to mind:
Writing the entire server as standalone Python is pretty straightforward.
You don't have to, but it's nice and self-contained if you serve the page content itself from it.
If you keep most of the state on the server/Python side, you can make the whole app a little more robust against page reloads (even though I know you mentioned that should never happen).

is json the answer to this: python program will talk and javascript will listen?

The same problem that was haunting me a month ago is still haunting me now. I know I've asked several questions regarding this on this site, and I am truly sorry for that. Your suggestions have all been excellent, but the answer is still elusive. I now realize that this is a direct result of me not being able to phrase my question properly, and for that I am sorry.
To give you guys a generalized view of things, here I go: the situation is like this, I have 2 server-side scripts that I want to run.
a Python program/script that continuously spouts some numbers
based on the output from that Python script, a JavaScript script will perform some action on a webpage (e.g., change the background color, display an alert message, change some text)
I've studied the replies to my previous posts and have found that what I want to accomplish is more or less accomplished by JSON. It is my understanding that JSON transforms 'program-specific' variables into a format that is more 'standard or general or global'.
Two different programs therefore now have the means to 'talk' with each other, because they are now speaking the same 'language'.
The problem is then this: how do I actually facilitate their communication? What is the 'cellphone' between these server-side scripts? Do they even need one?
Thank you!
If I understand what you're asking, the "cellphone" is TCP/IP. The javascript is not server-side; it runs on the client side, and alters what the client's browser displays based on json data that it downloads from the server -- data that in this case is generated by Python.
This question provides a relevant example, though it's a bit technical: JSON datetime between Python and JavaScript
Here's a very basic tutorial that explains how to create a dynamic webpage using python and javascript. It doesn't appear to use json, but it should familiarize you with the fundamentals. Once you understand what's there, using json to transport more complicated data should be fairly straightforward.
http://kooneiform.wordpress.com/2010/02/28/python-and-ajax-for-beginners-with-webpy-and-jquery/
I assume you mean: Python is on the web server, and Javascript is running in the client's web browser.
Because browsers are all different (IE6 is terrible, Chrome is great), there are a huge number of ways people found to "hack" this "cellphone" into place. These techniques are called AJAX and COMET techniques. There is no one "cellphone", but a whole bunch of them! Hopefully, you can find a library to select the right technique for the browser, and you just have to worry about the messages.
Comet is harder to do, but lets the server "push" messages to the client.
Ajax can be easier - you just periodically "pull" messages from the server.
Start with Ajax, then look at Comet if you really need it. Just start by having the client (JavaScript) make a "GET" request to see if the number has changed.
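A hedged sketch of that pull approach; the /latest-number endpoint and its JSON shape are invented for illustration:
// Poll the server every second and react only when the number changes.
let lastValue = null;

setInterval(() => {
  fetch("/latest-number")
    .then((response) => response.json())
    .then((data) => {
      if (data.value !== lastValue) {
        lastValue = data.value;
        document.body.style.backgroundColor = data.value > 10 ? "red" : "white";
      }
    })
    .catch((error) => console.error("Polling failed:", error));
}, 1000);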
I don't know Javascript or json, but...
If you've ever seen a Unix-like operating system, you know about pipes, as in program1 | program2 | program3 ... Why don't you just connect the Python and JavaScript programs with pipes? The first one writes to stdout, and the next one reads from stdin.
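For completeness, this is roughly what the JavaScript end of such a pipe could look like, assuming it runs as a Node.js process rather than in a browser (which is the usual situation, and why pipes rarely fit web pages):
// python_program.py | node listener.js
// Read whatever the Python program writes to stdout, one line at a time.
const readline = require("readline");

const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line) => {
  const number = parseFloat(line);
  console.log("Python sent:", number);
});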
This probably isn't the answer that you are looking for, and without links to your previous posts, I don't have much to go on, but nonetheless...
javascript is client side. I can interpret your question 2 different ways...
Your python script is running on your computer, and you want a script to actually alter your current browser window.
Not too sure, but writing a browser plugin may be the answer here.
Your python script is running on the server, and as a result of the script running, you want the display of your site to be changed for viewing persons.
In this case, you could use AJAX polling (or similar) on your site. Have your site poll the server with AJAX, call a server method that checks the output of the script (maybe written to a file?), and see if it has changed.
When 2 processes need to communicate, they need to decide on a common/shared way to express things and a protocol to exchange those things.
In your case, since one of the processes is a browser, the protocol of choice is HTTP. So the browser needs to make HTTP requests to your Python process.
This Python process will need, in some way or another, to be exposed via HTTP.
There are several ways to build a web server in Python. You should read this article as a jumpstart: http://fragments.turtlemeat.com/pythonwebserver.php
Once you have this, your browser will be able to issue HTTP GET requests to your server, and your server can reply with a string.
This string can be whatever you like. Nevertheless, if your answer contains structured data, it can be a good start to use XML notation or JSON notation.
JSON (which stands for JavaScript Object Notation) is very easy to use in JavaScript, and this is why many people advised you to choose this notation.
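For instance, a response body like the one below (the field names are invented) is trivially consumed on the JS side:
// The "shared language": the Python side sends a plain string like this ...
const payload = '{"value": 42, "status": "ok"}';

// ... and the browser turns it straight into a JavaScript object.
const data = JSON.parse(payload);
console.log(data.value); // 42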
I hope this will help you
Jérome wagner

Offline / Online Data Synchronization Design (Javascript) [closed]

I'm currently in the process of writing an offline webapp, using all the HTML5 goodies for offline support. However, I'm now starting to think about writing the sync module that will ensure that any offline data gets sent to the server and server data gets back to the client. I'm sure this has been done before; it's a pretty classic design issue that affects mobile devices and a plethora of other things. So I'm wondering, can anyone point me to some good design resources for this kind of thing?
I really do not need to be too sophisticated with this: I'm not handling multiple users accessing the same data, and I'm happy not to merge conflicts (just take the latest), but I would still like a design that allows me those options in the future.
Also, are there any open source projects implementing this type of thing? I'm not above ripping off someone else's code (if license allows) and I'm happy to port.
I had a similar problem. I decided to use a purely JSON in and out approach. The solution I'm taking on form submission is:
catch the form submit event
check whether or not the user is online
if the user is online, then submit the form as a normal form POST
if the user is offline, then stringify a JSON request and store it locally (I decided to use Web SQL Database). The queue table is simply Uri and Payload.
Then I have global event hooks for the online / offline events. When the user comes back online, it checks the queue, and if the queue has items in it, it then sends them through as JSON POST requests.
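A hedged sketch of that flow, using localStorage in place of the Web SQL queue table to keep it short; the queue key, form id, and endpoint are all assumptions:
// Queue form submissions while offline and flush them when connectivity returns.
function queueRequest(uri, payload) {
  const queue = JSON.parse(localStorage.getItem("requestQueue") || "[]");
  queue.push({ uri, payload });
  localStorage.setItem("requestQueue", JSON.stringify(queue));
}

document.querySelector("#myForm").addEventListener("submit", (event) => {
  const payload = Object.fromEntries(new FormData(event.target));
  if (!navigator.onLine) {
    event.preventDefault(); // keep the data locally instead of posting
    queueRequest("/api/items", payload);
  }
  // When online, let the normal form POST go through unchanged.
});

window.addEventListener("online", () => {
  const queue = JSON.parse(localStorage.getItem("requestQueue") || "[]");
  queue.forEach((item) => {
    fetch(item.uri, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(item.payload),
    });
  });
  localStorage.setItem("requestQueue", "[]");
});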
If you are primarily interested in getting JSON data and caching it for offline usage, then take a look at jquery.offline.
The challenge with synchronizing in both directions is that you need to update the locally cached lists with any CRUD work that you have queued.
I'd like to find a more generic way to do this.
My plan for a similar design (not yet tried) is to use something like PouchDB to store the data locally and then sync it with a remote couch instance.
Check out Derby, a Node MVC framework that has some pretty sweet synchronization and conflict resolution features. http://derbyjs.com/
In our team we have already developed an app with an offline/online mode.
We are using the following libraries:
rack-offline
jquery
backbonejs
backbonejs-localStorage
backbonejs-queues
jammit
Using rack-offline we cache all resource files and JST templates for rendering content on the page. Backbone.js and its localStorage adapter help us build an MVC app on the client; it's pretty awesome, you should try it. We always use localStorage for saving data. When we create, for example, a post model object and save it to localStorage, we trigger the queues for syncing (we also have a timer-based background worker that runs the sync process automatically). For each model we have a separate sync class that is run by the queue sync trigger. If navigator.onLine => true, we send requests to the server with data for updating. If you close the browser, you don't lose your data, because the queues are in localStorage. The next time the client loads with navigator.onLine => true, it syncs the data.
To see how to use rack-offline, you can check my small project on GitHub:
pomodoro-app
Good luck!
I faced the same problem and ended up using an XML-file for storage and git to track changes and commit them automatically, as soon as a connection is available. The sync is done with the usual git commit / push / pull commands in a shell script and a cronjob starting the script. This would also work if you store JSON in a textfile.
I'm currently working on a similar webapp. I've decided on the following workflow:
The form isn't really submitted: the "Submit" button actually saves the serialized form data to localStorage (in a queue). This avoids the trouble of capturing the submit and writing additional error-handling code for a disconnect during form submission.
A transport script is triggered after the data is saved. It checks the online/offline state.
When online, it tries to send the latest data from the queue to the server (an AJAX request), deletes it from the queue on success, and continues sending the next data from the queue after a short timeout.
It schedules a re-check after some period of time (via setTimeout()); a sketch of this loop follows.
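A hedged sketch of that transport loop: drain the localStorage queue one item at a time and re-check on a timer. The key name, endpoint, and intervals are assumptions:
function processQueue() {
  const queue = JSON.parse(localStorage.getItem("formQueue") || "[]");

  if (!navigator.onLine || queue.length === 0) {
    setTimeout(processQueue, 30000); // nothing to do, re-check later
    return;
  }

  fetch("/api/submit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(queue[0]),
  })
    .then((response) => {
      if (response.ok) {
        queue.shift(); // remove the item that was just accepted
        localStorage.setItem("formQueue", JSON.stringify(queue));
      }
    })
    .finally(() => setTimeout(processQueue, 2000)); // short pause, then continue
}

processQueue();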
If you are up for using the potentially heavy Ext JS / Sencha framework, it has a nice data API with offline (e.g. localStorage) support and a proxy approach for write-thru to local then server. I use Sencha Touch (mobile-specific).
For debugging web storage, check out Weinre.
DerbyJS would probably be the best solution. However, Derby is still in development, and offline support is only planned and has not yet been implemented. In the Google Group (http://groups.google.com/group/derbyjs/browse_thread/thread/7e7f4d6d005c219c) you can find additional information about what is planned for the future.
I'd personally recommend you write a wrapper on top of the indexedDB API that checks whether you are online/offline.
if offline, just store the document in indexedDB and set the persisted flag to false on all documents
if online, get all documents where persisted is false and store them in MongoDB or something equivalent on the backend, then store new documents in both indexedDB and on the server with the persisted flag set to true
I've written a small one
You would have to augment the tunnel to set the persisted flag automatically and also tunnel the synchronization of these documents to the backend
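A hedged sketch of the persisted-flag idea using the raw IndexedDB API (not the wrapper mentioned above); the database/store names and the /api/docs endpoint are invented:
function openDb() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("offline-app", 1);
    request.onupgradeneeded = () =>
      request.result.createObjectStore("docs", { keyPath: "id", autoIncrement: true });
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function saveDoc(doc) {
  const db = await openDb();
  // Mark the record as persisted only if we can reach the server right now.
  const record = { ...doc, persisted: navigator.onLine };
  db.transaction("docs", "readwrite").objectStore("docs").add(record);
  if (navigator.onLine) {
    await fetch("/api/docs", { method: "POST", body: JSON.stringify(doc) });
  }
}

async function syncPending() {
  if (!navigator.onLine) return;
  const db = await openDb();
  const store = db.transaction("docs", "readwrite").objectStore("docs");
  store.getAll().onsuccess = (event) => {
    event.target.result
      .filter((doc) => !doc.persisted)
      .forEach((doc) => {
        fetch("/api/docs", { method: "POST", body: JSON.stringify(doc) }).then(() => {
          doc.persisted = true;
          // A new transaction is needed here, since the original one has committed.
          db.transaction("docs", "readwrite").objectStore("docs").put(doc);
        });
      });
  };
}

window.addEventListener("online", syncPending);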
