What is the backend web framework's role with AngularJS on the frontend?

Choosing the "right" web framework is quite a challenging task; at least in Java we have a lot of them. But looking at JavaScript frameworks like AngularJS, I doubt we really need something heavy on the server. Usually a web framework is responsible for routing, templating, building pretty URLs, and some other stuff. With AngularJS we can move all of these responsibilities to the client side. Then the backend becomes nothing more than a REST listener and data validator: a thin layer between your application logic and the client view. So why do we need web frameworks now, if all we want is a REST listener?
So far I have found two concerns that must be handled on the server side: authentication/authorization, and things requiring 'pushing', like Comet. Are these criteria enough to choose the "right" framework?
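To make it concrete, the kind of "REST listener and data validator" I have in mind is as thin as this sketch (using Express; the route and data layer are made up):

    // A thin backend: routing, validation, JSON in/out - and nothing else.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Stand-in for whatever your real data layer is.
    const db = { findPost: async (id) => ({ id, title: 'stub post' }) };

    app.get('/api/posts/:id', async (req, res) => {
      const id = Number(req.params.id);
      if (!Number.isInteger(id) || id < 1) {
        return res.status(400).json({ error: 'invalid id' }); // data validation
      }
      res.json(await db.findPost(id));
    });

    app.listen(3000);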

I'll give you one more capability that I've seen require back-end support: pages that generate a file. For example, they get a file from a third party and hand it to the client as though they produced it directly, they generate a JPEG/PNG/GIF image on the fly, or they produce a CSV/XLS dump of data. There may be ways to generate those on the front end and make them available for download, but sometimes the back-end is just easier.
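To give one concrete flavor of this, here is a minimal sketch of a CSV dump endpoint using Express (the route and fields are invented):

    // Stream a CSV dump of data to the client as a downloadable file.
    const express = require('express');
    const app = express();

    // Stand-in for a real query against your data store.
    async function fetchUsers() {
      return [{ id: 1, name: 'Ada', email: 'ada@example.com' }];
    }

    app.get('/export/users.csv', async (req, res) => {
      const users = await fetchUsers();
      res.setHeader('Content-Type', 'text/csv');
      res.setHeader('Content-Disposition', 'attachment; filename="users.csv"');
      res.write('id,name,email\n');
      for (const u of users) {
        res.write(`${u.id},${u.name},${u.email}\n`);
      }
      res.end();
    });

    app.listen(3000);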
Other than that, your assessment is 100% correct. You literally need less server for apps built with AngularJS than was needed with the previous request/response model of ASP/JSP/PHP/etc.
However, just because you need less doesn't mean you need nothing. Issues like data caching and how user sessions are handled can still come up as you scale, even with a smaller server. But it has definitely opened things up for tech like Node.js, which I would not have given much thought a few years back.

How to protect (obfuscate/DRM) trained model weights in Tensorflow.js?

I am working on a React-based web app that uses Tensorflow.js to run an AI model in realtime on the client in the browser. I've trained this AI model from scratch and I'd like to protect it from being intercepted and used in other projects. Are there any protections available to do this (obfuscation, DRM, etc.)?
From a business perspective, I'd only like the model to work on my web app, nowhere else.
The discussions (1 2 3) I've been able to find on this are more geared toward native apps, not web apps.
Here is an example open source web app that uses Tensorflow.js. These weights are an example of what I would like to protect in my app.
Client-side code obfuscation will never fully prevent it. Use a server instead.
Obfuscation
If your client-side application contains the model, then the user will be able to somehow extract it. You can make it harder for the user, but it will always be possible. Some techniques to make it harder are:
Obfuscating your code: That way the user will not be able to read your code and comments easily. Depending on your build tools, this might already be done for you when you produce a "production ready" build.
Obfuscating the library and its public API: Even if your code is obfuscated, the user might still be able to guess what is going on by watching the public API calls of the library. Example: it would be rather easy to set a breakpoint at the model.predict function and debug your code from there. By also obfuscating libraries and their APIs, this becomes harder.
Put "special checks" in your code: You could also check whether the page the code is running on is your page (e.g. whether the domain matches), etc. You will want to obfuscate these checks as well.
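For illustration, such a check could be as simple as this (the hostname is made up; a determined user can still bypass it, so treat it as a speed bump only):

    // Hypothetical "special check": refuse to run off-domain.
    const ALLOWED_HOSTS = ['myapp.example.com'];
    if (ALLOWED_HOSTS.indexOf(window.location.hostname) === -1) {
      throw new Error('Unauthorized host'); // obfuscate this check too
    }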
Even if your code is perfectly obfuscated and well protected, your client-side code still contains your model somewhere. With these methods it will always be possible to somehow extract your model.
Server-side approach
To make it impossible to get your model, you need a different approach. Only put your "dumb logic" on the client and exclude the part of the code that you want to protect. Instead, you offer an API on your server that executes the "protected part" of your code.
This way, instead of running model.predict on the client side, you make an AJAX request to your backend (with the parameters), and the server returns the results. The user only ever sees the input and the output and cannot extract the model itself.
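A minimal sketch of that server, assuming Express and @tensorflow/tfjs-node (the model path, route, and input shape are all invented):

    // The model lives only in this process; clients see input and output.
    const tf = require('@tensorflow/tfjs-node');
    const express = require('express');

    const app = express();
    app.use(express.json());

    let model;
    tf.loadLayersModel('file://./model/model.json') // hypothetical path
      .then((m) => { model = m; });

    app.post('/predict', async (req, res) => {
      if (!model) return res.status(503).json({ error: 'model not loaded' });
      const input = tf.tensor2d([req.body.features]); // e.g. [[0.1, 0.2, ...]]
      const output = model.predict(input);
      const result = await output.array();
      input.dispose();
      output.dispose();
      res.json({ prediction: result[0] });
    });

    app.listen(3000);

On the client, the model.predict call then becomes a plain fetch('/predict', ...) POST with the features in the request body.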
Keep in mind that this means a lot more work, as you not only have to write the code for your client-side application but also for your server-side application, including the API. Depending on what your application looks like (e.g. does it have a login?), this might be a lot more code.
Another way you can protect your model is to split it into multiple blocks, putting some blocks on the server side and some on the client side. This method may also introduce a lot of engineering work, but once you do it you can trade off computation load and network latency between the server and the client. Users can only get the client-side blocks, which are useless without the cooperating server-side blocks.
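A rough sketch of the client half of that splitting idea (the split point, the /predict-tail endpoint, and the model handle are all invented):

    // clientHalf is a tf.LayersModel containing only the first few layers;
    // the remaining layers run behind a server endpoint.
    async function predictSplit(clientHalf, input) {
      const activations = clientHalf.predict(input);
      const payload = await activations.array(); // serialize the intermediate tensor
      activations.dispose();
      const response = await fetch('/predict-tail', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ activations: payload }),
      });
      return (await response.json()).output;
    }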

Social network architecture decision

As I can't orient myself freely in the topic of building dynamic sites, it is quite hard for me to google this. So I'll try to explain the problem to you.
I'm developing a simple social network. I've built a basic PHP API represented by files like "get_profile.php", "add_post.php", etc., using the POST method to pass data. Then I fetch the data with JavaScript AJAX (the PHP endpoints return JSON), which means I only get the data a page needs after the page itself has loaded. That slows down page loading, and I feel like this structure is really wrong.
I hope you'll explain to me how to build a proper structure, or at least give me some links to read. Thanks.
Populate the HTML with the (minimum) required data on the server side and load all other necessary data on the client side using AJAX (as you already do).
In any case, I would profile your application to find the most important bottlenecks. Do you parallelize your AJAX requests?
Facebook, for example, doesn't populate its HTML with the actual data on the server side, but provides the rough structure, which is later filled using AJAX requests.
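On the parallelization point: if get_profile.php and friends are independent, fire the requests concurrently instead of one after another. A minimal sketch with fetch (get_posts.php and the element ids are invented):

    // Fire independent requests at once; the page waits for the slowest
    // request instead of the sum of all of them.
    async function loadPage(userId) {
      const asJson = (r) => r.json();
      const body = new URLSearchParams({ user: userId });
      const [profile, posts] = await Promise.all([
        fetch('get_profile.php', { method: 'POST', body }).then(asJson),
        fetch('get_posts.php', { method: 'POST', body }).then(asJson),
      ]);
      document.querySelector('#profile-name').textContent = profile.name;
      document.querySelector('#post-count').textContent = posts.length;
    }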
If I understood your architecture right, it sounds ok.
Advice
Making your architecture similar to this allows you to deliver templates for the page structure and then populate them with data from your AJAX requests. This also makes your server faster, since it doesn't have to render the HTML.
Be careful with the amount of requests you make though, if each client makes a lot of them you will have a problem.
Try to break your application into different major pieces and treat each one in turn. This will allow you to separate them into modules later on. This practice is also referred to as a micro-services architecture.
After you break them down, try to figure out user interactions and patterns. This will help you design your database and model in a way in which you can easily optimise for the most frequent use cases.
The way of the pros.
You should study how Facebook does things; they are quite open about it.
For example, the BigPipe method is the fastest I have seen for loading a page.
Also, I think you should read a bit about RESTful applications and SOA type architectures.

Why Angular/Ember/Backbone and not a regular web framework?

So I'm afraid I might be missing something pretty fundamental here, but I really can't get my head around this - Why? Why would we want to use those JS MVC frameworks, instead of sticking with Rails, Django, PHP and so on?
What do these JS frameworks give us that can't be achieved by the old web frameworks? I read about SPA, and there's nothing I couldn't do there with ASP.NET MVC, right?
I'm really baffled by hearing all the people at work wanting to leave our current framework for these new ones, and it's much more than just for the sake of learning something new.
I am totally up for that, and I've always tried playing around with other frameworks to see what I'm missing, but perhaps these new technologies have something really big to offer that I simply cannot see?
Single page applications provide a better experience by having all page transitions be seamless. This means you never see the "page flash" between user actions, in addition to a few other user experience improvements.
Front-end frameworks also generally provide a common way to interface with APIs. So instead of writing an AJAX wrapper for every page in your site, you just say 'this page has this route (path), hooks data with this schema from that API endpoint, and presents it with these templates and helpers.' There are many proponents of APIs, because there are many good reasons to write your applications from a service standpoint. This talk sums up a lot of the points in favor of APIs. To summarize:
Orchestrating your web offerings as services makes them inherently decoupled. This means they are easily swapped out. All the reasons behind strong object-oriented design principles apply equally to the larger parts of an application. Treat each piece as an independent part, as in a car, and the whole platform is more robust and healthy. That way, a defect in the headlights doesn't cause the motor to blow up.
This is very similar to how a SOAP WSDL works, except you have the auto creation tools right out of the box.
Having well defined touch points for each part of your application makes it easier for others to interface with. This may not ever factor into your specific business, but a number of very successful web companies (Google/Yahoo, Amazon AWS) have created very lucrative markets on this principle. In this way, you can have multiple products supported by the same touch points, which cuts a lot of the work out of product development.
As others point out, the front-end framework is not a replacement for backend server technologies. How could it be? While this may seem like a hindrance ("Great, now we have two products to support!"), it is actually a great boon. Now your front and back ends can be changed and versioned with much less concern over inadvertently breaking one or the other. As long as you stick to the contract, things will "Just Work™".
To answer your additional question in the comment, that is exactly correct. You use a front end framework for handling all the customer interaction and a completely separate back-end technology stack to support it.
I'm forgetting a few good ones...
Angular, Ember, and Backbone are client-side JavaScript frameworks. They could be used interchangeably with a Rails, Django, or PHP backend. These JavaScript MVCs are only responsible for organizing JavaScript code in the browser and don't really care how their data is handled or persisted server-side.
Django/Rails etc are server-side MVC frameworks. Angular/Backbone etc are client-side Javascript MVC frameworks. Django/Rails and Angular/Backbone work together - in a single-page app, usually the server-side MVC will serve the initial HTML/JS/static assets once, and then once that is done, the client-side router will take over and handle all subsequent navigations/interactions with your app.
The difference here lies in the concept of what a "single-page application" is. Think about how a "regular" Django/Rails website works. A user enters your app, the backend fetches data and serves a page. The user clicks on a link, which triggers the server to serve a new page, and the entire page reloads. These traditional types of websites are basically stateless, except for things like cookies/sessions etc.
In contrast, a single-page application is a stateful JavaScript application that runs in the browser and appears to act like a traditional webapp, in that you can click on things and navigate around as usual, but the page never reloads; instead, specific DOM nodes have their contents refreshed according to the logic of your application. To achieve a pure JavaScript client-side experience like this in a maintainable fashion, you really have to start organizing your JavaScript code for the same reasons you do on the server: you have a router which takes a URL path and interacts with a controller, which often contains the logic for showing/hiding views for a particular URL; and you have a model which encapsulates your data (think of a model as roughly one "row" of a database result) and which your views consume. And because it's JavaScript there are events going on, so you can have your view listen for changes in its associated model and automatically re-render itself when the data is updated.
Also keep in mind that you don't just have one view on the client side; there are usually many separate views that make up a page, and these views are often nested, not only for organizational purposes but because we want the ability to refresh only the parts of the UI that need to be refreshed.
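As a rough illustration of that router/model/view structure in Backbone (all names invented):

    // Model: one "row" of data, fetched from a REST endpoint.
    var Message = Backbone.Model.extend({ urlRoot: '/api/messages' });

    // View: re-renders itself whenever its model changes.
    var MessageView = Backbone.View.extend({
      initialize: function () {
        this.listenTo(this.model, 'change', this.render);
      },
      render: function () {
        this.$el.text(this.model.get('subject'));
        return this;
      },
    });

    // Router: maps URL paths to controller-style actions.
    var AppRouter = Backbone.Router.extend({
      routes: { 'messages/:id': 'showMessage' },
      showMessage: function (id) {
        var message = new Message({ id: id });
        new MessageView({ model: message, el: '#content' });
        message.fetch(); // triggers 'change', which triggers render
      },
    });

    new AppRouter();
    Backbone.history.start();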
The intro to Backbone is probably a good starter on the topic: http://backbonejs.org/#introduction
Check this article; it explains well what a modern web application should look like on the client side, on the server side, and in the communication between them.
By the way:
Client side -> Ember, Angular, Backbone, Knockout.
Server side -> Django, Node, Rails

Rails: Not ember, not JS responses, but something in-between

I am developing a standard Rails application, and so far I haven't used any AJAX, just good ol' HTML. My plan is to iteratively add "remote" links, all that kind of stuff, and support for JS responses. I know that generating JS server-side is considered very evil, but I find it very handy: it's easy, it's fast, it makes the application snappy enough, and i18n comes out of the box.
Using a pure JSON approach would be lighter, but needs lots of client-side coding.
Now imagine that in this application users have a mailbox, and since the idea is that they will be able to do most or even all of the actions without reloading the page, the mailbox counter will never change unless they refresh the page manually.
So, here comes the question: Which is the best way to handle this?
I thought about using Ember (for data binding) and sharing views with Rails via some sort of Handlebars implementation for Ruby. That would be quite awesome, but not very transparent for the developer (me). Although I guess that I would only need to write Handlebars views where Ember will use them; the rest can still be written in their original format, no?
Another option might be to use some sort of event system (EventSource, maybe?) and just go with the handy JS views approach, listening to those events. I guess those should be JSON objects, and the client must be coded to handle them. This seems a bit cumbersome, and I need a solution that works on Heroku (Faye?), which is where my app is hosted. Any hints?
I think that the ember approach is the more robust one, but seems to be quite complex as well, and I don't want to repeat myself server and client side.
EDIT:
I have seen this, which is more or less option #2.
One of the advantages of using a JavaScript framework is that the whole application can be concatenated and compressed into one JavaScript file. Given that modern browsers aggressively cache JavaScript, the browser would no longer need to request those assets after the initial page load.
Another advantage of using a JavaScript framework is that it requires you to be a consumer of your own API. Fleshing out the application's API for your own consumption might lead to less work in the future if there is a possibility of mobile applications or third parties having access to it.
If you do not need your application to respond to every request with an equivalent HTML response, I think a compelling case could be made for using a JavaScript framework.
Many of those benefits might be lost if your application needs to respond to every request with an equivalent HTML template. The Ember core team has been fairly vocal in its opposition to supporting this style of progressive enhancement. Considering that the tools for using a JavaScript framework in this way are relatively unstable and immature, I would be prone to using option 2 to accomplish this.
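For what it's worth, the client side of option 2 can stay very small. A sketch, assuming the server exposes a hypothetical /mailbox/events endpoint that emits server-sent events:

    // Listen for mailbox updates pushed from the server and
    // patch the counter in place - no page reload.
    const source = new EventSource('/mailbox/events');
    source.addEventListener('unread-count', (event) => {
      const { count } = JSON.parse(event.data);
      document.querySelector('#mailbox-counter').textContent = count;
    });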

Decision about web application architecture

I am facing a decision about the web application architecture I am going to work on.
We are a small team, and actually I will work on it alone (everybody else is working on something else).
This application will consist of a front-end built on the ExtJS library, and it will use the model "load page, build GUI and never refresh". On the web "desktop" there will be a lot of data windows, map views (using OpenLayers + GeoExt) and other stuff. The GUI should be flexible and allow every user to modify (and persist) the layout to fit his/her needs. It should be possible to divide the application into modules / parts / ... and then let users in specific groups use only the specific modules. In other words, each group of users can have a different GUI available on the web "desktop".
My questions are:
First of all, is this approach good? There will be a lot of AJAX calls from clients; maybe this could be a problem.
How do I handle code complexity on the client side? So far I have decided to use the dojo.require / dojo.provide feature and divide the client-side code into modules (for production they will be put together using the Dojo build system). I am thinking about using a kind of IoC container on the client side, but I am not sure which one yet. It is very likely that I will write one myself; it should not be difficult in a dynamic language like JavaScript.
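Something along these lines is what I have in mind - a minimal hand-rolled container (a sketch; the registered names are invented):

    // Minimal IoC container: register factories, resolve lazy singletons.
    function Container() {
      this.factories = {};
      this.instances = {};
    }
    Container.prototype.register = function (name, factory) {
      this.factories[name] = factory;
    };
    Container.prototype.resolve = function (name) {
      if (!(name in this.instances)) {
        this.instances[name] = this.factories[name](this); // pass container for deps
      }
      return this.instances[name];
    };

    // Usage:
    var container = new Container();
    container.register('mapService', function () {
      return { defaultZoom: 10 };
    });
    container.register('mapWindow', function (c) {
      return { service: c.resolve('mapService') };
    });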
How do I handle AJAX calls on the server? Should I use WCF on the server side, or just an ordinary ashx handler?
How do I handle code complexity on the server side? I want to use Spring.NET; maybe this approach could help with the modularity problem.
Data access - here I am pretty sure what to use: for the DAL classes I will use NHibernate, and then compose them with business classes using Spring.NET.
I would really appreciate some advice about which way to go. I know about a lot of technologies, but I have used only a small part of them, and I don't have time to explore all of them before committing to a decision.
We do this type of single-page interface where I work, on a pretty large scale for our clients (our site is not a public internet site). This seems to work pretty well for us. The more JS you have, the more difficult it gets to maintain, so have as many automated JS tests as you can and try to break up your JS logic in an MVC fashion. Ext 4.0 is supposed to make this much easier.
Ext 4.0 has this built in if you are trying to limit the code you bring down. If you have the same users day after day, though, I think it is best to just bring all the source down (compressed and minified) and cache it.
We've found asmx to work really well. I have nothing against WCF, but last I looked it seemed like more trouble than it was worth (I know they have made many improvements recently). asmx just works, though, with a few request header changes and a little handling of the "d." wrapper on the client side.
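About the "d.": ASP.NET script services wrap JSON responses in a d property, so a tiny helper keeps that quirk in one place (a sketch):

    // ASP.NET wraps script-service JSON responses as {"d": ...};
    // unwrap once here instead of all over the client code.
    function unwrap(response) {
      return response && Object.prototype.hasOwnProperty.call(response, 'd')
        ? response.d
        : response;
    }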
Our Server side data access layer is pretty complex, but the interface for the ajax calls is pretty simple. You have not really given enough info to answer this part. I would start as simple as possible and refactor often.
We are also using nHibernate. Works fine for us. We have built a DDD model around it. It can take a lot of work to get that right though (not sure if we have it right after months of working at it).
If I were you, I'd start with just ExtJS, your web service technology, and NHibernate.
I would recommend ASP.NET MVC 3 with Razor instead. Rather than a lot of JavaScript and calls to services, you can just make AJAX calls to an action in a controller; that will give you more maintainable code, and you can use an IoC container like Ninject. EF instead of NHibernate.
But it's your decision.
I would look into using a tool like Google Closure Compiler, especially if you're dealing with a very large project. I don't have much experience with ExtJS, but large projects in JavaScript are hard, and something like Closure Compiler tends to make them easier.
