How to protect (obfuscate/DRM) trained model weights in Tensorflow.js?

I am working on a React-based web app that uses Tensorflow.js to run an AI model in realtime on the client in the browser. I've trained this AI model from scratch and I'd like to protect it from being intercepted and used in other projects. Are there any protections available to do this (obfuscation, DRM, etc.)?
From a business perspective, I'd only like the model to work on my web app, nowhere else.
The discussions (1 2 3) I've been able to find on this are more geared toward native apps, not web apps.
Here is an example open source web app that uses Tensorflow.js. These weights are an example of what I would like to protect in my app.

Client-side code obfuscation will never fully prevent it. Use a server instead.
Obfuscation
If your client-side application contains the model, then the user will be able to somehow extract it. You can make it harder for the user, but it will always be possible. Some techniques to make it harder are:
Obfuscating your code: That way the user will not be able to read your code and comments easily. Depending on your build tools, this might already be done for you when you produce a "production ready" build.
Obfuscating the library and its public API: Even if your code is obfuscated, the user might still be able to guess what is going on by seeing the public API calls of the library. Example: It would be rather easy to set a break point at the model.predict function and debug your code from there on. By also obfuscating libraries and their API, this will become harder.
Put "special checks" in your code: You could also check if the page the code is running on is your page (e.g. if the domain matches), etc. You also want to obfuscate this code as well.
Even if your code is perfectly obfuscated and well protected, your client-side code still contains your model somewhere. With these methods it will always be possible to somehow extract your model.
Server-side approach
To make it impossible to get your model, you need a different approach. Only put your "dumb logic" on the client and exclude the part of the code that you want to protect. Instead, offer an API on your server that executes the "protected part" of your code.
This way, instead of running model.predict on the client side, you make an AJAX request to your backend (with the parameters) and it returns the results. The user only sees the input and the output and cannot extract the model itself.
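A minimal sketch of that split, assuming a Node backend with @tensorflow/tfjs-node and Express (the route and model path are placeholders):

    // server.js -- the weights never leave the server
    const tf = require('@tensorflow/tfjs-node');
    const express = require('express');

    const app = express();
    app.use(express.json());

    let model;
    tf.loadLayersModel('file://model/model.json') // placeholder path
      .then((m) => { model = m; });

    app.post('/api/predict', async (req, res) => {
      const input = tf.tensor(req.body.input);       // shape depends on your model
      const output = model.predict(input);
      res.json({ prediction: await output.array() });
      tf.dispose([input, output]);
    });

    app.listen(3000);

On the client, only the input and output cross the wire:

    const res = await fetch('/api/predict', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ input: features }),
    });
    const { prediction } = await res.json();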
Keep in mind that this means a lot more work, as you not only have to write the code for your client-side application but also for your server-side application, including the API. Depending on what your application looks like (e.g. does it have a login?), this might be a lot more code.

Another way you can protect your model is to split it into more than one block. Put some blocks on the server side and some on the client side. This method may also require a lot of engineering work, but once it is done you can trade off computation load and network latency between the server and the client. Users can only get the client-side blocks, which are useless without the cooperating server-side blocks.
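In sketch form, assuming you have already saved the first layers ("head") and the remaining layers ("tail") as two separate tfjs models (all names here are hypothetical):

    // client: run the public "head" locally, send the activations to the server,
    // which runs the private "tail" model (mirroring the /api/predict route above)
    async function classify(input) {
      const head = await tf.loadLayersModel('/models/head/model.json'); // hypothetical
      const activation = head.predict(tf.tensor([input]));
      const res = await fetch('/api/tail', {                            // hypothetical
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ activation: await activation.array() }),
      });
      return (await res.json()).prediction;
    }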

Related

Is there any difference between making DOM on the server/client side? (speed perspective) [duplicate]

I've done some web-based projects, and most of the difficulties I've met (questions, confusions) could be figured out with help. But I still have an important question, even after asking some experienced developers: when functionality can be implemented with both server-side code and client-side scripting (JavaScript), which one should be preferred?
A simple example:
To render a dynamic HTML page, I can format the page in server-side code (PHP, Python) and use Ajax to fetch the formatted page and render it directly (more logic on the server side, less on the client side).
I can also use Ajax to fetch the data (not formatted; JSON) and use client-side scripting to format the page and render it (the server gets the data from a DB or another source and returns it to the client as JSON or XML; more logic on the client side and less on the server).
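For concreteness, the second variant might look something like this (the endpoint and markup are placeholders):

    // Variant 2: fetch raw JSON and format the page on the client
    fetch('/api/items')                      // hypothetical endpoint
      .then((res) => res.json())
      .then((items) => {
        document.querySelector('#list').innerHTML =
          items.map((it) => '<li>' + it.name + '</li>').join('');
      });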
So how can I decide which one is better? Which one offers better performance? Why? Which one is more user-friendly?
With browsers' JS engines evolving, JS can be interpreted in less time, so should I prefer client-side scripting?
On the other hand, with hardware evolving, server performance is growing and the cost of server-side logic will decrease, so should I prefer server-side scripting?
EDIT:
With the answers, I want to give a brief summary.
Pros of client-side logic:
Better user experience (faster).
Less network bandwidth (lower cost).
Increased scalability (reduced server load).
Pros of server-side logic:
Better security (client-side checks can be bypassed).
Better availability and accessibility (mobile devices and old browsers).
Better SEO.
Easily expandable (can add more servers, but can't make the browser faster).
It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?
I will use client-side logic except in the following conditions:
Security critical.
Special groups (JavaScript disabled, mobile devices, and others).
In many cases, I'm afraid the best answer is both.
As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.
Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.
This potentially leaves you writing quite a bit of redundant code, and maintaining it in two or more places (some of the definitions, such as maximum lengths, also need to be maintained in the data tier). For reasonably large applications, I tend to solve this issue using code generation. Personally I use a UML modeling tool (Sparx Systems' Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tiers.
Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.
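A lightweight version of that idea, without code generation: declare the rules once in a plain data structure and evaluate the same structure on both tiers (the fields and rules are illustrative):

    // rules.js -- shared between the browser and the server via a common module
    const rules = {
      username: { required: true, maxLength: 20, pattern: /^[a-z0-9_]+$/i },
      email:    { required: true, maxLength: 254 },
    };

    function validate(field, value) {
      const r = rules[field];
      if (r.required && !value) return 'required';
      if (r.maxLength && value.length > r.maxLength) return 'too long';
      if (r.pattern && !r.pattern.test(value)) return 'invalid format';
      return null; // valid
    }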
I tend to prefer server-side logic. My reasons are fairly simple:
I don't trust the client; this may or may not be a true problem, but it's habitual
Server-side reduces the volume per transaction (though it does increase the number of transactions)
Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about the Javascript engine available to the client's browser)
There are probably more -and better- reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.
Edited, valya comments that using client-side logic (using Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I've not up-voted that answer yet).
My notion of server-side logic is that it retrieves the data and organises it; if I've got this right, the logic is the 'controller' (the C in MVC). This is then passed to the 'view'. I tend to use the controller to get the data, and then the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument of creating an API; basically, horses for courses. :)
...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic. And that separation is the plus point so far as APIs go.
I generally implement as much as reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:
Trust issues
Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.
Performance issues
JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.
Language issues
JavaScript is a weakly-typed language and it makes a lot of assumptions about your code. This can cause you to employ spooky workarounds in order to get things working the way they should on certain browsers. I avoid this type of thing like the plague.
From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:
Pure client-side
The disadvantage is that your users' loading time would double (one load for the blank form, another for the data). However, subsequent updates to the form would not require a page refresh. Users will like this if the same form repeatedly loads data fetched from the server.
Pure server-side
The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.
Server-client hybrid
You would have the best of both worlds; however, you would need to create two data extraction points, causing your code to bloat slightly.
There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
One consideration I have not heard mentioned is network bandwidth. To give a specific example: an app I was involved with was all done server-side and resulted in a 200MB web page being sent to the client (it was impossible to do less without a major re-design of a bunch of apps), resulting in 2-5 minute page load times.
When we re-implemented this by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20MB, resulting in:
HTTP response size: 200MB+ => 20MB+ (with corresponding bandwidth savings!)
Time to load the page: 2-5 mins => 20 secs (10-15 of which are taken up by a DB query that was already optimized to hell and further).
IE process size: 200MB+ => 80MB+
Mind you, the last 2 points were mainly due to the fact that the server side had to use a crappy tables-within-tables tree implementation, whereas going to the client side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.
I'd like to give my two cents on this subject.
I'm generally in favor of the server-side approach, and here is why.
More SEO friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
Performance is more controllable. User experience is always variable with the client-side approach, since you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
Arguably, the server-side approach is more easily maintained and readable.
I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.
Hope this helps.
I built a RESTful web application where all CRUD functionalities are available in the absence of JavaScript, in other words, all AJAX effects are strictly progressive enhancements.
I believe that, with enough dedication, most web applications can be designed this way, eroding many of the server-logic vs client-logic "differences" raised in your question, such as security and expandability. In both cases the request is routed to the same controller, and the business logic is identical until the last mile, where JSON/XML, instead of the full-page HTML, is returned for those XHRs.
Only in the few cases where the AJAXified application is vastly more advanced than its static counterpart, GMail being the best example that comes to my mind, does one need to create two versions and separate them completely (kudos to Google!).
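One common way to get that single-controller behavior, sketched here with Express (the route and data helper are hypothetical):

    // One controller, two last miles: JSON for XHR, a full page otherwise.
    app.get('/items', async (req, res) => {
      const items = await loadItems();    // hypothetical data-access helper
      if (req.xhr) {
        res.json(items);                  // the progressively-enhanced path
      } else {
        res.render('items', { items });   // the static full-page fallback
      }
    });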
I know this post is old, but I wanted to comment.
In my experience, the best approach is a combination of client-side and server-side. Yes, AngularJS and similar frameworks are popular now, and they've made it easier to develop web applications that are lightweight, have improved performance, and work on most web servers. BUT, a major requirement in enterprise applications is displaying report data, which can encompass 500+ records on one page. With pages that return large lists of data, users often want functionality that makes this huge list easy to filter and search, along with other interactive features. Because IE 11 and earlier IE browsers are the "browser of choice" at most companies, you have to be aware that these browsers still have compatibility issues with modern JavaScript, HTML5, and CSS3. Often the requirement is to make a site or application compatible with all browsers. This requires adding shims or polyfills which, together with the code needed to create a client-side application, add to the page load in the browser.
All of this will reduce performance and can cause the dreaded IE error "A script on this page is causing Internet Explorer to run slowly" forcing the User to choose if they want to continue running the script or not...creating bad User experiences.
Determine the complexity of the application and what the user wants now, and could want in the future, based on their preferences in their existing applications. If this is a simple site or app with little-to-medium data, use a JavaScript framework. But if they want to incorporate accessibility or SEO, or need to display large amounts of data, use server-side code to render data and client-side code sparingly. In both cases, use a tool like Fiddler or the Chrome developer tools to check page load and response times, and use best practices to optimize code.
Check out MVC apps developed with ASP.NET Core.
At this stage client-side technology is leading the way. With the advent of many client-side libraries like Backbone, Knockout, and Spine, and the addition of client-side templates like JsRender and Mustache, client-side development has become much easier.
So, if my requirement is an interactive app, I will surely go for the client side.
If you have more static HTML content, then yes, go for the server side.
I did some experiments using both; I must say the server side is comparatively easier to implement than the client side.
As far as performance is concerned, read this and you will see where server-side rendering scores:
http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html
I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)
It also keeps the separation between view and controller. Ajax data is often produced by the controller, so let it just return data, not HTML.
If you're going to create an API later, you'll need to make very few changes to your code.
Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some style to links, you'll need to reformat all the HTML... or just add one line to your JS. And raw data isn't as big as HTML (in bytes).
But if many heavy scripts are needed to format the data, it isn't too cool to ask users' browsers to do it.
As long as you don't need to send a lot of data to the client to allow it to do the work, the client side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.
On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.
If you do it in Ajax:
You'll have to consider accessibility issues (search for "web accessibility" on Google) for disabled people, but also for old browsers, those without JavaScript, bots (like the Google bot), etc.
You'll have to flirt with "progressive enhancement", which is not simple to do if you have never worked much with JavaScript. In short, you'll have to make your app work with old browsers and those that don't have JavaScript (some mobiles, for example) or have it disabled.
But if time and money are not an issue, I'd go with progressive enhancement.
But also consider the "Back button". I hate it when I'm browsing a 100% AJAX website that renders your back button useless.
Good luck!
2018 answer, with the existence of Node.js
Since Node.js allows you to run JavaScript logic on the server, you can now re-use the same validation on both the server and the client side.
Make sure you set up or restructure the data so that you can re-use the validation without changing any code.
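A minimal sketch of that reuse, assuming a CommonJS server and a bundler for the browser build:

    // validate.js -- one validation module, loaded by both the browser bundle and Node
    function isValidEmail(value) {
      return typeof value === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
    }

    if (typeof module !== 'undefined' && module.exports) {
      module.exports = { isValidEmail };  // server-side export; bundlers handle the client
    }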

What is the backend web framework role with AngularJS on the frontend?

Choosing the "right" web framework is quite challenging task, at least in Java we have a lot of them. But looking at JavaScript frameworks like AngularJS I doubt if we really need something heavy at server. Usually web framework is responsible for routing, templating, building pretty URLs and some other stuff. With AngularJS we can move all these responsibilities to client side. Then the backend becomes nothing more than REST listener and data validator. A thin layer between your application logic and client view. So why do we need web frameworks now if all we want is a REST listener?
At the moment I have found two things which must be handled on the server side: authentication/authorization and things requiring 'pushing', like Comet. Are these criteria enough to choose the "right" framework?
I'll give you one more capability that I've seen require back-end server support. It's those pages which generate a file. For example, they get a file from a third party and then hand it to the client as though they produced it directly, or they are generating a JPEG/PNG/GIF image on the fly, or perhaps a CSV/XLS dump of data. There may be ways to generate those on the fly from the front end and make them available for download, but sometimes the back-end is just easier for those.
Other than that, your assessment is 100% correct. You literally need less server for apps built with AngularJS than was needed with the previous request/response model of ASP/JSP/PHP/etc.
However, just because you need less doesn't mean you need nothing. Issues like data caching and how user sessions are handled can still come up even for smaller servers as you scale. But it has definitely opened things up for tech like Node.js to be considered that I would not have given much thought to a few years back.

Browser-based app needing IO control

This is a question about the best way to structure an app that has both server-side and client-side needs. Forgive the length -- I am trying to be as clear as possible with my vague question.
For a standalone non-web-connected art project, I'm creating a simple browser-based app. It could best be compared to a showy semi-complicated calculator.
I want the app to take advantage of the browser's presentation abilities and run in a single non-reloading page. While I have lots of experience writing server-side apps in Perl, PHP, and Python, I am newer to client-side programming and a neophyte at JavaScript.
The app will be doing a fair bit of math, a fair bit of I/O control on the Raspberry Pi, and lots of display control.
My original thought (and comfort zone) was to write it in Python with some JS hooks, but I might need to rethink that. I'd prefer to separate the logic layer from the presentation layer, but given that the whole thing happens on a single non-reloading HTML page, it seems like JavaScript is my most reasonable choice.
I'll be running this on a Raspberry Pi, and I need to access the GPIO ports for both input and output. I understand that JavaScript will not be able to do I/O directly, so I need to turn to something that will be doing AJAX-ish calls to receive and send I/O, something like Node.js or socket.io.
My principal question is this -- is there a clear best practice in choosing between these two approaches:
Writing the main logic of the app in client-side JavaScript and using server-side scripting to do I/O, or
Writing the logic of the app in a server-side language such as Python with calls to client-side Javascript to manage the presentation layer?
Both approaches require an intermediary between the client-side and server-side scripting. What would be the simplest platform or library to do this that will serve without being either total overkill or totally overwhelming for a learner?
I have never developed for the Raspberry Pi or had to access GPIO ports. But I have developed stand-alone web apps that acted like showy semi-complicated calculators.
One rather direct approach for your consideration:
Create the app as a single page HTML5 stand-alone web app that uses AJAX to access the GPIO ports via Node.JS or Python. Some thoughts on this approach based on my experience:
jQuery is a wonderful tool for keeping DOM access and manipulation readable and manageable. It streamlines JavaScript for working with the HTML page elements.
Keep your state in the browser local storage - using JavaScript objects and JSON makes this process amazingly simple and powerful. (One line of code can write your whole global state object to the local storage as a JSON string.) Always transfer any persistent application state changes from local variables to local storage - and have a page init routine that pulls the local storage into local variables upon any browser refresh or system reboot. Test by constantly refreshing your app as part of your testing as you develop to make sure state is managed the way you desire. This trick will keep things stable as you progress.
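A minimal sketch of that local-storage pattern (the key and state fields are arbitrary):

    // Persist the whole app state as one JSON string; restore it on page init.
    let state = { volume: 0.5, lastResult: null };   // illustrative fields

    function saveState() {
      localStorage.setItem('appState', JSON.stringify(state));
    }

    function loadState() {
      const saved = localStorage.getItem('appState');
      if (saved) state = JSON.parse(saved);
    }

    loadState();   // call once on startup, so refreshes and reboots keep the state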
Using AJAX via jQuery for any I/O is very readable and reliable. Its asynchronous approach also keeps the app responsive while you perform I/O. Error trapping and time-out handling are also easily accomplished.
For a back end, if the platform supports it, do consider Node.JS. It looks like there is at least one module for your specific I/O needs: https://github.com/EnotionZ/GpiO
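A rough sketch of such a back end, with the GPIO call stubbed out since the exact API depends on the module you pick (the route and helper are hypothetical):

    // Minimal Node back end: the page POSTs { pin, value }, the server drives the pin.
    const express = require('express');     // assumes express is installed
    const app = express();
    app.use(express.json());
    app.use(express.static('public'));      // also serves the single-page app itself

    function setPin(pin, value) {
      // placeholder: wire this to your GPIO library of choice,
      // e.g. the gpio module linked above (its exact API may differ)
    }

    app.post('/io', (req, res) => {
      setPin(req.body.pin, req.body.value);
      res.json({ ok: true });
    });

    app.listen(8080);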
I have found Node to be very well supported and very easy to get started with. Also, it will keep you using JavaScript on both the front and back ends. Where this becomes most powerful is when you rely on JavaScript object literals and JSON: the two become almost interchangeable and allow you to pass complicated data structures to/from the back end via a few (or even just one!) object variables.
You can also keep your options open now on where you want to execute your math functions - since you can execute the exact same JavaScript functions in the browser or in the node back end.
If you do go the route of JavaScript and an HTML5 approach - do invest time in using the browser "developer tools" that offer very powerful debugging tools and dashboards to see exactly what is going on. You can even browse all the local storage key/value pairs with ease. It's quite a nice development platform.
After some consideration, I see the following options for your situation:
Disable browser security and communicate with the GPIO directly (no standard libraries?).
Use a JavaScript server environment with GPIO access and AJAX (complexity introduced by AJAX).
Use the familiar Python with an embedded web browser (easy, if the libraries are around).
Don't add too much complexity if you're not familiar with the tooling and language.
Oh, what a nice question! I'm thinking about it right now. My approach is a little different:
With the old MVC fashion, you consider the V(iew) layer to be the rendered HTML page with JavaScript, CSS and many other things, while the M and C run on the server. And one day I met Mr. AngularJS and realized: wow, some basic things may change:
AngularJS considers that the view (or the thing I believed was the view) is not actually the view. AngularJS gave me controllers, data resources and even view templates in that "view"; in other words, the client side itself can be a real application. So now my approach is:
The server does the "server job": read and write data, send data to the client, receive data from the client, etc.
And the client does the "client job": interact with the user, and do the logic processing of data BEFORE IT IS SENT, such as validation, or formatting the information collected from the user, etc.
Maybe you can rethink your approach: ask yourself what logic should run on the client and what on the server. The client with JavaScript does its I/O; the server with server-side script does its I/O. The server provides the needed resources for the client, and JavaScript uses those resources as the M(odel) of its MVC. Hope you understand -- sorry for my bad English :D
Well... it sounds like you've mostly settled on:
Python Server. (Python must manage the GPIO.)
HTML/JavaScript client, to create a beautiful UI. (HTML must present the UI.)
That seems great!
You're just wondering how much work to do on each side of the client/server divide... either way should be functionally equivalent.
In short: Do most of the work in whichever language you are more productive in.
Other notes come to mind:
Writing the entire server as standalone Python is pretty straightforward.
You don't have to, but it's nice and self-contained if you serve the page content itself from it.
If you keep most of the state on the server/Python side, you can make the whole app a little more robust against page reloads (even though I know you mentioned that should never happen).

Business logic in JavaScript. Fat client vs thin client

Is it a good idea to implement business logic on the client side with JavaScript?
What kind of logic should live there? Validation logic? GUI-related logic?
What would you do if the same logic needs to be used in (exposed to) another application? Implementing it in JavaScript would mean you can't reuse that logic.
On the other hand, having all logic on the server side would mean more requests to the server.
What do you think?
One should never ever trust the client. Thus, any validation you do on the client side with JavaScript can only be to improve user convenience and usability. You always have to validate incoming data on your server later to make sure nobody injects data etc.
You can create reusable JavaScript modules, so there's no intrinsic barrier to reusing logic in several different rich UIs. However, as has already been pointed out, you'll probably end up with duplication between the JavaScript and whatever you're using on the server (Java, PHP, ...); in the case of validation, that's a trade-off between a performant user experience and the complexity of duplication.
I can imagine scenarios where you would choose to duplicate more than just validation. Consider computing a total order value: do we really want a server-side round trip for that? Sorting a list: we tend to do that happily in JavaScript, but sorting can get interesting with specialised comparator functions. Drawing the boundary can be quite tricky: what about computing discounts and sales tax?
In the end it's a judgement call, followed by careful understanding of consequences. If you duplicate logic then can you devise a test strategy that ensures consistency? With low volume systems you may be inclined to favour more server interactions and less duplication, but you may well make different decisions for a larger or more demanding user-base.
It's convenient to implement validation logic in the javascript from a performance perspective, as the user doesn't have to wait for server calls, but you still have to validate all the data sent to the server.
If you don't, you will end up with malicious people corrupting your back-end systems.
A couple of (possibly opinionated) notes from 2013:
Web applications shouldn't be developed differently than any other application.
Take any 2+ tier application (any normal client-server model would do); does it make sense to process things on the client or on the server?
Performance considerations
You have to take into account processing power, network latency, network bandwidth, memory and storage constraints. Depending on the application, you may choose different trade-offs.
A fat client will usually allow you to process more on the client and offload the server, serialize more efficient message payloads, and minimize roundtrips, all at the cost of processing power, memory efficiency, and possibly storage space.
Security considerations
Security is transient, regardless of the model used, each party (not just the server) will always have to verify and possibly sanitize the data it receives from the other to some extent. For many web applications, this means validating entities with business logic, but not always. This depends on what the data is, and who has authority over it (and it's not always the server).
Since the web browser already verifies a lot of information, the client-side considerations are fewer, but they shouldn't be forgotten (especially in a client that makes XHRs or uses WebSockets, where there is less hand-holding).
Sometimes, this means that indeed both the server and the client will validate the same data. This is OK. If you develop software on both sides, you may extract your validation code into a module used by both the client and the server (like all those "common" modules in traditional software packages). Because your choice of language is limited on the client side in a web environment, you may have to compromise. That being said, you can execute Javascript on the server, or compile many languages down to Javascript using things like Emscripten (also see asm.js), or even run native code in an uncertain future using things like NaCl/PNaCl.
Conclusion
I find that it helps to think about web application clients as 'immediately-installed', 'zero-conf' and 'continuously-updated' clients. We don't use this terminology for the web because these properties were always intrinsic to web-based software, but they weren't for classical native software. Similarly, we don't use terms like "single-page application" when developing native software, because there was never any requirement to restart the entire application whenever we needed to switch to a new screen.
Embrace the convergence, and keep an open mind; people coming from different communities are going to learn a lot from each other in the coming years.
One way of attempting to do what you're looking for is to use some type of web service / web method access and have your JavaScript make AJAX calls to those methods, run the validation in the business logic, and then send the result back to the front end.
Now the front end would be chatty with the server, but you would have the ability to share that business-logic validation easily with other applications within the same domain. A second benefit of this approach is that all of the business logic and validation stays on the server and is not exposed in a way where a malicious person could easily view or manipulate the code.
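In outline, the client side of that approach looks something like this (the endpoint and UI helper are hypothetical):

    // Ask the server-side business logic to validate; render the verdict locally.
    $.ajax({
      url: '/services/validateOrder',        // hypothetical web method
      type: 'POST',
      contentType: 'application/json',
      data: JSON.stringify(formData),
      success: function (result) {
        showValidationMessages(result.errors);  // hypothetical UI helper
      },
    });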
Good luck, and hope this helps some.
Javascript should be used to enrich the user's experience in the GUI but your site/webapp should still work without it.
Parameters sent to your server can be manipulated by the user. If you rely on Javascript to validate or create these values you're basically asking your users to try and do naughty things. (And they will)
Javascript for validation is fine; it will reduce the number of requests to your server from users who use the application normally. But that still falls under enriching their experience. You still need to validate server-side for the 1% of l33t h#x0rs who will try to create problems.
I've done a lot of AJAX work in the past few years and my take on it is this:
Put business logic in the client to augment the more important server-side validations. I've worked for some financial institutions, and they always had very good security because it was done in depth: client-side validations, server-side validations, framework security, etc., in each section of the applications. They never assumed anything was safe, and they built their intranet apps as if they were internet apps.
Other business logic can be put in as well, but keep the idea of a thin client at all times. The other main reason I would put business logic in the client is for performance.
For example, I once had a topmost dropdown that affected five other controls below it on the page. Rather than making a server-side trip for each of the controls, the topmost control made one call and drove the data display on all the subsequent controls in a cascading fashion. The other controls inter-operated among themselves with the same data unless the topmost dropdown was changed. So I created a manager that cached and handled the data, and the performance was excellent! Most of the user interactions were based on that initial dropdown selection, kind of an 80-20 rule of use. Most of the time they just made one selection and got what they wanted.
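In sketch form, that manager amounted to one fetch plus a local cache (the endpoint and render() are illustrative):

    // One server trip when the top dropdown changes; the dependent controls
    // re-render from the cached payload instead of hitting the server again.
    var cascadeManager = {
      cache: null,
      load: function (topValue, controls) {
        var self = this;
        $.getJSON('/api/cascade', { top: topValue }, function (data) { // hypothetical
          self.cache = data;
          controls.forEach(function (c) { c.render(self.cache); });   // hypothetical
        });
      },
    };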
Use presentation logic in the client. By this I mean that if you have sorting on a dropdown that you can do in a GUI widget via a property, then by all means do it. When I worked with GWT in the MVP (Model View Presenter) paradigm, you were never to put any business logic in the view, but you were allowed to put presentation logic there. It's not business logic per se, but it ties in well with the rest.
Business logic should be as consumer agnostic as possible. When designed properly, your client and server code should be able to consume your business logic in a reusable fashion (assuming that both client and server can consume javascript).
Consuming your business logic from a client (browser, etc.) can prevent needless hits to the server, assuming a malicious user isn't bypassing your UI to hit your endpoints. This same business logic can then be consumed by your server as your last line of defense.
In addition, if designed properly, you can extend your business logic to encompass more complicated workflow logic that needs to perform well, run within a transaction context, etc; generally things that would be difficult to facilitate via the client.
There are many design patterns that you can rely on to help you design reusable business logic.
There are also micro-frameworks available, such as peasy-js, that help you to rapidly create business logic that is easily reusable, extendable, maintainable, and testable.

What is the best practice to decouple GUI design from server-side development when developing modern web applications?

We are currently developing a few web applications and are letting our designers convert signed-off paper prototypes into static web pages. In addition to having hyperlinks between pages, the designers have started adding jQuery calls to update elements on the pages by fetching data from static JSON files. Once the designers are finished and hand off the completed web pages, CSS, and JavaScript files, the server-side developers edit the pages and replace the references to the local static JSON documents with references to live JSON URLs that return the same JSON data structures.
My question:
What are efficient ways to decouple GUI design from server-side dev and reduce the integration time and effort? Some examples:
Do you have the developers manually change every json reference in the designers' prototype web pages?
Do you add a global variable somewhere to enable the designers' pages to be easily switched back and forth between using static and dynamic data?
Do you make the web pages self-aware of when they are running from a web server or just being served from a folder somewhere?
It depends on how much state information your application needs to manage. The examples you have given mention only read operations from the server. Are there any writes? Both read and write operations can potentially fail. Do your designers take care of those cases, or does the server-side team jump in and patch up the GUI later on?
I think it's best to provide a mock server-side implementation of your services for the designers. The server mock could simulate real-life behavior, throwing errors and exceptions beyond just the happy path. It really is much less of a hassle nowadays to install a simple web server and a single script that acts as a mock service than to create processes for later integration.
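Such a mock can be a single small script, for example (the route and failure rate are illustrative):

    // Mock service for the designers: same JSON shapes as production,
    // plus simulated failures so the unhappy paths get designed too.
    const express = require('express');
    const app = express();

    app.get('/api/items', (req, res) => {
      if (Math.random() < 0.2) {
        return res.status(500).json({ error: 'simulated failure' });
      }
      res.json([{ id: 1, name: 'Sample item' }]);
    });

    app.listen(3000, () => console.log('mock server on :3000'));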
One of the things I have seen work fairly successfully for the whole url/path problem (dev vs test vs production, etc.) is to create a subdomain for your project called "proto" (so proto.mycoolapp.company.int) and have that be the basis for all URLs used anywhere in the app.
On top of that using whatever framework you are using set up global page handlers to identify section/page/function+[url data] for all of your data files (i.e. json calls etc).
This allows you to segregate your app into sections and build up functionality as you go and allows both the UI team and the dev team to work on hints for the other team.
For example, say I have an app with three sections (login, account management, preview). I would have a login URL service (data, files, whatever) at proto.mycoolapp.company.int/frontend/login/securelogin?user=GrayWizardX. The UI team can add these as they identify a need for data, and your app dev team can see what functions are being requested (through server logs, etc.) and make sure everything matches up.
The good part is that when you move to production, the only change is to find "proto" and replace it. If that is in your global variable, then it's a quick and simple change.
It's not real clear what exactly your environment is (plain html only? or something like php?), so this is a bit of a shot in the dark...
If it's plain old HTML + JavaScript, you could probably use a JavaScript include on each page to get the right set of addresses. As long as the include file with all this environment-specific information (i.e. use local or use server) remains in the same relative place, you don't need to worry about modifying the actual page. Just tell the GUI guys to always use the same set of variables for the address information, and define the address information in that include file. The variable names don't change whether you're getting data locally or from the server; only the values stored in the variables do.
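Concretely, that include can be as small as this (the values are placeholders):

    // config.js -- the only file that differs between environments.
    // The designers' copy points at static fixtures; production points at live URLs.
    var API_BASE = 'fixtures/';           // designers: 'fixtures/', production: '/api/'
    function dataUrl(name) {
      return API_BASE + name + '.json';
    }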
I'm low on Vitamin Coffee, so hopefully that makes sense.
