I have the following scenario: I'm building a publication lookup tool so users can find documents through a search field and filters. Right now we are working with a small budget, so all the data is stored in a JSON file (~60 records). If the project is successful, we will have a server with a database and a couple of thousand records.
I want to develop the whole lookup solution using Breeze, so that later I won't have to make many modifications. The problem is that I can't find information about querying a JSON file directly (without a server).
Do you think that this is possible?
Actually, it is possible. But I can't think of a way that is as simple as setting up a simple server. That's as easy as falling off a log with Visual Studio. Maybe you're coming from a different environment? I'd like to know. Even there, it's usually pretty easy to spin something up with some kind of HTTP API that can return JSON.
If you only have 60 records, I'm guessing this is a prototype that you're trying to stand up in a hurry. You're in such a hurry that you don't even want to use a server ... which is kind of odd, because you need something to serve the HTML, CSS, and JavaScript files, right?
You could do it with Node.js / Express very easily; it's almost as simple as setting up an Express route that reads and returns the JSON file (see the sketch below). But that still involves a server running somewhere (the client's own machine?) and you'd have to learn some elementary Node.js.
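For what it's worth, the Express version really is about that small. A sketch only; the file names and route path are made up:

```js
// Minimal Express server: serves the app's static files and returns
// the whole JSON file for every lookup. All names are placeholders.
const express = require('express');
const path = require('path');

const app = express();

// Serve the HTML/CSS/JS of the app itself.
app.use(express.static(path.join(__dirname, 'public')));

// One route that just hands back the JSON file.
app.get('/api/publications', (req, res) => {
  res.sendFile(path.join(__dirname, 'data', 'publications.json'));
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```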
You can do it entirely with HTML and JS script files and no server other than the file system.
Off the top of my head, I think I'd begin by writing a custom Breeze ajax adapter that is actually a mock: no matter what you ask of it, it returns the JSON data in its entirety.
You call this once at application start to load the entities into an EntityManager cache, then make all subsequent queries local queries. You can set the EntityManager's default query strategy so that every query runs against the cache.
No matter what you do, you'll have to define metadata to describe the entity types in your JSON data. I'm guessing you only have one type so that should be simple and quick.
You'll also have to do something to tell Breeze what kind of entity you're querying. Adding .toType('Foo'); to the end of your queries may be sufficient. You can always delve into the JsonResultsAdapter if you need something fancier at a lower level of the stack.
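Putting those pieces together, a bare-bones version might look like the following. Here I sidestep the custom ajax adapter by loading the file with a plain fetch and pushing the results into the cache myself. Treat it as a sketch, not tested code: the type name, properties, and data file are invented, and check the Breeze "metadata by hand" docs for the exact config shapes in your version.

```js
// Assumes breeze is loaded globally. All names here are placeholders.
var manager = new breeze.EntityManager();

// 1. Hand-written metadata for the single entity type.
manager.metadataStore.addEntityType({
  shortName: 'Publication',
  namespace: 'Demo',
  dataProperties: {
    id:    { dataType: breeze.DataType.Int32, isPartOfKey: true },
    title: { dataType: breeze.DataType.String },
    year:  { dataType: breeze.DataType.Int32 }
  }
});

// 2. Load the JSON file once at startup and push the records into the
//    cache as Unchanged entities.
fetch('data/publications.json')
  .then(function (resp) { return resp.json(); })
  .then(function (items) {
    items.forEach(function (item) {
      manager.createEntity('Publication', item, breeze.EntityState.Unchanged);
    });

    // 3. All subsequent queries run synchronously against the cache.
    var query = breeze.EntityQuery.from('Publications')
      .where('year', '>=', 2010)
      .toType('Publication');
    var results = manager.executeQueryLocally(query);
    console.log(results.length + ' matches');
  });
```

Later, when you get a real server, you swap the startup load for a remote query and, ideally, little else changes.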
None of this is hard. But none of it is Breeze 101 either. You're not following what we've thought of as a typical application development path. Maybe we're missing something. I'll be curious to see if people can relate to your situation.
I am working on a React-based web app that uses Tensorflow.js to run an AI model in real time on the client in the browser. I've trained this AI model from scratch and I'd like to protect it from being intercepted and used in other projects. Are there any protections available to do this (obfuscation, DRM, etc.)?
From a business perspective, I'd only like the model to work on my web app, nowhere else.
The discussions (1 2 3) I've been able to find on this are more geared toward native apps, not web apps.
Here is an example open source web app that uses Tensorflow.js. These weights are an example of what I would like to protect in my app.
Client-side code obfuscation will never fully prevent it. Use a server instead.
Obfuscation
If your client-side application contains the model, then the user will be able to somehow extract it. You can make it harder for the user, but it will always be possible. Some techniques to make it harder are:
Obfuscating your code: That way the user will not be able to read your code and comments easily. Depending on your build tools, this might already be done for you when you produce a "production ready" build.
Obfuscating the library and its public API: Even if your code is obfuscated, the user might still be able to guess what is going on by watching the public API calls of the library. Example: it would be rather easy to set a breakpoint at the model.predict function and debug your code from there. By also obfuscating libraries and their APIs, this becomes harder.
Put "special checks" in your code: You could also check if the page the code is running on is your page (e.g. if the domain matches), etc. You also want to obfuscate this code as well.
Even if your code is perfectly obfuscated and well protected, your client-side code still contains your model somewhere, so even with these methods it will always be possible to extract it somehow.
Server-side approach
To make it impossible to get your model, you need a different approach. Only put your "dumb logic" on the client. Exclude the part of the code that you want to protect. Instead, you offer an API on your server that executes the "protected part" of your code.
This way, instead of running model.predict on the client side, you make an AJAX request to your backend (with the parameters), which then returns the results. The user only sees the input and the output and cannot extract the model itself.
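On the client, that request can be as small as this sketch; the endpoint and payload shape are invented, since your backend defines them:

```js
// The model never leaves the server; the client only sees input/output.
async function predict(inputs) {
  const resp = await fetch('/api/predict', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ inputs: inputs })
  });
  if (!resp.ok) throw new Error('Prediction failed: ' + resp.status);
  return resp.json(); // e.g. { predictions: [...] }
}
```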
Keep in mind that this means a lot more work, as you not only have to write the code for your client-side application but also for your server-side application, including the API. Depending on what your application looks like (e.g. does it have a login?), this might be a lot more code.
Another way you can protect your model is to split it into multiple blocks. Put some blocks on the server side and some on the client side. This method may also involve a lot of engineering work, but once you do it you can trade off computation load and network latency between the server and the client. Users can only get some of the model's blocks, which are useless without the cooperating server-side blocks.
I'm currently working on a project that involves parsing through a user-supplied file, doing computations with that data, and visualizing the results using graphing utilities. Right now, I'm stuck with using Python as the back-end because it has scientific libraries unavailable in JavaScript, but I want to move the entire tool to a web server, where I can do much slicker visualizations using D3.js.
The workflow would be something like: obtain the file contents from the browser, execute the Python script with the contents, return jsonified objects of computed values, and plot those objects using D3. I already have the back-end and front-end working by themselves, but want to know: How can I go about bridging the two? From what I've gathered, I need to do something along the lines of launching a server, sending AJAX requests to the server, and retrieving data from the server. But with the number of frameworks out there (Flask, cgi, apache, websockets, etc.), I'm not really sure where to start. This will likely only be a very simple web app with just a file submit page and a data visualization page. Any help is appreciated!
Apache is a web server, Flask is a web framework in Python, WebSockets are a protocol, and CGI is something else entirely; none of them help you on the front end.
You could deploy a simple backend in Flask or Django or Pylons or any other Python web framework. I like Django, but it may be a little heavy for your purpose; Flask is more lightweight. You deploy the app to a machine with a web server installed and use something like Apache to serve it.
Then you need a front end and a way of delivering it. Flask and Django are both fully capable of doing so in conjunction with a web server, or you could use a static file server like Amazon S3.
On your front end, you need to load D3 and probably some kind of utility like jQuery to load and parse your data from the back end, then use D3 however you like to present it on screen.
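That front-end piece can be very short. A sketch, assuming a Flask route at /api/data returning a JSON array; the endpoint and field names are placeholders:

```js
// Fetch the computed values from the backend, then let D3 build the view.
$.getJSON('/api/data', function (data) {
  d3.select('#chart')
    .selectAll('div')
    .data(data)
    .enter()
    .append('div')
    .text(function (d) { return d.label + ': ' + d.value; });
});
```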
Flask is easy to get up and running and is Python based. It works well with REST APIs and data sent as JSON (or JSON API).
This is one solution I have some experience with; it works well, is not hard to get up and running, and is natural to work with from Python. I can't tell you whether it is the best solution for your needs, but it should work.
If you are overwhelmed and don't know where to start, pick one of the options and search for a tutorial. With a decent one, you should have an example up and running by the end, and then you will know whether you are comfortable working with it and whether it will meet your needs.
Then you could do a proof of concept: make a small app that handles just one small part (perhaps the one you are most concerned about) and see how it goes.
By then, you can be pretty sure you have a good tool for the purpose (unless you were convinced otherwise by the proof-of-concept -- in which case, try again with another option :-))
As I can't orient myself in the topic of building dynamic sites, it is quite hard for me to google this. So I'll try to explain the problem to you.
I'm developing a simple social network. I've built a basic PHP API represented by files like "get_profile.php", "add_post.php", etc., using the POST method to pass data. Then I fetch that data with JS via AJAX (the PHP functions return JSON), which means I get all the data I need to show on a page only after the page has loaded. That slows down page loading, and I feel like this structure is really wrong.
I hope you'll explain to me how to build a proper structure, or at least give me some links to read. Thanks.
Populate the HTML with the (minimum) required data on the server side and load all other necessary data on the client side using AJAX (as you already do).
In any case, I would profile your application to find the most important bottlenecks. Do you parallelize your AJAX requests?
Facebook, for example, doesn't populate its HTML with the actual data on the server side, but provides the rough structure, which is later filled using AJAX requests.
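A sketch of that pattern; get_profile.php comes from your setup, but the parameter and element IDs are invented, and the server has already rendered the page skeleton:

```js
// Runs after the skeleton page loads and fills in the dynamic parts.
document.addEventListener('DOMContentLoaded', function () {
  fetch('get_profile.php', {
    method: 'POST',
    body: new URLSearchParams({ user_id: '42' })
  })
    .then(function (resp) { return resp.json(); })
    .then(function (profile) {
      document.getElementById('profile-name').textContent = profile.name;
      document.getElementById('post-count').textContent = profile.postCount;
    });
});
```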
If I understood your architecture right, it sounds ok.
Advice
Structuring your architecture this way allows you to deliver templates for the page structure that you then populate with data from your AJAX requests. It also makes your server faster, since it doesn't have to render the HTML.
Be careful with the number of requests you make, though; if each client makes a lot of them, you will have a problem.
Try to break your application into different major pieces and treat each one in turn. This will allow you to separate them into modules later on. This practice is also referred to as a microservices architecture.
After you have broken them down, try to figure out user interactions and patterns. This will help you design your database and model in a way that lets you easily optimise for the most frequent use cases.
The way of the pros.
You should study how Facebook does things. They are quite open about it.
For example, the BigPipe method is the fastest I have seen for loading a page.
Also, I think you should read a bit about RESTful applications and SOA type architectures.
I apologize if this question is not specific enough, but I do need some help understanding this concept. I've been researching many JavaScript libraries, including jQuery, MooTools, Backbone, Underscore, Handlebars, Mustache, etc., as well as Node.js and Meteor (I know all those serve different purposes). I have a basic idea of what each does, but my question is mainly focused on the templating libraries.
I think the general idea is that the template is filled by a JSON object retrieved from the server. However, I'm confused about how that JSON object is formed, and whether it can go the other way, back to the backend, to update the database. Please correct me if this is incorrect.
For a more solid example, let's say I have Apache running on Linux, and am using MongoDB as the database and python as my primary language. How do all these components interact with the templating library and each other?
For example, if I have an HTML file with a form in it and the action will be set to some python script; will that script have to retrieve the fields, validate them, and then update them in the DB? If it's MySQL I'd have to write a SQL statement to update it, but with Mongo wouldn't it be different/easier since it's BSON/JSON based?
And for the other example, let's say I have a view-account.html page that will need to pull up user information from the DB; in what form will it pull the information out, and how will it fill it into the template? I'm guessing I'd have to have a Python script that pulls the information from the DB, creates a JSON object, and uses it to populate the fields in the HTML template.
I am aware there are web frameworks that will ease this process, and please suggest any that you would recommend; however, I'm really interested in understanding the concepts of how these components interact.
Thanks!!
There are obviously many ways this can all work together, but it sounds like you have the (a) right idea. Generally the frontend deals with JSON, and the server provides JSON. What's consuming or providing those responses is irrelevant; you shouldn't need to worry that Mongo is your database, or that underscore is handling your templates.
Think of your frontend and backend as two totally separate applications (this is pretty much true). Ignore the fact that your frontend code and templates are probably delivered from the same machine that's handling the backend. Your backend is in the business of persisting data, and your frontend in the business of displaying it.
RE: Mongo using JSON/BSON: the fact that it speaks the same format as your frontend is a red herring. Your DB layer should abstract this away anyway, so you're just using Python dicts/tuples/etc. to talk to the database.
I'm guessing I'd have to have a Python script that pulls the information from the DB, creates a JSON object, and uses it to populate the fields in the HTML template.
Spot on :)
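To make that concrete with one of the libraries you listed, here is a client-side sketch using Mustache. The endpoint, template, and field names are invented; the Python side would just serialize a dict to JSON:

```js
// <script id="account-template" type="text/template">
//   <p>{{name}} ({{email}})</p>
// </script>
var template = document.getElementById('account-template').innerHTML;

fetch('/api/account')
  .then(function (resp) { return resp.json(); })
  .then(function (user) {
    // Mustache.render fills the {{placeholders}} with the JSON fields.
    document.getElementById('account').innerHTML =
      Mustache.render(template, user);
  });
```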
I'm working on a project and there is some debate about how some JS filtering should be implemented, and I would like to ask you for some input on this.
Today we have a site that displays a long list of repeated data entries, and some JS filtering would be nice for users to navigate through it. The usual stuff: keyword, order, date, price, etc. The question is not the use of JS, which is obvious, but the origin of the data. One person argues that the HTML itself should be used as the source, with JS parsing through it to apply the user's desired filtering. Another argues that we should use JSON generated on the server, and that the JSON should be the data's origin.
What do you think about this? What are the pros and cons?
As a final request, I would like you to be as informative as possible, since your answers will be used and referenced by all of us in the company. (Yes, that is how much we trust you! :)
The right choice is a matter of taste and system architecture, as well as utility.
I would go with dynamically generated pages with JS and JSON -- these days I think you can safely assume that most browsers have JavaScript enabled. However, you may need to make provisions for crawlers (GoogleBot, Bing, Ask, etc.), as they may not fully execute all JS and hence may not index the page, unless you work out some kind of exception to support them.
Using JS+JSON also means that support for mobile devices is handled client side, without the web server having to produce anything special.
DOM manipulation as the alternative would not be my best friend, as the logic of page control and layout gets split up into two places -- partly in the view controller on the web server, and partly in the JavaScript. In my opinion it is better to have it in one place and have the view controller only generate JSON and serve the root pages etc.
However, this is a matter of taste, and I'm not sure I could say there is one correct and best solution.
I think it's a lot cleaner if the data is delivered in JSON and then the presentation HTML or view of that data is generated from that JSON with javascript.
This fits the more classic style of keeping core data structures separate from views. In this manner you can generate all types of views without having to constantly munge/revise the way you store, access and manipulate the data. You can even build classes and methods to develop a clean interface on your data that is entirely independent of how that data is displayed.
The only issue I see with that is if the browser doesn't support javascript and that browser is a desired viewer. In that case, you have to include a default HTML version from the server that will obviously not be manipulated and the JSON will be ignored.
The middle ground is that you include both JSON and the "default", initial HTML view of that data in rendered HTML. The view comes up quickly and non-JS browsers can see something useful. But, then any future manipulation of the view (sorting, for example) uses the JSON data and generates a new clean view from the JSON data. No data is then ever "parsed" from the HTML view.
In larger projects, this also can facilitate the separation of presentation from data manipulation so different people may work on creating HTML views vs. manipulate the data (like sorting).
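A tiny sketch of that separation; the field names and container ID are invented, and real data should be escaped before being injected into HTML:

```js
// The JSON array is the single source of truth; HTML is generated from it.
function renderList(items) {
  document.getElementById('results').innerHTML = items
    .map(function (item) {
      return '<li>' + item.title + ' (' + item.price + ')</li>';
    })
    .join('');
}

// Sorting never reads from the DOM, only from the data.
function sortByPrice(items) {
  return items.slice().sort(function (a, b) { return a.price - b.price; });
}

// renderList(sortByPrice(entries));
```

Here `entries` stands for whatever array the server delivered as JSON.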
I would make multiple AJAX calls to the server and have it return the sorted/filtered data. If your server backend is fast, then it won't be very taxing, and you could even cache the data between requests.
If you only have 50-100 items, then it would be reasonable to send it all to the client and have JavaScript sort and filter it.
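At that scale the client-side filter is only a few lines; the field names are placeholders, and `entries` is the array the server sent once:

```js
// Keep only items matching a keyword and a maximum price.
function applyFilters(entries, keyword, maxPrice) {
  var kw = keyword.toLowerCase();
  return entries.filter(function (item) {
    return item.title.toLowerCase().indexOf(kw) !== -1
        && item.price <= maxPrice;
  });
}
```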
Some considerations to help make the decision:
Is the information sensitive and unique? (This voids any benefit of caching from my first point.)
What is the most common request that will happen and are you optimizing for that?
How much data is there? (tens of rows, hundreds, thousands, millions)?
Does your site have to work with JavaScript turned off (supporting older browsers)?
Is your development team more comfortable doing this in the front-end or back-end?
The answer is that it depends on your situation.