Working on a simple HTML application that makes calls to an API using jQuery. Would like to avoid someone getting the API address (by looking at the source and/or network traffic) and making calls directly to the API without the HTML pages being loaded first. Thinking about adding a header or cookie that the API can validate, to ensure a request is coming from the JavaScript embedded in that page and not from someone calling the API directly. However, even this could be replicated by someone looking to talk to the API directly. Would like to keep this a simple HTML project as much as possible (we realize creating a Node.js version of the application, or using Thymeleaf in Spring Boot, would give more options to obscure the API). It feels like this is a limitation of an HTML/JavaScript solution. But maybe we are missing something obvious. Any tips?
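To illustrate why the header idea only raises the bar, here is a minimal sketch (the endpoint and header name are hypothetical): whatever the page's jQuery sends, a determined user can copy from the source or the network tab and replay with curl.

// Hypothetical: tag requests with a custom header the API checks.
// The header value ships in the page source, so anyone can replicate it;
// this deters casual direct calls only, it is not real security.
$.ajax({
    url: "https://api.example.com/data",            // assumed API endpoint
    headers: { "X-From-App": "shared-page-token" }, // visible to anyone viewing the source
    dataType: "json"
}).done(function (data) {
    console.log(data);
});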
I am designing an application which uses an API key provided to me by a company.
In order for me to make my application public, I need to hide the API key in my released product, because currently it is sitting in JS code and visible to all users.
My app basically provides real-time statistics, and is completely functional, but only at the JS level.
From my understanding, in order to hide my key, I need to do the following:
Client uses web app --> AJAX call to MY server --> Gets my API key --> AJAX call to company's server --> Return only the object from the company's server to the client web-page.
I have written PHP, JS, SQL, and HTML before so I'm FAMILIAR with the languages, but this chain of events seems a bit over my head.
Usually I buy books to understand this type of thing but it seems like a pretty specific example that some of you guys would be able to help me with.
Can anyone explain how this might be done, in layman's terms? I'm not completely stupid, but my biggest roadblocks here are the following two concepts:
How can you AJAX call to a PHP page, and tell it to make another AJAX call to an external server?
How do you execute that second AJAX call? Is it just another (hidden) js file?
The stats are provided in real time, so you type a name in, and it generates a graph on the page within less than a second. I want to keep it like this and not have the user refresh the page.
Thanks so much. Let me know if you need any more information from me.
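For what it's worth, a minimal sketch of the client half of that chain (the proxy.php name and drawGraph function are hypothetical): the page AJAX-calls your own server, and your PHP, not another hidden JS file, makes the second call to the company's API server-side with the secret key, then echoes the JSON object back.

// Client side: ask YOUR server, which holds the API key, forwards the
// request to the company's server (e.g. with cURL), and returns only the JSON.
$.getJSON("/proxy.php", { name: $("#name-input").val() }, function (stats) {
    drawGraph(stats);   // hypothetical function that renders the real-time graph
});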
Since I can't orient myself freely in the topic of building dynamic sites, it is quite hard for me to google this. So I'll try to explain the problem to you.
I'm developing a simple social network. I've built a basic PHP API represented by files like "get_profile.php", "add_post.php", etc., with the POST method used to pass some data. Then I fetch the data using JS AJAX (the PHP functions return it as JSON), which means I get all the data I need to show on a page only after the page has loaded. That slows down page loading, and I feel like this structure is really wrong.
I hope you'll explain to me how to build a proper structure, or at least give me some links to read. Thanks.
Populate the HTML with the (minimum) required data on the server side and load all other necessary data on the client side using AJAX (as you already do).
In any case, I would profile your application to find the most important bottlenecks. Do you parallelize your AJAX requests?
Facebook, for example, doesn't populate its HTML with the actual data on the server side, but provides the rough structure, which is later filled using AJAX requests.
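On parallelizing, a minimal jQuery sketch (the first endpoint mirrors a file name from the question; get_posts.php and the render functions are hypothetical): fire the requests at once and paint when both have returned, instead of chaining them.

$.when(
    $.post("get_profile.php", { id: userId }, null, "json"),
    $.post("get_posts.php",   { id: userId }, null, "json")  // hypothetical second endpoint
).done(function (profileArgs, postsArgs) {
    // $.when hands each result over as [data, statusText, jqXHR]
    renderProfile(profileArgs[0]);
    renderPosts(postsArgs[0]);
});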
If I understood your architecture right, it sounds ok.
Advice
Making your architecture similar to this allows you to deliver templates for the page structure that you then populate with data from your AJAX requests. This also makes your server faster, since it doesn't have to render the HTML.
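A minimal sketch of that pattern, assuming hypothetical placeholder IDs: the server ships only the empty structure, and the client fills it once the AJAX response arrives.

// Server delivers only structure, e.g. <div id="profile-name"></div>;
// the client requests the data and populates the template.
$.getJSON("get_profile.php", { id: userId }, function (profile) {
    $("#profile-name").text(profile.name);  // .text() escapes, avoiding injection
    $("#profile-bio").text(profile.bio);
});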
Be careful with the number of requests you make, though; if each client makes a lot of them, you will have a problem.
Try to break your application into different major pieces and treat each one in turn. This will allow you to separate them into modules later on. This practice is also referred to as a micro-services architecture.
After you break them down, try to figure out user interactions and patterns. This will help you design your database and model in a way that lets you easily optimise for the most frequent use cases.
The way of the pros.
You should study how Facebook does things. They are quite open about it.
For example, the BigPipe method is the fastest I have seen for loading a page.
Also, I think you should read a bit about RESTful applications and SOA type architectures.
We are working on a RoR project implementing an LMS. We need to send data to a REST service provided by an external server. The data is sent when certain events occur, and it is possible that some of those are not triggered by the client (clicks, etc.).
Also, we need to keep consistency in our Rails models, because we need to keep a record of the user activities.
There is a library provided to work with the API, written in JavaScript. It makes most of the work easy, so we would like to use it instead of creating our own implementation of the API requests.
What are the differences between each of the following approaches? Would one be preferable to another?
1. Use JavaScript to send the data, inserting the snippets in the views, from the client; but having the client execute this might have some serious implications (scores changed, false success, etc.).
2. Use a Node.js server to execute the JavaScript, but we don't really know how to communicate with our main server (Rails).
3. Finally, use an HTTP client from the Rails app to send the requests to the service. However, we don't know exactly how to do it, and there is also the question of where this code goes in the MVC pattern.
Option #1, as you've likely realized, is out of the question. For the client to make API calls on your behalf, you would need to send them your secret key/token/whatever you need to authenticate with the API. But once they have that, they could just use a script console to make whatever API calls they want "as you". This would be pretty disastrous.
Option #2 might be prohibitively complex -- I'm personally not sure how you'd go about it. It is possible, using a library like therubyracer, to execute JavaScript code from Ruby code, but there is some degree of sandboxing and this may break code that requires network access.
That leaves you with Option #3, writing your own Ruby library to interact with the API. This could be easy or difficult, depending on how hairy the API is, but you already have a JavaScript version on hand (and hopefully docs for the REST service itself), so combined with something like RestClient or HTTParty the path forward should be clear.
As for where the API calls would fit in your Rails code: If you have models that are basically mirroring the resources you're interacting with through the REST service, it might make sense to add the relevant API calls as methods or callbacks on those models. Otherwise it might be fine to put them in the relevant controller actions, but keep an eye on your code complexity and extract to a separate class or module if things are getting ugly.
(In cases where you don't need to wait for the response from the API before sending something back to the user, you may want to use DelayedJob or similar to queue your API calls in the background.)
I have a website A with a database and a search engine for some objects; users can create an account on my website and then add comments on these objects.
I need to create an API with something like a plugin; the result will be having the search engine on another website B.
I have planned to do it like the FB or Twitter plugins: the dev who wants to use my API will just need to add a line of JS and a line of HTML on website B, and it will load the plugin. But I'm wondering how to organize it.
Here is what I've guessed: I create a page on my website A and put the search engine on it. I create a JS file that will load this page within an iframe on the dev's page (website B), under the div he added to hold my plugin. Then I implement OAuth 2 (with a provider and so on, so people can make POST requests that alter my DB), and people who are on website B will have the ability to post comments on the objects of the search engine on website B.
Actually it seems to be the same as the FB comment plugin process, but it seems too complicated to do all that stuff. Is this the right way? Can anyone detail the problems I should expect to face during implementation?
You need to develop a decent API which can return search results in JSON (and XML if you want to please everyone). That alone would offer other developers the ability to use your site's functionality, so they can develop their own widget; that's mostly back-end work, though.
To develop your own search widget you don't need that much: you just need either a set target (maybe a required element) and/or an initialization method (more flexibility for the dev) to which you pass a target.
Bind the search button, grab the search terms, call your API, and when you get your results, display them and/or execute a custom callback, passing the results as an argument (for flexibility).
If you do your JavaScript well you can create a little API there too, which facilitates the usage of your API via JavaScript. You can then even easily port it to a jQuery plugin or something similar.
When serving JSON, always remember to set your API's headers to allow cross-domain requests (CORS), or go for JSONP instead.
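To make that concrete, a minimal sketch of such a widget (all names are made up): an init method takes a target and a results callback, binds the search button, and hits the API via JSONP.

// Hypothetical embeddable widget for website B.
var MySearch = {
    init: function (opts) {
        var $box = $(opts.target);
        $box.find("button").on("click", function () {
            var terms = $box.find("input").val();
            // "callback=?" makes jQuery issue a JSONP request, dodging cross-domain limits
            $.getJSON("https://www.mysite.com/api/search?callback=?", { q: terms },
                function (results) {
                    opts.onResults(results); // the embedding dev decides how to render
                });
        });
    }
};
// Usage: MySearch.init({ target: "#search-box", onResults: function (r) { /* render */ } });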
Your question implies an architectural direction, but the requirements are too broad for such a choice. I would narrow down your requirements first. OAuth 2.0 could potentially help satisfy your needs, but ask yourself at least the following:
What user data needs to be protected?
What 3rd-party data access is needed? What functionality?
If you go with OAuth 2.0, are you prepared to follow a spec which is still changing? Are you willing to be the authentication provider?
What server-side languages/platforms are acceptable?
What other security considerations are important to you? (Such as social sharing, level of 3rd-party app trust...)
How much are you willing to rely on 3rd-party tools? Or write your own?
I agree that modeling your design off FB or Twitter or Google is not a bad idea, as they have done this sort of thing.
You might have a look at the new book Getting Started with OAuth 2.0.
Here are two simple ways of making custom search available to users.
The simplest option is to do what Google does: the search on your site follows a simple, well-defined API, so that
www.mysite.com/search?q=keyword1+keyword2
returns a list of results in HTML.
Then you'd tell your users to include a snippet of HTML:
<form action="http://www.mysite.com/search" method="get">
<input name="q" type="text" value="Search">
</form>
That would do, though at this juncture you may want to improve things with better search options, a JavaScript wrapper for the search form, a JSON or XML format for the returned data, security, and a better-worked-out API that takes all of these into account.
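For the JavaScript-wrapper improvement, one hedged sketch (the IDs are hypothetical, and it assumes the endpoint sends CORS headers; otherwise a plain form submit is the only cross-domain option):

// Intercept the form and fetch results via AJAX instead of a full page load.
$("#mysite-search-form").on("submit", function (e) {
    e.preventDefault();
    $.get("http://www.mysite.com/search", { q: $(this).find("[name=q]").val() },
        function (html) {
            $("#mysite-results").html(html);  // the endpoint returns HTML results
        });
});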
Another way is to use JavaScript and provide the data with a callback facility, so that the URL:
www.mysite.com/js-search?q=keyword1+keyword2&callback=someHandle
will return a JavaScript file containing JSON data and a call to "someHandle" once it is loaded. The developer using your API has to write their own way of making the request and the handler. Bear in mind that plain AJAX calls to your domain would be blocked by the browser's same-origin policy (this script-tag/JSONP approach is the usual way around it); otherwise the queries would have to be proxied through your partners' servers. The simplest is probably to make your own search offering simple and well documented so others can build on it.
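A minimal sketch of consuming that endpoint (the handler name matches the callback parameter; both are chosen by the caller):

// Define the global handler, then inject the script tag; when the returned
// file loads, it calls someHandle(...) with the JSON data.
function someHandle(results) {
    console.log("Got " + results.length + " results");
}
var tag = document.createElement("script");
tag.src = "http://www.mysite.com/js-search?q=keyword1+keyword2&callback=someHandle";
document.getElementsByTagName("head")[0].appendChild(tag);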
OAuth 2 would be helpful only if you want to allow website B to make POST requests to website A in the background.
If instead you want to allow the users who visit website B to post a comment, then an iframe with a form that points to website A is enough.
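A sketch of how the plugin's single JS line could inject that iframe (the URLs and IDs are hypothetical):

// Runs on website B: inject an iframe served by website A; the form inside
// posts straight to A, so B never handles A's credentials or database.
var frame = document.createElement("iframe");
frame.src = "https://websiteA.example/comment-widget?object=42"; // hypothetical widget URL
document.getElementById("mysite-plugin").appendChild(frame);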
The easiest bet, yet not necessarily the wisest, is to create some JS which calls your website using JSONP.
Iframes come with their own drawbacks, so try avoiding them if possible. Write a JavaScript snippet with some event handlers that makes JSONP calls to your own server and handles the results in JavaScript accordingly, so it can interact with the page.
I am trying to make a Firefox extension which will use a webservice. I was looking online to find a way to do this. I was wondering if someone could explain what the following objects/methods do:
service.useService(___, ___);
service.<Service Name>.callService();
If there is an alternative that does not include these objects, I would be happy to hear about it.
Thank you very much
It appears you are using IE-specific code to call the web service, and according to this response it may not be supported in newer browsers:
http://www.eggheadcafe.com/software/aspnet/31984555/you-can-use-xmlhttpreques.aspx
For more on the service.useService function, you may find this page helpful:
http://www.15seconds.com/Issue/040708.htm
If you have control over the web service, you can have it either reply with JSON or expose a REST web service (or a form-based POST/GET one) alongside the SOAP one, since JavaScript works well with REST and form-based services, as opposed to SOAP.
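As an alternative sketch that avoids the IE-only WebService behavior entirely (the URL is hypothetical): a plain XMLHttpRequest against a REST endpoint returning JSON, which works in Firefox.

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api/quote?symbol=MSFT", true); // hypothetical REST endpoint
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var data = JSON.parse(xhr.responseText); // JSON payload instead of a SOAP envelope
        console.log(data);
    }
};
xhr.send(null);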