My application interacts with my users' websites via a JavaScript snippet that they paste into their site, like Google Analytics, Stripe, AppInsights, etc.
I'd like my web app to be able to ping the remote site to verify that the pasted snippet exists. If it does exist, I'd like the remote JavaScript method to respond.
I feel like this should be easy, but I'm having trouble putting it all together. For the remote script, I made it check the query string for a trigger value; if found, it would execute a 'self check' and respond.
I wasn't quite sure how to make the script respond to my AJAX call, and I also didn't like the idea of checking the query string on every window.load.
Can anyone point me in the right direction or suggest a more elegant approach? Thanks
I think you have to think about it the other way around. The way these kinds of scripts work is that they have an API key generated just for that website, and when the website is loaded, it is the JavaScript snippet that sends the API key to your server, showing that the snippet is present and reporting information about the page it is loaded in.
This way you don't need to check whether the snippet is present on a website. Of course, if you need a persistent dialog with the snippet, you have to use WebSockets or other means of communication (for older browsers, long polling and the like).
Not using WebSockets and two-way communication also reassures the user that you are not doing anything malicious underneath, so the first approach should serve you well enough. But if you insist on two-way communication, please see the MDN WebSocket documentation.
The first approach (no two-way communication) is just an AJAX call. You can do it with jQuery (see the jQuery AJAX documentation), or skip libraries entirely and use pure JavaScript (see John Shipp on vanilla JS AJAX requests).
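A minimal sketch of what such a snippet could look like (the endpoint https://api.example.com/ping, the key format, and the payload are all assumptions for illustration):

(function () {
  // Hypothetical API key generated per customer site
  var API_KEY = "pk_live_xxxx";
  // Phone home on load so the server learns the snippet is installed
  fetch("https://api.example.com/ping", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apiKey: API_KEY, page: window.location.href })
  }).catch(function () {
    // Fail silently so the host page is never affected
  });
})();

With this inversion, your server simply marks a site as verified whenever a ping with a valid key arrives, instead of your app probing the customer's site.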
First of all, this might be a stupid newbie question.
I am developing a web application with Laravel but have ended up using tons and tons of jQuery/JavaScript. I tried to think of all the possible security risks as I was developing, but the more I research this topic, the more concerned I am about the use of jQuery/JavaScript. It seems that dynamic content loading via jQuery/JavaScript is overall a very bad idea, but I don't want to rework everything, since that would take weeks of extra development on top of what is already done. A quick example:
Let's say I have a method attached to my div like so
<div class="img-container" id="{{$file->id}}" onmouseover="showImageButtons({{$file->id}})"></div>
And then a part of Javascript
function showImageButtons(id)
{
console.log(id);
}
When I open this in the browser, I am able to alter the value of the parameter sent to JavaScript through the Chrome inspector, changing the id in the onmouseover attribute to a string like "some malicious code".
And it actually gets executed: I can see "some malicious code" being printed to the console.
What if I had an AJAX call to the server with that parameter? Would it pass?
Is there something I don't understand or is this seriously so easy to manipulate?
There are two basic aspects you need to consider regarding web security:
The connection between the browser and your server should be secure (i.e. HTTPS). That way, assuming you configured your server correctly, no one can intercept the client-server communication, and you can share data through AJAX.
On the server side, you should treat information coming from the client as hostile and sanitize it. Anyone can send you anything through your webpage, even if you do input validation on the client side, since your JavaScript code is executed by the client and is therefore completely under the attacker's control. While planting "malicious code" in your own page alone is not an actual attack, if an attacker gets you to store that malicious code on the server and send it to other clients, they can run their JavaScript in your other clients' browsers, and that is bad (look up "cross-site scripting / XSS").
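As a minimal sketch of that attitude (shown in Node/Express for brevity; the route and field names are assumptions, and in Laravel you would use its validation rules instead), the server re-validates the id rather than trusting the client:

var express = require('express');
var app = express();
app.use(express.json());

app.post('/image-buttons', function (req, res) {
  // Coerce the client-supplied id into the expected type
  var id = parseInt(req.body.id, 10);
  if (isNaN(id) || id < 1) {
    // Reject anything that is not a plausible file id
    return res.status(400).send('Invalid id');
  }
  // ...also check that the authenticated user may access this file id...
  res.json({ id: id });
});

app.listen(3000);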
I have a C++ project for Windows, using MiniBlink as an embedded browser. (MiniBlink is a smaller Blink, which is close to Chromium.) I use this embedded browser to show responsive, nice-looking dialogs built with Quasar.js (a wrapper for Vue.js).
Problem:
Mostly, a browser is just a passive backend. In my case, both the backend (the project with the embedded browser) and the frontend (the dialog) are active, so I need some communication between them. At the moment I use a local server to catch HTTP requests sent from the frontend to the backend.
But is there a way to communicate from the backend to the frontend? At the moment I can only think of setting cookies or using a permanent loop in JS that sends HTTP requests to check for a possible response.
And is there any other way to send information to the backend? Everything is local; I don't need, nor really want, to send anything over the network.
Thanks!
Idea 1: Use a local temp file that one side saves and the other side reads (this can also be used both ways).
Idea 2 (similar to the question author's solution): a local server with two-way communication (GET/POST requests in one direction, text/JSON responses in the other).
Idea 3: Use a launch parameter to pass data directly, for example in the URL: instead of browserprocess.exe file.html, use browserprocess.exe file.html#showsomething (see the sketch below).
There are also other ways, for example checking the window title of a process with a certain binary name among the running tasks from the other side. We didn't get enough information about your setup, because you could be doing this in the same process or in a separate one; if it's the same process, you could also just share variables directly in the MiniBlink code in both directions and act when they satisfy an if statement.
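For Idea 3, the JS side only has to read the hash fragment it was launched with (a sketch; #showsomething is an assumed value):

window.addEventListener('load', function () {
  var param = window.location.hash.slice(1); // strip the leading '#'
  if (param === 'showsomething') {
    // React to the backend's startup instruction, e.g. open a dialog
    console.log('backend asked us to show something');
  }
});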
As CertainPerformance mentioned in a comment, WebSockets might be the best way to go.
If you don't want to implement a WebSocket server because an HTTP server is already running, long-polling requests might be the best workaround to simulate this behaviour.
Long polling: the client sends a request that stays open as long as possible. If the server needs to communicate, it can use the open request to send its own "request" via the response. It is a bit hacky, but that is essentially the idea behind WebSockets.
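A minimal long-polling loop on the frontend could look like this (the /events endpoint and handleMessage are assumptions):

function poll() {
  fetch('/events')                        // the server holds this request open
    .then(function (res) { return res.json(); })
    .then(function (msg) {
      handleMessage(msg);                 // hypothetical handler for the payload
      poll();                             // immediately reopen the request
    })
    .catch(function () {
      setTimeout(poll, 1000);             // back off briefly on errors
    });
}
poll();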
Mozilla has a nice article to help with WebSockets:
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers
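On the browser side, the client only needs a few lines (a sketch; ws://localhost:9000 is an assumed address for the local backend):

var socket = new WebSocket('ws://localhost:9000');
socket.onopen = function () {
  socket.send('hello from the frontend');    // frontend -> backend
};
socket.onmessage = function (event) {
  console.log('backend says:', event.data);  // backend -> frontend
};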
If you (like me) use Vue.js or Quasar, you might want to have a look at vue-native-websocket.
https://github.com/nathantsoi/vue-native-websocket
Good luck
I am currently designing a GUI for a piece of hardware. We want the GUI to be accessible through a browser. The browser would display a map generated by C++ code, but I also need to send some params to the code that generates the map, from buttons on the JS front end. Is there any way to accomplish this? I have done a little research so far and know about WebSockets and AJAX, but I am not entirely sure they are what I am looking for. In an ideal world I would just send UDP packets, but my research tells me that is not possible, nor is TCP. Is this correct?
Thanks in advance for any help!
Your C++ code could set up a server that listens for requests. You could use libhttpserver, for example. Then, in your JavaScript code, you can use XMLHttpRequest or the Fetch API (in newer browsers) to make an HTTP request to the server, which would return a new static page (with the generated map embedded into it).
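The browser side could then look roughly like this (localhost:8080, the /map endpoint, the query parameters, and the map element id are assumptions):

fetch('http://localhost:8080/map?zoom=3&x=10&y=20')
  .then(function (res) { return res.text(); })
  .then(function (html) {
    // Embed the server-generated map into the page
    document.getElementById('map').innerHTML = html;
  });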
I'm trying to wrap my head around how to really secure publicly available AJAX calls of any kind.
Let's say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (a REST API or just a script, it doesn't matter) that does a lot of heavy lifting. Any user can look at the source code, find the AJAX call, rebuild it, and execute it a million times a second to DDoS your site; not so great. At first I thought an HTTP_REFERER check could help, but like any header field it can be manipulated (just use a curl request), so the security gain wouldn't be high.
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer, so that when someone exceeds the limit, the AJAX call runs into an error. Sounds great so far, but just by clearing the cookies everything is reset, so that's no real solution either. But of course: use the IP! Great idea! Users in public networks that share one IP for internet access will be totally happy when one miscreant blocks the service for all of them by abusing the call... not. So, also not a great solution.
So I'm really stuck here and can't think of a good answer to my problem.
I also thought about API keys or something similar, but that is information that can likewise be extracted from the JavaScript source. So how do you prevent other servers from using your service in a proxy-like manner, serving your data to their users? (E.g. you implemented the GMaps API on your website (or any other API) and someone uses your script to access the API with your key.)
tl;dr
Is there any good way to really secure your publicly viewable AJAX calls against abuse, such as DDoSing your site, presenting your data on other sites, etc.?
I think you're overthinking what AJAX is. When your site makes an AJAX request, on the server side it's the same as any other page request (even if some scripts are more process-intensive). You need to protect your entire site, not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like Cloudflare.
As @Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, since the HTTP headers/cookie information will be passed to the server every time you make an AJAX call. For a clear view, you can look into the developer console in your browser. It's the same as exposing your website's root URL; just make sure you have authentication checks for your AJAX calls too.
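For example, a sketch of an AJAX call that carries the session cookie so the server can run its usual checks (the endpoint and payload are assumptions):

fetch('/api/heavy-lifting', {
  method: 'POST',
  credentials: 'same-origin',                 // include the session cookie
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ action: 'doWork' })
}).then(function (res) {
  if (res.status === 401) {
    console.log('not authenticated');         // the server rejected the call
  }
});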
I'm looking for a method to scrape a website (one that uses JavaScript) from the server side and save the output into a MySQL database after analyzing the data. I need to navigate from page to page by clicking links and submitting data from the database, without the session expiring. Is this possible using the phpQuery web browser plugin? I've started doing this using CasperJS. I would like to know the pros and cons of both methods. I'm a beginner in the coding space. Please help.
I would recommend that you use PhantomJS or CasperJS and parse the DOM with JavaScript selectors to get back the parts of the pages you want. Don't use phpQuery, as it's based on PHP and would require a separate step in your processing versus using just JavaScript DOM parsing. Also, you won't be able to perform click events using PHP; anything client-side needs to run in PhantomJS or CasperJS.
It might even be possible to write a full scraping engine using just PHP, if that's your server-side language of choice. You would need to reverse-engineer the login process and maintain a cookie jar with your cURL requests to keep your login valid with each request. Once you've established a session with the website, you can then set up your navigation path with an array of links that you would like to crawl. The idea behind web crawling is that you load a page from some link, process the page, and then move to the next link. You continue this process until all pages have been processed, and then your crawl is complete.
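For the CasperJS route, a minimal sketch of logging in and stepping through links (the URL, form selector, and field names are assumptions):

var casper = require('casper').create();

casper.start('https://example.com/login');

casper.then(function () {
  // Fill and submit the login form; CasperJS keeps the session cookie
  this.fill('form#login', { username: 'user', password: 'pass' }, true);
});

casper.thenClick('a.next-page');            // navigate by clicking a link

casper.then(function () {
  this.echo(this.getHTML('.content'));      // extract the part you want to store
});

casper.run();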
I would check out Google's guide Making AJAX Applications Crawlable; the website you're trying to scrape might have adopted the scheme (making the site's content crawlable).
You want to look for #! in the URL's hash fragment; this indicates to the crawler that the site supports the AJAX crawling scheme.
To put it simply, when you come across a URL like www.example.com/ajax.html#!key=value, you would modify it to www.example.com/ajax.html?_escaped_fragment_=key=value. The server should respond with an HTML snapshot of that page.
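A small sketch of that rewrite (note that the real scheme also percent-encodes special characters in the fragment):

function toEscapedFragment(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url;                 // not an AJAX-crawlable URL
  var base = url.slice(0, i);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + url.slice(i + 2);
}

// toEscapedFragment('www.example.com/ajax.html#!key=value')
// -> 'www.example.com/ajax.html?_escaped_fragment_=key=value'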
Here is the Full Specification