How to send a pre-built request - javascript

What I want:
I'm using a website (which I'd prefer to keep anonymous) to buy securities. It is quite complex and, as far as I can see, coded in JavaScript.
What I would like to do is 'inject' a buy request into this website from a separate process. Instead of manually searching for what I want to buy, filling out the form, clicking buy, and confirming the 'are you sure you want to place the order?' popup, I would like to directly send whatever command/request goes to the server when the confirm button is pressed.
To be extra clear: I simply don't want to go through the manual hassle, but rather just send a pre-built request with the necessary parameters embedded.
I'm certainly not looking to do anything malicious, just to make my order input faster and smoother. It is not necessary to automate login or anything like that.
I understand that this is not much to go on, but I'm throwing it out there and asking the question: can it be done?
I really don't know how this stuff works behind the scenes; maybe the request is encrypted in some custom format that is next to impossible to reverse engineer, or maybe not.

"Injecting" is probably the wrong term. Most people will think of sql injection or javascript injection which is usually malicious activity. That doesn't seem to be what you want.
What you are looking for is an automation tool. There are plenty of tools available. Try a google search for "web automation tool." Selenium http://www.seleniumhq.org/ and PhantomJS http://phantomjs.org/ are popular ones.
Additionally, you may be able to recreate the request that is actually buying the security. If you use Chrome you can open Developer Tools and look at what appears on the Network tab as you go through the site. Firefox and Edge have similar tools as well. When you make the purchase you will see the actual network request that placed it. Then, depending on how the site is implemented you may just be able to replicate that request using a tool like Postman.
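For example, if the Network tab showed the confirm button firing a POST with a JSON body, you could replay it from the browser console with fetch. This is only a sketch under assumptions: the endpoint, field names, and values below are placeholders, and the real ones must be copied from the captured request (the site may also require CSRF tokens or other one-time values).

// Hypothetical replay of a captured order request -- every name here is a
// placeholder for what you'd actually see in the Network tab.
fetch('https://broker.example.com/api/orders', {
  method: 'POST',
  credentials: 'include', // reuse the session cookie from your logged-in browser
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    instrumentId: 'ABC123', // what to buy
    quantity: 10,
    orderType: 'market'
  })
}).then(function (res) { return res.json(); })
  .then(function (data) { console.log('order response:', data); });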
However before you do any of the above, I would recommend that you take a look at the TOS for the site you mention. They may specifically prohibit that kind of activity.

I want to elaborate on my comment and on Michael Ratliff's answer, with an example.
We run some services whose administration can only be done via a web interface, in manual mode; there is no API (yes, it's 2016 and still no API). At first there was not much administration work, so we did it manually.
But time passed, the amount of administration work grew exponentially, and we reached the point where it had to be automated (still no API, even though a few new versions had been released).
What we did:
We opened the pages we needed in the browser, opened Inspect Element (in Firefox), opened the Network tab, filled in the web form, and pressed the button we needed. On the left you see all the requests to the service; clicking any request shows, on the right, a full description of what was sent and received, with all the parameters. We then took those parameters, changed them, and sent the request back to the server. A kind of reverse engineering, really.
For automation we used PHP and cURL. By now almost all of our work with the services is automated.
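We did it in PHP, but the replay step looks the same in any language. Here is a rough Node.js sketch of the idea, assuming a hypothetical admin endpoint and form fields; the host, path, cookie, and parameters are placeholders for whatever the Network tab actually showed.

// Replay a captured admin-form POST with modified parameters.
var https = require('https');
var querystring = require('querystring');

var body = querystring.stringify({
  action: 'updateUser', // hypothetical form fields
  userId: 42,
  role: 'admin'
});

var req = https.request({
  hostname: 'admin.example.com',
  path: '/panel/users/save',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body),
    'Cookie': 'SESSIONID=...' // copy the live session cookie from the browser
  }
}, function (res) {
  res.on('data', function (chunk) { process.stdout.write(chunk); });
});
req.write(body);
req.end();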
And yes, we used Selenium before moving to PHP and cURL. You open the form you need, press Record, do some actions on the web form, and Selenium collects them; then you can change parameters in the Selenium script and re-run it.

Related

Is it possible to scrape a website with a login using purely javascript - on client side

I've managed to scrape websites that require no login using JS only, with a little help from services that let me get around CORS issues (like allorigins), but I just couldn't get past the login problem.
I've seen many posts discussing how to do it with Node.js or Python's BeautifulSoup, but none on how to do it with client-side JavaScript.
So how do I go about it?
Is it even possible doing it purely on client-side?
I'm willing to do all the learning and searching needed, but I need some direction in this vast subject.
Assuming you meant in-browser JavaScript: how did you get around CORS? And even if you did, once the page refreshed after a successful login your code would stop running anyway, unless you were a browser extension.
If you mean on your own computer, then Node is what you're looking for, but unless you use a project like Headless Chrome you'll run into the issue of saving cookies between requests, which is what keeps track of your session and actually keeps you logged in.
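Carrying the cookie by hand is doable for simple sites. A minimal sketch, assuming Node 18+ (built-in fetch) and a hypothetical site that sets a session cookie in response to POST /login; the URL, field names, and credentials are all placeholders.

// Log in, capture the session cookie, reuse it on the next request.
async function scrapeBehindLogin() {
  var loginRes = await fetch('https://example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'user=me&pass=secret' // placeholder credentials
  });

  // fetch does not persist cookies between calls, so carry it manually
  var cookie = loginRes.headers.get('set-cookie');

  var pageRes = await fetch('https://example.com/members-only', {
    headers: { Cookie: cookie }
  });
  console.log(await pageRes.text());
}

scrapeBehindLogin();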
Login requires direct interaction with your browser, like saving a cookie, returning a security token, etc.
If you use JavaScript from an HTML page, you would theoretically have to visit the other page, at least inside an iframe, and there is a limit to how much you can do with JavaScript inside an iframe.
In other words, you are trying to imitate something like Selenium. Give it a try; it works with Java. You can control your browser, telling it what to do like a real user would, fetch the results, and even take screenshots.
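If you want to stay in JavaScript, Puppeteer does the same job as Selenium by driving Headless Chrome, and the browser keeps the cookies for you. A minimal login-and-scrape sketch; the URL, the #user/#pass selectors, and the credentials are hypothetical.

// Drive a headless browser through a login form, then read the page.
var puppeteer = require('puppeteer');

(async function () {
  var browser = await puppeteer.launch();
  var page = await browser.newPage();
  await page.goto('https://example.com/login');
  await page.type('#user', 'me');     // placeholder selectors and values
  await page.type('#pass', 'secret');
  await Promise.all([
    page.waitForNavigation(),         // session cookie persists in the browser
    page.click('button[type=submit]')
  ]);
  var html = await page.content();    // the logged-in page, ready to scrape
  console.log(html);
  await browser.close();
})();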

How would I capture the timestamp of when a link is clicked in an Email?

This may be a bit of a tricky one (for me at least, but you guys may be smarter). I need to capture the timestamp of exactly when a reader clicks a link in an email. However, this link is not a hyperlink to another webpage; it is a link formatted as a GET request with query strings that will automatically submit a form.
Here is the tricky part: the form processing is not handled by PHP or .NET or any other server-side language. It is a form engine hosted and managed by a cloud-based marketing platform that captures and displays the form submission data (so I have no access to the code behind the scenes).
Now, if this weren't an email I'd say it is simple enough to just use JavaScript. However, JavaScript doesn't work well in email, if at all (I'm assuming there are at most a few email clients out there that support it).
How would you go about capturing the timestamp for when the link is clicked without using any type of scripting? Is this even possible?
The best solution I could come up with was to have the link point to an intermediate page with JavaScript to capture the timestamp and then redirect to the form submission. The only problem with that is that it will only capture the timestamp of the page load, not of the actual click.
There is no way to do what you want "without any type of scripting". If no scripting is done, no functionality may be added or changed.
The best option is the very one you suggested: use an intermediary page that records the request time. Barring unusual circumstances (such as a downed server), the time between a link being clicked and the request reaching the server will be less than 1 second.
Do you really need a higher resolution or accuracy than ~1s? What additional gain is there from having results on the order of milliseconds or microseconds? I can't imagine a scenario in which you'd have tangible benefits from such a thing, though if you do have one I'd love to hear it.
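A minimal sketch of that intermediary, assuming Node with Express (any server stack works the same way): the email link points at /track on a server you control, the server records the request time, and then forwards the query string on to the real form engine. The paths and the form URL are placeholders.

// Record the click time, then bounce the reader on to the hosted form.
var express = require('express');
var querystring = require('querystring');
var app = express();

app.get('/track', function (req, res) {
  console.log('link clicked at', new Date().toISOString(), req.query); // store this somewhere durable
  var formUrl = 'https://forms.example.com/submit'; // hypothetical form-engine URL
  res.redirect(formUrl + '?' + querystring.stringify(req.query));
});

app.listen(3000);

Note this records the server's receipt of the request, not the click itself, but as discussed above the difference is typically well under a second.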
My initial thought was to say that what you're trying to do can't be done without some scripting capability, but I suppose it truly depends on what you're trying to accomplish overall.
While there is ambiguity in what you're trying to accomplish from what you have written, I'm going to make an assumption: you're trying to record interaction with a particular email.
Depending on the desired resolution, this is very possible--in fact--something that most businesses have been doing for years.
To begin my explanation of the technique, consider this common functionality in most mail clients (web-based or otherwise):
Click here to display images below
The reason this exists is that the images loaded into the message you're reading often come from a remote server not hosted by the mail client. In the process of requesting an image, a great deal of information about you is given to that outside server via the HTTP headers of your request, including, among other things, a timestamp for the request. The button above exists to prevent that from happening without your consent.
That said, it's also important to note how other mail providers, most notably Gmail, are approaching this now. The aforementioned technique is so common (used by advertisers and by other, more nefarious parties for phishing, malware, etc.) that Google has decided to cache all mail images themselves. The result is that the email looks exactly the same, but all requests for images are directed at Google's cached versions instead.
Long story short, you can get a timestamp marking interaction with an email via an image request, but such metric collection in general, regardless of whether it's done in the manner I've outlined, is something mail clients try to prevent, at least at some level.
EDIT - To relate this back to what you mention in your question and your idea of an intermediary page: you could skip that page entirely and instead point an image request at a server you control.
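A sketch of that tracking pixel, again assuming Node with Express: the email would contain something like <img src="https://yours.example.com/pixel.gif?id=123">, and the request for it gives you the timestamp. The host name and query parameter are placeholders.

// Serve a 1x1 transparent GIF and log when (and for whom) it was requested.
var express = require('express');
var app = express();

var PIXEL = Buffer.from(
  'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', // 1x1 GIF
  'base64'
);

app.get('/pixel.gif', function (req, res) {
  console.log('image fetched at', new Date().toISOString(), 'id =', req.query.id);
  res.set('Content-Type', 'image/gif');
  res.set('Cache-Control', 'no-store'); // otherwise you only see the first open
  res.send(PIXEL);
});

app.listen(3000);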

Tracking the use of my Javascript

Is there a way to track if my javascript code is being used on another site?
I work for a software development company and, although I'm not a developer as such, I do get involved with some of the simpler JavaScript requests we get from our customers.
However, sometimes our customers want to see the JavaScript working before agreeing to pay for it. My problem is that although they are not going to be very technical, they may have enough knowledge to look at the page source and effectively 'steal' the script.
Can I either prevent them from doing this or add some kind of tracking to my code somewhere so if they do a simple copy / paste then I can receive notification somehow of the script being used on another site?
Thank you
A few things you can do:
Obfuscate your code so it'll be harder for non-technical people to find out what to copy.
Add a line that checks the domain name of the page and throws an exception, or does some other trick to terminate, if the domain is not your demo server (see the sketch after this list).
Add an Ajax query to your server to validate that the script is allowed to run, and terminate if there is no validation.
Everything said here will only protect against non-technical people. JavaScript is an interpreted language, so the entire code is sent to the browser; a skilled programmer will know how to get around your blocks.
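A minimal sketch of the domain check (item 2), which also doubles as the notification you asked about: if the script finds itself on a foreign domain, it pings your server and stops. The whitelist and the reporting URL are hypothetical, and this snippet should itself be obfuscated along with the rest of the code.

// Refuse to run anywhere but the demo server, and phone home if copied.
(function () {
  var allowed = ['demo.example.com', 'localhost']; // hypothetical whitelist
  if (allowed.indexOf(window.location.hostname) === -1) {
    new Image().src = 'https://yours.example.com/stolen?from=' +
      encodeURIComponent(window.location.href); // fire-and-forget beacon
    throw new Error('Script not licensed for this domain');
  }
})();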
It is not easy to track your script across the whole web, but there are ways to protect your JS code. There are plenty of sites for encoding and obfuscation, like this one:
http://javascriptobfuscator.com/default.aspx
They would still be able to use your code, but you can hide some protection code in the obfuscated version, like trial timeout values, or even post some values (such as the site URL) back to your server for tracking.
our customers want to see the Javascript working before agreeing to pay for it.
You can achieve a good level of security by setting up a demo machine. Have the users remote into a session to provide a demo of the product. Ideally, a shared session where you can "walk them through it" (aka watch what they are doing).
Similar to a video conference, but this way they can use the browser. Don't make the site public; run the web server locally only (close port 80 on the firewall). Take the remote desktop server down after the demo and change the password.
Use the DOM API to create a <script> tag that points to a server-side script on your server, and append it to the <head>.
Using jQuery:
$.getJSON('http://yourserver.com/TrackScript', { url: document.location });
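The same idea without jQuery, building the <script> tag by hand as described above (a JSONP-style request, so it works cross-domain); the TrackScript endpoint is the same placeholder as above.

// Report the embedding page's URL to your server via a script tag.
var s = document.createElement('script');
s.src = 'http://yourserver.com/TrackScript?url=' +
        encodeURIComponent(document.location.href);
document.head.appendChild(s);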

Hide urls in html/javascript file

I am using Ajax in my website, and in order to use it I have to write the name of the file, for example:
var id = "123";
$.getJSON("jquerygetevent.php?id=" + id, function (json)
{
    // do something
});
How can I protect the URL? I don't want people to see it and use it...
That is a limitation of using client-side scripts: there is no real way to hide them from the user. There are many ways to make the code less readable (minification etc.), but in the end an end user can still view it.
Hi Ron, and welcome to the internet. To quote Wikipedia on the subject:
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
Because of these origins, and because of the way the protocols surrounding HTTP resource identification (such as URLs) were designed, there's not really any way to prevent this. Had the internet been developed as a commercial venture initially (think AOL), then they might have been able to get away with preventing the browser from showing the URL to the user.
So long as people can "view source" they can see the URLs in the page that you're referring them to visit. The best you can do is to obfuscate the links using javascript, but at best that's merely an annoyance. What can be decoded for the user can be decoded for a bot.
Welcome to the internet, may your stay be a long one!
I think the underlying issue is why you want to hide the URL. As everyone has noted, there is no way to conceal the actual resolved URL. Once the request is triggered, Firebug gives you everything you need to know.
However, is the purpose to prevent a user from re-using the URL? Perhaps you can generate one-time, session-relative URLs that can only be used in the given HTTP Session. If you cut/paste this URL to someone else, they would be unable to use it. You could also set it to expire if they tried to Refresh. This is done all the time.
Is the purpose to prevent the user from hacking your URL by providing a different query parameter? Well, you should be handling that on the server side anyways, checking if the user is authorized. Even before activating the link, the user can use a tool like FireBug to edit your client side code as much as they want. I've done this several times to live sites when they're not functioning the way I want :)
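A sketch of those one-time, session-relative URLs, assuming Node with Express and the express-session middleware; the route names reuse the ones from the question, and the token scheme is hypothetical.

// Issue a fresh token with each page render; honor it exactly once.
var express = require('express');
var session = require('express-session');
var crypto = require('crypto');
var app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

app.get('/page', function (req, res) {
  var token = crypto.randomBytes(16).toString('hex');
  req.session.ajaxToken = token; // one token per render
  res.send('<script>$.getJSON("jquerygetevent.php?id=123&token=' + token +
           '", function (json) { /* ... */ });</script>');
});

app.get('/jquerygetevent.php', function (req, res) {
  if (req.query.token !== req.session.ajaxToken) {
    return res.status(403).end(); // reused, shared, or forged URL
  }
  delete req.session.ajaxToken; // single use
  res.json({ ok: true }); // the real event data would go here
});

app.listen(3000);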
UPDATE: A HORRIBLE hack would be to drop an invisible Java Applet on the page. They can also trigger requests and interact with Javascript. Any logic could be included in the Applet code, which would be invisible to the user. This, however, introduces additional browser compatibility issues, etc, but can be done. I'm not sure if this would show up in Firebug. A user could still monitor outgoing traffic, but it might be less obvious. It would be better to make your server side more robust.
Why not put some form of security on your php script instead, check a session variable or something like that?
EDIT in response to comment:
I think you've maybe got the cart before the horse somehow. URLs are by nature public addresses for resources. If the resource shouldn't be publicly consumable except in specific instances (i.e. from within your page), then it's a question of defining and implementing security for the resource. In your case, if you only want the resource called once, then why not place a single-use access key into the calling page? Then the resource will only be delivered when the page is refreshed. I'm unsure as to why you'd want to do this, though. Does the resource expose sensitive information? Is it perhaps very heavy on the server to run the script? And if the resource should only be used to render the page once, rather than update it after it's rendered, would it perhaps be better to implement it server-side?
You can only 'protect' (hide) things on the client by encrypting/encoding them into a format that is complicated for a human to read.

How can I track users without cookies?

OK... I'm looking to have a good round of brainstorming here...
Say I was Google: the AdWords/AdSense/Analytics division. I would be getting a little worried about the future, when users start to disable cookies (or at least delete them on a regular basis), use private browsing, and roam across multiple devices. How could Google alternatively track users without the benefit of cookies?
Some ideas to get started (please elaborate on these and any others):
- track users using some other persistent local/client-side storage
- use user-agent string fingerprinting
- test the cache response: if a user 304s an image, they were here (see the sketch after this list)
- track the MAC address
- any random/out-of-the-box ideas?
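To make the cache-response idea concrete, here is a minimal sketch assuming Node with Express: hand each new visitor a unique ETag on a tiny resource, and when the browser revalidates it, the If-None-Match header echoes the ID back - a cookie in all but name. All the names here are placeholders.

// Use ETag revalidation as a cookieless visitor ID.
var express = require('express');
var crypto = require('crypto');
var app = express();

app.get('/beacon.js', function (req, res) {
  var id = (req.get('If-None-Match') || '').replace(/"/g, '');
  if (id) {
    console.log('returning visitor:', id); // 304 path: we recognize them
    return res.status(304).end();
  }
  id = crypto.randomBytes(8).toString('hex');
  console.log('new visitor:', id);
  res.set('ETag', '"' + id + '"');
  res.set('Cache-Control', 'private, max-age=0'); // cache, but always revalidate
  res.type('js').send('/* beacon */');
});

app.listen(3000);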
Take a look at http://samy.pl/evercookie/. It's a JS API for ultra-persistent cookies, but you can also take ideas from its mechanisms to find storage for your data.
I think you could do it using custom URLs. You would basically encrypt a cookie and attach it as part of the URL you send to the browser. When it comes back, your web server would be smart enough to decode it and track whoever sent it.
I believe the Spring framework can do this, in fact.
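A client-side sketch of the idea: rewrite every internal link on the page so the (server-issued) ID survives navigation without cookies. The uid parameter name and the injected TRACKING_ID value are hypothetical.

// Propagate a tracking ID through the query string of internal links.
var TRACKING_ID = 'abc123'; // in practice the server injects this per visitor
Array.prototype.forEach.call(document.querySelectorAll('a[href]'), function (a) {
  if (a.hostname === location.hostname) {
    a.href += (a.href.indexOf('?') === -1 ? '?' : '&') + 'uid=' + TRACKING_ID;
  }
});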
If your site requires user tracking, then I would have it fail to work if cookies are disabled. Then focus your time and effort on making it a fantastic site for the vast majority of your visitors, and don't worry about the ones who, for whatever reason, have made the explicit decision to disable cookies.
(Made this a CW answer because this is a subjective question that's likely to be closed.)
Information about the browser/system/display through JS, and the IP address of course;
Java applets provide a lot of info about the user;
Flash too (e.g. installed fonts);
Modern browsers also expose a lot of information about users (e.g. installed extensions) and provide new ways to save information on the client side (e.g. HTML5 storage).
Altogether: http://panopticlick.eff.org/
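A toy fingerprint along those lines, hashing a few high-entropy browser properties into one ID. Real fingerprinting (see Panopticlick) uses many more signals; this only shows the shape of the technique.

// Combine a few stable browser signals into a short fingerprint.
function fingerprint() {
  var signals = [
    navigator.userAgent,
    navigator.language,
    screen.width + 'x' + screen.height + 'x' + screen.colorDepth,
    new Date().getTimezoneOffset()
  ].join('|');
  var h = 0; // tiny non-cryptographic hash, fine for a demo
  for (var i = 0; i < signals.length; i++) {
    h = (h * 31 + signals.charCodeAt(i)) | 0;
  }
  return (h >>> 0).toString(16);
}
console.log('fingerprint:', fingerprint());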
You can always fall back on the good ol' way: the hit counter.
On the page, use an <img> tag that links to an external image on your server.
On your server, when the image is fetched, redirect the request through .htaccess to a PHP script and record the header info, device ID, etc. (similar to the code used for disabling hotlinking of images).
Now you have all the info; use a PHP session to keep track of it.
You could always use JS for the same purpose, but using the <img> tag ensures that JS is not required and that the trick works in all browsers.
