Tracking the use of my JavaScript

Is there a way to track if my javascript code is being used on another site?
I work for a software development company, and although I'm not a developer as such, I do get involved with some of the simpler JavaScript requests we get from our customers.
However, sometimes our customers want to see the JavaScript working before agreeing to pay for it. My problem here is that, although they are not going to be very technical, they may have enough knowledge to look at the page source and effectively 'steal' the script.
Can I either prevent them from doing this, or add some kind of tracking to my code so that, if they do a simple copy/paste, I receive notification somehow of the script being used on another site?
Thank you

A few things you can do:
Obfuscate your code so it will be harder for non-technical people to work out what to copy.
Add a line that checks the domain name of the page and throws an exception (or uses some other trick to terminate) if the domain is not your demo server.
Add an Ajax call to your server to validate that the script is allowed to run, and terminate if there is no validation.
Everything said here will only protect against non-technical people. JavaScript is an interpreted language, so the entire source is sent to the browser. A skilled programmer will know how to get around your blocks.
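A minimal sketch of the domain check from point 2 (the allowed hostname is a placeholder, not from the question):

```javascript
// Throw if the script is running anywhere other than the demo server.
function assertAllowedHost(hostname) {
  var allowed = ['demo.example.com']; // placeholder: your demo server
  if (allowed.indexOf(hostname) === -1) {
    throw new Error('Script not licensed for this domain');
  }
}

// In the page: assertAllowedHost(window.location.hostname);
```

As the answer says, this only stops non-technical copiers; anyone who reads the source can delete the check.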

It is not easy to track your script across the whole web, but there are ways to protect your JS code. There are plenty of sites for encoding and obfuscation, like the one below:
http://javascriptobfuscator.com/default.aspx
They would still be able to use your code, but you can hide some protection logic in the obfuscated version, such as trial timeout values, or even post values like the site URL back to your server for tracking.

our customers want to see the Javascript working before agreeing to pay for it.
You can achieve a good level of security by setting up a demo machine. Have the users remote into a session to provide a demo of the product. Ideally, a shared session where you can "walk them through it" (aka watch what they are doing).
Similar to a video conference, but this way they can use the browser. Don't make the site public, run the webserver local only (close port 80 on the firewall). Take the remote desktop server down after the demo and change the password.

Use the DOM API to create a <script> tag that points to a server-side script on your server, and append it to the <head>.
Using jQuery:
$.getJSON('http://yourserver.com/TrackScript', { url: document.location.href });
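Without jQuery, the same idea can be sketched with the plain DOM API (the endpoint URL is the placeholder from the answer above; the helper name is mine):

```javascript
// Build the tracking URL; encodeURIComponent keeps query characters safe.
function buildTrackingUrl(base, pageUrl) {
  return base + '?url=' + encodeURIComponent(pageUrl);
}

// In the page (browser only):
// var s = document.createElement('script');
// s.src = buildTrackingUrl('http://yourserver.com/TrackScript', document.location.href);
// document.getElementsByTagName('head')[0].appendChild(s);
```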


How to send a pre-built request

What I want:
I'm using a website (that I wish to remain anonymous) to buy securities. It is quite complex and as far as I can see coded in JavaScript.
What I would like to do with this website is to 'inject' a request to buy something from a separate process. So instead of manually searching for what I want to buy, filling out the form, clicking buy and confirming the 'are you sure you want to place the order?' popup, I would just like to directly send whatever command/request is sent to the server when the confirm button is pressed.
To be extra clear: I simply don't want to go through the manual hassle but rather just send a pre-built request with the necessary parameters embedded.
I'm certainly not looking to do anything malicious, just make my order input faster and smoother. It is not necessary to automate login or anything like that.
I understand that this is not much to go on but I'm throwing it out there and ask the question: Can it be done?
I really don't know how this stuff works behind the scenes, maybe the request is somehow encrypted to some custom format that is next to impossible to reverse engineer, or maybe not.
"Injecting" is probably the wrong term. Most people will think of sql injection or javascript injection which is usually malicious activity. That doesn't seem to be what you want.
What you are looking for is an automation tool. There are plenty of tools available; try a Google search for "web automation tool". Selenium http://www.seleniumhq.org/ and PhantomJS http://phantomjs.org/ are popular ones.
Additionally, you may be able to recreate the request that is actually buying the security. If you use Chrome you can open Developer Tools and look at what appears on the Network tab as you go through the site. Firefox and Edge have similar tools as well. When you make the purchase you will see the actual network request that placed it. Then, depending on how the site is implemented you may just be able to replicate that request using a tool like Postman.
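As a sketch of the "replicate the request" route (every name, URL and field here is hypothetical; the real ones come from what you see on the Network tab):

```javascript
// Build the fetch options for a captured order request.
function buildOrderRequest(order) {
  return {
    method: 'POST',
    credentials: 'include', // reuse the logged-in session's cookies
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(order)
  };
}

// Replaying it from the browser console while logged in to the site:
// fetch('https://broker.example.com/api/orders',
//       buildOrderRequest({ symbol: 'ABC', quantity: 10, side: 'buy' }));
```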
However before you do any of the above, I would recommend that you take a look at the TOS for the site you mention. They may specifically prohibit that kind of activity.
I want to elaborate on my comment and Michael Ratliff's answer.
An example:
We have some services. Administration of these services can only be done via a web interface, in manual mode; there is no API (yes, it's 2016 and there is no API). At first there was not much administration work, and we did it manually.
But time passed and the amount of administration work grew exponentially, so we reached the point where it had to be automated (still no API, even though a few new versions have been released).
What have we done:
We opened the pages we needed in a browser, opened Inspect Element (in Firefox), opened the Network tab, filled in the web form and pressed the button we needed. In the left panel we could see all requests to the service; clicking any request showed, on the right, a full description of what was sent/received, with all requests and their parameters. We then took those parameters, changed them and sent them back to the server. A kind of reverse engineering, though.
For automation we used PHP and cURL. By now almost all work with the services is automated.
And yes, we used Selenium (before PHP and cURL). You can open the form you need, press Record and do some actions on the web form; Selenium collects this data, and then you can change the parameters in the Selenium script and re-run it.

How can JavaScript code be executed in order to get the user agent?

I have an email marketing website with user-tracking capability, and this is normally what I do.
I ask my customers to add this code to their websites in order to track the behavior of their visitors.
var _ssprt = ('https:' == document.location.protocol ? 'https' : 'http');
var ig = navigator.userAgent.toLowerCase().indexOf('googlebot') !== -1;
// encode the values so titles/URLs with special characters don't break the query string
document.write('<img height="1" width="1" src="' + _ssprt + '://www.myurl.com/system/sitecode.php?t=' + encodeURIComponent(document.title) + '&adres=' + encodeURIComponent(document.location.href) + '&ua=' + ig + '" hspace="0" />');
Normally, if a user enters the website through a browser, I can detect the user agent easily.
However, if it is Googlebot, since it reads the website as source code, it wouldn't send any data to my main URL; it cannot execute the PHP either.
Thus, I cannot see whether any Googlebots enter the website.
I use this code in order to get the user agent:
var ig = navigator.userAgent.toLowerCase().indexOf('googlebot') !== -1;
I thought I could redirect my sitecode.php to JS via .htaccess,
so it would behave as sitecode.js and I would include it with a script src tag.
I am wondering whether, had I done this, Googlebot would have executed that JS.
I tried to do this with the rules below but couldn't succeed. Also, I am not sure whether Google would execute this and send me the user-agent data.
RewriteEngine on
RewriteRule ^sitecode\.js$ sitecode.php [QSA,L]
My interpretation is that you want to detect Googlebot hits on your page? Or at least detect them so you can filter them out in your own code?
Googlebot can interpret some JavaScript, but it does not execute it like a browser does. And Google is quiet about what Googlebot does when it comes to interpreting scripts. The same problem exists when a user has disabled JavaScript: you will not see their visits either.
There are ways to make AJAX content crawlable by Googlebot, but it will also require some server code.
Unfortunately, the safest way to make sure you track all visitors is to use server-side code.
Optionally - and I suggest this with some reservation, as I have not tried it myself - you could try adding an img or a hidden link to the PHP page on your side and then checking the user agent and referer to get the page the user was visiting - but I'm not entirely sure it works that way, or that Googlebot will send the Referer header. Maybe someone else has tried this?
navigator.userAgent is available only in a browser environment. Googlebot just does an HTTP fetch; it does not run client-side JavaScript. It's like fetching a page with wget or curl - you download the page content (source/HTML) but do not execute the scripts within.
To track Googlebot accesses you'll need a server-side solution, and depending on what server-side technology your customers use, you may need to provide snippets for multiple server-side technologies.
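For example, a trivial server-side check sketched in Node (the function name is mine; a serious implementation would also verify the bot via reverse DNS, since anyone can fake a user-agent string):

```javascript
// Classify a request's User-Agent header on the server,
// where it is visible even though Googlebot runs no client-side JS.
function isGooglebot(userAgent) {
  return /googlebot/i.test(userAgent || '');
}

// In a request handler: isGooglebot(req.headers['user-agent'])
```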
I found the answer on this website which answers to my question.
http://searchengineland.com/is-googlebot-skewing-google-analytics-data-22313
Postscript: Google Analytics posted a response in the comments:
“The official Google bot does not execute Google Analytics JavaScript. We’re not sure what it is exactly, it could be anyone’s bot, some intern’s experiment, or other such traffic.”
I agree with this comment in that the official Googlebot reads JavaScript but does not execute it. Besides, it does not store and send cookies, which means that Pages/Visit would be exactly 1 and time on site exactly 0. Lastly, if the official Googlebot did execute JavaScript, we would have seen massive amounts of visits.
It is also important to note that although we used Google Analytics as an example, we mean all JavaScript based solutions, including Omniture, Yahoo Web Analytics, WebTrends and others.
Please note that this issue requires additional investigation both in regards to Google Analytics and to how Google Search uses the Googlebot.

Short message encryption with only javascript to generate it in a URL

I'd like to present an idea to you that I think might help the privacy of the average user. I would appreciate any comment or suggestion on this.
I've been struggling for quite some time now with the need for a simple tool that I could share and use with my contacts who are only average users and not familiar at all with any cryptographic technology or the current tools available.
I'm planning to create a solution where one can easily encrypt a text message or a file with a single password and send it by email or chat or through whatever channel to somebody else. The solution should be entirely platform independent and usable without the need to install any extra software.
There are some text encryption websites out there that run client side encryption from JavaScript entirely. I find this approach currently the only possible solution. Also, there are libs for JS that already implement encryption:
http://crypto.stanford.edu/sjcl/
http://code.google.com/p/crypto-js/
However, the approaches mentioned store the message on their server, requiring you and your contact to trust it entirely, because the server might present different JS code to the user when he visits it after getting the message, thereby stealing the password and revealing the secret.
While many think that it's not a good idea to do anything regarding cryptographic tasks in JS, I believe there is a need for a tool that is really platform independent (can be used on any tablet or PC) and still incredibly easy to use. The idea behind this is that I believe something is better than nothing. Sending information in plain text in email for decades with our current technology is wrong in most cases. There are times when we do need to share sensitive info via email and the other side might have any kind of system.
I intend to avoid the use of public key cryptography for the following reasons:
- it is very complicated to set up, including the signing of each other's keys
- it is complicated to use
- the user can lose his keys
- most of the time it requires external software to be installed as well
- a single password can easily be shared in person, one time, with my contact, and he or she can keep it written on a piece of paper somewhere
The solution I came up with could be the following:
First of all, the browser and the operating system under it should be considered trusted.
There would be a static index.html page with embedded JavaScript. The page shows a textarea for the message and a textbox for the password. When hitting enter, the JS code generates a URL that itself contains the encrypted message in base64 encoding. After digging, I figured that URLs of up to 2000 bytes work fine in every case, so 1600 or 800 characters could be enough for short messages. This still needs planning.
So the encrypted message would travel within the URL. The website serving the index.html would of course use SSL with a valid certificate. While it seems an easy task, of course it is not: the JS implementation should be carefully created to avoid easy attacks on it.
(URL shortener services could be used for it too).
Also, the question stands: How can I make sure that my contact can be certain about the origin of my message?
Well, the other side has to check that the domain is correct. Beyond this, the implementation must avoid the remaining attacks. If the URL gets changed in transit in the email, then at worst the other side won't be able to decode the message with the password. That's what I believe: that it can be implemented this way.
About file sharing: the solution should offer the possibility to browse for a file, encrypt it, and then offer it for download to the user. This is just so he can create the encrypted form of the file without external tools. He could then upload it to the cloud of his choice (Google Drive, SkyDrive, etc.) and use that link in the URL of the JS solution to send it to his contact.
So if another link travels inside the link, the file gets downloaded from the remote host, decrypted and offered for download, all in his browser. If it's an encrypted message in base64 form, it gets printed on the page after decryption (with the user providing his password, of course).
Pros compared to other solutions:
- no need to implement storage, because no message or file will be stored on the server, so the big players' services can be used
- therefore no need to reinvent the wheel regarding the storage question
- no need to trust a 3rd party, because the server could easily be ours, since it would be extremely easy to set up and serve
- easy to host the static index.html, even with a free provider
- because of its simplicity, the server can be hardened much better
- easy to encrypt with it in practice
- if one needs to, he could use the index.html by clicking on it from his desktop too, but that's not part of the original idea
My questions to you all are:
Do you find any flaw in my theory above? Could this really serve the average people by providing a usable tool for them that is more than nothing in times when they do need to send sensitive info to others?
Or does anything like that exist yet? Are there any better approaches? Different technology maybe?
Thank You.

Hide URLs in an HTML/JavaScript file

I am using Ajax on my website, and in order to use the Ajax I have to write the name of the file, for example:
id = "123";
$.getJSON("jquerygetevent.php?id=" + id, function (json)
{
    // do something
});
How can I protect the URL? I don't want people to see it and use it...
That is a limitation of using client-side scripts. There is no real way to hide it from the user. There are many ways to make it less readable (minify etc.), but in the end an end-user can still view the code.
Hi Ron and welcome to the internet. The internet was (to quote Wikipedia on the subject)
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
Because of these origins, and because of the way the protocols surrounding HTTP resource identification (like URLs) were designed, there's not really any way to prevent this. Had the internet been developed as a commercial venture initially (think AOL), then they might have been able to get away with preventing the browser from showing the URL to the user.
So long as people can "view source" they can see the URLs in the page that you're referring them to visit. The best you can do is to obfuscate the links using javascript, but at best that's merely an annoyance. What can be decoded for the user can be decoded for a bot.
Welcome to the internet, may your stay be a long one!
I think the underlying issue is why you want to hide the URL. As everyone has noted, there is no way to hide the actual resolved URL. Once it is triggered, Firebug gives you everything you need to know.
However, is the purpose to prevent a user from re-using the URL? Perhaps you can generate one-time, session-relative URLs that can only be used in the given HTTP Session. If you cut/paste this URL to someone else, they would be unable to use it. You could also set it to expire if they tried to Refresh. This is done all the time.
Is the purpose to prevent the user from hacking your URL by providing a different query parameter? Well, you should be handling that on the server side anyway, checking whether the user is authorized. Even before activating the link, the user can use a tool like Firebug to edit your client-side code as much as they want. I've done this several times to live sites when they're not functioning the way I want :)
UPDATE: A HORRIBLE hack would be to drop an invisible Java applet on the page. Applets can also trigger requests and interact with JavaScript. Any logic could be included in the applet code, which would be invisible to the user. This, however, introduces additional browser compatibility issues, etc., but it can be done. I'm not sure whether this would show up in Firebug. A user could still monitor outgoing traffic, but it might be less obvious. It would be better to make your server side more robust.
Why not put some form of security on your php script instead, check a session variable or something like that?
EDIT is response to comment:
I think you've maybe got the cart before the horse somehow. URLs are by nature public addresses for resources. If the resource shouldn't be publicly consumable except in specific instances (i.e. from within your page) then it's a question of defining and implementing security for the resource. In your case, if you only want the resource called once, then why not place a single use access key into the calling page? Then the resource will only be delivered when the page is refreshed. I'm unsure as to why you'd want to do this though, does the resource expose sensitive information? Is it perhaps very heavy on the server to run the script? And if the resource should only be used to render the page once, rather than update it once it's rendered, would it perhaps be better to implement it serverside?
You can "protect" (hide) anything on the client only by encrypting/encoding it into a format that is complicated for a real human to read.

How can I track users without cookies?

OK... I'm looking to have a good round of brainstorming here...
Say I was Google, the AdWords/AdSense/Analytics division. I would be getting a little worried about the future, when users start to disable cookies (or at least delete them on a regular basis), use private browsing, and roam across multiple devices. How could Google alternatively track users without the benefit of cookies?
Some ideas to get started (please elaborate on these and any others):
- track users using some other persistent local/client-side storage
- use user-agent string fingerprinting
- test the cache response: if a user 304's an image, they were here before
- track the MAC address
- any random/out-of-the-box ideas?
Take a look at http://samy.pl/evercookie/. It's a JS API for ultra-persistent cookies, but you can take ideas from its mechanism to find storage for your data.
I think you could do it using custom URLs. You would basically encrypt a cookie and attach it as part of the URL you send to the browser. When it comes back, your web server would be smart enough to decode it and track whoever sent it.
I believe the Spring framework can do this in fact.
If your site requires user tracking, then I would have it fail to work if cookies are disabled. Then focus your time and effort on making it a fantastic site for the vast majority of your visitors, and don't worry about the ones who, for whatever reason, have made the explicit decision to disable cookies.
(Made this a CW answer because this is a subjective question that's likely to be closed.)
Information about the browser/system/display through JS, and the IP of course;
a Java applet provides a lot of info about the user;
Flash does too (e.g. installed fonts);
modern browsers also expose a lot of information about users (e.g. installed extensions) and provide new ways to save information on the client side (e.g. HTML5 storage).
Altogether: http://panopticlick.eff.org/
You can always resort to the good ol' way: the hit counter.
On the page, use an <img> tag linking to an external image on your server.
On your server, when the image is fetched, redirect the request to a PHP script through .htaccess and record header info such as the device ID (similar code to disabling hotlinking of images).
Now you have all the info; use PHP sessions to keep track of it.
You can always use JS for the same purpose, but using an <img> tag ensures that JS is not required and that the technique works in all browsers.
