I was asked an odd question this morning: we have customers with a number of PDF reports hosted on our services, accessed over HTTPS (not as a direct file link, but via a REST URL).
Customers often print these reports on a daily basis.
The request is that these reports automatically print (the default printer would be fine) each morning.
My initial reaction was to laugh, because this is obviously impossible without some user action. But the more I think about it, the more I'm inclined to believe there may be ways. For example, if there is some generic "print" user with limited access, and the browser is left open, some JavaScript on the login screen could check the time of day, retrieve the files at a specified time, and print them. I don't believe I can print fully automatically, but at least I can open a print dialog.
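For what it's worth, here is a minimal sketch of that idea, assuming the tab stays open on the generic "print" user's session. The report URL and the 7:00 trigger time are placeholders, and browsers' built-in PDF viewers may handle iframe printing differently; in any case, the print dialog still appears rather than printing silently:

    // Check once a minute whether it's time to fetch and print the report.
    // REPORT_URL is a hypothetical endpoint; substitute the real REST URL.
    const REPORT_URL = "https://example.com/api/reports/daily.pdf";
    let printedToday = false;

    function checkAndPrint() {
        const now = new Date();
        if (now.getHours() === 7 && !printedToday) {
            printedToday = true;
            const frame = document.createElement("iframe");
            frame.style.display = "none";
            frame.src = REPORT_URL;
            // Opening the print dialog still requires the user to confirm.
            frame.onload = () => frame.contentWindow.print();
            document.body.appendChild(frame);
        } else if (now.getHours() !== 7) {
            printedToday = false; // reset for the next morning
        }
    }

    setInterval(checkAndPrint, 60 * 1000);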
I know the request to automatically send things to a far-away printer from our web services is impossible as stated. I'm wondering if I can provide the next best thing, and whether anyone has a good idea to share here.
What I want:
I'm using a website (which I'd prefer to keep anonymous) to buy securities. It is quite complex and, as far as I can see, written in JavaScript.
What I would like to do is 'inject' a buy request into this website from a separate process. Instead of manually searching for what I want to buy, filling out the form, clicking Buy, and confirming the 'Are you sure you want to place the order?' popup, I would just like to send directly whatever command/request is sent to the server when the confirm button is pressed.
To be extra clear: I simply don't want to go through the manual hassle but rather just send a pre-built request with the necessary parameters embedded.
I'm certainly not looking to do anything malicious, just make my order input faster and smoother. It is not necessary to automate login or anything like that.
I understand that this is not much to go on, but I'm throwing it out there and asking the question: can it be done?
I really don't know how this stuff works behind the scenes; maybe the request is encrypted in some custom format that is next to impossible to reverse engineer, or maybe not.
"Injecting" is probably the wrong term. Most people will think of sql injection or javascript injection which is usually malicious activity. That doesn't seem to be what you want.
What you are looking for is an automation tool. There are plenty of tools available; try a Google search for "web automation tool." Selenium (http://www.seleniumhq.org/) and PhantomJS (http://phantomjs.org/) are popular ones.
Additionally, you may be able to recreate the request that actually buys the security. If you use Chrome, you can open Developer Tools and watch what appears on the Network tab as you go through the site. Firefox and Edge have similar tools as well. When you make the purchase, you will see the actual network request that placed it. Then, depending on how the site is implemented, you may be able to replicate that request using a tool like Postman.
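To make the replicate-the-request idea concrete, here is a hedged sketch of replaying a captured order request with fetch. The endpoint, body fields, and cookie value are assumptions for illustration; copy the real ones from the request you observed in the Network tab. Note that it needs to run from Node 18+ or similar, since browsers won't let scripts set the Cookie header:

    // Replay a captured order request. Run inside an async function or
    // with top-level await. All names and values here are hypothetical;
    // take the real ones from Developer Tools.
    const response = await fetch("https://broker.example.com/api/orders", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Cookie": "session=PASTE_SESSION_COOKIE_HERE"
        },
        body: JSON.stringify({ symbol: "XYZ", quantity: 10, side: "buy" })
    });
    console.log(response.status, await response.json());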
However, before you do any of the above, I would recommend that you take a look at the TOS for the site you mention. They may specifically prohibit that kind of activity.
I want to elaborate on my comment and on Michael Ratliff's answer, with an example.
We have some services. The administration of these services can be done via a web interface, but only in manual mode; there is no API (yes, it's 2016 and there is no API). At first there was not much administration work, so we did it manually.

But time passed, the amount of administration work grew exponentially, and we reached the point where this work had to be automated (still no API, even though a few new versions had been released).
What we did:
We opened the pages we needed in the browser, opened Inspect Element (in Firefox), opened the Network tab, filled in the web form, and pressed the button we needed. On the left you see all requests to the service; clicking any request shows, on the right, a full description of what was sent/received, with all requests and their parameters. We then took those parameters, changed them, and sent them back to the server. A kind of reverse engineering, though.
For automation we used PHP and cURL. By now, almost all of the work with these services is automated.
And yes, we used Selenium (before PHP and cURL). You open the form you need and press Record, do some things on the web form, and Selenium collects this data; then you can change parameters in the Selenium script and re-run it.
I want to know if my website's user clicks a link on another website, so I can show him a thank-you message.

I want to detect this click on the other website's link. Is it possible? How can I do something like this?
Thank you
Your question doesn't state whether you control the link you are trying to monitor, or whether you are trying to monitor a link controlled by a third-party website. I'm going to assume the latter, but if you control the link, see the first comment on your answer.
The short version is that there is no way to independently monitor a user's actions on another website from within your own. Allowing this would violate some of the fundamental tenets that networking and the Internet are based on. For example, if I host the website www.reallyCoolRocksToBuy.com and I want to know whether or not you just purchased a really cool rock on Amazon after viewing it on my site, there is no way to directly get this data, even though both my website and Amazon's are open in your browser at the same time.
The highest-level object you can normally access via JavaScript or HTML (there are always some exceptions) is the Window object of your own page. There used to be a way to have some control over a third-party page launched in a window that you spawned, but this is no longer possible, and even if it were, you still wouldn't be able to monitor any links from that site.
The only way to achieve what you want is for the third-party site to be involved in the communication. Many sites have APIs for sending and receiving referral or link information. For example, Amazon has an API that you can use when someone clicks an Amazon link from your site. There are a number of ways this is achieved, but basically your link sends a specially encoded string to Amazon identifying your site as the referrer. Amazon can then use this string to create and share session information about the visitor. Depending on your relationship with Amazon, you might be able to use this session information to find out whether your user purchased a pretty rock from Amazon, but it would be entirely up to Amazon to share this information.
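As a hedged illustration of the referrer-string idea (the link selector and the "tag" parameter follow the common affiliate-link pattern and are assumptions here, not a documented API):

    // Tag outbound Amazon links with an affiliate/referrer ID so the
    // destination site can attribute the click. "mysite-20" is a
    // hypothetical affiliate ID.
    document.querySelectorAll('a[href^="https://www.amazon.com"]').forEach(function (link) {
        const url = new URL(link.href);
        url.searchParams.set("tag", "mysite-20");
        link.href = url.toString();
    });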
Cookies and other local data can also be used to achieve similar results, but again, you need the cooperation of the other site.
This question has cropped up a few times in various guises, but I've not seen an answer that satisfies my requirement or fills me with much confidence. Let me set the scene.
We currently have a web application that allows users to submit responses to pre-set questions, with the data ending up in a SQL Server database. We also have a Windows application that does the same thing but works in an offline capacity; i.e., it connects to the SQL Server, downloads the questions, and allows the user to complete them offline; when they next have a network connection they can synchronise the data, uploading it to the SQL Server. Great!
As part of our development strategy, given HTML5's offline capabilities and local storage, it seems perfectly sensible to attempt to consolidate these products into a single web application. This would mean we could work on a single code base, and it would also enable the application to run in a browser on most devices: platform-independent.
Looking into this, I see a couple of potential problems, and I'd really appreciate a steer on these:
1. Users need the ability to log in, in both offline and online modes. This could mean we download the hashes of all users' usernames and passwords, or just those of users who have logged in while in online mode. However, even doing this, there needs to be a way to check the hashes, and given that the JavaScript is readable, someone could easily reverse engineer those credentials. Yes, you can obfuscate the code, but this isn't infallible.

2. The data that needs to be stored locally could be highly sensitive; it may contain personal information etc. It therefore also needs encrypting, at minimum with AES-256.
Am I hoping for utopia? Is this something that's just not possible at this time? Do I need to be looking at another solution and dismissing this for the time being?
Any help from you lovely people would be much appreciated.
First, the easier question (2): that is perfectly possible. You can generate the key on the device and, on sync, send it via HTTPS to your server, which can then decrypt the data.
As for question (1), I'd say an offline login is not really feasible. BUT do you actually need one? Once the questionnaire is downloaded (which requires online mode, so requiring a login there is fine), you only need to transmit it on sync, where being online is again required, so you can ask the user for his login there, too. I would not recommend downloading any sensitive user data (e.g. hashes) to the device.
What you can do is cache the current user, but only after they have logged in online. This mitigates the risk of someone enumerating the users in your local DB.
You then need to encrypt the user's data on the front end. I'd go with a library that does the job for you, for example RxDB. RxDB accepts a password (which you can generate on the fly) and encrypts your DB data based on it. The user then fills in the form (does whatever he wants), and if all of a sudden the internet is gone, the user can still continue his work; that work must be queued as pending requests for the sync (which you already have).
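If you want to see the underlying mechanics rather than rely on a library, here is a minimal sketch using the standard Web Crypto API instead of RxDB. The key handling is deliberately simplified and is my own illustration, not RxDB's internals:

    // Encrypt a record with AES-GCM before writing it to local storage.
    // Run inside an async context.
    async function encryptForStorage(plaintext, key) {
        const iv = crypto.getRandomValues(new Uint8Array(12)); // unique per record
        const data = new TextEncoder().encode(plaintext);
        const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, data);
        return { iv: Array.from(iv), ciphertext: Array.from(new Uint8Array(ciphertext)) };
    }

    // Generate an AES-256 key for the session; on sync, export it and send
    // it to the server over HTTPS (as suggested above) so the server can decrypt.
    const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]
    );
    const record = await encryptForStorage(JSON.stringify({ answer: 42 }), key);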
When the internet is restored, check whether the user's session has expired; if so, prompt the user to log in again and, if it is the same user, do the sync. If the session is still valid, perform the sync.
My advice for the offline part, based on my personal experience:
You can create a local variable that lets the user log in once over the internet; after that first online login, he can auto-login offline as many times as you decide in the local variable, and when the value reaches 0 he will need the internet again to get another batch of offline logins of the same size.

In short: an offline counter that only requires the internet after a number of offline logins (when the counter decreases to 0).
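A minimal sketch of that counter, assuming localStorage and a hypothetical verifyOnline() that checks the credentials against the server:

    // Allow a limited number of offline logins between online ones.
    const MAX_OFFLINE_LOGINS = 5; // pick whatever limit you decided on

    async function login(username, password) {
        if (navigator.onLine) {
            await verifyOnline(username, password); // hypothetical server-side check
            localStorage.setItem("offlineLoginsLeft", String(MAX_OFFLINE_LOGINS));
            return true;
        }
        const left = Number(localStorage.getItem("offlineLoginsLeft") || 0);
        if (left > 0) {
            localStorage.setItem("offlineLoginsLeft", String(left - 1));
            return true; // one offline login used
        }
        throw new Error("Offline login quota exhausted; please connect.");
    }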
(Flowcharts omitted.)
This may be a bit of a tricky one (for me at least, but you folks may be smarter). I need to capture the timestamp of exactly when a reader clicks a link in an email. However, this link is not a hyperlink to another webpage; it is a link formatted as a GET request with query strings that will automatically submit a form.
Here is the tricky part: the form processing is not handled by PHP or .NET or any other server-side language. It is a form engine hosted and managed by a cloud-based marketing platform that captures and displays the form submission data (so I have no access to the code behind the scenes).
Now, if this weren't an email, I'd say it is simple enough to just use JavaScript. However, JavaScript doesn't work well with email, if at all (I'm assuming there are at least some email clients out there that support JavaScript).
How would you go about capturing the timestamp of when the link is clicked, without using any type of scripting? Is this even possible?
The best solution I could come up with is to have the link point to an intermediate page with JavaScript that captures the timestamp and then redirects to the form submission. The only problem is that this captures the timestamp of the page load, not of the actual click.
There is no way to do what you want "without any type of scripting". If no scripting is done, no functionality may be added or changed.
The best option is the very one you suggested: use an intermediary page that records the request time. Barring unusual circumstances (such as a downed server), the time between a link being clicked and the request reaching the server will be less than one second.
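For illustration, here is a sketch of that intermediary using Node and Express (an assumed stack; any server-side language works, and the form engine URL is a placeholder):

    // Log the click time server-side, then forward to the hosted form engine.
    const express = require("express");
    const app = express();

    app.get("/track", function (req, res) {
        // The request time is within ~1s of the actual click in the email.
        console.log("Link clicked at " + new Date().toISOString() + ", id=" + req.query.id);
        const qs = new URLSearchParams(req.query).toString();
        res.redirect("https://forms.example.com/submit?" + qs);
    });

    app.listen(3000);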
Do you really need a higher resolution or accuracy than ~1s? What additional gain is there from having results on the order of milliseconds or microseconds? I can't imagine a scenario in which you'd have tangible benefits from such a thing, though if you do have one I'd love to hear it.
My initial thought was to say that what you're trying to do can't be done without some scripting capability, but I suppose it truly depends on what you're trying to accomplish overall.
While what you have written leaves some ambiguity about what you're trying to accomplish, I'm going to make an assumption: you're trying to record interaction with a particular email.
Depending on the desired resolution, this is very possible; in fact, it is something most businesses have been doing for years.
To begin my explanation of the technique, consider this common functionality in most mail clients (web-based or otherwise):
"Click here to display images below"
The reason this exists is that the images loaded into the message you're reading often come from a remote server not hosted by the mail client. In the process of requesting an image, a great deal of information about you is given to that outside server via the HTTP headers of your request, including, among other things, a timestamp for the request. The button above is there to prevent that from happening without your consent.
That said, it's also important to note how other mail providers, most notably Gmail, are approaching this now. The aforementioned technique is so common (used by advertisers and by other, more nefarious parties for phishing, malware, etc.) that Google has decided to cache all mail images itself. The result is that the email looks exactly the same, but all requests for images are directed at Google's cached copies instead.
Long story short, you can get a timestamp marking interaction with an email via an image request, but such metric collection in general, regardless of whether it's done in the manner I've outlined, is something mail clients try to prevent, at least at some level.
EDIT - To relate this back to what you mention in your question and your idea of an intermediary page: you could skip that page and instead point an image request at a server you control.
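A hedged sketch of that image endpoint, again assuming Node and Express. It logs the request time and answers with a 1x1 transparent GIF, which you would reference in the email as something like <img src="https://yourserver.example.com/pixel?id=123">:

    const express = require("express");
    const app = express();

    // The classic 43-byte transparent 1x1 GIF, base64-encoded.
    const PIXEL = Buffer.from(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64"
    );

    app.get("/pixel", function (req, res) {
        // Logged when the mail client fetches the image.
        console.log("Image requested at " + new Date().toISOString() + ", id=" + req.query.id);
        res.set("Content-Type", "image/gif");
        res.send(PIXEL);
    });

    app.listen(3000);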
I am using Ajax on my website, and in order to use it I have to write the name of the file, for example:
    var id = "123";
    $.getJSON("jquerygetevent.php?id=" + id, function (json) {
        // do something
    });
How can I protect the URL? I don't want people to see it and use it...
That is a limitation of using client-side scripts: there is no real way to hide the URL from the user. There are many ways to make the code less readable (minification etc.), but in the end an end user can still view it.
Hi Ron, and welcome to the internet. The internet was (to quote Wikipedia on the subject):
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
Because of these origins, and because of the way the protocols surrounding HTTP resource identification (such as URLs) were designed, there's not really any way to prevent this. Had the internet been developed as a commercial venture from the start (think AOL), then they might have been able to get away with preventing the browser from showing the URL to the user.
So long as people can "view source", they can see the URLs in the page you're referring them to. The best you can do is obfuscate the links using JavaScript, but at best that's merely an annoyance: whatever can be decoded for the user can be decoded for a bot.
Welcome to the internet, may your stay be a long one!
I think the underlying issue is why you want to hide the URL. As everyone has noted, there is no way to hide the actual resolved URL: once it is triggered, Firebug gives you everything you need to know.
However, is the purpose to prevent a user from re-using the URL? Then you can generate one-time, session-relative URLs that can only be used within the given HTTP session. If you cut and paste such a URL to someone else, they would be unable to use it. You could also have it expire if they try to refresh. This is done all the time.
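A minimal sketch of the one-time-URL idea; the in-memory token store and the endpoint name are hypothetical, and in production you would scope the tokens to the HTTP session:

    // Issue a random single-use token per page render and reject reuse.
    const crypto = require("crypto");
    const validTokens = new Set();

    function issueUrl() {
        const token = crypto.randomBytes(16).toString("hex");
        validTokens.add(token);
        return "/jquerygetevent.php?token=" + token; // replaces the guessable id
    }

    function handleRequest(token) {
        if (!validTokens.delete(token)) {
            throw new Error("Invalid or already-used token");
        }
        // ...serve the JSON...
    }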
Is the purpose to prevent the user from hacking your URL by providing a different query parameter? Well, you should be handling that on the server side anyway, checking whether the user is authorized. Even before activating the link, the user can use a tool like Firebug to edit your client-side code as much as they want. I've done this several times to live sites when they weren't functioning the way I wanted :)
UPDATE: A HORRIBLE hack would be to drop an invisible Java applet onto the page. Applets can also trigger requests and interact with JavaScript, and any logic could be included in the applet code, invisible to the user. This, however, introduces additional browser-compatibility issues, etc., but it can be done. I'm not sure whether it would show up in Firebug; a user could still monitor outgoing traffic, but it might be less obvious. It would be better to make your server side more robust.
Why not put some form of security on your PHP script instead? Check a session variable or something like that.
EDIT in response to comment:
I think you've maybe got the cart before the horse somehow. URLs are by nature public addresses for resources. If the resource shouldn't be publicly consumable except in specific instances (i.e. from within your page), then it's a question of defining and implementing security for the resource. In your case, if you only want the resource called once, why not place a single-use access key in the calling page? Then the resource will only be delivered when the page is refreshed. I'm unsure why you'd want to do this, though: does the resource expose sensitive information? Is the script perhaps very heavy on the server to run? And if the resource should only be used to render the page once, rather than update it once it's rendered, would it perhaps be better to implement it server-side?
You can protect (hide) anything on the client only by encrypting/encoding it into a format too complicated for a real human to read.