How to add a script to a page on the public internet? - javascript

I would like to test what would happen if I were to add a script that I wrote myself to a page on the public internet that I'm viewing in a web browser, such as Internet Explorer (in this specific case).
This is not something I would want to do in a production system, but I would like to test a page-enhancing script with many existing pages. I do not want to modify the page in any way for other viewers, I just want to see what would happen if my script were to become part of the page.
It would be fine if there were some tool that could be used to intercept the page from the server before loading it into the browser and add the script tag there.
It would also be good to be able to modify the page in the browser itself, though this would probably be less desirable as there might be a different way to do this in each browser.
I do realize that I could simply download a page manually with all its related resources and then run a modified copy from a local server, but that would be rather cumbersome.

You can use Fiddler to manipulate responses between the server and your client/browser, adding arbitrary JavaScript (for example) through "FiddlerScript".
See http://docs.telerik.com/fiddler/knowledgebase/fiddlerscript/modifyrequestorresponse for more info.
For example, in the OnBeforeResponse event you can replace a specific JavaScript file with another via:
if (oSession.PathAndQuery == "/version1.js") {
    oSession["x-replywithfile"] = "version2.js";
}
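To inject a script of your own rather than replace an existing one, a sketch along these lines can go in the same OnBeforeResponse handler (the host name and script URL are placeholders):
static function OnBeforeResponse(oSession: Session) {
    // Only touch HTML responses from the site under test (hypothetical host).
    if (oSession.HostnameIs("www.example.com") &&
        oSession.oResponse.headers.ExistsAndContains("Content-Type", "text/html")) {
        oSession.utilDecodeResponse();  // decompress/unchunk so the body can be edited
        oSession.utilReplaceInResponse("</body>",
            "<script src='http://localhost/my-test-script.js'></script></body>");
    }
}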


How to run tampermonkey script that loads multiple pages and waits for their content?

I want to create a tampermonkey script that is registered on one page (call it A). From this page (it is an overview page), it extracts a series of links (say [B, C, D]). This is working so far.
Now, I want to do the following:
1. Navigate to location B.
2. Wait for the DOM to become ready, so I can extract further information.
3. Parse some information from the page and store it in some object/array (call it out).
4. Repeat steps 1 through 3 with the URLs C and D.
5. Go back to address A.
6. Copy the content of out to the clipboard.
Step 1 I can achieve with window.open or window.location, but I am currently failing at steps 2 and 3.
Is this even possible? I am unsure whether navigating to another page will terminate and unload the current script.
Can you point me in the right direction to solve this?
If you have a better idea, I am willing to hear it. The reason I am using the browser with Tampermonkey is that the pages use some sort of CSRF protection that will not allow me to use e.g. curl to extract the relevant data.
I have seen this answer. As far as I understand it, it would start a new script on each invocation, and I would have to pass all information along manually using URL parameters. It might be doable (unless the server messes with the params) but seems like some effort. Is there a simpler solution?
To transfer information, there are a few options.
URL parameters, as you mentioned - but that could get messy
Save the values and a flag in Tampermonkey's shared storage using GM_setValue
If you open the windows to scrape using window.open, you can have the child windows call .postMessage while the parent window listens for messages (including for those from other domains). (BroadcastChannel is a nice flexible option, but it's probably overkill here)
It sounds like your userscript needs to be able to run on arbitrary pages, so you'll probably need // @match *://*/*, as well as a way to indicate to the script that the page that was automatically navigated to is one to scrape.
When you want to start scraping, open the target page with window.open. (An iframe would be more user-friendly, but that will sometimes fail due to the target site's security restrictions.) When the page opens, your userscript can have the target page check if window.opener exists, or if there's a URL parameter (like scrape=true), to indicate that it's a page to be scraped. Scrape the information, then send it back to the parent using .postMessage. Then the parent can repeat the process for the other links. (You could even process all links in parallel, if they're on different domains and it won't overload your browser.)
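A minimal sketch of that flow, assuming a hypothetical scrape=true flag, a hypothetical #info element on the child pages, and GM_setClipboard granted for the final copy:
// Parent page (A): open each link in turn and collect the replies.
var results = [];
function scrapeNext(links) {
    if (links.length === 0) {
        GM_setClipboard(JSON.stringify(results)); // requires @grant GM_setClipboard
        return;
    }
    var child = window.open(links[0] + '?scrape=true');
    window.addEventListener('message', function handler(e) {
        // In real code, check e.origin / e.source before trusting the data.
        results.push(e.data);
        child.close();
        window.removeEventListener('message', handler);
        scrapeNext(links.slice(1));
    });
}

// Child pages (B, C, D): detect the flag and report back to the opener.
if (window.opener && location.search.indexOf('scrape=true') !== -1) {
    var data = document.querySelector('#info').textContent; // hypothetical selector
    window.opener.postMessage(data, '*');
}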
Waiting for the DOM to be ready should be trivial. If the page is fully populated at the end of HTML parsing, then all your script needs is to not have @run-at document-start, and it'll run once the HTML is loaded. If the page isn't fully populated at the end of HTML parsing and you need to wait for something else, just have a timeout loop until the element you need exists.
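For the second case, a simple version of such a loop (the selector is hypothetical) could look like:
// Poll until the element exists, then hand it to the callback.
function waitFor(selector, callback) {
    var el = document.querySelector(selector);
    if (el) {
        callback(el);
    } else {
        setTimeout(function () { waitFor(selector, callback); }, 250);
    }
}
waitFor('#info', function (el) {
    // scrape el here
});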
Regarding "CSRF protection that will not allow me to use e.g. curl to extract the relevant data": rather than a userscript, running this on your own server would be more reliable and somewhat easier to manage, if it's possible. Consider checking whether something more sophisticated than curl could work - for example, Puppeteer, which can emulate a full browser.
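If a server-side approach turns out to be possible, a rough Puppeteer sketch (URLs and selector are placeholders) would be:
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const results = [];
    for (const url of ['https://example.com/B', 'https://example.com/C', 'https://example.com/D']) {
        await page.goto(url, { waitUntil: 'networkidle2' }); // wait for the page to settle
        await page.waitForSelector('#info');                 // hypothetical selector
        results.push(await page.$eval('#info', el => el.textContent));
    }
    console.log(results);
    await browser.close();
})();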

How do I create a Local html file to open webpage and inject a JavaScript function

I have a legacy web application that we are not allowed to modify yet. We need to add a new function to the application in the short term. We have been told that we may modify the webpage with any local scripts we want, but we have to wait 4 months before they will unlock the application.
So my goal is to create a webpage locally, click on that local HTML file, have it open the URL of the legacy application, and then inject the new JavaScript function into the application.
On "your" page, use an iFrame to "import" the page you cannot edit, on your page add whatever modifications you need/want.
If there is no server side scripting on the page, then copy the page source to your page, and add whatever you want to it. It is difficult to give you a focused answer without having access to or more information about the actual legacy page.
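As a sketch of the iframe idea, with hypothetical URLs and file names; note this only works if the frame ends up same-origin with your page (e.g. both served from the same host), otherwise the browser blocks access to the frame, as the next answer points out:
<!DOCTYPE html>
<html>
<body>
    <iframe id="legacy" src="http://legacy.example.com/app" style="width:100%; height:90vh; border:0"></iframe>
    <script>
        document.getElementById('legacy').addEventListener('load', function () {
            // Only possible when the frame is same-origin with this page.
            var frameDoc = this.contentDocument;
            var s = frameDoc.createElement('script');
            s.src = 'newFunction.js'; // hypothetical local script with the new function
            frameDoc.body.appendChild(s);
        });
    </script>
</body>
</html>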
It can't be done directly, since browsers prevent cross-site scripting: injecting JS from the local machine will fail with same-origin errors. The only workaround I know is to use the developer tools: open the console, and you can type your JavaScript there and run it directly.

Changes in Javascript not being reflected while debugging Web page

I am working on a web page using the NetBeans IDE and use Firefox for debugging/testing. Whenever I make changes to the JavaScript, the changes are not reflected on the web page; the source code shows the obsolete code.
Every time I make changes, I make sure to restart my nginx server before opening the browser. PHP seems to work fine this way, but the JavaScript is not in sync with my changes to the code.
Please provide me a solution to this problem.
The problem is that your browser is caching your files. You can clear the browser cache or set the browser to stop caching files.
Another way to avoid browser caching is to append something (a timestamp or id) after a '?' at the end of your script reference in the HTML file:
<script src='script.js?0001'></script>
Any time you want the browser to request your file again, just change this value.
To avoid caching of files, it is better to handle it programmatically by adding the proper headers, like Cache-Control and max-age. However, these headers may be handled differently by different browsers like IE, Firefox, etc.
The simplest way is to trick the browser by adding a random query parameter so that the browser will believe it is a different request:
<script src='myScript.js?dummyParam=12001'></script>
Here, 12001 should be regenerated after every change, for example from a timestamp or some other random value.
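During development, the same trick can be done on the client by loading the script dynamically and using the current time as the dummy parameter (the file name is a placeholder):
// Append a cache-busting value that changes on every page load.
var s = document.createElement('script');
s.src = 'myScript.js?v=' + Date.now();
document.head.appendChild(s);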

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a web site? For example, the right-side Activity/Chat bar on Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
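As a rough illustration of such a hook (the URL, element IDs, and fragment endpoint are made up), with a plain link as the no-JS fallback:
<a id="profile-link" href="/profile/123">View profile</a>
<div id="content"><!-- gets replaced without a full page load --></div>
<script>
    document.getElementById('profile-link').addEventListener('click', function (e) {
        e.preventDefault(); // skip normal navigation when JS is available
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/profile/123/fragment'); // endpoint returning just the HTML fragment
        xhr.onload = function () {
            document.getElementById('content').innerHTML = xhr.responseText;
        };
        xhr.send();
    });
</script>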
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook, you must save the entire conversation on the server side (for example in a database). Then, when the user changes the page, you can restore the state of the conversation on the server side (with PHP) or by querying your server like you do for the chat (Javascript + AJAX).
This isn't done in Javascript. It needs to be done using your back-end scripting language.
In PHP, for example, you use Sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as session or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. For example: if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T will automatically reopen my last web page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.

Disabling loading specific JavaScript files with Firefox

I am looking for a way to prevent loading a specific JavaScript file on a website for any website of choice, with Firefox.
For example:
Say I don't want to load jQuery (blocked when the page loads, not 'disabled' afterwards). I then want to be able to specify that
http://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js
should not be loaded. The browser should completely ignore it, so I can debug the other JavaScript on the website. I don't have access to the domain directly, which is why I am trying to do this via the browser.
So for clarity :) I don't want to disable all scripts from a certain domain, but want to be able to disable specific scripts. There may be 10 scripts on one domain, and killing all 10 of them is not what I want; in that case I want to prevent loading only one.
Is there a way to do so?
Several options:
Use the Addon "Adblock Plus". It will probably still accesses the js but does not execute it.
Use the Addon "Greasemonkey", which - when cofigured right - does not even touch the js-url. But its generally harder to configure right. ;)
Have a look at Firefox's buildin security policies: http://kb.mozillazine.org/Security_Policies Here you can block javascript on an url or even function-level
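For the Greasemonkey route, one possible sketch relies on Firefox's non-standard beforescriptexecute event together with // @run-at document-start (the event has since been removed from newer Firefox versions, so treat this as a period-specific approach):
// Cancel a specific external script before it runs.
window.addEventListener('beforescriptexecute', function (e) {
    var src = e.target.src || '';
    if (src.indexOf('ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js') !== -1) {
        e.preventDefault();   // stop this script from executing
        e.stopPropagation();
        if (e.target.parentNode) {
            e.target.parentNode.removeChild(e.target);
        }
    }
}, true);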
Go to your hosts file: C:\Windows\System32\drivers\etc\hosts (Windows) or /etc/hosts (Linux).
Add:
127.0.0.1 ajax.googleapis.com (separated by a tab)
Then reopen your browser.
This way the jQuery file will fail to load.
