function publish(text) {
    // Prepend the fetched help text to the help container.
    $('#helpdiv').prepend(text);
}
function get_help(topic) {
    // Fetch the help page and hand the response to publish().
    $.get(topic, publish);
}
<p>Hi. click here for more help.</p>
<div id="helpdiv"></div>
I've inherited the chunk of HTML and JavaScript above (snippet). It is (or was) going to be used as local help. Currently it is online only and it works fine. However, when I copy the files locally, I get "Permission Denied" in Internet Explorer, and Chrome does nothing when I "click here for more help". What it's supposed to do is load the help content from inline-help.html and display it in the helpdiv div. Now here is the kicker: if I take the same files, copy them to inetpub on my PC, and load them as http://localhost/hello.html, it functions perfectly.
Presumably this is a security thing where the "local" zone isn't allowing me to load files from the user's hard drive? But I'm not really sure what's going on, and I would like to understand this problem further and potentially come up with a workaround.
Any insight is greatly appreciated.
jQuery's $.get uses XMLHttpRequest, which unfortunately doesn't work on local files. If you really need to fetch local data (or data from a different domain) asynchronously, you can use dynamic script tags. However, that means the data file has to be reformatted as JSON data.
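A minimal sketch of that approach, with hypothetical file and callback names (showHelp and inline-help.js are not part of the original snippet):
function showHelp(data) {
    // Called by the dynamically loaded script.
    $('#helpdiv').prepend(data.html);
}
function get_help(topic) {
    // Script tags are not subject to the XMLHttpRequest restrictions,
    // so this also works from file:// URLs.
    var script = document.createElement('script');
    script.src = topic + '.js'; // e.g. get_help('inline-help')
    document.body.appendChild(script);
}
The data file itself (inline-help.js) would then wrap the help content in a call such as showHelp({ html: "<p>...</p>" });.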
Your browser isn't letting the JavaScript fetch files when the page itself is loaded locally (using the file:/// access method), but when you load it from http://localhost/ it works fine.
You need to either develop on a website, or use your localhost server.
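If you need a quick local server for development, here is a minimal sketch using Node.js (the file names and port are placeholders):
// serve.js: a tiny static file server for local development.
// Run with `node serve.js`, then open http://localhost:8000/hello.html
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function(req, res) {
    // Map the request path onto the current directory.
    var file = path.join(__dirname, req.url === '/' ? 'hello.html' : req.url);
    fs.readFile(file, function(err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        res.end(data);
    });
}).listen(8000);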
I'm launching Firefox via the command line and I'd like to launch a specific Firefox profile with a proxy. According to this answer on Stack Overflow, Firefox proxy settings are stored in prefs.js in the Firefox profile folder, and it is necessary to edit this file to launch Firefox with a proxy.
I've edited the file as follows:
user_pref("network.proxy.ftp", "1.0.0.1");
user_pref("network.proxy.ftp_port", 00000);
user_pref("network.proxy.gopher", "1.0.0.1");
user_pref("network.proxy.gopher_port", 00000);
user_pref("network.proxy.http", "1.0.0.1");
user_pref("network.proxy.http_port", 22222);
user_pref("network.proxy.no_proxies_on", "localhost, 1.0.0.1");
user_pref("network.proxy.socks", "1.0.0.1");
user_pref("network.proxy.socks_port", 00000);
user_pref("network.proxy.ssl", "1.0.0.1");
user_pref("network.proxy.ssl_port", 00000);
user_pref("network.proxy.type", 1);
Note: the IP address and port used above are for demonstration purposes.
However, I'm encountering two problems:
1) Firefox completely ignores these settings and launches without any proxy at all.
2) When Firefox exits, the edits to the file are reverted/deleted.
Note: When I edited the text file above, Firefox was not running. I know there's a disclaimer at the top of prefs.js:
If you make changes to this file while the application is running, the
changes will be overwritten when the application exits.
But there were no live instances of Firefox running at the time I edited the above file.
Manually creating different FF Profiles (as suggested by another user) with different proxies is not an option as everything needs to be done programmatically, without manual intervention.
Does Firefox still support setting a proxy via prefs.js? If not, what is the current working solution to launch Firefox via the command line with a proxy in Java?
Thanks
A proxy auto-config (PAC) file is what you are looking for.
Define a file name.pac that contains the JavaScript function
function FindProxyForURL(url, host)
Inside the file you can use any JavaScript you'd like to decide what proxy to use. Set the path to your .pac file in the Firefox settings, under automatic proxy configuration. Remember to use a file URL.
To set up automatic proxy switching, simply configure Firefox to point at a single .pac file, and overwrite that file programmatically every time you want the proxy to change. You could keep copies of all the options and copy the desired one over the target file right before launching.
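For example, a sketch of that overwrite step using Node.js (the file names are placeholders; the same copy can be done from Java or any other language):
// Copy a prepared PAC variant over the single file Firefox reads.
var fs = require('fs');
fs.writeFileSync('current.pac', fs.readFileSync('pac-options/office.pac'));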
An example of a super simple pac file is this:
function FindProxyForURL(url, host) {
    return 'PROXY proxy.example.com:8080; DIRECT';
}
It will always return the identical proxy for all endpoints.
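A slightly richer sketch, with placeholder host names, that sends internal hosts direct and routes everything else through the proxy:
function FindProxyForURL(url, host) {
    // Bypass the proxy for localhost and a hypothetical internal domain.
    if (host == 'localhost' || dnsDomainIs(host, '.internal.example.com')) {
        return 'DIRECT';
    }
    // Everything else goes through the proxy, falling back to a direct
    // connection if the proxy is unreachable.
    return 'PROXY proxy.example.com:8080; DIRECT';
}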
Passwords are not explicitly supported by the PAC standard, but there are different ways to approach this. Firefox will prompt you for a login if it thinks it needs one, and you can also embed the credentials into the URL (username:password@proxy.example.com). Additionally, a tool like Proxy Login Automator can supply passwords and dynamically set the proxy without having to fight with Firefox.
I have the following simple jQuery:
$.get('Data.csv', function(data) {
    alert(data);
});
Data.csv is stored in the same folder as the HTML file that accesses it.
If I run this in any browser when the URL is a domain (i.e. www.mysite.com/path/to/file), then the alert displays the contents of Data.csv as a string.
If I add a hosts-file entry that points a hostname at the local folder (i.e. host.mysite.com/path/to/file), then the alert displays the contents of Data.csv as a string in all browsers.
If I run this in IE 9 when the URL opens the file locally (i.e. c:\path\to\file), then the alert displays the contents of Data.csv as a string.
However, if I run this in FF or Chrome when the URL opens the file locally (i.e. file:///c:/path/to/file), then the alert displays [object XMLDocument].
Does anyone know why this is and how to open the local file as a string in FF and Chrome?
n.b. - I have tested this in order to rule out cross-domain security issues. I don't think that is the cause, because otherwise it would not return the content of the CSV file at all.
Thanks in advance.
You're running into issues with the Same Origin Policy. If you look into your browser's console, you'll see something along the lines of Origin null is not allowed by Access-Control-Allow-Origin.
There are two possible fixes:
Run your stuff on a local web server like XAMPP, MAMP or the like.
Disable web security on Chrome startup (the --disable-web-security flag), which you obviously don't want to do in real life. It wouldn't work in FF, either. So, stick with 1. ;)
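Separately, if the request itself succeeds but you get [object XMLDocument] instead of a string (as in the question), you can force jQuery to skip its content-type sniffing by passing an explicit dataType; a minimal sketch:
// Ask jQuery to hand back the raw response text instead of inferring
// (and XML-parsing) the type from the file:// response.
$.get('Data.csv', function(data) {
    alert(data); // data is now a plain string
}, 'text');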
I embed a .swf file in an HTML page, and then embed that HTML page itself in another HTML page.
In the .swf file I call
ExternalInterface.call("function() { return window.location.toString(); }");
The problem is that sometimes I seem to get the window location of the embedded HTML page, and sometimes I get the address of the main HTML page.
All I am after is reliability: I want to get the same address every time. I haven't even been able to work out when it returns which location. I wonder if it is some sort of browser-related mystery!
Thanks for any help
Cheers!
Ali
Using window.top.location.href should always give you the address the user sees in their location bar. Be wary of using .toString() on DOM objects in older versions of Internet Explorer.
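A minimal sketch of the ActionScript side, assuming the SWF's page and the top page live in the same domain (otherwise the browser blocks access to window.top.location):
// The string is evaluated as JavaScript in the hosting page; asking for
// window.top returns the outermost frame's address, not the embedded page's.
var topUrl:String = ExternalInterface.call(
    "function() { return window.top.location.href; }"
);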
I have an HTML page on my localhost: get_description.html.
The snippet below is part of the code:
<input type="text" id="url"/>
<button id="get_description_button">Get description</button>
<iframe id="description_container" src="#"></iframe>
When the button is clicked, the src of the iframe is set to the URL entered in the textbox. The pages fetched this way are very big, with lots of linked files. All I am interested in is a block of text contained in a <div id="description"> element.
Is there a way to avoid downloading the resources linked from the page that loads into the iframe?
I don't want to use curl because the data is only available to logged-in users, and the steps needed to get the content with curl are too complicated. The iframe is simple because I use it on a box whose browser sends the right cookies to identify the request as coming from a logged-in user, but it is very wasteful to download nearly 1 MB of data just to keep 1 KB of it and throw out the rest.
Edit
If the proposed method only works in Firefox, that is fine, so I added the Firefox tag. Also, it is possible that the answer actually lies in the realm of Firefox add-on techniques, so I added that tag as well.
The problem is not that I cannot get at what I'm looking for, rather, the problem is the easy iframe method is wasteful.
I know that Firefox does allow loading only the text of a page. If you open a page and press Ctrl+U you are taken to the 'view page source' window. There, links behave as normal and are clickable; if you click a link in source view, the source of the new page is loaded into the view-source window without the linked resources being downloaded, which is exactly what I'm trying to get. But I don't know how to access this behaviour.
Another example is the Adblock add-on. It somehow kills elements before they get loaded. With plain JavaScript this is not possible, because it is triggered too late to intervene in good time.
The Same Origin Policy forbids a web page from accessing the contents of any page in a different domain, so basically you cannot do that.
However, it seems that some browsers allow access to another page's content if you are trying to access it from a local web page, which seems to be your case.
Safari and IE 6/7/8 are browsers that allow a local web page to do so via XMLHttpRequest (source: Google Browser Security Handbook), so you may want to use one of those browsers to do what you need (note that future versions of those browsers may not allow this anymore).
Apart from this solution, I only see two possibilities:
If the web pages you need to fetch content from are somehow controlled by you, you can create a simpler interface to let other web pages get the content you need (for example by allowing JSONP requests).
If the web pages you need to fetch content from are not controlled by you, the only solution I see is to fetch the content server-side, logging in from the server directly (I know that you don't want to do so, but I don't see any other possibility if the previous ones are not practicable).
Hope it helps.
Actually, I've seen cross-domain jQuery .load requests before, here: http://james.padolsey.com/javascript/cross-domain-requests-with-jquery/
The author claims that code like the following, found on that page,
$('#container').load('http://google.com'); // SERIOUSLY!

$.ajax({
    url: 'http://news.bbc.co.uk',
    type: 'GET',
    success: function(res) {
        var headline = $(res.responseText).find('a.tsh').text();
        alert(headline);
    }
});

// Works with $.get too!
would work. (The BBC code might not work because of the recent redesign, but you get the idea)
Apparently it is using YQL wrapped in a jQuery plugin to do the trick. Now, I cannot say I fully understand what he is doing there, but it appears to work and fits the bill. Once you load the data, I suppose it is a simple matter of filtering out the parts you need.
If you prefer something that works at the browser level, may I suggest Mozilla's Jetpack framework for lightweight extensions. I've not yet read the documentation in its entirety, but it should contain the APIs needed for this to work.
There are various ways to go about this with AJAX; I'm going to show the jQuery way for brevity as one option, though you could do this in vanilla JavaScript as well.
Instead of an <iframe> you can just use a container, let's say a <div> like this:
<div id="description_container"></div>
Then to load it:
$(function() {
    $("#get_description_button").click(function() {
        // Load only the #description element from the entered URL.
        $("#description_container").load($("#url").val() + " #description");
    });
});
This uses the .load() method, which takes a string in the format .load("url selector"): it fetches the page, picks out the element matching the selector, and places its content inside the container you're loading into, in this case #description_container.
This is just the jQuery route, mainly to illustrate that yes, you can do what you want, but you don't have to do it exactly like this; the point is getting what you want from an AJAX request rather than from an <iframe>.
Your description sounds like you are fetching pages from the same domain (you said that you need to be logged in and have session credentials), so have you tried an asynchronous request via XMLHttpRequest? It might complain if the HTML on a page is particularly messed up, but you should still be able to get the raw text via .responseText and extract what you need with a regex.
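A minimal sketch of that approach (the URL is a placeholder, and the regex naively assumes the description div contains no nested divs):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/path/to/big-page.html', true); // hypothetical same-domain URL
xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Naive extraction of the block the question is after.
        var match = xhr.responseText.match(
            /<div id="description">([\s\S]*?)<\/div>/
        );
        if (match) {
            document.getElementById('description_container').innerHTML = match[1];
        }
    }
};
xhr.send();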
What would I need to do to get this example running on my machine?
http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_httprequest_js (page no longer available)
I'm looking to access the XML file hosted on w3schools (and not move it to my machine), but run the HTML and JavaScript code on my machine. I tried changing the third-to-last line from:
<button onclick="loadXMLDoc('note.xml')">Get XML</button>
to:
<button onclick="loadXMLDoc('http://www.w3schools.com/ajax/note.xml')">Get XML</button>
thinking this would make it work, but it didn't seem to help. Any suggestions?
Just put the full URL into your browser window, which will let your browser fetch it, then copy/paste and save it locally. JavaScript won't fetch stuff from outside the domain it's served from (without a fair bit of extra work), due to the Same Origin Policy (a security feature).
You can't go cross-domain using AJAX. You should move the XML file to the same server that the site files are stored on and request it from there.
https://developer.mozilla.org/en/Same_origin_policy_for_JavaScript
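Once note.xml sits next to your HTML file, a minimal reconstruction of loadXMLDoc (the 'output' element id is a placeholder, since the original w3schools markup is no longer available):
function loadXMLDoc(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true); // now a same-origin request, e.g. 'note.xml'
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // Show the raw XML text; the real example walked the DOM instead.
            document.getElementById('output').textContent = xhr.responseText;
        }
    };
    xhr.send();
}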
You need to use the following code in the function that does the AJAX:
try {
    // Ask the user to grant this page cross-domain read access (Firefox only).
    netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
} catch (e) {
    alert("Permission to read cross-domain data was denied.");
}
This only works in Firefox! There are other privileges that can be passed to enablePrivilege that may be useful. Note that enablePrivilege was removed in Firefox 17, so this approach only applies to old versions.