I've read several of the questions on this but am still a little confused.
For example: OK, I can't post examples because of hyperlink limitations
Here is my exact situation.
I have a site at mydomain.com
One of the pages has an iframe to another page at sub.mydomain.com
I am trying to prepare an onload script that redirects to mydomain.com if the page is not in an iframe, or if the parent domain of the page containing the iframe is not mydomain.com.
After the initial permission issues I realised the problem: subdomains count as separate domains.
One of the posts above says that the pages "could each use either foo.mydomain.com or just mydomain.com".
So I tried (for testing):
onload="document.domain='mydomain.com';alert(parent.location.href);"
This produced the following error (domains sanitised):
Error: Permission denied for <http://sub.mydomain.net> (document.domain=<http://mydomain.net>) to get property Location.href from <http://mydomain.net> (document.domain has not been set).
Source File: http://sub.mydomain.net/?pageID=1&framed=1
Line: 1
Removing the alert produces no errors.
Maybe I am going about this the wrong way, since I do not need to interact with the parent, just read its domain if there is one.
A nice simple top.domain. For read-only access there must be a way, so that people can prevent their own pages being used inside other people's sites.
You can't (easily) do this because of security restrictions.
This answer from #2771397 might point you in the right direction.
OK, while looking at the error console I still had open when I got home, a wee lightbulb lit up. I am pretty new to JavaScript (can you tell ;) but I thought "If it has try/catch"...
Well, here is a hack that at least gets the name of the top domain, and an example of how I will use it in my site to show content only if the page is framed within the correct domain.
Firstly the header will have the following partially PHP generated function:
function getParentDomain()
{
    try
    {
        // Reading top.location.href from a frame on another domain throws
        // a security exception, and the browser's error message happens to
        // contain the blocked URL.
        var wibble = top.location.href;
    }
    catch (err)
    {
        // If the blocked URL is ours, we know the parent is mydomain.com.
        if (err.message.indexOf('http://mydomain.com') != -1)
        {
            createCookie('IAmAWomble', 'value');
        }
    }
}
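(createCookie is not a built-in, by the way; a minimal sketch of such a helper, assuming a plain session cookie is all that's needed:)

function createCookie(name, value) {
    document.cookie = name + '=' + encodeURIComponent(value) + '; path=/';
}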
Basically the value will be something based on the PHP session I think. This will be executed at page load.
If the page is not within the proper site or if javascript is not enabled then the cookie will not be created.
PHP will then attempt to read the correct value from the cookie and show the content or an error message as appropriate.
I do see a slight flaw in this for the first visit, since the onload script will run after PHP has already generated the content, but I'm sure I can work around this somehow. I thought I'd post because this is at least what I was initially asking for: a way to read the URL of a parent site when it is in a different domain to the site in the frame.
IIUC you want to use the window.parent attribute: “A reference to the parent of the current window or subframe.”
Presumably, window.parent.document.location.host contains the container page's domain name (subject to the same-origin restrictions discussed above).
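A quick sketch of that suggestion; note the read still throws when frame and parent are on different origins, which is exactly the restriction discussed above:

if (window.parent !== window) {
    try {
        // Works only when frame and parent share an origin
        // (or a compatible document.domain).
        alert(window.parent.location.host);
    } catch (e) {
        alert('Parent is cross-origin; its host is not readable.');
    }
}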
I am trying to make a script that will compile statistics of my TikTok profile on my WordPress site. TikTok ironically sucks at giving you data about your profile; in my case I can't find a reliable way to check the total views I have on my profile from the native analytics page.
So I figured I would write a script that would take my TikTok page, scan through the HTML, find each page element that displays the view count on a post thumbnail, and add the value within that element to an array. Then I'd write a function that takes care of the math from there.
I thought that would be fairly easy, but I am a victim of the Dunning-Kruger effect as a freshman in Software Engineering.
From what I've looked at, the answer seems to lie in jQuery. I have written this so far.
var views = [];
jQuery.get("https://www.tiktok.com/#triplicata.html", function(data) {
    // The view-count element seems to change on different occasions;
    // in DevTools the selected element always shows "== $0", so the
    // ".view-count" selector below is only a stand-in for whatever
    // actually holds the number on each post thumbnail.
    jQuery(data).find(".view-count").each(function() {
        views.push(jQuery(this).text());
    });
});
When I try to run it in a tester I check the F12 console, and it says something along the lines of:
Access to XMLHttpRequest at 'https://www.tiktok.com/#triplicata.html' from origin 'https://fiddle.jshell.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I can't even test the rest of the code because I seem to be cut off at the gate just trying to get the HTML in the first place. I don't really know much about jQuery, but everything seems to be correct from what I've seen.
I don't know why you would do this; ever heard of viewsource?
<script src="https://cdn.jsdelivr.net/gh/Parking-Master/viewsource/vs.js"></script>
<script>
console.log(getSource('https://example.com'));
</script>
You can't access another website's source code from the browser for a good reason, and jQuery's get() method is subject to the same cross-origin restriction: the response is only readable if the server allows your origin via CORS.
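For completeness, the usual workaround is to fetch the page server-side, where the browser's CORS policy does not apply. A minimal sketch assuming Node.js (the URL is the one from the question, and the site may of course still refuse or redirect the request):

const https = require('https');

https.get('https://www.tiktok.com/#triplicata.html', function (res) {
    let html = '';
    res.on('data', chunk => html += chunk);
    res.on('end', () => console.log(html.length, 'bytes fetched'));
});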
Would it be possible to load an external page inside a container and replace text elements?
We work with ad campaigns and earn a percentage whenever a user signs up.
Can a script replace certain words? For instance “User” to “Usuario” or “Password” to “Contraseña” without affecting the original website or its functions.
Note: These links always pass through a redirection.
Example:
http://a2g-secure.com/?E=/0yTeQmWHoKOlN6zUciCXQwUzfnVGPGN&s1=
Note 2: Using an iframe is out of the question due to “Same-origin policy”.
I'm not sure if this answers your question, but you might find it useful.
(Perhaps you might give a step-by-step example of what you're trying to accomplish?)
If we assume that a browser retrieves page P from a proxy, which first fetches the content of page P from its actual home and then performs some transformation on that content before returning it to the browser, then what you're describing is a reverse HTTP proxy, a very well-known page-serving technique.
Rather than performing complex transformations at the server (which require specialized knowledge of the page layout), this technique is usually used to inject a single line into the retrieved source that calls a JavaScript file to actually perform the required transformation at the browser.
So in essence:
Browser requests Page P from Proxy 1.
Proxy 1 retrieves the actual Page P from its real home, Server 2.
Proxy 1 adds the line <script src="//proxy1.com/transform.js"></script> to the source of Page P.
Proxy 1 then returns the modified source of Page P to Browser.
Once the Browser has received the page content, the JavaScript file is also retrieved, which can then modify the page contents in any way required.
This technique can also solve your "same-origin policy" issue: load the iframe from a URL that points at the same server that served the parent (owning) page, and let that server act as the proxy, like:
http://example.com/?proxy_target=//server2.com/pageP.html
Thus, the browser only "sees" content from a single server.
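A minimal sketch of what Proxy 1 might look like, assuming Node.js with Express and node-fetch; proxy_target and transform.js are the placeholder names used above, not a real API:

const express = require('express');
const fetch = require('node-fetch');
const app = express();

app.get('/', async (req, res) => {
    // e.g. GET /?proxy_target=//server2.com/pageP.html
    const html = await (await fetch('http:' + req.query.proxy_target)).text();
    // Step 3 above: inject the transform script into the proxied page.
    res.send(html.replace('</head>',
        '<script src="//proxy1.com/transform.js"></script></head>'));
});

app.listen(8080);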
You would need to load the external page server-side, and then you can do whatever you want with it. You can do server-side string replacement, or you can do it later in JavaScript.
But remember that as soon as you drop a whole webpage into, for example, a div in your own page, the CSS from your page will affect it.
Plus, you would need to rewrite all the links in the document to absolute URLs; if the page depends on Ajax, there is pretty much no way to accomplish what you want to do.
If, on the other hand, the pages you will be loading are static HTML, it is possible, though there are a lot of things you need to take care of before you can actually present the page to the user, like adjusting links, URLs to stylesheets and so on (the link adjustment is sketched below).
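For instance, a hedged sketch of the link adjustment, assuming the fetched page came from a known baseUrl and was injected into a #container div (both names are made up here):

var baseUrl = 'http://example.com/'; // wherever the page really came from
$('#container a, #container img').each(function () {
    var attr = this.tagName === 'A' ? 'href' : 'src';
    var val = $(this).attr(attr);
    // Leave absolute and protocol-relative URLs alone; prefix the rest.
    if (val && !/^(https?:)?\/\//.test(val)) {
        $(this).attr(attr, baseUrl + val);
    }
});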
It seems you are trying to localize a website on the fly, using your server as a proxy for that content. Does it make sense? If that's the case, depending on the size of your operation, there are several proxy translation services out there (I'll name them if needed).
Basically, they scrape a website, providing a way for you to translate and host the translated content. Of course, this depends on your relationship with the content providers. You should also take this into consideration, since modifying content, even for translation, can be a copyright problem.
All things considered, if you trust the provider's javascript, the solution involves scraping the content, as mentioned in other answers, and serving that modified content. You really need to trust the origin...
Update per request:
http://www.easyling.com
http://www.smartling.com
http://www.motionpoint.com
http://www.lionbridge.com/solutions/translation-proxy/
http://www.sajan.com/translation-proxy-technology-and-traditional-website-translation-understanding-your-options/
They are all aimed at enterprise-grade projects, but I would say Easyling is the most accessible.
Hope this helps.
Using the .load() callback function, this will replace the text:
$(function() {
    $("#Content").load("http://example.com?user=Usuario", function() {
        // getParamValue() is a placeholder for however you read the
        // desired replacement from the query string.
        $(this).html($(this).html().replace("user", getParamValue()));
    });
});
For the redirection you can use:
// similar behavior as an HTTP redirect
window.location.replace("url");
// similar behavior as clicking on a link
window.location.href = "url";
The answer is NO, not without using a server-side proxy. For a really good overview of how to use a proxy, see this YUI page: https://developer.yahoo.com/javascript/howto-proxy.html (Be patient, as it will take time to load, but the illustrations are worth it!)
When I try to do this in jsfiddle to see what data the 3 callback parameters contain, the error below appears:
$(function() {
    $(this).load('https://stackoverflow.com/questions/36003367/load-external-page-and-replace-text', function(responseText, textStatus, jqXHR) {
        debugger;
    });
});
ERROR:
XMLHttpRequest cannot load https://stackoverflow.com/questions/36003367/load-external-page-and-replace-text.
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://fiddle.jshell.net' is therefore not allowed access.
I am trying to integrate with the FireShot API so that, given a URL, I can grab the HTML of another web page into a div and then take a screenshot of it.
Some things I will need to do after getting the HTML:
grab <link> & <script> from <head>
grab <body> into <div>
But first, it seems that when I try to do a
$.get("http://google.com", function(data) { ... });
I get a 200 in Firebug coloured red. I think it has to do with sites not allowing you to grab their pages with JS? In that case, is opening a window the best I can do? But then how might I control the other page with jQuery, or call fsapi on that page?
UPDATE
I tried to do something like the below to run code when the new window is ready, but Firebug says "Permission denied to access property 'document'".
var w = window.open($url.val());
// If I don't wait, I always get about:blank -- is there a better way around this?
setTimeout(function() {
    $(w.document).ready(function() {
        console.log(w.document.body);
    });
}, 1000);
I believe the cross-site security setup within JavaScript is blocking this. You'd likely have to proxy the content through your own domain.
There are a couple of other options, I think, for breaking the cross-site security constraints, but I'm not sure I'd promote them.
If the "another page" locates within the same domain of your hosting page, yes, you can. Please refer to jQuery's $().load() API.
Otherwise, you're disallowed to do so by the browser's Cross-Site Security Policy. At this moment, you can choose to use iFrame instead of DIV.
Some jQuery plugins, e.g. thickbox provides ability to load pages to appropriate container automatically.
Unless I am mistaken, I do not believe you can AJAX a page cross-domain (e.g. from domain1.com to domain2.com). To get around this, you can have a PHP "proxy" script that does the "getting" of the page and then passes it to JS.
For example, in JS you would get() http://mydomain.com/get/?domain=http://google.com and then do what you need to do!
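A sketch of the client side, assuming that hypothetical /get/ proxy endpoint exists on your own domain:

$.get('http://mydomain.com/get/?domain=' + encodeURIComponent('http://google.com'),
    function (html) {
        // html is now same-origin content that your script may inspect freely.
        console.log(html);
    });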
<script>
    function test() {
        alert("this should only be called after the browser is fully redirected?");
    }

    window.location = "http://google.com";
    test();
</script>
I'm redirecting the user to another page, guys, and I want to do something (call a function) only after the browser has fully redirected, but I can't get it to work. Is there any way for me to do so?
Is there any way for me to do so?
Nope. When the page has opened google.com, you no longer have any control over the browser window.
Once URL changes, all execution of the current page is stopped.
Not really. You'd have to put the page you were redirecting in a frame and keep the script in another frame, then watch for the content frame to get updated. But you'd also run into cross-domain issues because of the Same Origin Policy (which governs access to one document's contents [the new page] from another document [the one containing the script you're running]). So basically, you can't do this.
If you post a separate question saying what you're trying to achieve by running more code afterward, it may be that people can help you with alternative approaches.
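One common workaround, for what it's worth, is simply to run the code before navigating, since nothing after the location change is guaranteed to execute; a sketch using the snippet from the question, rearranged:

function test() {
    alert("runs before the browser leaves this page");
}

test();
window.location = "http://google.com";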
I don't believe this would work. It's a form of XSS/injection and therefore a security risk; I don't think the W3C allowed this sort of thing because it's very dangerous. As soon as the user is loading a different page, the browser ignores the previous one.
http://www.coderanch.com/t/439675/HTML-JavaScript/Javascript-call-AFTER-redirect
See that guy's answer for a visual.
I have a html page on my localhost - get_description.html.
The snippet below is part of the code:
<input type="text" id="url"/>
<button id="get_description_button">Get description</button>
<iframe id="description_container" src="#"></iframe>
When the button is clicked, the src of the iframe is set to the URL entered in the textbox. The pages fetched this way are very big, with lots of linked files. All I am interested in on the page is a block of text contained in a <div id="description"> element.
Is there a way to avoid downloading the resources linked from the page that loads into the iframe?
I don't want to use curl because the data is only available to logged-in users, and the steps needed with curl to get the content are too complicated. The iframe is simple, as I use this on a box which sends the right cookies to identify the request as coming from a logged-in user; but the problem is that it is very wasteful to fetch nearly 1 MB of data, keep 1 KB of it and throw out the rest.
Edit
If the proposed method just works in Firefox it is fine, so I added Firefox tag. Also, it is possible that the answer actually is from the realm of Firefox add-on techniques, so I added that tag as well.
The problem is not that I cannot get at what I'm looking for; rather, it is that the easy iframe method is wasteful.
I know that Firefox does allow loading only the text of a page. If you open a page and press Ctrl+U you are taken to the 'view page source' window. There, links behave as normal and are clickable; if you click a link in source view, the source of the new page is loaded into the view-source window without the linked resources being downloaded, which is exactly what I'm trying to get. But I don't know how to access this behaviour.
Another example is the Adblock add-on, which somehow kills elements before they get loaded. With plain JavaScript this is not possible, because a script is only triggered too late to intervene in good time.
The same-origin policy forbids any web page from accessing the contents of any other web page in a different domain, so basically you cannot do that.
However, it seems that some browsers do allow access to another page's content if you are trying to access it from a local web page, which seems to be your case.
Safari, IE 6/7/8 are browser that allow a local web page to do so via XMLHttpRequest (source: Google Browser Security Handbook) so you may want to choose to use one of those browsers to do what you need (note that future versions of those browsers may not allow to do so anymore).
Apart from this solution, I only see two possibilities:
If the web pages you need to fetch content from are somehow controlled by you, you can create a simpler interface to let other web pages get the content you need (for example by allowing JSONP requests; see the sketch after this list).
If the web pages you need to fetch content from are not controlled by you, the only solution I see is to fetch the content server-side, logging in from the server directly (I know that you don't want to do so, but I don't see any other possibility if the previous ones are not practicable).
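A minimal sketch of the JSONP idea, assuming you control the source site; the endpoint URL and callback name below are made up for illustration:

function showDescription(data) {
    // The controlled server wraps its response as showDescription({...}).
    document.getElementById('description_container').textContent = data.description;
}

var s = document.createElement('script');
s.src = 'http://other-site.example/description?callback=showDescription';
document.body.appendChild(s);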
Hope it helps.
Actually I've seen Cross Domain jQuery .load request before, here: http://james.padolsey.com/javascript/cross-domain-requests-with-jquery/
The author claims that codes like these found on that page
$('#container').load('http://google.com'); // SERIOUSLY!
$.ajax({
    url: 'http://news.bbc.co.uk',
    type: 'GET',
    success: function(res) {
        var headline = $(res.responseText).find('a.tsh').text();
        alert(headline);
    }
});
// Works with $.get too!
would work. (The BBC code might not work because of the recent redesign, but you get the idea)
Apparently it is using YQL wrapped into a jQuery plugin to do the trick. Now I cannot say I fully understand what he is doing there but it appears to work, and fits the bill. Once you load the data I suppose it is a simple matter of filtering out the data that you need.
If you prefer something that works at the browser level, may I suggest Mozilla's Jetpack framework for lightweight extensions. I've not yet read the documentations in its entirety but it should contain the APIs needed for this to work.
There are various ways to go about this in AJAX, I'm going to show the jQuery way for brevity as one option, though you could do this in vanilla JavaScript as well.
Instead of an <iframe> you can just use a container, let's say a <div> like this:
<div id="description_container"></div>
Then to load it:
$(function() {
    $("#get_description_button").click(function() {
        $("#description_container").load($("input").val() + " #description");
    });
});
This uses the .load() method, which takes a string in the format .load("url selector"), finds that element in the fetched page, and places its content inside the container you're loading into, in this case #description_container.
This is just the jQuery route, mainly to illustrate that yes, you can do what you want, but you don't have to do it exactly like this; the point is that you can get what you want from an AJAX request rather than via an <iframe>.
Your description sounds like you are fetching pages from the same domain (you said that you need to be logged in and have session credentials), so have you tried an async request via XMLHttpRequest? It might complain if the HTML on a page is particularly messed up, but you should still be able to get the raw text via .responseText and extract what you need with a regex.
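A minimal sketch of that approach, reusing the #description_container div idea from the previous answer; the URL is a stand-in, and the regex naively assumes no nested <div> inside #description:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://same-domain.example/big-page.html', true);
xhr.onload = function () {
    // Pull just the #description block out of the raw markup.
    var match = xhr.responseText.match(/<div id="description">([\s\S]*?)<\/div>/);
    if (match) {
        document.getElementById('description_container').innerHTML = match[1];
    }
};
xhr.send();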