Is it possible to get the last page URL from the history object? I've come across history.previous, but that's either undefined or protected from what I've seen.
Not from the history object, but from document.referrer. If you want to get the last actual page visited, there is no cross-browser way to do it without handling each browser's level of support separately.
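For example (note that document.referrer is an empty string when the user typed the address directly or the referrer was stripped):
// URL of the page that linked or navigated to this one
var lastPageUrl = document.referrer;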
You can't get at the history in any browser. That would be a serious security violation, since it would mean that any site could snoop around the browsing history of its users.
You might be able to write a Browser Helper Object for IE, or an equivalent for other browsers, that gives you access to it (similar to the Google Toolbar et al.), but that requires users to allow the application to run on their machine.
There are some "not-so-nice" tricks that can get at parts of the history, but I would not recommend them. Look up this link.
Of course, as people have said, it's not possible. However, what I've done to get around this limitation is to store every page loaded into localStorage, so you can create your own history ...
function writeMyBrowserHistory(historyLength=3) {
    // Store the last historyLength page paths for use in other pages
    var pagesArr = localStorage.myPageHistory
    if (pagesArr===undefined) {
        // nothing stored yet (property access returns undefined, not null)
        pagesArr = [];
    } else {
        pagesArr = JSON.parse(pagesArr);
    }
    pagesArr.push(window.location.pathname) // can use whichever part, but full url needs encoding
    if (pagesArr.length>historyLength) {
        // truncate the array to the last historyLength entries
        pagesArr = pagesArr.slice(pagesArr.length-historyLength,pagesArr.length)
    }
    // store it back
    localStorage.myPageHistory = JSON.stringify(pagesArr);
    // optional debug
    console.log(`my page history = ${pagesArr}`)
}
function getLastMyBrowserHistoryUrl() {
    var pagesArr = localStorage.myPageHistory
    var url = ""
    if (pagesArr!==undefined) {
        pagesArr = JSON.parse(pagesArr);
        // pop off the most recent url
        url = pagesArr.pop()
    }
    return url
}
Then, in a script on every page, call
writeMyBrowserHistory()
When you want to figure out the last page, call
var lastPageUrl = getLastMyBrowserHistoryUrl()
Note: localStorage stores strings only hence the JSON.
Let me know if I have any bugs in the code, as it's been beautified from the original.
I'm new to web and Chrome extension development, and am trying to use the localForage API to store data for my Chrome extension. Currently I lose everything every time I switch to a new tab, but I want the data to stay until the user explicitly clears everything out (so even over multiple sessions, etc.).
I decided to give the localForage API a go (since it's supposed to be like localStorage but simpler) and feel like I'm missing something important: I can setItem/getItem without issues, but it's not actually saving any of the data.
How exactly do I make sure my data stays while switching tabs (and over multiple browsing sessions)?
I'm using localforage.getItem/setItem. This seems to work as far as using the data goes, but it isn't doing anything as far as saving it when I switch tabs:
citeUrl(values, function(citation)
{
    count++;
    var string = citation[0];
    localforage.setItem(string, [citation[0],citation[1]], function(err, value)
    {   // Do other things once the value has been saved.
        console.log(value[0] + " " + value[1]);
    });
    /*var result = document.getElementById('cite[]');
    result.style.visibility = 'visible';
    result.style.display = 'block';
    result.innerHTML = citation[0];
    renderStatus(citation[1]);*/
    for(var i = 0; i < count; i++)
    {
        var newCitation = document.createElement('div');
        localforage.getItem(citation[i], function(err, value)
        {
            newCitation.innerHTML = value[1] + "<br>" + value[0];
        });
        newCitation.style.backgroundColor = "white";
        newCitation.style.marginBottom = "7px";
        newCitation.style.padding = "6px";
        newCitation.style.boxShadow = "0 2px 6px rgba(0,0,0,0.4)";
        newCitation.style.borderRadius = "3px";
        document.getElementById("answered[]").appendChild(newCitation);
    }
});
localForage is built upon localStorage and friends. The important part is that it is bound to the origin in which you access it.
Using it from a content script uses the website's origin, e.g. using your extension on http://example.com/test will bind the data to the http://example.com/ origin, and using your extension on http://example2.com/test will bind the data to a completely independent store attached to the origin http://example2.com/. What's more, the data is shared with (and may interfere with) the page's own storage.
As such, using localStorage (and by extension, localForage) in a content script does not give the intended results (though it may still be useful if you're trying to manipulate the page's own storage).
So, there are three ways to do it correctly:
1. If you must use localForage at all, use it in the background script. In that case, the data is bound to the origin chrome-extension://yourextensionidhere. However, this is not accessible from content scripts; you'll need to pass data around using Messaging, which is tiresome.
2. Better, extension-specific approach: use the native chrome.storage API, which is shared between all parts of the extension (see the sketch below). This API specifically exists to address the need-to-pass-data limitation, among other things.
3. (Win lots of internet points approach) Write a custom driver for localForage using the chrome.storage API. That would allow people to use localForage in Chrome extensions and apps with ease. Which is, apparently, something already attempted.
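For illustration, here's a minimal sketch of the chrome.storage approach; the citations key is just a placeholder, and the extension needs the "storage" permission in its manifest.json:
// Save data from any part of the extension (content script, popup, or
// background page) -- chrome.storage.local is shared between all of them
// and persists across tabs and browser sessions.
chrome.storage.local.set({ citations: [["Some title", "http://example.com"]] }, function() {
    console.log("citations saved");
});
// Later, possibly from a different part of the extension:
chrome.storage.local.get("citations", function(items) {
    console.log(items.citations); // [["Some title", "http://example.com"]]
});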
This question may be of use.
Let's say normally my users access our web page via https://www.mycompany.com/go/mybusinessname
Inside this web page, we have an iframe whose content actually comes from https://www.mycompany.com/myapp
Everything is working fine, except that if, for some reason, users come to know about the url https://www.mycompany.com/myapp, they can start accessing it directly by typing it into the address bar.
This is what I want to prevent them from doing. Is there any best practice to achieve this?
==== Update to provide more background ====
The parent page, https://www.mycompany.com, is the company's page and is maintained by some other team. It has all the generic header, footer, etc., and each application is rendered as an iframe inside it. (This also means we cannot change the parent page's code.)
If users access https://www.mycompany.com/myapp directly, they won't be able to see the header and footer. Yes, it's not a big deal, but I just want to maintain the consistency.
Another concern of mine is that in our dev environment (i.e. when running the page locally) we don't have the parent-iframe setup; we access our page directly from http://localhost:port. Hence I want a solution that still lets us access the page normally when running locally.
If such a solution simply does not exist, please let me know as well :)
In your iframe's source, you can check the parent window using window.top.location and see whether it's set to 'https://www.mycompany.com/go/mybusinessname'. If not, redirect the page.
var myUrl = 'https://www.mycompany.com/go/mybusinessname';
if (window.top.location.href !== myUrl) {
    window.top.location.href = myUrl;
}
I realized we already had a function to determine whether the page is running under https://www.mycompany.com. So now I only need to do the below to perform the redirect when our page is not in an iframe:
var expectedPathname = "/go/mybusinessname";
var getLocation = function (href) {
    var l = document.createElement("a");
    l.href = href;
    return l;
};
if (window == window.top) { // if not in an iframe
    var link = getLocation(window.top.location.href);
    if (link.pathname !== expectedPathname) {
        link.pathname = expectedPathname;
        window.top.location.replace(link.href);
    }
}
You can check the HTTP Referer header on the server side. If the page is opened in an iframe, the referer contains the parent page's address; otherwise it is empty or contains a different page.
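As a rough sketch, assuming a Node/Express backend (adapt this to whatever your server actually runs), the check might look like this:
// Hypothetical Express route guarding the iframe-only page.
app.get("/myapp", function (req, res, next) {
    var referer = req.get("Referer") || "";
    // When loaded inside the parent's iframe, the referer starts with the parent page URL.
    if (referer.indexOf("https://www.mycompany.com/go/mybusinessname") !== 0) {
        return res.redirect("https://www.mycompany.com/go/mybusinessname");
    }
    next();
});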
I have the following scenario:
A user can paste html content in a wysiwyg editor. When that pasted content contains images which are hosted on other domains, I want these to be uploaded to my server. Right now the only way of doing that is manually downloading via "save image as..." context menu, then uploading the image to the server via a form and updating the images in the editor.
I have to solve this client side.
I'm working on a Firefox addon that can automate the process. Of course I could download these images, store them on the hard drive and then upload them with FormData or better the pupload, but this seems clumsy: since the content is displayed in the browser, it must already have been downloaded and reside somewhere in memory. I would like to grab the image files from memory and tell Firefox to upload them (being able to make a Blob of them would suffice, it seems).
However, I'm getting hopelessly lost in the API documentation for the several different caching systems on MDN and can't find any example code for how to use them. I checked the code of other addons that access the cache, but most of it is uncommented and still quite cryptic.
Can you point me to some sample code of what the recommended way would be to achieve this? The best possible solution would be if I can request the particular url from firefox so I can use it in FormData, and if it isn't in the cache firefox downloads to memory, but if it's already there I just get it directly.
The master documentation for Mozilla's version 2 HTTP Cache is located here. Aside from the blurbs on this page, the only way I was able to make sense of this new scheme was by looking at the actual code for each object and back-referencing almost everything. Even though I wasn't able to get a 100% clear picture of what exactly was going on, I figured out enough to get it working. In my opinion, Mozilla should have taken the time to create some simple-terms documentation before they went ahead and pushed out the new API. But, we get what they give us, I suppose.
On to your problem. We're assuming that the users who want to upload an image already have this image saved in their cache somewhere. In order to be able to pull it out of the user's cache for upload, you must first be able to determine the URI of the image before it can be pulled explicitly from the cache. For the sake of brevity, I'm going to assume that you already have this part figured out.
An important thing to note about the new HTTP Cache is that although it's all based on callbacks, there can still only ever be a single writing process. While in your example it may not be necessary to write to the descriptor, you should still request write access, since that will prevent any other process (i.e. the browser) from altering/deleting the data until you are done with it. Another side note, and a source of a lot of pain for me, is the fact that requesting a cache entry from the memory cache will ALWAYS create a new entry, overwriting any pre-existing entry. You shouldn't need this, but if it is necessary, you can access the memory cache from the disk cache (the disk cache is physical disk + memory cache -- Mozilla logic) without that side effect.
Once the URI is in hand, you can make a request to pull it out of the cache. The new caching system is based completely on callbacks. There is one key object we need in order to fetch the cache entry's data: nsICacheEntryOpenCallback. This is a user-defined object that handles the response after a cache entry is requested. It must have two member functions: onCacheEntryCheck(entry, appcache) and onCacheEntryAvailable(descriptor, isnew, appcache, status).
Here is a cut-down example from my code of such an object:
var cacheWaiter = {
    //This function essentially tells the cache service whether or not we want
    //this cache descriptor. If ENTRY_WANTED is returned, the cache descriptor is
    //passed to onCacheEntryAvailable()
    onCacheEntryCheck: function( descriptor, appcache )
    {
        //First, we want to be sure the cache entry is not currently being written
        //so that we can be sure that the file is complete when we go to open it.
        //If predictedDataSize > dataSize, chances are it's still in the process of
        //being cached and we won't be able to get an exclusive lock on it and it
        //will be incomplete, so we don't want it right now.
        try{
            if( descriptor.dataSize < descriptor.predictedDataSize )
                //This tells the nsICacheService to call this function again once the
                //currently writing process is done writing the cache entry.
                return Components.interfaces.nsICacheEntryOpenCallback.RECHECK_AFTER_WRITE_FINISHED;
        }
        catch(e){
            //Also return the same value for any other error
            return Components.interfaces.nsICacheEntryOpenCallback.RECHECK_AFTER_WRITE_FINISHED;
        }
        //If no exceptions occurred and predictedDataSize == dataSize, tell the
        //nsICacheService to pass the descriptor to this.onCacheEntryAvailable()
        return Components.interfaces.nsICacheEntryOpenCallback.ENTRY_WANTED;
    },

    //Once we are certain we want to use this descriptor (i.e. it is done
    //downloading and we want to read it), it gets passed to this function
    //where we can do what we wish with it.
    //At this point we will have full control of the descriptor until this
    //function exits (or, I believe that's how it works)
    onCacheEntryAvailable: function( descriptor, isnew, appcache, status )
    {
        //In this function, you can do your cache descriptor reads and store
        //them in a Blob() for upload. I haven't actually tested the code I put
        //here, modifications may be needed.
        //Wrap the raw input stream so its contents can be read from script.
        var rawstream = descriptor.openInputStream(0);
        var stream = Components.classes["@mozilla.org/scriptableinputstream;1"]
            .createInstance(Components.interfaces.nsIScriptableInputStream);
        stream.init(rawstream);
        var blobarray = [];
        //Read the entry in 1024-byte chunks; the last chunk may be smaller.
        for( var i = descriptor.dataSize; i > 0; i -= 1024 )
        {
            var chunksize = Math.min(1024, i);
            try{
                blobarray.push(stream.read(chunksize));
            }
            catch(e){
                //Nasty NS_ERROR_WOULD_BLOCK exceptions seem to happen to me
                //frequently. The Mozilla guys don't provide a way around this,
                //since they want a responsive UI at all costs. So, just keep
                //trying until it succeeds.
                i += 1024;
                continue;
            }
        }
        var theblob = new Blob(blobarray);
        //Do an AJAX POST request with theblob here.
    }
};
Now that the callback object is set up, we can actually do some requests for cache descriptors. Try something like this:
//Import Services.jsm for handy access to the IO service and load context info
Components.utils.import("resource://gre/modules/Services.jsm");

var theuri = "http://www.example.com/image.jpg";
//Load the cache service
var cacheservice = Components.classes["@mozilla.org/netwerk/cache-storage-service;1"]
    .getService(Components.interfaces.nsICacheStorageService);
//Select the default disk cache (which also fronts the memory cache).
var hdcache = cacheservice.diskCacheStorage(Services.loadContextInfo.default, true);
//Request a cache entry for the URI. OPEN_NORMALLY requests write access.
hdcache.asyncOpenURI(Services.io.newURI(theuri, null, null), "",
    Components.interfaces.nsICacheStorage.OPEN_NORMALLY, cacheWaiter);
As far as actually getting the URI, you could provide a window for the user to drag and drop an image into, or perhaps just paste the URL of the image into. Then you could do an AJAX request to fetch the image (in case the user hasn't actually visited the image for some reason, it would then be cached). You could then use that URL to fetch the cache entry for upload. As an aesthetic touch, you could even show a preview of the image, but that's a bit out of scope of the question.
If you need any more clarifications, please feel free to ask!
I'm trying to do this by using a Tampermonkey Script. However I'm open to new approaches...
What I want to do is extract some data (data-video) from a specific <div>. However, this data is not available in the HTML source of the page; it's only available under Dev Tools -> Resources, under Frames.
Anyone knows if it's possible to get that information available under DevTools? And how can I do that?
A comparison between the two pages can be found here: "Original HTML PAGE" and "HTML PAGE under DevTools".
In the first one the id=video-canvas element cannot be seen; in the second it's inside the <object type="application/x-shockwave-flash(...
As you state in your question the data you're looking for is available in DevTools under the "Resources" tab in the "Frames" folder. What you are looking at there is the Source HTML, similar to View Source.
The code you want is what is getting replaced. It appears the site is using the JW Player plugin, which replaces the <div id="video-canvas"> with the appropriate HTML for the detected device/browser to play the video. With all of my browsers on my Mac, they are being forced to use Flash, even when it's disabled. When using my iPhone, which can't play Flash, and inspecting the page, it uses JW's own custom video element. It appears the player must be storing the file location in memory, since it is not in the generated markup.
I am able to run code through the console in the dev tools and access their JS class. It appears I can call jwplayer._tracker, which has an object b, and b has an object AlWv3iHmEeOzwBIxOUCPzg. This key seems to be consistent each time I check between different browsers; you can use the for loop in my first example, trimmed down to .b, to find the correct value. Under that object is e, and in e there is a key that is a really long string starting with http://i.n.jwpltx.com/v1... which appears to contain a url, so it will need to be parsed.
So to get the string, I ran
for ( var loc in jwplayer._tracker.b.AlWv3iHmEeOzwBIxOUCPzg.e){
    loc // in the console, the last evaluated expression is echoed back
}
So if we put that in a function to parse the string and return a value:
function getSubURL(){
    var initURL;
    for ( var loc in jwplayer._tracker.b.AlWv3iHmEeOzwBIxOUCPzg.e){
        initURL = loc;
    }
    //look for the URL-encoded 'mp4:' ('mp4%3A') in front of the file path
    var start = initURL.indexOf("mp4%3A");
    //look for the .mp4 for the end of the file name
    var stop = initURL.indexOf(".mp4");
    //grab the string between:
    //start+6 to remove the characters used to find it,
    //stop+4 to include the characters used to find it
    var subPath = (initURL.substring((start+6),(stop+4))).split("%2F").join("/");
    return subPath;
}
//and run it
getSubURL();
it will return ciencia/astronomia/fimsol.mp4
You can run this from your console. I am unaware of how you can use this in Tampermonkey, but I think it gets you a lot closer to what you wanted.
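For what it's worth, a minimal Tampermonkey wrapper might look like this sketch; @grant none runs the script in the page context so it can reach the page's jwplayer object (the @match pattern is a placeholder):
// ==UserScript==
// @name     Grab JW Player video path
// @match    http://www.example.com/*
// @grant    none
// ==/UserScript==
window.addEventListener("load", function () {
    // getSubURL() is the function defined above; the player must have
    // finished initializing for jwplayer._tracker to be populated.
    if (typeof jwplayer !== "undefined") {
        console.log(getSubURL());
    }
});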
This is the approach I've used to solve my problem... I couldn't grab the code I wanted under Dev Tools, but I found a way to get the data from jwplayer with the function getPlaylistItem. This is how I get the url filename of each video:
function getFilename() {
    var filename;
    if(jwplayer().getPlaylistItem){
        filename = jwplayer().getPlaylistItem()['file'];
    }
    else{
        // no playlist item available, return undefined
        return filename;
    }
    filename = filename.substring(filename.indexOf("/mp4:") + 5);
    return filename;
}
I'm trying to create a navigation system for an internal website.
To do so, I'm creating an array in javascript that tracks the url of each page, and with each new page I push the new url into the array.
Problem is, each new page seems to be overwriting the last page.
This is what is in my javascript file ... notice I only create a new array if the array doesn't already exist (it will be deleted when the person leaves the website).
var myURL = document.URL;
if (typeof myHistory == "undefined" || !(myHistory instanceof Array)) {
var myHistory = [];
}
myHistory.push(myURL);
var last_element = myHistory[myHistory.length - 1];
var number_rows = myHistory.length;
This is what I'm using to see the values in the html ...
<script type="text/javascript">
<!--
document.write(last_element);
document.write(number_rows);
// -->
</script>
It's displaying the URL (last_element) as desired, but number_rows remains at 1 when I browse between pages rather than going up to 2, 3, 4, etc., which is what I hope to achieve.
Can anyone give me any pointers?
Every time you refresh or navigate to a page, the JavaScript environment starts anew. If you need data persistence, you'll need to use cookies, localStorage, or server-side data storage.
All of those options will require that you serialize to and deserialize from strings.
Here's a quick example of how you could do this using localStorage:
//closure to prevent global pollution
//it's good to be green
(function () {
    "use strict";
    var history,
        url;
    //set url here
    url = window.location.href;
    //I'm making the assumption that no url will have a ',' char in it
    //guard against the first visit, when nothing is stored yet
    history = localStorage['history'] ? localStorage['history'].split(',') : [];
    history.push(url);
    localStorage['history'] = history.join(',');
    console.log(url, history.length);
}());
When you browse between pages the javascript environment is re-created on each page. So you are always starting with an empty array.