Saving Chrome Extension Data (with LocalForage) - javascript

I'm new to web and Chrome extension dev and am trying to use the localForage API to store data for my Chrome extension. Currently I lose everything every time I switch to a new tab, but I want the data to stay until the user explicitly clears everything out (so it should persist over multiple sessions, etc.).
I decided to give the localForage API a go (since it's supposed to be like localStorage but simpler) and feel like I'm missing something important: I can setItem/getItem without issues, but it's not actually saving any of the data.
How exactly do I make sure my data stays while switching tabs (and over multiple browsing sessions)?
Using localforage.getItem/setItem: this seems to work as far as using the data goes, but it isn't doing anything as far as saving it when I switch tabs:
citeUrl(values, function(citation) {
    count++;
    var string = citation[0];
    localforage.setItem(string, [citation[0], citation[1]], function(err, value) {
        // Do other things once the value has been saved.
        console.log(value[0] + " " + value[1]);
    });
    /*var result = document.getElementById('cite[]');
    result.style.visibility = 'visible';
    result.style.display = 'block';
    result.innerHTML = citation[0];
    renderStatus(citation[1]);*/
    for (var i = 0; i < count; i++) {
        var newCitation = document.createElement('div');
        localforage.getItem(citation[i], function(err, value) {
            newCitation.innerHTML = value[1] + "<br>" + value[0];
        });
        newCitation.style.backgroundColor = "white";
        newCitation.style.marginBottom = "7px";
        newCitation.style.padding = "6px";
        newCitation.style.boxShadow = "0 2px 6px rgba(0,0,0,0.4)";
        newCitation.style.borderRadius = "3px";
        document.getElementById("answered[]").appendChild(newCitation);
    }
});

localForage is built upon localStorage and friends. The important part is that it is bound to the origin in which you access it.
Using it from a content script uses the website's origin, e.g. using your extension on http://example.com/test will bind the data to http://example.com/ origin, and using your extension on http://example2.com/test will bind the data to a completely independent store attached to origin http://example2.com/. What's more, the data is shared with (and may interfere with) the page's own storage.
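For instance, a content script that touches localStorage is really touching the page's own storage. A minimal illustration, assuming the code runs as a content script on http://example.com:
// Runs in a content script injected into http://example.com/test.
// This writes to http://example.com's localStorage, which the page's own
// scripts can see and overwrite; nothing here is private to the extension.
localStorage.setItem('fromExtension', 'hello');
console.log(localStorage.getItem('fromExtension')); // "hello", but only on this origin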
As such, using localStorage (and by extension, localForage) in a content script does not give the intended results (though it may still be useful if you're trying to manipulate the page's own storage).
So, there are a few ways to do it correctly:
If you must use it at all, use localForage in the background script. In that case, the data is bound to origin chrome-extension://yourextensionidhere. However, this is not accessible from content scripts - you'll need to pass data using Messaging, which is tiresome.
Better, extension-specific approach: use native chrome.storage API, which is shared between all parts of the extension. This API specifically exists to address the need-to-pass-data limitations, among other things.
(Win lots of internet points approach) Write a custom driver for localForage using chrome.storage API. This will allow people to use it in Chrome extensions and apps with ease. Which is, apparently, something already attempted.
This question may be of use.
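A minimal sketch of the chrome.storage approach recommended above. It assumes the "storage" permission is declared in manifest.json, and the key scheme is made up for the example:
// Works the same way in content scripts, the popup, and the background page,
// and persists across tabs and browser sessions until explicitly cleared.
var key = "citation-" + Date.now(); // hypothetical key scheme
var entry = {};
entry[key] = ["Some citation text", "http://example.com"];
chrome.storage.local.set(entry, function() {
    chrome.storage.local.get(key, function(items) {
        console.log(items[key]); // ["Some citation text", "http://example.com"]
    });
});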


How to find specific cache entries in firefox and turn them into a File or Blob object?

I have the following scenario:
A user can paste html content in a wysiwyg editor. When that pasted content contains images which are hosted on other domains, I want these to be uploaded to my server. Right now the only way of doing that is manually downloading via "save image as..." context menu, then uploading the image to the server via a form and updating the images in the editor.
I have to solve this client side.
I'm working on a Firefox addon that can automate the process. Of course I could download these images, store them on the hard drive and then upload them with FormData or, better, Plupload, but this seems clumsy: since the content is displayed in the browser, it must have been downloaded already and reside somewhere in memory. I would like to grab the image files from memory and tell Firefox to upload them (being able to make a Blob of them would suffice, it seems).
However, I'm getting hopelessly lost in the API documentation for several different Caching systems on MDN and fail to find any example code of how to use them. I checked code of other addons that access the cache, but most is uncommented and still quite cryptic.
Can you point me to some sample code of what the recommended way would be to achieve this? The best possible solution would be if I can request the particular url from firefox so I can use it in FormData, and if it isn't in the cache firefox downloads to memory, but if it's already there I just get it directly.
The master documentation for Mozilla's version 2 HTTP Cache is located here. Aside from the blurbs on this page, the only way I was able to make sense of this new scheme was by looking at the actual code for each object and back-referencing almost everything. Even though I wasn't able to get a 100% clear picture of what exactly was going on, I figured out enough to get it working. In my opinion, Mozilla should have taken the time to create some simple-terms documentation before they went ahead and pushed out the new API. But we get what they give us, I suppose.
On to your problem. We're assuming that the users who want to upload an image already have this image saved in their cache somewhere. In order to be able to pull it out of the user's cache for upload, you must first be able to determine the URI of the image before it can be pulled explicitly from the cache. For the sake of brevity, I'm going to assume that you already have this part figured out.
An important thing to note about the new HTTP Cache is that although it's all based on callbacks, there can still only ever be a single writing process. While in your example it may not be necessary to write to the descriptor, you should still request write access, since that will prevent any other processes (i.e. the browser) from altering/deleting the data until you are done with it. Another side note, and a source of a lot of pain for me, is the fact that requesting a cache entry from the memory cache will ALWAYS create a new entry, overwriting any pre-existing entries. You shouldn't need this, but if it is necessary, you can access the memory cache from the disk cache (the disk cache is physical disk plus memory cache -- Mozilla logic) without that side effect.
Once the URI is in hand, you can then make a request to pull it out of the cache. The new caching system is based completely on callbacks. There is one key object that we will need in order to be able to fetch the cache entry's data: nsICacheEntryOpenCallback. This is a user-defined object that handles the response after a cache entry is requested. It must have two member functions: onCacheEntryCheck(entry, appcache) and onCacheEntryAvailable(descriptor, isnew, appcache, status).
Here is a cut-down example from my code of such an object:
var cacheWaiter = {
    //This function essentially tells the cache service whether or not we want
    //this cache descriptor. If ENTRY_WANTED is returned, the cache descriptor is
    //passed to onCacheEntryAvailable()
    onCacheEntryCheck: function( descriptor, appcache )
    {
        //First, we want to be sure the cache entry is not currently being written
        //so that we can be sure that the file is complete when we go to open it.
        //If predictedDataSize > dataSize, chances are it's still in the process of
        //being cached and we won't be able to get an exclusive lock on it and it
        //will be incomplete, so we don't want it right now.
        try{
            if( descriptor.dataSize < descriptor.predictedDataSize )
                //This tells the nsICacheService to call this function again once the
                //currently writing process is done writing the cache entry.
                return Components.interfaces.nsICacheEntryOpenCallback.RECHECK_AFTER_WRITE_FINISHED;
        }
        catch(e){
            //Also return the same value for any other error
            return Components.interfaces.nsICacheEntryOpenCallback.RECHECK_AFTER_WRITE_FINISHED;
        }
        //If no exceptions occurred and predictedDataSize == dataSize, tell the
        //nsICacheService to pass the descriptor to this.onCacheEntryAvailable()
        return Components.interfaces.nsICacheEntryOpenCallback.ENTRY_WANTED;
    },

    //Once we are certain we want to use this descriptor (i.e. it is done
    //downloading and we want to read it), it gets passed to this function
    //where we can do what we wish with it.
    //At this point we will have full control of the descriptor until this
    //function exits (or, I believe that's how it works).
    onCacheEntryAvailable: function( descriptor, isnew, appcache, status )
    {
        //In this function, you can do your cache descriptor reads and store
        //it in a Blob() for upload. I haven't actually tested the code I put
        //here, modifications may be needed.
        var cacheentryinputstream = descriptor.openInputStream(0);
        var blobarray = new Array(0);
        var buffer = new Array(1024);
        for( var i = descriptor.dataSize; i > 0; i -= 1024 )
        {
            //Read at most 1024 bytes; the last chunk may be shorter.
            var chunksize = Math.min(1024, i);
            try{
                cacheentryinputstream.read( buffer, chunksize );
            }
            catch(e){
                //Nasty NS_ERROR_WOULD_BLOCK exceptions seem to happen to me
                //frequently. The Mozilla guys don't provide a way around this,
                //since they want a responsive UI at all costs. So, just keep
                //trying until it succeeds.
                i += 1024;
                continue;
            }
            for( var j = 0; j < chunksize; j++ )
            {
                blobarray.push(buffer[j]);
            }
        }
        var theblob = new Blob(blobarray);
        //Do an AJAX POST request here.
    }
};
Now that the callback object is set up, we can actually do some requests for cache descriptors. Try something like this:
//Services.jsm provides Services.loadContextInfo used below
Components.utils.import("resource://gre/modules/Services.jsm");

var theuri = "http://www.example.com/image.jpg";
//Load the cache service
var cacheservice = Components.classes["@mozilla.org/netwerk/cache-storage-service;1"]
                             .getService(Components.interfaces.nsICacheStorageService);
//The IO service turns the URL string into an nsIURI
var ioservice = Components.classes["@mozilla.org/network/io-service;1"]
                          .getService(Components.interfaces.nsIIOService);
//Select the default disk cache.
var hdcache = cacheservice.diskCacheStorage(Services.loadContextInfo.default, true);
//Request a cache entry for the URI. OPEN_NORMALLY requests write access.
hdcache.asyncOpenURI(ioservice.newURI(theuri, null, null), "", hdcache.OPEN_NORMALLY, cacheWaiter);
As far as actually getting the URI, you could provide a window for a user to drag-and-drop an image into, or perhaps just paste the URL of the image into. Then, you could do an AJAX request to fetch the image (in the case that the user hasn't actually visited the image for some reason, it would then be cached). You could then use that URL to fetch the cache entry for upload. As an aesthetic touch, you could even show a preview of the image, but that's a bit out of scope of the question.
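An untested sketch of that warm-up idea, reusing the theuri, hdcache, ioservice and cacheWaiter names from the snippets above:
//Fetch the image first so it lands in the cache, then request its cache entry.
var img = new Image();
img.onload = function() {
    hdcache.asyncOpenURI(ioservice.newURI(theuri, null, null), "", hdcache.OPEN_NORMALLY, cacheWaiter);
};
img.src = theuri;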
If you need any more clarifications, please feel free to ask!

JavaScript: Writing an output file within a limited & secure scenario

I would like to add a function in my javascript to write to a text file in the local directory where the javascript file is located. This means I'm not looking for some insecure way of accessing the user's file system in any way. All I care about is extracting the user's input into an html page that is accessed by my javascript then using that input as data externally. I just need a simple text file. This user input isn't actually text by the way, but rather a bunch of actions using my online game's components that the underlying javascript turns into a text string (so this particular string is what I want to save, not really even anything direct from the user).
I don't want to write to a user's file system, but rather, the file where the javascript (and html) code is located (a folder hosted on a server). Is there any simple way to get some file I/O going?
I know Javascript has a FileReader, is there any way to get it to do this in reverse? Like a FileWriter. GoogleClosure looks like it has a FileWriter, but it doesn't seem to quite work and I can't find any decent examples of how to get it to do this.
If this requires a different language, is there any way I can just get the relevant snippet and insert this into my Javascript file?
(the folder is hosted on a Linux system if that helps)
ADDENDUM: Elias Van Ootegem's solution below is excellent and I would highly recommend looking into it as it's a great example of client-server interaction and getting your system to provide you the data you're looking to extract. Workers are pretty interesting.
But for those of you looking at this post with the similar question that I initially had about JavaScript I/O, I found one other workaround depending on your case. My team's project site made use of a database system, MongoDB, that stored some of the user's interaction data if the user had hit a "Save" button. MongoDB, and other online database systems, provide a "dumping" function/script that you can call from your local machine/server and put that data into an output file (I was able to put the JSON data into a text file). From that output, you can write a parser to extract and format the data you desire, since databases like MongoDB can be pretty clear as to what format the text will be in (very structured, organized). I wrote a parser in C (with a few libraries I had written to extend the language) to do what I needed, but the idea is pretty generalizable to other programming/scripting languages.
I did look at leaving cookies as an option as well, and made use of a test program to try it out (it works too!). However, one tradeoff of leaving cookies on a user's local system is that cookies are generally meant to hold small amounts of data (usually things like username, date created, and expiration date of the cookie) and are dependent upon the user's local machine. Further, while you can extract the data in those cookies from JavaScript, you are back to the initial problem: the data still exists on the web, not in an output file on your server's file system. If you need to extract data and want some guarantee that it will exist on your machine, use Elias Van Ootegem's solution.
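For completeness, a minimal sketch of the cookie approach mentioned above (myDataString and the gameState cookie name are just stand-ins for whatever your game produces):
// Stash the game string in a cookie (kept for roughly a year)...
var myDataString = "serialized game actions"; // stand-in for your real data
document.cookie = "gameState=" + encodeURIComponent(myDataString) + "; max-age=31536000";
// ...and read it back later.
var match = document.cookie.match(/(?:^|; )gameState=([^;]*)/);
var restored = match ? decodeURIComponent(match[1]) : null;
console.log(restored);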
JavaScript code that is running client-side cannot access the server's file system, let alone write a file. People often say that, if JS were to have I/O capabilities, that would be rather insecure... just imagine how dangerous that would be.
What you could do is simply build your string using a Worker that, on closing, returns the full data string, which is then sent to the server (via an AJAX call).
The server-side script (Perl, PHP, .NET, Ruby...) can receive this data, parse it and then write the file to disk as you want it to.
All in all, not very hard, but quite an interesting project anyway. Oh, and when using a worker, seeing as it's an online game and everything, perhaps a setInterval to send (a part of) the data every 5000ms might not be a bad idea, either.
As requested - some basic code snippets.
A simple AJAX-setup function:
function getAjax(url, method, callback)
{
    var ret;
    method = method || 'POST';
    url = url || 'default.php';
    callback = callback || success;//assuming you have a default function called "success"
    try
    {
        ret = new XMLHttpRequest();
    }
    catch (error)
    {
        try
        {
            ret = new ActiveXObject('Msxml2.XMLHTTP');
        }
        catch(error)
        {
            try
            {
                ret = new ActiveXObject('Microsoft.XMLHTTP');
            }
            catch(error)
            {
                throw new Error('no Ajax support?');
            }
        }
    }
    ret.open(method, url, true);
    ret.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    ret.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    ret.onreadystatechange = callback;
    return ret;
}
var getRequest = getAjax('script.php?some=Get&params=inURL', 'GET');
getRequest.send(null);
var postRequest = getAjax('script.php', 'POST', function()
{//passing anonymous function here, but this could just as well have been a named function reference, obviously...
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('Post request complete, answer was: ' + this.response);
    }
});
postRequest.send('foo=bar');//set different headers to POST JSON.stringified data
Here's a good place to read up on whatever you don't get from the code above. This is pretty much a copy-paste bit of code, but if you find yourself wanting to learn just a bit more, here's a great place to do just that.
WebWorkers
Now these are pretty new, so using them does mean not being able to support older browsers (you could support them by using the event listeners to send each morsel of data to the server, but a worker allows you to bundle, pre-process and structure the data without blocking the "normal" flow of your script). Workers are often presented as a means to sort-of multi-thread JavaScript code. Here's a good intro to them.
Basically, you'll need to add something like this to your script:
var worker = new Worker('preprocess.js');//or whatever you've called the worker
worker.addEventListener('message', function(e)
{
    var xhr = getAjax('script.php', 'post');//using default callback
    xhr.send('data=' + e.data);
    //worker.postMessage(null);//clear state
}, false);
Your worker, then, could start off like so:
var time, txt = '';
//entry point:
onmessage = function(e)
{
    if (e.data === null)
    {
        clearInterval(time);
        txt = '';
        return;
    }
    if (txt === '' && !time)
    {
        time = setInterval(function()
        {
            postMessage(txt);
        }, 5000);//set postMessage to be called every 5 seconds
    }
    txt += e.data;//add new text to current string...
};
Server-side, things couldn't be easier:
<?php
session_start();//needed for $_SESSION below
if ($_POST && $_POST['data'])
{
    $file = !empty($_SESSION['filename']) ? $_SESSION['filename'] : 'File'.session_id();
    $fh = fopen($file, 'a+');
    fwrite($fh, $_POST['data']);
    fclose($fh);
}
echo 'ok';
Now all of this code is a bit crude, and most of it cannot be used in its current form, but it should be enough to get you started. If you don't know what something is, google it.
But do keep in mind that, when it comes to JS, MDN is easily the best reference out there, and as far as PHP goes, their own site (php.net/{functionName}) is pretty ugly, but does contain a lot of info, too...

How to append timestamp to the javascript file in <script> tag url to avoid caching

I want to append a random number or a timestamp to the end of the JavaScript file's source path, so that every time the page reloads it downloads a fresh copy.
It should be something like:
<script type="text/javascript" src="/js/1.js?v=1234455"/>
How can I generate and append this number? This is a simple HTML page, so I can't use any PHP or JSP related code.
Method 1
Lots of extras can be added this way, including asynchronous inclusion and script deferring. Lots of ad networks and high-traffic sites use this approach.
<script type="text/javascript">
(function(){
    var randomh = Math.random();
    var e = document.getElementsByTagName("script")[0];
    var d = document.createElement("script");
    d.src = "//site.com/js.js?x=" + randomh;
    d.type = "text/javascript";
    d.async = true;
    d.defer = true;
    e.parentNode.insertBefore(d, e);
})();
</script>
Method 2 (AJZane's comment)
Small and robust inclusion. You can see exactly where JavaScript is fired and it is less customisable (to the point) than Method 1.
<script>document.write("<script type='text/javascript' src='//site.com/js.js?v=" + Date.now() + "'><\/script>");</script>
If you choose to use dates or a random numbers to append to your URI, you will provide opportunities for the end user to be served the same cached file and may potentially expose unintended security risks. An explicit versioning system would not. Here's why:
Why "Random" and Random Numbers are both BAD
For random numbers, you have no guarantee that same random number hasn't been generated and served to that user before. The likelihood of generating the same string is greater with smaller "random" number sets, or poor algorithms that provide the same results more often than others. In general, if you are relying on the JavaScript random method, keep in mind it's pseudo-random, and could have security implications as well if you are trying to rely on uniqueness for say a patch in one of your scripts for XSS vulnerabilities or something similar. We don't want Johnny to get served the old cached and unpatched JS file with an AJAX call to a no-longer trusted 3rd-party script the day Mr. Hacker happened to be visiting.
Why dates or timestamps are bad too, but not as bad
Regarding Dates as "unique" identifiers, JavaScript would be generating the Date object from the client's end. Depending on the date format, your chances for unintended caching may vary. Date alone (20160101) allows a full day's worth of potential caching issues because a visit in the morning results in foo.js?date=20160101, and so does a visit in the evening. Instead, if you specify down to the second (20160101000000) your odds of an end user calling the same GET parameter go down, but still exist.
A few rare but possible exceptions:
Clocks get reset (fall behind) once a year in most time zones
Computers that reset their local time on reboot for one reason or another
Automatic network time syncing causing your clock to adjust backwards a few seconds/minutes whenever your local time is off from the server time
Adjusting time zone settings when traveling (the astronauts on the ISS travel through a time zone every few minutes... let's not degrade their browsing experience :P)
The user likes resetting their system clock to mess with you
Why incremental or unique versioning is good :)
For a frontend-only solution, my suggestion would be to set an explicit version, which could simply be hard-coded by you or your team members every time you change the file. Manually doing exactly what you did in the code in your question would be a good practice.
You or your team should be the only ones editing your JS files, so the key takeaway isn't that your file needs to be served fresh every time, it just needs to be served fresh when it changes. Browser caching isn't a bad thing in your case, but you do need to tell the end user WHEN it should update. Essentially, when your file is updated, you want to ensure the client gets the updated copy. With this, you also have the added bonus of being able to revert to previous versions of your code without worrying about client caching issues. The only drawback is you need to use due diligence to make sure you actually update the version number when you update your JS files. Keep in mind that just because something isn't automated doesn't mean it is bad practice or poor form. Make your solution work for your situation and the resources you have available.
I suggest using a form like Semantic Versioning's rules to easily identify backwards or breaking compatibility by looking at the file name (assuming nobody in the development process fudged up their version numbering) if possible. Unless you have an odd use case, there is no good reason to force a fresh copy down to the client every time.
Automated version incrementing on the client side with Local Storage
If what you were after was frontend way to automate the generation of a unique version number for you so you don't have to explicitly set it, then you would have to implement some sort of local storage method to keep track of, and auto increment your file versions. The solution I've shown below would lose the ability for Semantic versioning, and also has the potential to be reset if the user knows how to clear Local Storage. However, considering your options are limited to client-side only solutions, this may be your best bet:
<script type="text/javascript">
(function(){
    /**
     * Increment and return the local storage version for a given JavaScript file by name
     * @param {string} scriptName Name of JavaScript file to be versioned (including .js file extension)
     * @return {integer} New incremented version number of file to pass to .js GET parameter
     */
    var incrementScriptVer = function(scriptName){
        var version = parseInt(localStorage.getItem(scriptName));
        // Simple validation that our item is an integer
        if(version > 0){
            version += 1;
        } else {
            // Default to 1
            version = 1;
        }
        localStorage.setItem(scriptName, version);
        return version;
    };
    // Set your scripts that you want to be versioned here
    var scripts = ['foo.js', 'bar.js', 'baz.js'];
    // Loop through each script name and append our new version number
    scripts.map(function(script){
        var currentScriptVer = incrementScriptVer(script);
        document.write("<script type='text/javascript' src='http://yoursite.com/path/to/js/" + script + "?version=" + currentScriptVer + "'><\/script>");
    });
})();
</script>
I'm going to mention for completeness: if you are converting from an old system of "random"-numbered or dated GET variables to an incrementing versioned system, be sure that your new versioning system will not collide with any previously generated file names. If in doubt, add a prefix to your GET variable when changing methods, or simply add a new GET variable altogether. Example: "foo.js?version=my_prefix_121216" or "foo.js?version=121216&version_system=incremental"
Automated versioning via AJAX calls and other methods (if backend development is a possibility)
Personally, I like to stay away from Local Storage options. If backend development is an option, that would be the "best" solution. Try to get a backend developer to make an endpoint that tracks JS file versions; you could then use the response from that endpoint to determine your version number. If you are already using version control like Git, you could optionally have one of your DevOps team bind your versioning to your commit versions for some pretty sweet integration as well.
A jQuery solution to a RESTful GET endpoint might look like:
var script = "foo.js";
// Pretend this endpoint returns a valid JSON object with something like { "version": "1.12.20" }
$.ajax({
    url: "//yoursite.com/path/to/your/file/version/endpoint/" + script
}).done(function(data) {
    var currentScriptVer = data.version;
    document.write("<script type='text/javascript' src='http://yoursite.com/path/to/js/" + script + "?version=" + currentScriptVer + "'><\/script>");
});
Insert the script dynamically via document.createElement('script'), then when you set the URL, you can use new Date().getTime() to append the extra parameter.
If you are worried about your javascript executing before the script is loaded, you can use the onload callback of the script element (note that there are a few hoops to jump for IE though)
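A short sketch of what that looks like (the file name is just an example):
var s = document.createElement('script');
s.src = '/js/1.js?v=' + new Date().getTime(); // timestamp defeats the cache
s.onload = function() {
    // safe to use functions defined in 1.js from here on
    console.log('1.js loaded');
};
document.getElementsByTagName('head')[0].appendChild(s);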
If you can't use server-side code, then you can use the getScript method to do the same.
$(document).ready(function(){
    var randomNum = Math.ceil(Math.random() * 999999);
    var jsfile = 'scripts/yourfile.js?v=' + randomNum;
    $.getScript(jsfile, function(data, textStatus, jqxhr) { });
});
Reference URL: http://api.jquery.com/jQuery.getScript/
(Please don't forget to mark as answer.)
Load scripts manually or with jQuery (http://api.jquery.com/jQuery.getScript/). It also provides an option to prevent caching.
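For reference, a small sketch of the jQuery route with caching disabled (when cache is false, jQuery appends a _=<timestamp> parameter to the request for you):
$.ajax({
    url: 'scripts/yourfile.js',
    dataType: 'script', // $.getScript is shorthand for this
    cache: false,       // forces a fresh copy by appending _=<timestamp>
    success: function() {
        console.log('loaded a fresh copy');
    }
});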
You can replace the source of the script with pure JavaScript, like this:
// get the first script element
var script = document.getElementsByTagName('script')[0];
var newScript = document.createElement('script');
// add the source with a timestamp
newScript.src = 'yoursource.js?' + new Date().getTime();
newScript.type = 'text/javascript';
script.parentNode.insertBefore(newScript, script);
Another option is to build a readable timestamp with Date.prototype.toISOString(), using a regular expression to strip everything that is not alphanumeric:
var today = new Date();
"MyFile_" + today.toISOString().replace(/[^\w]/g, "") + ".pdf"
// e.g. MyFile_20191021T173426146Z.pdf
Old post, but here is a one-liner (ASP.NET):
<script src="../Scripts/source/yourjsname.js?ver<%=DateTime.Now.Ticks.ToString()%>" type="text/javascript"></script>

fix chrome zoom issues

How can I set different zoom levels on different sites? Can I use window.location to get the URL from the Chrome address bar and set a zoom level for that specific site? How can I modify this code to use window.location or window.location.href?
function zoom(zp) {
    var page = document.getElementsByTagName('html')[0];
    if (page != null) {
        page.style.zoom = zp + "%";
    }
}
chrome.extension.sendRequest(
    {"type": "setZoom"},
    function(zp) {
        zoom(zp);
    }
);
Firstly, note that Chrome already handles per-domain zoom levels set with Ctrl + + and Ctrl + -, but this is different from html.style.zoom.
You can certainly do what you're trying to, but you'll need to inject a content script into the page whose CSS you want to manipulate. Then, you can send messages to that injected script from another part of your extension and get the desired result. You can keep track of zoom levels per URL by (for example) storing a {url: zoomLevel} hash table in your extension's localStorage.
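A rough sketch of that message-passing setup, using the same deprecated chrome.extension.sendRequest API as the question (the message names and storage key are made up for the example):
// content.js (injected into every page): ask the background page for this site's zoom.
chrome.extension.sendRequest({ type: "getZoom", host: location.hostname }, function(zp) {
    document.getElementsByTagName('html')[0].style.zoom = zp + "%";
});

// background page: look the host up in a {url: zoomLevel} table kept in localStorage.
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
    if (request.type === "getZoom") {
        var zoomLevels = JSON.parse(localStorage.getItem("zoomLevels") || "{}");
        sendResponse(zoomLevels[request.host] || 100);
    }
});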
Note that there are problems using the html.style.zoom property: for example, it doesn't work on iframes. There's an extensive discussion about this here: http://crbug.com/30583

getting last page URL from history object - cross browser?

Is it possible to get the last page URL from the history object? I've come across history.previous, but that's either undefined or protected from what I've seen.
Not from the history object, but from document.referrer. If you want to get the last actual page visited, there is no cross-browser way without making a separate case based on support for each property.
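A trivial example of reading it (it's an empty string when the page wasn't reached via a link, e.g. a bookmark or a typed URL):
if (document.referrer) {
    console.log("Previous page: " + document.referrer);
} else {
    console.log("No referrer available");
}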
You can't get to the history in any browser. That would be a serious security violation, since it would mean that any site could snoop around the history of its users.
You might be able to write a Browser Helper Object for IE, and other browsers have similar mechanisms that give you access to that (similar to the Google Toolbar et al). But that will require the users to allow that application to run on their machine.
There are some nasty, "not-so-nice" ways to get at some history, but I would not recommend them. Look up this link.
Of course, as people have said, it's not possible. However, what I've done to get around this limitation is simply to store every page loaded into localStorage, so you can create your own history...
function writeMyBrowserHistory(historyLength=3) {
    // Store the last historyLength page paths for use in other pages
    var pagesArr = localStorage.myPageHistory;
    if (pagesArr == null) {
        pagesArr = [];
    } else {
        pagesArr = JSON.parse(pagesArr);
    }
    pagesArr.push(window.location.pathname); // can use whichever part, but a full url needs encoding
    if (pagesArr.length > historyLength) {
        // truncate the array to the most recent entries
        pagesArr = pagesArr.slice(pagesArr.length - historyLength, pagesArr.length);
    }
    // store it back
    localStorage.myPageHistory = JSON.stringify(pagesArr);
    // optional debug
    console.log(`my page history = ${pagesArr}`);
}
function getLastMyBrowserHistoryUrl() {
    var pagesArr = localStorage.myPageHistory;
    var url = "";
    if (pagesArr != null) {
        pagesArr = JSON.parse(pagesArr);
        // pop off the most recent url
        url = pagesArr.pop();
    }
    return url;
}
So then, in a JS file included on every page, call
writeMyBrowserHistory()
When you want to figure out the last page, call
var lastPageUrl = getLastMyBrowserHistoryUrl()
Note: localStorage stores strings only, hence the JSON.
Let me know if there are any bugs in the code, as it's been beautified from the original.
