Get title and URL of last 5 websites in history - javascript

I'm looking to include a small window on my site that shows the last 5 pages a visitor has viewed.
Primarily I'd like it to show the title of each page and its URL so I can link to it. It would be great if I could filter these by a word or website, since I'd like it to cover my site only.
Would JavaScript be a good fit for this, and does it work across browsers?

You can only look one page back, via document.referrer, and that only gives you the URL, not the title.
If you are tracking your own site you can use localStorage to store the last 5 pages, but if you want to monitor other sites then no, you can't do it in JavaScript; it would be a privacy problem if you could.
localStorage is part of HTML5, but it is already supported by all major browsers.
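A minimal sketch of that localStorage approach, included on every page of the site (the storage key recentPages and the list element ID recent-pages are just names made up for this example):

    // Keep the last 5 pages of *this* site in localStorage.
    // "recentPages" is an arbitrary key name chosen for this sketch.
    (function () {
      var KEY = "recentPages";
      var pages = JSON.parse(localStorage.getItem(KEY) || "[]");

      // Drop any earlier entry for the current page, then record it first.
      pages = pages.filter(function (p) { return p.url !== location.href; });
      pages.unshift({ title: document.title, url: location.href });
      pages = pages.slice(0, 6); // current page + the 5 before it
      localStorage.setItem(KEY, JSON.stringify(pages));

      // Render the 5 previous pages as links, assuming <ul id="recent-pages">.
      var list = document.getElementById("recent-pages");
      if (list) {
        pages.slice(1).forEach(function (p) {
          var item = document.createElement("li");
          var link = document.createElement("a");
          link.href = p.url;
          link.textContent = p.title;
          item.appendChild(link);
          list.appendChild(item);
        });
      }
    })();

Because localStorage is per-origin, this automatically filters the list to your own site.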

If you are doing server-side scripting it should be fairly simple to keep a record of what pages they have visited on your website. It could also be done with JavaScript and cookies, but it does not have to be done that way.
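For browsers without localStorage, a cookie can hold the same data. A hedged sketch (the cookie name recentPages is again just an example, and cookies cap out around 4 KB, so keep the stored titles short):

    // Cookie-based variant of the same idea, for browsers without localStorage.
    function readRecentPages() {
      var match = document.cookie.match(/(?:^|; )recentPages=([^;]*)/);
      return match ? JSON.parse(decodeURIComponent(match[1])) : [];
    }

    function saveRecentPages(pages) {
      document.cookie = "recentPages=" +
        encodeURIComponent(JSON.stringify(pages.slice(0, 6))) +
        "; path=/; max-age=" + 60 * 60 * 24 * 30; // keep for ~30 days
    }

    var pages = readRecentPages().filter(function (p) {
      return p.url !== location.href;
    });
    pages.unshift({ title: document.title, url: location.href });
    saveRecentPages(pages);

A cookie is also sent to the server on every request, which suits the server-side approach mentioned above.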

Related

GDPR: youtube-nocookie embedded URLs, need visitors' permission?

This is my first time posting on Stack Overflow and I have a question about the GDPR.
Situation:
On my website I don't want to bother visitors with cookie notifications, so the goal is to only place necessary cookies. However, there will be embedded YouTube videos on the website, which usually place tracking cookies.
After some research I stumbled upon the youtube-nocookie.com domain, which I am using now. Without that domain, an embedded video URL is:
https://www.youtube.com/embed/7cjVj1ZyzyE
With it, it is:
https://www.youtube-nocookie.com/embed/7cjVj1ZyzyE
By using the latter, cookies will only be placed after playing the video, and no tracking cookies will be placed (according to Google: https://support.google.com/youtube/answer/171780?hl=en under 'Turn on privacy-enhanced mode'). However, some cookies will still be placed, and it is not clear to me whether visitors need to give permission for those, and if so, under what category (and maybe they are still tracking?).
[Image: cookies placed by the youtube-nocookie.com domain]
This is in Chrome. The cookies from the gstatic domain are placed on page load for some reason. That doesn't happen in Opera.
Another weird thing is that Firefox (with all cookies and trackers allowed) and Edge don't seem to place any of the 6 cookies from the image at all.
Many sites and blogs say that this is the way to embed YouTube videos, but I can't seem to find a clear answer to whether you still need visitors' permission for these cookies. Also, on many sites where I only accept necessary cookies, I can still view YouTube videos and the corresponding cookies will happily be placed without my consent.
Has anybody dealt with this before?
Thanks in advance!
After some more research I think I found a clear answer. From a Cookiebot report:
“Privacy-Enhanced Mode” currently stores an identifier named “yt-remote-device-id” in the web browser's “Local Storage”. This allows tracking to continue regardless of whether users click, watch, or in any other way interact with a video – contrary to Google's claims. Rather than disabling tracking, “privacy-enhanced mode” seems to cover it up.
Source: https://www.cookiebot.com/media/1136/cookiebot-report-2019-ad-tech-surveillance-2.pdf
The 'yt-remote-device-id' identifier, along with some others, is, even with the use of the youtube-nocookie.com domain (or 'Privacy Enhanced Mode'), still placed on page load (given, of course, that the iframe with its source set is already part of the DOM at that point).
So while no tracking cookies in the strict sense are placed, the tracking has moved to the browser's localStorage (I overlooked this before), which basically means visitors actually do need to give permission before embedded YouTube videos with Privacy Enhanced Mode enabled are loaded on the page.
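You can check this yourself in the browser console on a page that already contains a privacy-enhanced embed:

    // Run in the console of a page with a youtube-nocookie.com iframe in the DOM:
    localStorage.getItem("yt-remote-device-id"); // a persistent identifier, or null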
Update
Gave some nuance in response to Marc Hjorth's comment.
I can confirm that the localStorage entry effectively replaces the function of the cookie. It is persistent and makes you identifiable across browser sessions. I get the same "yt-remote-device-id" value each time after restarts; only erasing the local storage makes a difference.
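The practical consequence is that the iframe's src should only be set after the visitor consents. A minimal sketch of that idea (the data-src attribute, the element IDs, and the button wording are all made up for this example):

    // Markup: <iframe id="yt-embed" data-src="https://www.youtube-nocookie.com/embed/7cjVj1ZyzyE"></iframe>
    //         <button id="yt-consent">Load video (lets YouTube store data)</button>
    document.getElementById("yt-consent").addEventListener("click", function () {
      var frame = document.getElementById("yt-embed");
      frame.src = frame.getAttribute("data-src"); // nothing is requested from YouTube until src is set
      this.remove();
    });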

Setting cookies in an EPUB3 for iBooks

Does the iBooks reader allow setting and reading cookies?
Basically what I need is for the state of one page (which elements the user has clicked) to be stored and used on the next page.
I have been experimenting, and so far I have not been able to get it to work, so I wonder whether cookies are supported at all.
I found my own answer. I was able to store cookies inside an EPUB3 eBook using the js-cookie library. Unfortunately the cookie is only visible to the current page and is not useful for sharing information between pages (which in EPUB3 are separate HTML5 documents).
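For reference, this is roughly what that looked like with js-cookie (the cookie name pageState is an example):

    // Requires the js-cookie library, e.g. <script src="js.cookie.min.js"></script>
    // On the first page: remember which elements the user clicked.
    Cookies.set("pageState", JSON.stringify({ clicked: ["intro-box"] }));

    // On (what I hoped would be) the next page: read it back.
    var state = JSON.parse(Cookies.get("pageState") || "{}");
    // In iBooks this only works within the same XHTML document.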

Deep linking JavaScript-powered websites

I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application, which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for 3 reasons:
non javascript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL it is easy to deep link to it, and each piece of content can have its own specific Open Graph data.
However, here is the issue I hit. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-JavaScript / web crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served had they gone directly to the root of the site and navigated there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work; however, I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router will show the correct content (does Google punish automatic JavaScript redirects?)
rendering the no-JavaScript page, and adding some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks a link
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non-JavaScript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire javascript app for each page doesn't appear to be a solution to me,
That is the robust approach
as you would end up with bad looking urls such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat (note the / at the front).
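In modern-browser terms, a sketch of what that looks like (the data-internal attribute and the loadContent function are assumptions standing in for your app's own router):

    // Intercept internal links and push an *absolute* path, so URLs never nest.
    document.addEventListener("click", function (event) {
      var link = event.target.closest("a[data-internal]");
      if (!link || !window.history.pushState) return; // old browsers: normal page load
      event.preventDefault();
      var path = link.getAttribute("href"); // must start with "/", e.g. "/gallery/another-lovely-cat"
      history.pushState(null, "", path);
      loadContent(path); // assumed function that swaps in the new content
    });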
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/

Doing links like Twitter, Hash-Bang #! URLs [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What’s the shebang/hashbang (#!) in Facebook and new Twitter URLs for?
I was wondering how Twitter works its links.
If you look in the source code, you see the links are done like /#!/i/connect or /#!/i/discover, but they don't have a JavaScript function attached to them like load('connect') or something, and they don't require a page reload. The page content just changes.
I saw this page, but then all of those files would have to exist, and you couldn't just go straight to one of them. I imagine that on Twitter those files don't exist, and that it is handled by some other method. Please correct me if I'm wrong, though.
Is there a way I could replicate this effect? If so, is there a tutorial on how to go about doing this?
"Hash-Bang" navigation, as it's sometimes called, ...
http://example.com/path/to/#!/some-ajax-state
...is a temporary solution for a temporary problem that is quickly becoming a non-issue thanks to modern browser standards. In all likelihood, Twitter will phase it out, as Facebook is already doing.
It is the combination of several concepts...
In the past, a link served two purposes: It loaded a new document and/or scrolled down to an embedded anchor as indicated with the hash (#).
http://example.com/script.php#fourth-paragraph
Anything in a URL after the hash was not requested from the server, but was searched for in the page by the browser. This all still works just fine.
With the adoption of AJAX, new content could be loaded into the current (already loaded) page. With this dynamic loading, two problems arose: 1) there was no unique URL for bookmarking or linking to this new content, and 2) search engines would never see it.
Some smart people solved the first problem by using the hash as a sort of "state" reference to be included in links & bookmarks. After the document loads, the browser reads the hash and runs the AJAX requests, displaying the page plus its dynamic AJAX changes.
http://example.com/script.php#some-ajax-state
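In modern-browser terms, that pattern looks roughly like this (loadFragment stands in for whatever function performs the AJAX request):

    // Read the hash on load and on every hash change, then fetch matching content.
    function applyHashState() {
      var state = location.hash.replace(/^#!?/, ""); // e.g. "some-ajax-state"
      if (state) loadFragment(state); // assumed app function that does the AJAX
    }
    window.addEventListener("hashchange", applyHashState);
    applyHashState(); // handle the state present at page load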
This solved the AJAX problem, but the search engine problem still existed. Search engines don't load pages and execute JavaScript like a browser.
Google to the rescue. Google proposed a scheme where any URL with a hash-bang (#!) in lieu of just a hash (#) would suggest to the search bot that there was an alternate URL for indexing, which involved an "_escaped_fragment_" variable, among other things. Read about it here: Ajax Crawling: Getting Started.
Today, with the adoption of JavaScript's pushState in most major browsers, all of this is becoming obsolete. With pushState, as content is dynamically loaded or changed, the current page URL can be altered without causing a page load. When desired, this provides a real working URL for bookmarks & history. Links can then be made as they always were, without hashes & hash-bangs.
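A sketch of the pushState equivalent (again, loadFragment is an assumed app function):

    // Update the URL without a reload, and restore content on Back/Forward.
    function navigate(path) {
      history.pushState({ path: path }, "", path); // a real URL, no hash-bang
      loadFragment(path);
    }
    window.addEventListener("popstate", function (event) {
      if (event.state) loadFragment(event.state.path);
    });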
As of today, if you load Facebook in an older browser, you'll see the hash-bangs, but a current browser will demonstrate the use of pushState.
You might want to check out more on Unique URLs.
It's loading the page via AJAX and parsing the "hash" (the values that come after the "#") to determine which page it's going to load. This method is also used because AJAX requests don't count toward the browser's history, so the "back button breaks"; the browser does, however, store hash changes in history.
Because hash changes are recorded and the hash can determine which page to load, you can keep AJAX-requested pages "in history". On top of that, hashed URLs are just URLs, and they are bookmarkable including the hash, so you can also bookmark AJAX-requested pages.

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the right-side Activity/Chat bar on Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks.
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
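A minimal sketch of that hook pattern (the ajax-link class, the ?fragment=1 convention, and the #main-panel ID are all invented for this example):

    // Progressive enhancement: links work normally without JS; with JS,
    // only a fragment of the page is fetched and swapped into a DIV.
    document.querySelectorAll("a.ajax-link").forEach(function (link) {
      link.addEventListener("click", function (event) {
        event.preventDefault(); // JS users stay on the current page
        fetch(link.href + "?fragment=1") // assumed: server returns just the partial HTML
          .then(function (response) { return response.text(); })
          .then(function (html) {
            document.getElementById("main-panel").innerHTML = html;
          });
      });
    });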
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example in a database). Then, when the user changes the page, you can restore the state of the conversation on the server side (with PHP) or by querying your server the way you do for the chat itself (JavaScript + AJAX).
This isn't done in JavaScript. It needs to be done using your back-end scripting language.
In PHP, for example, you use sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server side persistence such as session or database writes, a simple form POST, VIEWSTATE in .net, etc..)
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. For example, if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T automatically reopens that last page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.
