I wonder how one would monitor a single-page web app so you can see what users do in your app, which "pages" they visited, etc.
Kinda like Google Analytics with statistics for a lot of things.
Google Analytics is great for this. Check out custom events: http://code.google.com/apis/analytics/docs/tracking/eventTrackerGuide.html
It takes a fair amount more work than the "set it and forget it" tracking you can do with traditional websites, but it's still pretty easy.
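For example, a custom event call with the classic async snippet looks like this (the category, action, and label are placeholder values):

    // One event hit: category 'Videos', action 'Play', optional label.
    _gaq.push(['_trackEvent', 'Videos', 'Play', 'Intro video']);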
It will not 'just work'; you need to call trackPageView (not trackEvent) to make the requests count as pageviews and include all the visitor information that would normally be included. This gist is an old but popular solution to AJAX navigation that shows how to call trackPageView. It is called right after the content from the new URL is injected into the page, so the equivalent in a framework like Backbone.js would be on view initialization, as suggested in this guide.
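A minimal sketch of the call itself, using the classic async queue (the function name and route string are just illustrative):

    // Report a virtual pageview for the route the app just rendered,
    // so the hit carries the same visitor info as a real page load.
    function trackRouteChange(path) {
      _gaq.push(['_trackPageview', path]);
    }

    // Call it right after the new content is injected, e.g.:
    trackRouteChange('/some/route');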
I am creating an Angular app hosted on a web server that doesn't allow me to edit .htaccess or web.config files. There is no server-side language available, which means no middleware for creating HTML snapshots. This is a high-dollar CRM with a web store, and switching hosts is not an option.
So I have come up with my own "solution" to the issue. Would it be considered OK to create hyperlinks that point to URLs that render the same view, with an onClick handler that intercepts the click and updates the view in place? This way the user sees the new content immediately, but bots have to reload the page at the new URL to see the page content.
Example:

    <a href="/view2" onclick="loadView2(); return false;">View 2</a>
I'm struggling to find a good solution to this issue, and I know others must be in the same situation. The code above is just a visual reference for what I am describing.
Have you looked at grunt-html-snapshot?
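A sketch of what the Gruntfile task could look like; the option names follow the plugin's README at the time of writing, and the paths, port, and routes are all placeholders:

    // Gruntfile.js (sketch): render each #! route with PhantomJS and
    // write a static HTML snapshot that crawlers can index.
    module.exports = function (grunt) {
      grunt.initConfig({
        htmlSnapshot: {
          all: {
            options: {
              snapshotPath: 'snapshots/',         // where snapshots are written
              sitePath: 'http://localhost:8080/', // the running app
              msWaitForPages: 1000,               // give Angular time to render
              urls: ['#!/view1', '#!/view2']      // routes to snapshot
            }
          }
        }
      });
      grunt.loadNpmTasks('grunt-html-snapshot');
    };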
After implementing and testing this, it works well. Google sees them as new pages, and the user never has to worry about loading new content.
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non-JavaScript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep-link to it, and each piece of content can have its own specific Open Graph data.
However, here is the issue I hit. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-JavaScript / web-crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone directly to the root of the site and navigated there.
Does anyone have a nice solution or alternative setup to solve this problem? I have several hacks which work, but I am not happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router will show the correct content (does Google punish automatic JavaScript redirects?); see the sketch after this list
render the no-JavaScript page, and add some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks on a link
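A minimal sketch of the first hack, assuming a #!-based router at the root (the cookie name is made up):

    // Served on /gallery/lovely-cat for JS-capable visitors: remember
    // the deep link, then bounce to the root so the full app loads.
    document.cookie = 'deeplink=' + encodeURIComponent(location.pathname);
    location.replace('/#!' + location.pathname);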
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non javascript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire JavaScript app for each page doesn't appear to be a solution to me,
That is the robust approach.
as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat. (Note the / at the front.)
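A minimal sketch of the difference, using the question's example paths:

    // Relative path: the browser resolves it against the current URL,
    // so from a page under /gallery/lovely-cat/ this yields
    // /gallery/lovely-cat/gallery/another-lovely-cat.
    history.pushState(null, '', 'gallery/another-lovely-cat');

    // Absolute path: replaces the whole path regardless of where the
    // user currently is.
    history.pushState(null, '', '/gallery/another-lovely-cat');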
Try out this plugin; it might address your third reason, along with the other two:
http://www.asual.com/jquery/address/
This question may not be tied to a specific software stack, framework, or language.
For my current project, we are using AngularJS to build the front-end, with a single constant entry page that loads the real data and renders it client-side. This is easy to serve from a CDN and good for fast loading in the browser. But for some social features, such an architecture can cause problems. For example, when you paste a link you want to share into Facebook, Facebook grabs the page and shows a preview; if the landing page is empty, that preview won't work.
(I have heard that Google+ recently started rendering JavaScript server-side before sending back a preview, but obviously that is not commonly supported by similar services. Google.com also supports indexing JS-based single-page applications.)
Is there a solution that handles this problem gracefully, other than falling back to dynamic pages that include the real data? Have I missed something in my understanding of the problem?
========
... I was even thinking of this: for requests identified as coming from Facebook (e.g. by user agent), redirect them to a special gateway wrapping something like PhantomJS, fetch the page, render it server-side, and send back a snapshot of the DOM tree as the content for Facebook to generate its preview from. But I also doubt that it's a good direction. :(
We are in the same situation. The simple solution is to use Open Graph meta tags in the pages your server serves to Facebook's scrapers.
Basically, you need to do server-side what your web app does client-side. The amount of work depends heavily on your hosting technology (MVC makes it super easy), your URI format, and the APIs you use.
You will find some explanations here:
https://developers.facebook.com/docs/plugins/share-button/
Open graph introduction:
http://ogp.me/
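For illustration, a minimal sketch of such tags; every URL and value below is a placeholder:

    <!-- Served in the <head> of the page Facebook's scraper fetches. -->
    <meta property="og:title" content="Lovely Cat" />
    <meta property="og:type" content="article" />
    <meta property="og:url" content="http://example.com/gallery/lovely-cat" />
    <meta property="og:image" content="http://example.com/img/lovely-cat.jpg" />
    <meta property="og:description" content="A picture of a lovely cat." />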
I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the right-side Activity/Chat bar on Facebook. It doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
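A minimal sketch of that hook pattern in plain JavaScript; the element IDs and the /fragments/... URL are made up:

    // Intercept the click, fetch an HTML fragment, and inject it into
    // an existing DIV instead of doing a full page load.
    document.getElementById('profile-link').addEventListener('click', function (e) {
      e.preventDefault(); // JS is on, so skip normal navigation
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/fragments/profile.html'); // returns just the fragment
      xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
      };
      xhr.send();
    });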
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook's, you must save the entire conversation server-side (for example, in a database). Then, when the user changes pages, you can restore the state of the conversation either server-side (with PHP) or by querying your server the way you do for the chat itself (JavaScript + AJAX).
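A minimal sketch of the second approach; the /chat/history endpoint and element ID are hypothetical:

    // On page load, ask the server for the saved conversation and
    // re-render it where the chat bar lives.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/chat/history');
    xhr.onload = function () {
      document.getElementById('chat-bar').innerHTML = xhr.responseText;
    };
    xhr.send();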
This isn't done in JavaScript. It needs to be done by your back-end scripting language.
In PHP, for example, you use sessions. Variables set by server-side scripts can be maintained on the server and tied together (across multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as sessions or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. For example: if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T automatically reopens my last page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.
I'm developing a Firefox extension and would like to track its use with Google Analytics, but I can't get it working.
I've tried manually calling a function from ga.js, but that didn't work for some reason. No error was produced, but neither was any data collected.
My last attempt was to have a website that just holds the tracking JavaScript, and to load it within the extension in an iframe with the URL configured to contain meaningful data. This way the analytics data gets collected when I visit said web page in a browser, but not in the extension. I've put some visible JavaScript on the site and confirmed that the site's JavaScript executes. This method also works with other trackers, but I don't like their output and would prefer Google Analytics.
Any ideas what else I could try to accomplish this?
The solution is to use Remy Sharp's mini library for tracking bookmarklets and extensions with Google Analytics. Works like a charm.
Usage is as simple as:
gaTrack('UA-123456', 'yoursite.com', '/js/script.js');
Note that, since it doesn't use cookies, there's no differentiation between pageviews and visits, or for that matter, between visits and visitors. But, the rest of the functionality is fairly reliable.
Depending on what you want to track, you may not need Google Analytics. Mozilla's addons.mozilla.org portal already provides comprehensive tracking and usage statistics for add-ons.
To check whether Mozilla provides what you need, go to the Statistics Dashboard and look at the statistics for one of the publicly available add-ons.
Here is a small library that proxies the requests through an iframe hosted on another server: https://github.com/yelloroadie/google_analytics_proxy
This gets around the bug in the Add-on SDK that causes ga.js to die (https://bugzilla.mozilla.org/show_bug.cgi?id=785914).
This method allows full use of Google Analytics, unlike the limited use offered by Remy Sharp's library.
I don't think this is possible. Firefox extensions don't allow you to load pages from other servers, so the only way I can think of is to have an invisible iframe load the code. The pings to Google's servers need to come from a domain belonging to you, so your own servers would have to serve up pages every time a user loads the extension, which both hammers your server and defeats the purpose of having Google do all the work! Please post if you have found a way around it. Chrome extensions can be tracked easily!
For using analytics in the main/background script you might want to use this solution:
https://stackoverflow.com/a/17430194/193017
Citing part of the answer:
I would suggest you take a look at the new Measurement Protocol in Universal Analytics:
https://developers.google.com/analytics/devguides/collection/protocol/v1/
This allows you to use XHR POST to simply send GA events directly.
This will coexist much better with Firefox extensions.
The code would look something like this:
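A minimal sketch of such a request; the tracking ID, client ID, and event names below are placeholders:

    // Build a Measurement Protocol payload for a single event hit.
    var params = [
      'v=1',                          // protocol version
      'tid=UA-XXXXXXXX-1',            // your tracking ID (placeholder)
      'cid=35009a79-1a05-49d7-b876-2b884d0f825b', // anonymous client ID
      't=event',                      // hit type
      'ec=extension',                 // event category
      'ea=startup'                    // event action
    ].join('&');

    // POST it directly to the collection endpoint; no ga.js needed.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://www.google-analytics.com/collect', true);
    xhr.send(params);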