I am in the final stages of building my website, and I currently need to find a suitable statistics application/tool.
I have looked into webalizer, but it seems outdated.
Also, I have looked into Google Analytics, but I am afraid that if I implement it, my website will slow down. The site is already fairly heavy because it displays a lot of dynamic, database-driven content.
I have read I can put the GA js code at the bottom of the page and thus the page will load first, but I still don't want a slow down.
You are all much more experienced in statistics than I am, so I believe you can give me some good advice.
I have my own private server (Linux) and I have root access as well (of course).
Do you think I should run a statistics app on the server, without interfering with my website, or should I go the Google way and use Analytics?
Please recommend applications that you have actually tested.
Thanks
Any additional calls to scripts will slow down your site. However, Google Analytics instructs you to place it in a specific place so that it isn't loaded until the page has loaded. (It used to be before the </body> tag but I believe it's now supposed to be the last <script> in the <head> tag.) Don't worry about it too much; the benefits of analytics will far outweigh the extra call to a remote file.
Focus on other optimizations (database queries, CSS sprites, fewer HTTP requests). Analytics is indispensable in today's market; IMO forgoing it is not an option.
As far as having your own "statistics app," I assume you're talking about building your own proprietary statistics codebase? I would discourage that, because it takes a lot of time and effort and in the end you will not have the same optimizations that Google has employed an entire project's worth of software engineers to make. Remember that while it's always great to create your own product, you don't have to reinvent the wheel, especially when it comes to things like this that have many sensible drop-in solutions that are widely available for free.
With respect to non-Google analytics solutions, one other of note is Clicky. I'm not as experienced with it as I am with GA, but I've heard many reviews that it is more precise and more informative than GA. However, just as an end-user browsing the web I've noticed a lot of times that its calls to Clicky's website do tend to slow down pages, and noticeably so; I cannot really say that I have seen the same effect with GA.
One last thing I would caution against is this: Do not employ more than one analytics solution unless you are trying to find the best one to suit your needs. It's just overkill to run two remotely-hosted analytics solutions on every single one of your pages, so what I would encourage you to do is try out a few for the first few weeks or so of your site (yes, pages will slow down during this trial phase) and then simply stick with the one that you like best. That will also give you the added benefit of being able to see first-hand what the speed implications are on your unique hosting environment for each script.
Here are some other analytics solutions that you might check out:
Piwik
Webtrends
GoingUp!
Yahoo! Web Analytics
Straight from Google's Analytics sign-up page (https://www.google.com/analytics/provision/):
"The appearance of your website will never be affected by your use of Google Analytics - we don't place any images or text on your pages. Likewise, the performance of your pages won't be impacted, with the possible exception of the very first page-load after you have added the tracking code. This first pageview calls the JavaScript on Google's servers, which may take slightly longer than a regular page load. Subsequent pageviews will use cached data and will not be affected."
Use the asynchronous Analytics snippet:
http://code.google.com/apis/analytics/docs/tracking/asyncTracking.html
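For reference, the asynchronous snippet looks roughly like this (UA-XXXXX-X is a placeholder for your own property ID; check Google's current docs for the exact code they hand out):

```html
<script type="text/javascript">
  // Command queue: calls are buffered until ga.js has finished loading.
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-X']);  // your property ID goes here
  _gaq.push(['_trackPageview']);

  // Inject ga.js with the async attribute so it never blocks rendering.
  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
           + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</script>
```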
People focus too much on total load times when what is important is render time, and in particular progressive rendering. If you use Google Analytics properly, it will load after the page has been shown to the user. So yes, it will add a small overhead to every request, but because the user can already see the page they probably won't even notice. Just go for it.
Webalizer runs server-side against the Apache logs, doesn't it? That's why it appears outdated: it can't collect as much info as JS can. But it doesn't slow the user down at all. You could run Webalizer and Google together for a bit and see which serves your needs best.
We decided to work around the possibility of Google's servers appearing to slow our site down. Instead of our users downloading the ga.js file from Google's servers, we store it locally. The only problem with that approach is that our local copy becomes outdated, so we wrote an application that periodically compares our local file to Google's and updates it accordingly.
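A rough sketch of what that updater can look like, assuming a Node.js script run from cron and an illustrative local path of public/js/ga.js:

```javascript
// Sketch: keep a local copy of ga.js in sync with Google's hosted version.
// Run periodically (e.g. from cron); the path and URL are illustrative.
var https = require('https');
var fs = require('fs');

var LOCAL_PATH = 'public/js/ga.js';  // where the cached copy is served from
var REMOTE_URL = 'https://ssl.google-analytics.com/ga.js';

https.get(REMOTE_URL, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var local = fs.existsSync(LOCAL_PATH) ? fs.readFileSync(LOCAL_PATH, 'utf8') : '';
    if (local !== body) {
      fs.writeFileSync(LOCAL_PATH, body);  // only overwrite when Google's copy changed
      console.log('ga.js updated');
    }
  });
});
```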
Andrew
Google Analytics is JavaScript-based and does not run on your server. All processing and storage is done on Google's servers, so it's ideal if you are worried about local resources.
Nowadays, the people and companies behind browser development are taking privacy seriously. They implement new security measures or simply change long-standing default browser behaviours that are now considered harmful to privacy.
One example of this is third-party cookies. While IE requires a P3P policy to be sent when setting a cookie from a third-party domain, other browsers block these cookies by default, or encourage the user to enable such blocking.
Also, if we think about extensions that help to prevent tracking (AdBlock, Ghostery...), it is getting more and more difficult to track users (whether for legitimate reasons or not).
As a developer, I have found that there are some workarounds, such as ETags, although as you may already know, there are ways to prevent that type of tracking too. Local Storage, available in most modern browsers (the ones that support HTML5 and have JS enabled), is another way to accomplish this.
I would like to ask which method you find better and why. I feel like Local Storage could be the best replacement for third-party cookies, as it stores persistent data (it is not cleared after the browser is closed) and it works in the vast majority of browsers, though still a smaller percentage than cookies. A localStorage approach with a fallback to cookies seems best to me, but I would like to hear more opinions.
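To make the fallback idea concrete, here is a rough client-side sketch (the storage key, cookie lifetime, and ID format are all arbitrary choices of mine):

```javascript
// Sketch: persist a visitor ID in localStorage, falling back to a first-party cookie.
function getVisitorId() {
  var key = '_visitor_id';  // arbitrary storage key
  var id = null;

  // Try localStorage first (persists across sessions where supported).
  try {
    id = window.localStorage && localStorage.getItem(key);
  } catch (e) { /* storage disabled or blocked */ }

  // Fall back to a first-party cookie if localStorage is unavailable.
  if (!id) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + key + '=([^;]+)'));
    id = match && match[1];
  }

  // Nothing stored yet: mint a new random ID and write it to both places.
  if (!id) {
    id = 'v' + Date.now() + '.' + Math.floor(Math.random() * 1e9);
    try { localStorage.setItem(key, id); } catch (e) {}
    document.cookie = key + '=' + id + '; max-age=' + (60 * 60 * 24 * 365) + '; path=/';
  }
  return id;
}
```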
localStorage isn't getting the same heat as cookies simply because it's a "newer" technology. Give it time and I guarantee it will end up being blocked/removed the same way cookies are.
So far, first-party cookies are relatively safe, though ultimately scripts like GA still make requests to GA's servers, and as you said, there are many plugins/extensions/addons that block them.
But IMO the future will be in server-side tracking solutions. For example, when you go to a web page, that's a request to the server, and lots of basic info can be grabbed from it already. Then the JavaScript library would send (ajax) requests to the same server, not the 3rd-party tracking server, and all of this data would be forwarded to the 3rd-party tracking vendor (e.g. GA, Adobe Analytics, etc.) by a server-side script.
Many tracking scripts offer server-side solutions already, but it's often little more than an API with vague documentation, since it's not as popular to go this route. So I think there will be a lot of development to more easily handle payloads from the client and make server-side requests, making it almost as easy to implement as the current JS version.
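As an illustration of that pattern, here is a minimal sketch assuming an Express server and GA's Measurement Protocol as the 3rd-party endpoint; the route name, property ID, and parameter handling are simplified placeholders:

```javascript
// Sketch: first-party collection endpoint that forwards hits server-side
// to GA's Measurement Protocol. Assumes Express; error handling omitted.
var express = require('express');
var https = require('https');
var querystring = require('querystring');

var app = express();

app.get('/collect', function (req, res) {
  var payload = querystring.stringify({
    v: 1,                               // protocol version
    tid: 'UA-XXXXX-Y',                  // your GA property ID (placeholder)
    cid: req.query.cid || 'anonymous',  // visitor ID supplied by the page
    t: 'pageview',
    dp: req.query.page || '/',          // page path being reported
    uip: req.ip,                        // pass the visitor's IP along
    ua: req.headers['user-agent']
  });

  // Fire-and-forget: forward the hit, reply to the browser immediately.
  https.get('https://www.google-analytics.com/collect?' + payload, function () {});
  res.status(204).end();
});

app.listen(3000);
```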
The main sticking point is tying the info to a single visitor. That's the most important part of the tracking cookie: a visitor ID that ties all the activity together. Thing is, the alternatives (combinations of IP and header info) aren't that far behind the accuracy of cookies once you account for cookies being blocked, so it's not a complete loss to not rely on cookies in the first place. But I think this will also have the effect of more and more websites enforcing a login before a visitor does anything meaningful on the site. That lets them use your login ID as the visitor ID and would actually stand to increase accuracy.
But overall, it's more important to look at the trends in the numbers than the actual numbers, and from that point of view it's even less of a big deal. Unfortunately, a lot of people forget or don't understand this point.
When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than using one HTML page and doing everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-Powered site, some useful tools would seem to include:
jQuery plug-ins:
jQuery History:
http://balupton.com/projects/jquery-history
jQuery Address:
http://plugins.jquery.com/project/jquery-address
jQuery Pagination:
http://plugins.jquery.com/project/pagination
http://plugins.jquery.com/project/pagination
Frameworks:
Sproutcore
http://www.sproutcore.com/
Cappuccino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
they also provide the following (although these are solvable by many js frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus not fully index it. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure all search engines support this yet. On top of that, it's a little more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages, or you decide to use some sort of framework or CMS to generate them for you, is that the different sections of your website have URL's that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost for bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor tag. Users simply expect that the URL represents the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
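If you want to see the bare mechanics without a plug-in, a stripped-down version of that idea looks something like this (the data-view attribute and view names are just illustrative conventions):

```javascript
// Sketch: keep basic view state in location.hash so reload/back/bookmark work.
function showView(name) {
  // Hypothetical helper: show only the section whose data-view matches the name.
  var views = document.querySelectorAll('[data-view]');
  for (var i = 0; i < views.length; i++) {
    views[i].style.display = (views[i].getAttribute('data-view') === name) ? '' : 'none';
  }
}

function applyHash() {
  showView(location.hash.replace('#', '') || 'home');
}

window.addEventListener('hashchange', applyHash);  // back/forward buttons
applyHash();                                       // initial load and bookmarks
```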
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files make it easier to edit only what needs to be edited, without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
..it got messy
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to live in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my Frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique I think the 'one-page' approach is a very easy way to open up some very interesting possibilities (e.g. widgets that maintain state across application sections). But it also opens up many, many potential problem areas, including:
Flooding the global namespace (if you don't already have your own namespace, make one - see the sketch after this list)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
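Regarding the global-namespace point above, the usual remedy is to hang everything off a single application object; here is a tiny sketch (the MyApp name and members are invented for illustration):

```javascript
// Sketch: one global namespace object instead of dozens of loose globals.
var MyApp = MyApp || {};

MyApp.state = { mode: 'preview' };  // e.g. Edit vs. Preview mode

MyApp.widgets = {
  init: function () {
    // Wire up widgets here instead of in anonymous global functions.
  }
};

MyApp.util = {
  log: function (msg) {
    if (window.console) { console.log('[MyApp] ' + msg); }
  }
};
```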
In short, I would urge you to stay away from relying on JavaScript for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favour of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to:
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement, one can develop an application that delivers the best experience possible for the user.
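As a small example of that "wrap it with JavaScript" step, here is a sketch in which a plain search form keeps working with JS disabled and is only enhanced when JS is available (the form ID, input name, endpoint, and partial=1 parameter are placeholders):

```javascript
// Sketch: the form below posts normally without JS; this script only enhances it.
// <form id="search-form" action="/search" method="get"><input name="q"></form>
document.addEventListener('DOMContentLoaded', function () {
  var form = document.getElementById('search-form');
  if (!form) { return; }

  form.addEventListener('submit', function (e) {
    e.preventDefault();  // intercept only when JS is actually available
    var q = encodeURIComponent(form.elements['q'].value);

    var xhr = new XMLHttpRequest();
    xhr.open('GET', form.action + '?q=' + q + '&partial=1');
    xhr.onload = function () {
      // Assumes the server can return a partial fragment for the results area.
      document.getElementById('results').innerHTML = xhr.responseText;
    };
    xhr.send();
  });
});
```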
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text)
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very complex and problematic. The best approach is "hybrid programming" for SPI: a mix of server-centric code for big state management and client-centric code (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using ajax everywhere would break the browser's history/back button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in that case JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only and yet it adds nothing to the experience; they only show static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer, and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters, but I don't want anyone to get rewarded for pushing annoying flashy things in my face; if tactless advertisers go out of business, I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function and you shouldn't need to reinvent the wheel by redrawing the previous page when a back button is clicked. To make matters worse going back a page shouldn't require re-rendering. I guess I am a student of If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug-tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited; the JS-heavy section is private so doesn't need to worry about search engines, only repeat visitors will be using it so have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most of the content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should be polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing between them, and loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-oriented application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it, as it was disrupting the team's workflow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or on a mobile connection, it is inexcusable to ignore accessibility - and in some countries even illegal; see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers the "Progressive Enhancement", née "Unobtrusive Javascript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle and there are times when it is valid) I look forward to implementing the aforementioned AJAX crawling and hope it becomes more standardised in future.
We're using GWO (Google Website Optimizer) now. The multivariate and A/B testing is exactly what we need and works great from the perspective of showing the variations to the users. However, we have several issues that make me want to use a different tool:
Statistics are inaccurate compared to Google Analytics, so we now disregard them and have to manually check
Previews typically don't work
Cannot have dynamic content in variations (I know about variation_content, but I cannot get it to work and nobody in google's forums has been able to help.. I suspect google may have stopped supporting this)
Documentation is poor, there's a techie guide with well-known inaccuracies which haven't been fixed in well over a year.
The html/javascript code we modify our multivariate test sections with is ugly and makes our pages fail standards validation
Only 8 test sections per page; the problem is we want our marketers to be able to do everything they need from within GWO, but now they have to mark off which test sections they want/don't want in our custom tool
A different experiment key for every test, which again means marketers sometimes need to work with our code
Is there a good tool like GWO that works with Google Analytics (which I love)?
UPDATE: We went with Optimizely and have generally been happy. However, it can be difficult to work with because it does a little too much for you. You edit your webpage directly from their UI, but of course that isn't always easy or even possible. Particularly when Javascript is involved. Our UI often gets screwed up in the process. I liked GWO's approach to this in that the developer sections off the code and the marketer can then fill in those sections with variables the developer allows for. To me that's ideal, except that GWO, of course, doesn't actually work.
There's a very similar competitor to Optimizely called Visual Website Optimizer. Also looks very nice, but has the same issue I describe above.
Is there a GWO that works?
You should take a look at Optimizely.
Doesn't require creating invalid code.
Easy to create variations on the fly, though only A/B, not MVT.
Simple WYSIWYG test design, on the fly.
Real time data.
Retroactive goals
With regex/head matching for experiments, you can set the experiment to work on dynamic pages.
You can set a Google Analytics custom variable for the experiment that will pass the variation the end user sees as a custom variable. (It even allows you to set what slot you want it to use.)
The test variations are basically just jQuery manipulations of the DOM, so if you know a little jQuery, it's very easy to extend its capabilities even further than the very generous WYSIWYG GUI.
Installation is easy: You only need to include a single script tag, one time, on any experiment or goal page.
I have found Adobe Test&Target to have all the features you need. It is very easy to create experiments, add variations, and set conversion goals. You can easily inject jQuery snippets to create new variations, click Save, and the test is running in production.
I have no idea how much it costs, but I'm guessing it is not cheap.
Now that Google Website Optimizer has dropped multivariate testing in the new version (Google Analytics Content Experiments), we launched Convert Experiments on Convert.com for people looking for a GWO alternative with MVT.
Yes, I am the founder of Convert Insights, the company behind this tool...
Dennis
Re your update: I have tried both GWO and Optimizely, and I'd go with Optimizely every time.
You say you wish it worked a little more like GWO - if you want, instead of manipulating the elements of the design using their GUI, you can just redirect each variant to a different URL:
https://help.optimizely.com/hc/en-us/articles/200040675
There are a few other tools which do A/B and MVT. They aren't free, but check them out for yourself: Omniture, Webtrends Optimize, SiteSpect.
Hope this helps!
You can also try VWO. It does MVT as well as A/B testing and is also a good tool. Optimizely is easier to use though so you might want to evaluate both for your scenario.
I am seeking to (legitimately) instrument my web pages to collect and report information about website performance.
My preference is for something internally hosted. While I expect there are commercial offerings out there (e.g. Google Analytics), I'm keen to find something we can run entirely in-house (it's not a public website and may contain sensitive data).
Also, I'm looking for something that can report back to an independent URL - i.e. not relying on adding a reverse proxy or recording results within existing webserver logs. Indeed, I'd prefer something which does not require access to the webserver logs at all (other than those for the URL the bug reports back to).
I need to be able to monitor bulk traffic, so tools like PageSpeed and TamperData are not appropriate.
I've tried googling but just seem to be getting lots of noise about the performance of javascript and web pages rather than how to actually measure these.
TIA
You could use the open source analytics software Piwik and write a plugin for it that sends the performance data to it.
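Whatever backend you choose, the in-page "bug" itself can be quite small. Here is a rough sketch using the Navigation Timing API and an image beacon, where /perf stands in for whichever in-house URL collects the data:

```javascript
// Sketch: report basic page-timing data to an in-house URL via an image beacon.
window.addEventListener('load', function () {
  // Defer slightly so loadEventEnd has been filled in.
  setTimeout(function () {
    if (!window.performance || !performance.timing) { return; }
    var t = performance.timing;

    var data = {
      url: location.pathname,
      dns: t.domainLookupEnd - t.domainLookupStart,
      connect: t.connectEnd - t.connectStart,
      ttfb: t.responseStart - t.requestStart,    // time to first byte
      load: t.loadEventEnd - t.navigationStart   // full page load
    };

    var query = Object.keys(data).map(function (k) {
      return k + '=' + encodeURIComponent(data[k]);
    }).join('&');

    new Image().src = '/perf?' + query;          // beacon back to your own server
  }, 0);
});
```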
Thanks chiborg. I'd kind of forgotten about this, it was so long ago that I asked. Yes, I was aware of Piwik, but I haven't been very impressed with either its implementation or the quality of its documentation.
I'm currently working on a solution using Boomerang.
Why aren't there any Javascript distributed computing frameworks / projects? The idea seems absolutely awesome to me because:
The Client is the Browser
Iteration can be done with AJAX
Webmasters could help projects by linking the respective Javascript
Millions or even billions of users would help DC projects without even noticing
Please share your views on this subject.
EDIT: Also, what kind of problems do you think would be suitable for JSDC?
GIMPS for instance would be impossible to implement.
I think that Web Workers will soon be used to create distributed computing frameworks; there are some early attempts at this concept. Non-blocking code execution could have been done before using setTimeout, but it made little sense, as most browser vendors only recently focused on optimizing their JS engines. Now we have faster code execution and new features, so running some tasks unnoticed in the background as we browse the web is probably just a matter of months ;)
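A minimal sketch of the building block involved; the worker file name, message format, and work unit are purely illustrative:

```javascript
// main.js - sketch: hand a chunk of work to a background thread.
var worker = new Worker('crunch.js');            // hypothetical worker script

worker.onmessage = function (e) {
  // e.data holds the result computed off the UI thread; a real framework
  // would post it back to the coordinating server here (e.g. via XHR).
  console.log('work unit finished:', e.data);
};

worker.postMessage({ start: 0, end: 1000000 });  // describe the work unit

// crunch.js - runs inside the worker, with no access to the DOM:
// onmessage = function (e) {
//   var sum = 0;
//   for (var i = e.data.start; i < e.data.end; i++) { sum += i; }
//   postMessage(sum);
// };
```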
There is something to be said for 'user rights' here. It sounds like you're describing a situation where the webmaster for Foo.com includes the script for, say, Folding@home on their site. As a result, all visitors to Foo.com have some fraction of their CPU "donated" to Folding@home until they navigate away from Foo.com. Without some sort of disclaimer or opt-in, I would consider that a form of malware and avoid visiting any site that did that.
That's not to say you couldn't build a system that asked for confirmation or permission, but there is definite potential for abuse.
I have pondered this myself in the context of item recommendation.
First, there is no problem with speed! JIT compiled javascript can be as fast as unoptimized C, especially for numeric code.
The bigger problem is that running javascript in the background will slow down the browser and therefore users may not like your website because it runs slowly.
There is obviously an issue of security: how can you verify the results?
And privacy, can you ensure sensitive data isn't compromised?
On top of this, it's quite a difficult thing to do. Can the number of visits you receive justify the effort that you'll have to put into it? It would be better if you could run the code transparently on either the server or client-side. Compiling other languages to javascript can help here.
In summary, the reason that it's not widespread is because developers' time is more valuable than server time. The risk of losing user data and the inconvenience to users outweighs the potential gains.
The first thing that comes to my mind is security.
Almost all distributed protocols that I know of use encryption; that's how they mitigate security risks. That said, this idea is not so new:
http://www.igvita.com/2009/03/03/collaborative-map-reduce-in-the-browser/
Wuala is also a distributed system, implemented using a Java applet.
I know of pluraprocessing.com doing a similar thing. I'm not sure it's exactly JavaScript, but they run Java through the browser, entirely in memory, with strict security.
They have a grid of 50,000 computers on which they have successfully run applications, even web crawling (80legs).
I think we can verify results for some kinds of problems.
Let's say we have n items and need to sort them. We give the job to worker-1, and worker-1 gives us the result. We can verify it in O(n) time, while producing the result takes at least O(n*log(n)) time. Additionally, we should consider how large the n items are (a concern about network speed).
Another example: f(x) = 12345 and the function f is given; the purpose is to find the value of x. We can test a worker's answer by substituting it for x. I think problems whose results are not verifiable are difficult to hand off to someone else.
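A toy version of that sorting check, where verifying a worker's answer is O(n) even though producing it is O(n log n):

```javascript
// Sketch: verifying a worker's sorted output in O(n).
function isValidSort(original, sorted) {
  if (original.length !== sorted.length) { return false; }

  // Every adjacent pair must be in order.
  for (var i = 1; i < sorted.length; i++) {
    if (sorted[i - 1] > sorted[i]) { return false; }
  }

  // Cheap sanity check that elements were not altered (sum only; a full
  // multiset comparison would need a hash map, still O(n)).
  var sumA = 0, sumB = 0;
  for (var j = 0; j < original.length; j++) {
    sumA += original[j];
    sumB += sorted[j];
  }
  return sumA === sumB;
}
```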
The whole idea of JavaScript distributed computing has a number of disadvantages:
single point of failure - there is no direct way to communicate between nodes
natural failure of nodes - a node only works for as long as the browser stays on the page
no guarantee that a sent message will ever be received - due to the natural failure of nodes
no guarantee that a received message was ever actually sent - because an attacker can interpose
annoying load on client side
ethical problems
while there is only one (but very tempting) advantage:
easy and free access to millions of nodes - almost every device has a JS-capable browser nowadays
However, the biggest problem is the trade-off between scalability and annoyance. Let's say you offer some attractive web service and run computations on the client side. The more people you use for computing, the more people are annoyed; the more people are annoyed, the fewer people use your service. You can limit the annoyance (computing), limit scalability, or try something in between.
Consider Google, for example. If Google ran computations on the client side, some people would start to use Bing. How many? That depends on the annoyance level.
The only hope for JavaScript distributed computing may be multimedia services. Since they already consume lots of CPU, nobody will notice any additional load.
I think the number one problem is JavaScript's inefficiency at computation. It just wouldn't be worth it, because an application in pure C/C++ would be 100 times faster.
I found a question similar to this a while back, so I built a tool that does this. It uses Web Workers and fetches scripts dynamically (but no eval!). Web Workers sandbox the scripts so they cannot access the window or the DOM. You can see the code here, and the main website here.
The library has a consent popup on first load, so the user knows what's going on in the background.