We have a site with a feed similar to Pinterest and are planning to refactor the jQuery soup into something more structured. The two most likely candidates are AngularJS and Backbone+Marionette. The site is user-generated and mostly consumption-oriented (the typical 90/9/1 rule), with the ability for users to like, bookmark, and comment on posts. From the feed we open a lightbox to see more detail about the post, with comments and related posts, similar to Pinterest.
We have used Backbone sporadically and are familiar with the idea, but are put off by the boilerplate. I assume Marionette would help a lot with that, but we're open to changing direction more radically (e.g. Angular) if it will help in the long term.
The requirements:
The initial page must be static for SEO reasons. It's important that the framework be able to start with existing content, preferably with little fight.
We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, number of likes/bookmarks/comments), but there is additional data that would be loaded for the detail view: comments, similar posts, who likes this, etc.
Changes to a post that happen in the feed or in the detail lightbox should be reflected in the other with little work (e.g. if I like it from the feed, I should see that like and the new like count if I go to the lightbox, and vice versa).
We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and main site.
These requirements relate to my concerns about Angular:
1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?
2) Is it problematic to have multiple data sources for different parts of the page? E.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different AJAX call.
3) While the two-way binding is cool, I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements for which we need two-way binding is relatively small. Posts like:
https://stackoverflow.com/a/7654856/214545
Angular JS ng-repeat consumes more browser memory
concern me for our use case. We can easily have hundreds of posts, each with 1-2 dozen details. Can the two-way binding be "disabled" for fields/elements that I know won't change?
Is it normal/possible to unload elements outside of the viewport to save memory? This is also connected to the mobile direction, because memory is even more of a concern there.
Would AngularJS work/perform well in our use case? Are there any tricks/tips that would help here?
There are different methods of "infinite scroll", or feed as you put it. The needs of your users and the size of an acceptable response payload will determine which one you choose.
It seems that here you sacrifice usability where you gain performance.
1. Append assets
This method is the traditional append-to-bottom approach: when the user reaches the bottom of the current scroll height, another API call is made to "stack on more" content. It has its benefits, being the most effective solution for handling cross-device caveats.
The disadvantages of this solution, as you have mentioned, come from large payloads flooding memory as the user carelessly scrolls through content. There is no throttle.
<div infinite-scroll='getMore()' infinite-scroll-distance='0'>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>
var page = 1;
$scope.getMore = function () {
  // Append the next page of results to the bound list.
  $scope.items = $scope.items.concat(API.returnData(page));
  page++;
};
2. Append assets with a throttle
Here, the user can continue to load more results into a feed that appends indefinitely, but the requests are throttled, or the user must "manually" invoke the call for more data. How cumbersome this becomes depends on the size of the content being returned that the user will scroll through.
If there is a lot of content returned per payload, the user will have to click the "get more" button less often. This is, of course, at the tradeoff of returning a larger payload.
<div>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>
<div ng-click='getMore()'>
  Get More!
</div>
var page = 1;
$scope.getMore = function () {
  // Same as above, but invoked by the button instead of the scroll.
  $scope.items = $scope.items.concat(API.returnData(page));
  page++;
};
3. Virtual Scroll
This is the last and most interesting way to infinite scroll. The idea is that you only store the rendered version of a range of results in browser memory. That is, complicated DOM manipulation only acts on the current range specified in your configuration. This, however, has its own pitfalls.
The biggest is cross-device compatibility.
If your handheld device has a virtual scrolling window that spans the width of the device, it had better be less than the total height of the page, because otherwise you will never be able to scroll past this "feed" with its own scroll bar. You will be "stuck" mid page, because your scroll will always be acting on the virtual scroll feed rather than the actual page containing the feed.
Next is reliability. If a user drags the scroll bar manually from a low index to one that is extremely high, you are forcing the browser to run these directives very, very quickly, which, in testing, has caused my browser to crash. This could be fixed by hiding the scroll bar, but of course a user could invoke the same scenario by scrolling very, very quickly.
Here is the demo
The source
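To make the windowing idea concrete, here is a minimal sketch of it in an Angular controller. The window size and the scroll wiring are assumptions; real virtual-scroll libraries like the demo above also manage scroll position and placeholder heights.

// Keep the full result set in a plain array, but only hand the view
// a small window of it, so the DOM stays bounded in size.
var start = 0;
var windowSize = 30; // arbitrary; tune per device

function updateWindow() {
  $scope.visible = $scope.items.slice(start, start + windowSize);
}

// Move the window as the user scrolls; wiring the scroll event to
// this function is left to a directive or a library.
$scope.shiftWindow = function (delta) {
  var max = Math.max(0, $scope.items.length - windowSize);
  start = Math.min(Math.max(0, start + delta), max);
  updateWindow();
};

updateWindow();

The feed template then repeats over the window, e.g. ng-repeat="item in visible", instead of iterating the full list.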
"Initial page must static for SEO reasons. It's important that the framework be able to start with existing content, preferable with little fight."
So what you are saying is that you want the page to be prerendered server side before it serves content? This approach worked well in the early 2000s, but almost everyone is moving away from it towards the single-page-app style. There are good reasons:
The initial seed you send to the user acts as a bootstrap to fetch API data, so your servers do WAY less work.
Lazy loading assets and asynchronous web service calls make the perceived load time much faster than the traditional "render everything on the server first, then spit it back out to the user" approach.
Your SEO can be preserved by using a page pre-render/caching engine that sits in front of your web server and responds to web crawlers with your "fully rendered version". This concept is explained well here.
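As a rough sketch of that idea, assuming a Node/Express front server; the bot list is illustrative, and renderFullPage is a hypothetical helper that returns a cached snapshot:

var express = require('express');
var fs = require('fs');
var app = express();

// Very rough crawler check based on the user-agent header.
var BOT_RE = /googlebot|bingbot|yandex|baiduspider/i;

app.get('*', function (req, res) {
  if (BOT_RE.test(req.headers['user-agent'] || '')) {
    // Crawlers get a fully rendered, cached HTML snapshot.
    res.send(renderFullPage(req.path)); // hypothetical helper
  } else {
    // Real users get the static bootstrap that fetches API data.
    res.send(fs.readFileSync(__dirname + '/index.html', 'utf8'));
  }
});

app.listen(3000);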
We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, number of likes/bookmarks/comments), but there is additional data that would be loaded for the detail view: comments, similar posts, who likes this, etc.
If the initial payload for your feed does not contain the children data points for each "feed id", and you need an additional API request to load them in your lightbox, you are doing it right. That's a totally legit use case. You would be arguing over 50-100ms for a single API call, which is imperceptible latency to your end user. If you absolutely need to send the additional payload with your feed, you aren't winning much.
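A minimal sketch of that second request in an Angular controller; the /api/posts/:id/detail endpoint is an assumption:

// Open the lightbox immediately with the data the feed already has,
// then fetch the heavier detail with a second call.
$scope.openLightbox = function (post) {
  $scope.activePost = post;
  $http.get('/api/posts/' + post.id + '/detail') // hypothetical endpoint
    .success(function (detail) {
      // Merge comments, similar posts, likers, etc. onto the same
      // object the feed renders, so both views share one model.
      angular.extend(post, detail);
    });
};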
Changes to a post that happen in the feed or in the detail lightbox should be reflected in the other with little work (e.g. if I like it from the feed, I should see that like and the new like count if I go to the lightbox, and vice versa).
You are mixing technologies here: the Like button is an API call to Facebook. Whether those changes propagate to other instantiations of the Facebook Like button on the same page is up to how Facebook handles it; I'm sure a quick Google search would help you out.
For data specific to YOUR website, however, there are a couple of different use cases:
Say I change the title in my lightbox and also want the change to propagate to the feed it's currently being displayed in. If your "save edit" action POSTs to the server, the success callback could trigger pushing the new value out over a websocket. This change would propagate not just to your screen, but to everyone else's screen.
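A sketch of that flow; the endpoints and message shape are assumptions, using the browser's native WebSocket API:

var socket = new WebSocket('ws://example.com/updates'); // assumed endpoint

// POST the edit; the server then rebroadcasts it to all clients.
$scope.saveTitle = function (post) {
  $http.post('/api/posts/' + post.id, { title: post.title }); // assumed endpoint
};

// Every connected client, including this one, patches its own model.
socket.onmessage = function (msg) {
  var update = JSON.parse(msg.data);
  $scope.$apply(function () { // onmessage fires outside Angular's digest
    angular.forEach($scope.items, function (p) {
      if (p.id === update.id) { p.title = update.title; }
    });
  });
};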
You could also be talking about two-way data binding (which AngularJS is great at). With two-way data binding, your "model", the data you get back from your web service, can be bound to multiple places in your view. That way, as you edit one part of the page that shares the model, the other part updates in real time alongside it. This happens before any HTTP request, so it is a completely different use case.
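For example, if the feed item and the lightbox are bound to the same object, editing either one updates the other automatically. A minimal template sketch (class names are made up):

<!-- Both views read and write the same model object. -->
<div class="feed-item">
  {{activePost.title}} ({{activePost.likeCount}} likes)
</div>

<div class="lightbox">
  <input ng-model="activePost.title">
  <button ng-click="activePost.likeCount = activePost.likeCount + 1">Like</button>
</div>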
We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and main site.
You should really take a look at modern responsive CSS frameworks like Bootstrap and Foundation. The point of using responsive web design is that you only have to build the site once to accommodate all the different screen sizes.
If you are talking about feature modularity, AngularJS takes the cake. The idea is that you can export your website components into modules that can be reused in another project. This can include views as well. And if you built the views with a responsive framework, guess what: you can use them anywhere now.
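A sketch of carving a shared piece into a module; all names and the endpoint are made up:

// A reusable module holding the feed logic shared by both sites.
angular.module('feedComponents', [])
  .factory('FeedService', ['$http', function ($http) {
    return {
      fetchPage: function (page) {
        return $http.get('/api/feed?page=' + page); // hypothetical endpoint
      }
    };
  }]);

// Desktop and mobile each declare their own app module and pull the
// shared one in; views built responsively travel with it.
angular.module('desktopApp', ['feedComponents']);
angular.module('mobileApp', ['feedComponents']);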
1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?
As discussed above, it's really best to move away from these kinds of approaches. If you absolutely need it, templating engines don't care whether your payload was rendered server side or client side. Links to partial pages will be just as accessible.
2) Is it problematic to have multiple data sources for different parts of the page? E.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different AJAX call.
Again, this is exactly what the industry is moving into. You will save on "perceived" and "actual" load time by using an initial static bootstrap that fetches all of your external API data. This will also make your development cycle much faster, because you are separating the concerns of completely independent pieces. Your API shouldn't care about your view, and your view shouldn't care about your API. The idea is that both your API and your front-end code can become modular/reusable when you break them into smaller pieces.
3) While the two-way binding is cool, I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements for which we need two-way binding is relatively small.
I'm also going to combine this question with the comment you left below:
Thanks for the answer! Can you clarify - it seems that 1) and 2) just deal with how you would implement infinite scrolling, not the performance issues that might come from such an implementation. It seems that 3 addresses the problem in a way similar to recent versions of Sencha Touch, which could be a good solution
The performance issues you will run into are totally subjective. I tried to bring performance considerations like throttling into the discussion, because throttling can drastically reduce the amount of stress your server takes and the work your user's browser has to do to append each new result set into the DOM.
Infinite scroll, after a while, will eat up your users' browser memory. That much I can tell you is inevitable, but only through testing will you be able to tell how much. In my experience, a user's browser can handle a great deal of abuse, but again, how big your payload is for each result set and what directives you are running on all of your results are totally subjective. There are solutions that render only a ranged data set, as in option three described above, but they have their limitations as well.
API data coming back shouldn't be any more than 1-2 KB in size, and should only take about 50-200ms to return from a query. If you aren't meeting those speeds, maybe it's time to re-evaluate your queries or cut down the size of the result set coming back by using child IDs to query other endpoints for specifics.
The main thing that remains unanswered in Dan's answer is the initial page load. We're still not satisfied with the approach of having to do it client side; it still seems to us that there is a risk to SEO and initial page load. We have a fair amount of SEO traffic and are looking for more; these people come to our site with no cache, and we have a few seconds to catch them.
There are a few options for handling Angular on the server side; I'll try to collect some of them here:
https://github.com/ithkuil/angular-on-server
https://github.com/ithkuil/angular-on-server/wiki/Running-AngularJS-on-the-server-with-Node.js-and-jsdom
https://github.com/angular/angular.js/issues/2104
Will add more as they come up.
Related
I was wondering if there are any methods to check whether a website is successfully displayed or rendered on a user's system.
The application of this is to deliver content if and only if it is a real user, rather than a crawler/spider, fetching the content.
So the check would be:
- check if the content is rendered/displayed
- if so, execute the next script
- otherwise, do something else
Any help is highly appreciated.
Cheers
Most crawlers simply do not execute any JavaScript, but you cannot really rely on that since it's easy to imagine a sophisticated company creating a search engine that actually does mimic a JS-enabled browser. Many crawlers have an easily identifiable user-agent string, but you cannot rely on that either.
You could do something, I suppose, like attempt to poll the mouse x,y position a couple of times, looking for values other than 0,0, which would likely indicate a person with an actual computer and pointing device at the other end. That still may not get what you want for touch-screen devices, though. You might also consider waiting until you detect a scroll event if your secondary scripts don't need to load right away.
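A sketch of that idea; the thresholds are arbitrary, and loadSecondaryContent is a hypothetical hook for the human-only content:

// Treat the visitor as a bot until we see evidence of a human:
// a real mouse position, a scroll, or a touch.
var looksHuman = false;

function markHuman() {
  if (looksHuman) { return; }
  looksHuman = true;
  loadSecondaryContent(); // hypothetical helper
}

document.addEventListener('mousemove', function (e) {
  // 0,0 is the value you would expect when no pointer has moved.
  if (e.clientX !== 0 || e.clientY !== 0) { markHuman(); }
});
document.addEventListener('scroll', markHuman);
document.addEventListener('touchstart', markHuman); // touch devices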
I have a client who wants to build a website with a specific height for the content part.
The Question:
Is there any way that, when the text is long / reaches the maximum height of the content part, a new page is created for the remaining text?
As far as I know, this can't be done.
Thanks for helping, guys!
You will probably want to look into something like jQuery paging with tabs
http://code.google.com/p/jquery-ui-tabs-paging/
Unfortunately, you would need to figure out the maximum number of characters you want to allow in the content pane, and anything after that would need to be put into another tab. You can hide the tab and use just a link instead.
Without more knowledge of what you're developing, this is a difficult question to answer. Are you looking to create a different page entirely, or just different sections on a page?
The former can be done using server-side code (e.g. Rails) and dynamically serving out pages (e.g. Google results are split across many pages).
The latter can be done with JavaScript and/or CSS. A simple example is:
<div id="the_content" style="overflow:hidden;width:200px;height:100px">
Some really long text...
</div>
This creates a scroll bar and doesn't disrupt the flow of the page. With JavaScript (e.g. jQuery), you'd also be able to split the content into "tabs".
Does this help?
(Almost) everything is possible, but your intuition is right in that this can't be done easily or in a way that makes much sense.
If I were in your position, I would go to the client and present the advantages and disadvantages of breaking it up. Advantages include the fact that you'd avoid long pages, and that with some solutions to this problem the page will load faster. Disadvantages include the increased effort (i.e. billable hours) it would take to accomplish this, the lack of precedent for it resulting in users being confused, and losses to SEO (you're splitting keywords amongst n pages).
This way, you're not shooting down the client's idea, and in the likely case that the client retreats from his position, he will go away thinking he's just made a smart choice by himself, and everyone goes away happy.
If you're intent on splitting the content into pages, you can do it on the backend by either literally structuring your content into pages or applying some rule (e.g. cut a page off at the first whole paragraph after 1000 characters) to paginate the results. On the frontend, you could use hashtags to allow JavaScript to paginate the results. You could even write an extensible library that "paginates" any text node. In fact, I wouldn't be surprised if one existed already.
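A sketch of that pagination rule in plain JavaScript; treating a blank line as a paragraph boundary is an assumption:

// Cut a page off at the first whole paragraph after minChars characters.
function paginate(text, minChars) {
  var paragraphs = text.split(/\n\s*\n/);
  var pages = [];
  var current = '';
  paragraphs.forEach(function (p) {
    current += (current ? '\n\n' : '') + p;
    if (current.length >= minChars) {
      pages.push(current);
      current = '';
    }
  });
  if (current) { pages.push(current); }
  return pages;
}

var pages = paginate(articleText, 1000); // articleText is assumed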
Hi, my web site provides instant filtering of articles via JavaScript.
Initially, the 12 freshest article summaries are displayed.
Summaries of ALL articles are put into a JavaScript cache object (rendered by the server in script tags).
When a user clicks on tags, the corresponding article summaries are taken from the JS cache object and inserted into the page as HTML pieces.
Does this have a negative impact on how SEO-friendly my web site is?
The main problem is clear: only 12 "static" URLs are displayed, and the others appear programmatically only on user interaction.
How can I make the site SEO-friendly while keeping this nice filtering feature?
If I add an "all articles" link that loads a separate page with all articles, will it solve the SEO problems?
The way to make this work for search engines, for users who don't have JavaScript, and also in your funky way, is to build this feature in stages.
Stage 1: Get a working "paged" version of this page, so it shows 12 results and you can click on "next page" and "last page", and maybe even on various page numbers.
Stage 2: Implement the filter using a form post and have it change the results shown in the page view.
Stage 3: Add JavaScript over the top of the working form and have it display the results the normal post would display, as sketched below. You can also replace the full-page reload for paging with JavaScript, safe in the knowledge that it all works without JavaScript.
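A sketch of Stage 3 in jQuery; the selectors are assumptions, and the server is assumed to return the same results markup the full page uses:

// The filter form works as a normal POST without JavaScript; with
// JavaScript we intercept the submit and swap the results in place.
$('#filter-form').bind('submit', function (e) {
  e.preventDefault();
  $.post($(this).attr('action'), $(this).serialize(), function (html) {
    $('#results').html(html);
  });
});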
Most people use an AJAX request rather than storing an ever-increasing list in a JavaScript array.
Crawlers (or most of them) don't enable JavaScript while crawling; therefore, all JavaScript-powered content won't be referenced, and your site will be considered smaller than it is by search engines. This will penalize your pages.
Making a "directory" page can be a solution. But if you do so, search engines will send users to these static pages, and not through your JavaScript viewer homepage.
Anyway, I would not recommend making content viewable only via JavaScript:
- it's not SEO friendly
- it's not friendly to users with JavaScript disabled
- the history functions (back and next) can't be used
- middle click "open in a new tab/window" won't work
- you can't bookmark JavaScript-generated content
So, is your nice feature nice enough to lose the points mentioned above?
There are ways to reconcile your feature with all those points, but that's far from easy to do.
I have a large amount of XHTML content that I would like to display in WebKit. Unfortunately, WebKit is running on a mobile platform with fairly slow hardware, so loading a large HTML file all at once is REALLY slow. Therefore, I would like to load the HTML content gradually. I have split my data into small chunks and am looking for the proper way to feed them into WebKit.
Presumably I would use JavaScript? What's the right way of doing it through JavaScript? I'm trying document.body.innerHTML += 'some data', which doesn't seem to do very much, possibly because my chunks may not be valid standalone HTML. I've also tried document.body.innerText += 'some data', which doesn't seem to work either.
Any suggestions?
This sounds like a perfect candidate for AJAX-based "lazy" content loading that starts loading content as the user scrolls down the page. There are several jQuery plugins for this. This is one of them.
You will have to have valid chunks for this in any case, though. Also, I'm not sure how your hardware will react to this. If the problem is RAM or disk, you may encounter the same problems no matter how you load the data. Only if it's the actual connection or the speed at which the page is loaded will lazy loading make sense.
Load it as needed via AJAX. I have a similar circumstance: as the user scrolls near the end of the page, another 50 entries are loaded. Because each entry contains many JS events, too many entries degrade performance, so after 200 records I remove 50 from the other end of the list as the user scrolls.
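A sketch of that append-and-trim approach in jQuery; the endpoint, selectors, and trigger distance are assumptions:

var loaded = 0;
var loading = false;

function loadMore() {
  if (loading) { return; }
  loading = true;
  $.get('/entries?offset=' + loaded + '&limit=50', function (html) { // assumed endpoint
    $('#list').append(html);
    loaded += 50;
    // Keep at most 200 entries in the DOM: drop 50 from the top.
    var $items = $('#list > li');
    if ($items.length > 200) {
      $items.slice(0, 50).remove();
    }
    loading = false;
  });
}

$(window).bind('scroll', function () {
  if ($(window).scrollTop() + $(window).height() >= $(document).height() - 200) {
    loadMore();
  }
});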
No need to reinvent the wheel. Use jQuery's ajax method (and the many shorthand variantes out there): http://api.jquery.com/category/ajax/
Just wondering if anyone knows anything about using JavaScript to set HTML to new content instead of linking to new pages, and whether this is generally a bad idea or hurts SEO (which I'm kind of new to).
Basically, the home page displays given content, and the links to contact pages and such just change the body content to what would normally be a separate HTML page. My OCD kind of bugs me when pages reload and either flash the background or are offset somehow, so I wanted to know if making sites like this is a bad idea or whatever.
I suppose, at the least, I could create duplicate/hidden pages for SEO purposes.
As you describe it, it is a bad idea. The right methodology is to use progressive enhancement: you develop for JavaScript-disabled users (such as search bots) and then use JavaScript for AJAX loading. That way most users benefit from an improved user experience, without preventing the rest from accessing your data.
In practice, it means keeping your regular markup for a page-based navigation menu (e.g. a plain link to the products page), and then modifying the behaviour via JavaScript (such as jQuery):
$('#nav a').bind('click', function () {
  // Load the link's target into the content area instead of navigating.
  $('#content').load($(this).attr('href'));
  return false;
});
Usually, for good SEO, you need to have as many pages as you can; then, if you want to load content with JavaScript, use unobtrusive AJAX.
Breaks bookmarking
Breaks the back button
Breaks saving the page
Breaks sending a link to a friend
Breaks search engine indexing
It is possible to mitigate (to some extent) most of these, but only imperfectly and only with quite a lot of work.
In order to allow for some SEO, you can have all the data on the page as divs: a Home div, a Contact Us div, etc.
With JavaScript, you would then switch on only the div corresponding to the page you'd like.
If the user has no javascript, they see all the pages at once.
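A sketch of that toggle in jQuery; the class and data attribute are made up:

// All "page" divs are in the markup; JavaScript hides all but one,
// while no-JS users (and crawlers) still see everything.
function showPage(id) {
  $('.page').hide();
  $('#' + id).show();
}

$('#nav a').bind('click', function () {
  showPage($(this).data('page')); // e.g. <a data-page="contact-us">
  return false;
});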
Here's an example of a site that does this with OK SEO; but switch off JavaScript and it all goes a bit wrong:
http://www.spideronline.co.uk/#our-work