I have a large amount of XHTML content that I would like to display in WebKit. Unfortunately, WebKit is running on a mobile platform with fairly slow hardware, so loading a large HTML file all at once is REALLY slow. Therefore I would like to load the HTML content gradually. I have split my data into small chunks and am looking for the proper way to feed them into WebKit.
Presumably I would use JavaScript? What's the right way of doing it through JavaScript? I'm trying document.body.innerHTML += 'some data', which doesn't seem to do very much - possibly because my chunks may not be valid standalone HTML. I've also tried document.body.innerText += 'some data', which doesn't seem to work either.
Any suggestions?
This sounds like a perfect candidate for Ajax-based "lazy" content loading, which starts loading content as the user scrolls down the page. There are several jQuery plugins for this. This is one of them.
You will have to have valid chunks for this in any case, though. Also, I'm not sure how your hardware will react to this. If the bottleneck is RAM or disk space, you may run into the same problems no matter how you load the data. Lazy loading only makes sense if the bottleneck is the connection or the speed at which the page is rendered.
Load it as needed via Ajax. I have a similar situation: as the user scrolls near the end of the page, it loads another 50 entries. Because each entry carries many JS event handlers, too many entries degrade performance, so after 200 records I remove 50 from the other end of the list as the user scrolls.
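The sliding-window idea above can be sketched in plain JavaScript. The names and constants here are illustrative (mirroring the 50/200 figures mentioned), not from any library:

```javascript
// Sliding window over loaded entries: keep at most MAX_ENTRIES in memory,
// dropping the oldest ones once a new batch pushes us over the limit.
const BATCH_SIZE = 50;
const MAX_ENTRIES = 200;

function appendBatch(entries, newBatch) {
  const combined = entries.concat(newBatch);
  // Trim from the front (the "other end" of the list) when over the cap
  return combined.length > MAX_ENTRIES
    ? combined.slice(combined.length - MAX_ENTRIES)
    : combined;
}
```

In the real page you would call this from a scroll handler, fetch the next batch via Ajax, and mirror the array changes in the DOM (appending the new rows and removing the trimmed ones, along with their event handlers).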
No need to reinvent the wheel. Use jQuery's ajax method (and the many shorthand variants out there): http://api.jquery.com/category/ajax/
I am creating an ePub reader on iOS. Basically, I use UIWebView to load the XHTML files.
Every time the page turns, I need to load the file in the UIWebView and then call JavaScript to scroll to the right offset. Here is the problem: some XHTML files are big (> 2 MB), so they take too much time to load, and the page-turn animation is not smooth.
So I am thinking I could load the XHTML once in UIWebView A, and on each page turn create another UIWebView B and grab the needed HTML content (say, the second page) from UIWebView A. That way I could limit the HTML to a small size and the page-turn animation should be smooth.
My question is: is there any open-source JavaScript library that can do the job?
Any comments are appreciated!
There is no meaningful, well-defined way to split an HTML document in the way you seem to be describing. You are confusing two very different things: (1) splitting the rendering into page-sized chunks, and (2) splitting the HTML source. To put it a different way, there is no algorithm I can imagine that would split an HTML file into pieces, such that the sequential composition of the rendered pieces was identical to the rendering of the original HTML file. In other words, in order to figure out how to split the HTML, assuming it is even possible, you'd have to do much or all of the work involved in rendering the page, which would defeat the entire purpose.
You should abandon the notion of splitting the HTML. Ebook readers all paginate by essentially rendering the entire HTML document once, then "windowing" and "clipping" and "offsetting" or, in some cases, using CSS regions.
There are a couple of alternatives I can think of, if I understand what you are trying to do.
Reduce the size of the input HTML files by pre-splitting them earlier in the workflow. For instance, in one project I know of, the source (X)HTML files have bits of additional markup that tell a pre-processor where to split them into individual files if desired; in that case it is a work-around for ebook readers that don't honor CSS page-break-* properties properly.
Pre-compute the rendering of the next page as a graphic and use it for the page turn.
As already discussed, rethink your architecture of reloading the entire HTML document for every page in your book. If it is merely page-turning effects that lead you to want to do that, then give them up.
Consider that many ebook readers provide a scrolling mode that does not require pagination, and some (e.g. Himawari Reader) provide only a scrolling mode, which is actually something some readers prefer. You could ship your scrolling version first and add pagination in version 2.
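A pre-processor for the first suggestion above can be as simple as splitting the source at explicit marker comments placed by the author. This is a minimal sketch; the marker string is an assumption, not any standard:

```javascript
// Split an (X)HTML source string at author-inserted marker comments.
// Each returned chunk is meant to be wrapped back into a full document
// (html/head/body) by the caller before being handed to the reader.
function splitAtMarkers(source, marker = '<!-- page-split -->') {
  return source
    .split(marker)
    .map(chunk => chunk.trim())
    .filter(chunk => chunk.length > 0);
}
```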
You should really check this:
http://cubiq.org/swipeview
And this demo that does exactly what you asked for:
http://cubiq.org/dropbox/SwipeView/demo/ereader/
It takes a text (a book) and paginates it to fit the screen size.
I tested the demo on Android and iOS, and it works great!
One of the pages served by my application is very long (about 8 MB of source HTML, mostly tables). I know this is itself a sign of wrong architecture, but there are circumstances that don't allow changing that quickly :(
In almost every browser apart from IE the page is still fine - of course it's slower than an average one, but it looks like the time it takes to display is mostly determined by download speed, and I have no problem with that.
It is very different in IE (7, 8 and 9), though - the page is extremely slow for about 10-15 seconds after downloading, with a frozen-screen effect, then shows noticeable scrolling lag and "script is running slowly" messages even though no JavaScript is running on the page. IE9 also takes about 800 MB of RAM when displaying that page.
When I serve plain-text content of that size it is much better, but formatted HTML tables seem to cause the problems. It looks like a long DOM is a blocker for any version of IE.
I don't know what answer I am hoping for - obviously the proper solution would be to change the page architecture by breaking it down on the server side and serving it piece by piece via Ajax, but still - is there any kind of magic pragma or JS that tells IE to stop doing whatever it does with the DOM tree, to speed it up?
Chunking the page download on the client would be the best solution. But be advised that the table tag is the slowest-rendering tag in IE (in my experience). So as a first step, I think you should make some modifications to the HTML document. Here are some suggestions:
Clear inline style sheets and use CSS classes as far as you can. It will help make your HTML document smaller.
Use other elements instead of TABLE; DIVs would be my first recommendation. Simplify your document so the parser can read the markup easily. Writing less markup also helps keep the document much smaller.
Remove all the spaces, tabs, newline characters and other extra content from the HTML document - i.e. minify it.
Qualify the content you present so it is more useful to the client. A user can only read a small part of the page at a time, so putting all the data on one page is not a good idea - it's actually counterproductive, because while the user is downloading the document, some data might be updated on the server, and the data the user has is no longer valid.
After all, always remember that every character in the document costs memory, not just for the string itself but also for the parser's working state and the DOM eventually built from it. The time it takes to read and parse the document matters just as much as its size.
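The whitespace suggestion above can be sketched as a crude minifier. This is only a rough illustration, and is unsafe for <pre> or other whitespace-sensitive content:

```javascript
// Collapse whitespace runs between tags; a rough sketch, not a real minifier.
// Whitespace inside text content is deliberately left alone.
function stripInterTagWhitespace(html) {
  return html.replace(/>\s+</g, '><').trim();
}
```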
Hope it helps you.
Cheers
In print mode, I would like to render a link such as:
link
like this:
link (page 11)
(suppose the target page is page 11 in the print preview).
It is possible to add page numbers to the page footer using counters in plain CSS, but could I use those counters in a more custom, "the way I want it" fashion like the above?
The solution doesn't need to be cross-browser (FF/Chrome is enough), any CSS/Javascript tricks are allowed.
As I wrote in my blog post "Printing web pages", printing web pages is in a sorry state.
Unless you print only (one column) text without images and tables, it's already hard enough to get a printout which resembles the screen content at least somewhat (yeah, I exaggerate here but not much).
The best browser for printing is still Opera, as it has been for about a decade. All other browsers suck more or less.
CSS could help a lot but no browser implements the necessary parts. See Target counters in the Generated Content for Paged Media module - this would do exactly what you need.
Okay, after getting the rant out of the way, here are some obstacles which you will need to solve:
The page numbers will start to shift as soon as you start adding text to existing links. So if you write "blabla (page #12)", the target will probably be on page #13 by the time you get there.
To be able to know what goes onto which page, you will have to split the web page into pages yourself. If you have mostly text, this isn't terribly hard: Just create one div per page, make sure the browser doesn't print it over a page break (good luck with that :-( ) and then move DOM elements into the divs until they fit a page ... if you can find out what the page size is.
To avoid the shifting problem, add " (page #0000)" to all links to make sure they occupy enough room. Later, they will be replaced with shorter texts, so pages with a lot of links might have some empty space at the bottom but that's better than the alternatives.
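The fixed-width placeholder trick in the last point can be sketched as a small helper; the function name and format are assumptions chosen to match the " (page #0000)" example above:

```javascript
// Produce a fixed-width page label so that substituting the real number
// later does not change the text width and reflow the pages.
function pageLabel(pageNumber, digits = 4) {
  return ' (page #' + String(pageNumber).padStart(digits, '0') + ')';
}
```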
You should be able to write something like that (and debug it) in just six months or so ...
Which is why everyone uses a rendering engine on the server which can produce HTML and, say, PDF output. PDF allows you exact control over the page layout. You just need to find a rendering engine that supports text references. I suggest LaTeX; it's a bit unwieldy but it gets the job done.
Could this work?
@page {
    @bottom-right {
        content: counter(page) " of " counter(pages);
    }
}
(from Page numbers with CSS/HTML)
I have a client who wants a website with a specific height for the content part.
The Question:
Is there any way that, when the text is long and reaches the maximum height of the content part, a new page is created for the remaining text?
Within my knowledge, I believe this can't be done.
Thanks for helping guys!
You will probably want to look into something like jQuery paging with tabs
http://code.google.com/p/jquery-ui-tabs-paging/
Unfortunately you would need to figure out the maximum number of characters you want to allow in the content pane and anything after that would need to be put into another tab. You can hide the tab and use just a link instead.
Without more knowledge of what you're developing, this is a difficult question to answer. Are you looking to create a different page entirely, or just different sections on a page?
The former can be done using server-side code (e.g. Rails) and dynamically serving out pages (e.g. Google results are split across many pages).
The latter can be done with Javascript and/or CSS. A simple example is:
<div id="the_content" style="overflow:auto;width:200px;height:100px">
Some really long text...
</div>
This would create a scroll bar (via overflow:auto) without disrupting the flow of the page. In JavaScript (e.g. jQuery), you'll also be able to split the content into "tabs".
Does this help?
(Almost) everything is possible, but your intuitions are right in that this can't be done easily or in a way that makes any sense.
If I were in your position, I would go up to the client and present advantages and disadvantages to breaking it up. Advantages include the fact that you'd be able to avoid long pages and that with some solutions to this problem, the page will load faster. Disadvantages include the increased effort (i.e., billable hours) it would take to accomplish this, the lack of precedent for it resulting in users being confused, and losses to SEO (you're splitting keywords amongst n pages).
This way, you're not shooting down the client's idea, and in the likely case the client retreats from his position, he will go away thinking that he's just made a smart choice by himself and everyone goes away happy.
If you're intent on splitting it up into pages, you can do it on the backend by either literally structuring your content into pages or applying some rule (e.g., cut a page off at the first whole paragraph after 1000 characters) to paginate the results. On the frontend, you could use URL hash fragments to let JavaScript paginate the results. You could even write an extensible library that "paginates" any text node. In fact, I wouldn't be surprised if one existed already.
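The backend rule just mentioned (close a page at the first whole paragraph after a character budget) can be sketched like this; the function name and the 1000-character threshold are illustrative:

```javascript
// Group paragraphs into pages: a page is closed at the first paragraph
// that pushes the running character count to or past the budget.
function paginate(paragraphs, charsPerPage = 1000) {
  const pages = [];
  let current = [];
  let count = 0;
  for (const p of paragraphs) {
    current.push(p);
    count += p.length;
    if (count >= charsPerPage) {
      pages.push(current);
      current = [];
      count = 0;
    }
  }
  if (current.length > 0) pages.push(current); // last, partially filled page
  return pages;
}
```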
I'm building a site that makes extensive use of FLIR to allow the use of non-web-safe fonts. However, page loads are an ugly process: first the HTML text version of each field loads, and then (a few hundred milliseconds later) it's replaced by its FLIR image counterpart.
Is there any way to avoid this sort of thing? I've got a client presentation in a few hours and I know it'll raise eyebrows. My situation is sort of related to this question which is in regards to sIFR, not FLIR. Any ideas?
Thanks,
Justin
Try putting the following rules into your stylesheet:
.flir-replaced{text-indent:-5000px;}
.flir-image{display:block;}
You may have to modify your other FLIR-related CSS rules to account for the fact that the generated images are now vertically aligned to the top of their respective parents.
It's been a while since I used FLIR, but I recall there was an internal caching setting that would pull images from the cache on load instead of generating them each time.
http://docs.facelift.mawhorter.net/configuration:settings
Also, avoid having too many replacements on the page at once; I found that between 6 and 10 was optimal for performance.
Are you on shared hosting? Are your CSS/JS compressed? I found that the initial load was a little slow, but fairly quick once the images had been generated.