I am creating an ePub reader on iOS. Basically, I use UIWebView to load the XHTML files.
On every page turn, I need to load the file in the UIWebView and then call JavaScript to scroll to the right offset. Here is the problem: some XHTML files are big (> 2 MB), so they take too long to load, and the page-turn animation is not smooth.
So I am thinking I could load the XHTML once in UIWebView A, and on each page turn create another UIWebView B and grab the needed HTML content (e.g. the second page) from UIWebView A. That way I could keep the HTML small, and the page-turn animation should be smooth.
My question: is there an open source JavaScript library that can do this job?
Any comments are appreciated!
There is no meaningful, well-defined way to split an HTML document in the way you seem to be describing. You are confusing two very different things: (1) splitting the rendering into page-sized chunks, and (2) splitting the HTML source. Put differently, there is no algorithm I can imagine that would split an HTML file into pieces such that the sequential composition of the rendered pieces was identical to the rendering of the original file. To figure out where to split the HTML, assuming it is even possible, you would have to do much or all of the work involved in rendering the page, which would defeat the entire purpose.
You should abandon the notion of splitting the HTML. Ebook readers all paginate by essentially rendering the entire HTML document once, then windowing, clipping, and offsetting the result or, in some cases, using CSS regions. A sketch of that approach follows.
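As a concrete illustration of the render-once approach, here is a minimal sketch using WebKit's CSS multi-column support, a common pagination trick in UIWebView-based readers; the function names and sizes are illustrative, not from any particular library:

// Render the whole chapter once, flow it into horizontal columns one
// viewport wide (one column = one page), then "turn" pages by offsetting
// the scroll position. No reloading, no splitting of the HTML source.
function paginate(pageWidth, pageHeight) {
    var s = document.body.style;
    s.margin = '0';
    s.width = pageWidth + 'px';
    s.height = pageHeight + 'px';
    s.webkitColumnWidth = pageWidth + 'px';
    s.webkitColumnGap = '0px';
}

function pageCount(pageWidth) {
    return Math.ceil(document.body.scrollWidth / pageWidth);
}

function showPage(n, pageWidth) {
    // Clip the viewport to page n by shifting the rendered content.
    window.scrollTo(n * pageWidth, 0);
}

With this, a page turn is just a scroll: even a 2 MB chapter is laid out once and then turned cheaply.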
There are a couple of alternatives I can think of, if I understand what you are trying to do.
Reduce the size of the input HTML files by pre-splitting them earlier in the workflow. For instance, in one project I know of, the source (X)HTML files contain bits of additional markup that tell a pre-processor where to split them into individual files if desired; in that case it is a work-around for ebook readers that don't honor the CSS page-break-* properties properly (see the sketch after this list).
Pre-compute the rendering of the next page as a graphic and use that for the page-turn animation.
As already discussed, rethink your architecture of reloading the entire HTML document for every page in your book. If it is merely page-turning effects that lead you to want to do that, then give them up.
Consider that many ebook readers provide a scrolling mode that does not require pagination at all, and some (e.g. Himawari Reader) provide only a scrolling mode, which some readers actually prefer. You could ship your scrolling version first and add pagination in version 2.
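For the pre-splitting idea in the first item, the sketch below shows what such marker-based source might look like; the hr element, its class, and the pre-processor that honors it are all hypothetical, invented for this illustration:

<!-- source XHTML with a hypothetical split marker -->
<div class="chapter-part">
  ... first part of the chapter ...
</div>
<hr class="split-here" /> <!-- pre-processor starts a new output file here -->
<div class="chapter-part">
  ... second part of the chapter ...
</div>

The pre-processor cuts a new file at each marker and copies the document head into every output file, yielding several small XHTML files where a reader would otherwise have had to honor CSS page-break-* properties.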
You should really check this:
http://cubiq.org/swipeview
And this demo that does exactly what you asked for:
http://cubiq.org/dropbox/SwipeView/demo/ereader/
It takes a text (a book) and paginates it to fit the screen size.
I tested the demo on Android and iOS and it works great!
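For flavor, here is a hedged sketch modeled on SwipeView's bundled demos; treat the method and property names as assumptions and check the library's documentation, since the actual API may differ:

// SwipeView recycles three "master pages" as the user swipes; onFlip()
// fires after each turn so the page that just moved off-screen can be
// refilled. `pages` is assumed to hold one screen-sized chunk of text.
var swipeView = new SwipeView('#viewport', { numberOfPages: pages.length });

swipeView.onFlip(function () {
    for (var i = 0; i < 3; i++) {
        var master = swipeView.masterPages[i];
        var upcoming = master.dataset.upcomingPageIndex;
        if (upcoming !== master.dataset.pageIndex) {
            master.innerHTML = pages[upcoming]; // refill the recycled page
        }
    }
});

Because only three lightweight pages ever exist in the DOM, the swipe stays smooth no matter how big the book is.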
Related
I have some HTML content that gets embedded into a page via a server-side call. When the page's HTML is being compiled on the server, a call is made to another server to return some HTML, which is then embedded within a div somewhere in the body. The problem is that this content contains its own CSS. So I wrote a script to inject style tags into the head on ready, which works fine in desktop browsers. However, on mobile devices there's a fairly significant flash of unstyled content. I know that you're technically not supposed to include style tags in the body, but in this case would it yield better results to just include them in the body instead of injecting them into the head?
In this case, it sounds like the right solution is to fix up your architecture so that the server-side compiler can include CSS for the remote page in the page head. This probably involves separating the CSS of the remote page(s) out of the markup there and then grabbing it as a separate file to be included in the page head during compilation.
Since the right solution is not always feasible, for any of a myriad of reasons, compromise is often required. Leaving the CSS in the remote markup, if it produces the result you desire, could be the best solution for you. Or perhaps some other hack to get the CSS into the head server-side would be appropriate. You need to decide whether any of these is worth the effort, and whether it is possible for you to accomplish given your constraints.
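If the server-side hack route is acceptable, here is a minimal sketch of the idea in Node-style JavaScript; splitFragment is a hypothetical helper, and a real HTML parser would be more robust than this regular expression:

// During server-side compilation: pull the <style> blocks out of the
// remote fragment so they can be emitted in the page head, and embed
// only the remaining markup in the body. This avoids the mobile FOUC.
function splitFragment(remoteHtml) {
    var styles = [];
    var bodyHtml = remoteHtml.replace(/<style[^>]*>[\s\S]*?<\/style>/gi, function (block) {
        styles.push(block);
        return ''; // strip the style block from the body markup
    });
    return { headCss: styles.join('\n'), bodyHtml: bodyHtml };
}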
Some discussion here. In my experience a lot of enterprise content does it. Does that mean it's the RIGHT thing to do? I don't know. But it's certainly not frowned upon in my experience.
Source: https://www.w3.org/wiki/The_web_standards_model_-_HTML_CSS_and_JavaScript
Why separate?
Efficiency of code: The larger your files are, the longer they will take to download, and the more they will cost some people to view (some people still pay for downloads by the megabyte.) You therefore don’t want to waste your bandwidth on large pages cluttered up with styling and layout information in every HTML file. A much better alternative is to make the HTML files stripped down and neat, and include the styling and layout information just once in a separate CSS file. To see an actual case of this in action, check out the A List Apart Slashdot rewrite article where the author took a very popular web site and re-wrote it in XHTML/CSS.
Ease of maintenance: Following on from the last point, if your styling and layout information is only specified in one place, it means you only have to make updates in one place if you want to change your site’s appearance. Would you prefer to update this information on every page of your site? I didn’t think so.
Accessibility: Web users who are visually impaired can use a piece of software known as a “screen reader” to access the information through sound rather than sight — it literally reads the page out to them, and it can do a much better job of helping people to find their way around your web page if it has a proper semantic structure, such as headings and paragraphs. In addition keyboard controls on web pages (important for those with mobility impairments that can't use a mouse) work much better if they are built using best practices. As a final example, screen readers can’t access text locked away in images, and find some uses of JavaScript confusing. Make sure that your critical content is available to everyone.
Device compatibility: Because your HTML/XHTML page is just plain markup, with no style information, it can be reformatted for different devices with vastly differing attributes (eg screen size) by simply applying an alternative style sheet — you can do this in a few different ways (look at the [mobile articles on dev.opera.com] for resources on this). CSS also natively allows you to specify different style sheets for different presentation methods/media types (eg viewing on the screen, printing out, viewing on a mobile device.)
Web crawlers/search engines: Chances are you will want your pages to be easy to find by searching on Google, or other search engines. A search engine uses a “crawler”, which is a specialized piece of software, to read through your pages. If that crawler has trouble finding the content of your pages, or mis-interprets what’s important because you haven’t defined headings as headings and so on, then your rankings in relevant search results will probably suffer.
It’s just good practice: This is a bit of a “because I said so” reason, but talk to any professional standards-aware web developer or designer, and they’ll tell you that separating content, style, and behaviour is the best way to develop a web application.
Additional Stack Overflow questions:
Using <style> tags in the <body> with other HTML
Will it be a wrong idea to have <style> in <body>?
One of the pages served by my application is very long (about 8 MB of source HTML, mostly tables). I know this is itself a sign of wrong architecture, but there are circumstances that don't allow changing that quickly :(
In almost every browser apart from IE the page is still fine. Of course it's slower than an average page, but the time it takes to display seems to be dominated by download speed, and I have no problem with that.
It is very different in IE (7, 8 and 9), though: the page is extremely slow for about 10-15 seconds after downloading, with a frozen-screen effect, and then shows noticeable scrolling lag and "script is running slowly" messages even though there is no JavaScript running on the page. IE9 also takes about 800 MB of RAM when displaying that page.
When I serve plain-text content of that size it is much better, but formatted HTML tables seem to be causing problems. It looks like a large DOM is a blocker for IE of any version.
I don't know what answer I am hoping for. Obviously the proper solution would be to change the page architecture, breaking it down on the server side and serving it piece by piece via Ajax, but still: is there any kind of magic pragma, so to speak, or JS that makes IE stop doing what it does with the DOM tree and speeds it up?
The best solution would be to chunk the page download on the client (there is a sketch of this at the end of this answer). But be advised that the table tag is the slowest-rendering tag in IE, in my experience. So as a first step I think you should make some modifications to the HTML document. Here are some suggestions:
Remove inline styles and use CSS classes as far as you can; it will make your HTML document smaller.
Use other elements instead of TABLE; DIVs would be my first recommendation. Simplify your document: the easier it is for you to read, the easier it is for the parser, and writing less also keeps the document much smaller.
Remove unnecessary spaces, tabs, newline characters and other extra content from the HTML document, i.e. minify it.
Trim the content you are presenting to what is actually useful to the client. As we all know, a user can only see a couple of lines at a time, so putting all the data on one page is not a good idea; it is actually useless, because while the user is downloading the document, some data may be updated on the server, leaving the user with stale data.
Finally, remember that every character in the document costs memory several times over: in the downloaded string, in the parser's working buffers, and in the DOM built from it. The speed of reading and parsing the document matters as much as its size.
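Here is a minimal sketch of the client-side chunking mentioned above, assuming the content has already been converted from tables to DIV-based rows; the function and element names are illustrative:

// Insert markup in small batches, yielding to the UI thread between
// batches via setTimeout so IE repaints instead of freezing while it
// builds a huge DOM in one go. Note that this works for DIV rows; IE
// cannot parse bare <tr> fragments through innerHTML on a <div>.
function appendInChunks(container, rowsHtml, chunkSize) {
    var i = 0;
    function step() {
        var holder = document.createElement('div');
        holder.innerHTML = rowsHtml.slice(i, i + chunkSize).join('');
        while (holder.firstChild) {
            container.appendChild(holder.firstChild);
        }
        i += chunkSize;
        if (i < rowsHtml.length) {
            setTimeout(step, 0); // let IE breathe between chunks
        }
    }
    step();
}

// Usage: appendInChunks(document.getElementById('report'), rowMarkup, 100);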
Hope this helps.
Cheers
I have a client who wants a website with a specific height for the content area.
The Question:
Is there any way that, when the text is long and reaches the maximum height of the content area, a new page is created for the remaining text?
As far as I know, this can't be done.
Thanks for helping, guys!
You will probably want to look into something like jQuery paging with tabs
http://code.google.com/p/jquery-ui-tabs-paging/
Unfortunately you would need to figure out the maximum number of characters you want to allow in the content pane, and anything after that would need to be put into another tab. You can hide the tab and use just a link instead.
Without knowing more about what you're developing, this is a difficult question to answer. Are you looking to create a different page entirely, or just different sections on a page?
The former can be done using server-side code (e.g. Rails) that dynamically serves out pages (e.g. Google results are split across many pages).
The latter can be done with JavaScript and/or CSS. A simple example is:
<div id="the_content" style="overflow:hidden;width:200px;height:100px">
Some really long text...
</div>
This would add a scroll bar when the text overflows, without disrupting the flow of the page. With JavaScript (e.g. jQuery), you'll be able to split the content into "tabs".
Does this help?
(Almost) everything is possible, but your intuitions are right in that this can't be done easily or in a way that makes any sense.
If I were in your position, I would go to the client and present the advantages and disadvantages of breaking it up. Advantages include the fact that you avoid very long pages and that, with some solutions to this problem, the page will load faster. Disadvantages include the increased effort (i.e., billable hours) it would take to accomplish this, the lack of precedent, which may confuse users, and losses to SEO (you're splitting keywords amongst n pages).
This way, you're not shooting down the client's idea, and in the likely case the client retreats from his position, he will go away thinking that he's just made a smart choice by himself and everyone goes away happy.
If you're intent on splitting the content into pages, you can do it on the back end by either literally structuring your content into pages or applying some rule (e.g., cut a page off at the first whole paragraph after 1000 characters) to paginate the results. On the front end, you could use the URL hash to let JavaScript paginate the results. You could even write an extensible library that "paginates" any text node; in fact, I wouldn't be surprised if one already exists. A sketch of that idea follows.
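Here is a minimal sketch of such a paginator, assuming the content is a sequence of block-level nodes; the class and function names are illustrative:

// Move the children of `source` into fixed-height .page divs, starting
// a new page whenever the current one overflows its height.
function paginateInto(pages, source, pageHeight) {
    var page = newPage(pages, pageHeight);
    while (source.firstChild) {
        var node = source.firstChild;
        page.appendChild(node);
        if (page.scrollHeight > pageHeight && page.childNodes.length > 1) {
            page.removeChild(node); // node overflowed: start a fresh page
            page = newPage(pages, pageHeight);
            page.appendChild(node);
        }
    }
}

function newPage(pages, pageHeight) {
    var div = document.createElement('div');
    div.className = 'page';
    div.style.height = pageHeight + 'px';
    div.style.overflow = 'hidden';
    pages.appendChild(div);
    return div;
}

This only breaks between nodes, so a single paragraph taller than a page gets clipped rather than split; breaking mid-paragraph requires measuring at the text level.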
I have a large amount of XHTML content that I would like to display in WebKit. Unfortunately, WebKit is running on a mobile platform with fairly slow hardware, so loading a large HTML file all at once is REALLY slow. Therefore I would like to load the HTML content gradually. I have split my data into small chunks and am looking for the proper way to feed them into WebKit.
Presumably I would use JavaScript? What's the right way of doing that? I'm trying document.body.innerHTML += 'some data', which doesn't seem to do very much, possibly because my chunks may not be valid standalone HTML. I've also tried document.body.innerText += 'some data', which doesn't seem to work either.
Any suggestions?
This sounds like a perfect candidate for Ajax based "lazy" content loading that starts loading content while the user scrolls down the page. There are several jQuery plugins for this. This is one of them.
You will have to have valid chunks for this in any case, though. Also, I'm not sure how your hardware is going to react to this: if the bottleneck is RAM or disk, you may run into the same problems no matter how you load the data. Lazy loading only makes sense if the problem is the connection itself or the speed at which the page is initially loaded and rendered.
Load it as needed via Ajax. I have a similar situation: as the user scrolls near the end of the page, it loads another 50 entries. Because each entry contains many JS events, too many entries degrade performance, so after 200 records I remove 50 from the other end of the list as the user scrolls.
No need to reinvent the wheel. Use jQuery's ajax method (or one of its many shorthand variants): http://api.jquery.com/category/ajax/
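A hedged sketch of the scheme described above, using jQuery; the endpoint, the selectors, and the 50/200 thresholds are illustrative:

// Load 50 more entries as the user nears the bottom, and trim entries
// from the top once more than 200 are in the DOM, so event-heavy rows
// never accumulate without bound.
var loading = false, offset = 0;

$(window).on('scroll', function () {
    var nearBottom = $(window).scrollTop() + $(window).height()
        > $(document).height() - 500;
    if (!nearBottom || loading) { return; }
    loading = true;
    $.ajax({
        url: '/entries', // hypothetical endpoint returning HTML rows
        data: { offset: offset, limit: 50 },
        dataType: 'html'
    }).done(function (html) {
        var $list = $('#entries');
        $list.append(html);
        offset += 50;
        var $items = $list.children();
        if ($items.length > 200) {
            $items.slice(0, $items.length - 200).remove(); // drop from the top
        }
        loading = false;
    });
});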
I'm building a site that makes extensive use of FLIR (Facelift Image Replacement) to allow the use of non-web-safe fonts. However, page loads are an ugly process: first the HTML text version of each field loads, and then (a few hundred milliseconds later) it's replaced by its FLIR image counterpart.
Is there any way to avoid this sort of thing? I've got a client presentation in a few hours and I know it'll raise eyebrows. My situation is related to this question, which is about sIFR rather than FLIR. Any ideas?
Thanks,
Justin
Try putting the following rules into your stylesheet:
.flir-replaced{text-indent:-5000px;}
.flir-image{display:block;}
You may have to modify your other FLIR-related CSS rules to account for the fact that the generated images are now vertically aligned to the top of their respective parents.
It's been a while since I used FLIR, but I recall there was an internal caching option that would pull a previously generated image from cache on load instead of regenerating it each time.
http://docs.facelift.mawhorter.net/configuration:settings
Also, don't put too many replacements on the page at once; I found that 6-10 was optimal for performance.
Are you on shared hosting? Is your CSS/JS compressed? I found that the initial load was a little slow, but fairly quick after the images had been generated.