My application serves user-created bundles of HTML pages for e-learning, also known as SCORM packages, and I'm trying to make that as fast as possible.
Loading page-by-page in iframes is quite slow, as pages may include high-resolution graphics, animations, audio, video and so on.
Unfortunately, pre-loading these pages is quite difficult, as they usually react to onLoad() events to start animations and interactions.
Without using applets or extensions, would it be possible to download the user bundle and serve it "in-browser" to the application?
This is a common enough task with the advent of fat clients built on Backbone.js, Angular, Ember, etc. Clients request data (usually JSON), media, etc. from the server as opposed to pre-rendered HTML, and do rendering and resource management client-side. If you want to go this way so that you can support flexible offline mode the way you specified, you usually need a set of generic loaders and tools in your app cache manifest that will load the more specific (user-specific, lesson-specific, etc.) resources on page load.
The first time your user opens your app, it should be in online mode, and your app will need to request the specific resources it needs to work well offline and store them in client-side storage (localStorage, IndexedDB or the WebSQL standard it is trying to replace, and the FileSystem API; there are many resources on the web on how to use each of these). This step can also be incremental, rather than a huge download of megabytes of data.
The next time your user opens your page, your app can attempt to load all the resources it needs from client-side storage before even calling the server. It will only need to call the server if it's missing some resources, or if it needs to get a fresher version of a resource, or of course if you need to write to the server. If you did a good job of loading all the resources it needed into client-side storage the first time, it can work decently in offline mode.
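A minimal sketch of that storage-first pattern, using localStorage and a hypothetical /api/lessons endpoint (all names are illustrative):

    // Try client-side storage first; fall back to the server and cache the result.
    function loadLesson(lessonId, callback) {
        var cacheKey = 'lesson-' + lessonId;
        var cached = localStorage.getItem(cacheKey);
        if (cached) {
            // Hit: no network round trip needed.
            callback(JSON.parse(cached));
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/lessons/' + lessonId);
        xhr.onload = function () {
            try {
                localStorage.setItem(cacheKey, xhr.responseText);
            } catch (e) {
                // Quota exceeded: the app still works online, just without caching.
            }
            callback(JSON.parse(xhr.responseText));
        };
        xhr.send();
    }

The same shape works with IndexedDB for larger payloads; localStorage is just the shortest to demonstrate.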
If your users are running modern browsers you could use the HTML5 cache manifest.
Creating a manifest file tells the browser to download and store the site locally, so the user can even visit it offline.
http://en.wikipedia.org/wiki/Cache_manifest_in_HTML5
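A minimal example, assuming a manifest named app.appcache referenced from the page as <html manifest="app.appcache"> (all file names are placeholders):

    CACHE MANIFEST
    # v1 - change this comment to force the browser to re-download

    index.html
    styles.css
    app.js
    media/intro.mp4

    NETWORK:
    *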
I have two HTML5 widgets, both made with Phaser.js and containing images and audio, which are loaded on the fly by the Phaser library.
One of the widgets (an HTML5 file) works from the local file system without XAMPP, while the other only works when served through the XAMPP server.
I want to know why some HTML5 canvas files work without a server while most of the time we require some server for canvas files.
It's very confusing to me.
Please help.
There's a very good explanation of why you need a web server on the getting started page for Phaser.
What it boils down to is you need to use a web server because:
It's to do with the protocol used to access the files. When you request anything over the web you're using http, and the server level security is enough to ensure you can only access files you're meant to. But when you drag a file in it's loaded via the local file system (technically file://) and that is massively restricted, for obvious reasons. Under file:// there's no concept of domains, no server level security, just a raw file system.
...
Your game is going to need to load resources: images, audio files, JSON data, maybe other JavaScript files. And in order to do this it needs to run unhindered by the browser security shackles. It needs http:// access to the game files. And for that we need a web server.
Technically, none of your Phaser applications should run without a web server; it's quite odd that you got one of them to.
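Any static file server is enough for local testing; for example, assuming Python 3 is installed, running this from the game's directory serves it at http://localhost:8000:

    python -m http.server 8000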
Set game.load.crossOrigin = true in your preload code and it should work.
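For instance, in a Phaser 2-style preload (asset URLs are placeholders, a global game object is assumed, and the remote host still has to send CORS headers for cross-origin loads to succeed):

    function preload() {
        game.load.crossOrigin = true;
        game.load.image('logo', 'http://example.com/assets/logo.png');
        game.load.audio('theme', 'http://example.com/assets/theme.mp3');
    }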
I built a CRM for a client of mine, and now they've requested an interesting feature:
For each customer record, they have a matching directory of files on their local computer. They want the ability to open that folder in Windows Explorer directly from within the web app (the app doesn't need access to the directory/files; it just has to launch Windows Explorer so that the user can interact with their files).
This is obviously not possible with regular JavaScript running in the browser (thankfully). I thought there might be some way to accomplish this by building a Chrome extension for this purpose, but it seems Chrome extensions/apps can only access a sandboxed filesystem, which doesn't serve my needs at all. Building an NPAPI plugin is out of the question since Chrome is discontinuing support for NPAPI.
File URIs don't solve this problem either. Their display is ugly, there's no drag-and-drop, no big icons/thumbnails, no sorting, etc. They want the full capability of Windows Explorer.
The only viable option I thought of is to create a local node.js server, make a localhost CORS request to that server, and then run an exec command from node.
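Roughly like this (a sketch only; the port is arbitrary and the folder path would need careful validation before being passed to exec):

    // Minimal local helper server that opens a requested folder in Explorer.
    var http = require('http');
    var exec = require('child_process').exec;

    http.createServer(function (req, res) {
        res.setHeader('Access-Control-Allow-Origin', '*'); // allow the CORS request
        var folder = decodeURIComponent(req.url.slice(1)); // e.g. C:\Customers\1234
        // DANGER: real code must sanitize 'folder' to avoid command injection.
        exec('explorer "' + folder + '"');
        res.end('ok');
    }).listen(8123);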
Any better idea?
One possibility is to register a custom URI protocol handler with the user's operating system, and then your web page can contain links using your custom protocol, such as openfolder://c/path/to/folder. This sort of customization is probably most commonly seen in practice with itunes:// links.
A quick Google search led me to this decent looking tutorial: https://support.shotgunsoftware.com/hc/en-us/articles/200213756-How-to-launch-external-applications-using-custom-protocols-rock-instead-of-http-
The downside is that the user will have to run a small installer of some sort in order to set the correct registry entries (or whatever the non-Windows equivalent is for other OSes) and to drop a small script on disk. That would be much lighter-weight than running a node.js server like you proposed, though.
The linked tutorial uses a Python script, but even that is probably overkill for your needs. A batch file would likely suffice.
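On Windows the registry entries might look roughly like this (the protocol name and script path are hypothetical):

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\openfolder]
    @="URL:Open Folder Protocol"
    "URL Protocol"=""

    [HKEY_CLASSES_ROOT\openfolder\shell\open\command]
    @="\"C:\\scripts\\openfolder.bat\" \"%1\""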
EDIT: One additional note, please be aware of the security implications of implementing a custom handler like this. Any webpage in any browser can potentially take advantage of your custom protocol, and an attacker would be able to pass arbitrary data to your script. You should take steps to ensure that the script will not accidentally execute arbitrary commands that may be injected by a malicious web page, and that it will only open a folder and nothing else.
That would require each customer to run a node.js server, which seems unrealistic in your case.
You could use File URIs.
Browsers will refuse to open them by default. However, as suggested in this answer, you could ask your customers to install LocalLinks.
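For reference, such a link would look something like this (the path is illustrative):

    <a href="file:///C:/Customers/1234/">Open customer folder</a>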
This is not a programming question per se. I am using a free web host called getfreehosting, and I use their online file manager to transfer files. From time to time, the changes I make to source code are NOT reflected immediately after I upload them. That is, when I run my application in Chrome and go to view the page source, I realize the JavaScript running is still the old version! In most cases this doesn't happen, but when it does it is extremely frustrating. I've tried clearing the browser's cache. I've even tried editing the file directly on their servers. Sometimes that solves the problem, but other times it doesn't.
Is this a common issue encountered when transferring files to a web host? Or perhaps this is one of the downsides of using a free web host?
Thanks.
You can try clearing your browser's cache, or the ol' CTRL+F5 refresh trick. Otherwise, the hosting provider may be using a caching layer to help ease resource usage.
It is the responsibility of the server to indicate to the browser what the cacheable lifetime of the script files is when they are served to the browser (1 hour, 1 day, 1 month, etc.). This is a server-side setting.
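For example, on Apache this might look like the following (assuming mod_expires is enabled):

    <IfModule mod_expires.c>
        ExpiresActive On
        # Let browsers cache JavaScript for one hour
        ExpiresByType application/javascript "access plus 1 hour"
    </IfModule>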
Caching is very important for both server-side efficiency and client-side performance so you don't want to defeat it completely.
You can either shorten the server-side setting for the cache lifetime or you can use a version number in your script files (like jQuery does), so that when you revise your script files, you give them a new filename like "myscript-v12.js" and update the corresponding HTML files to refer to the new filename. Then, as soon as the browser gets the new HTML file, it is guaranteed to get the new JS file, because the new filename could never have been in the browser cache.
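For example (filenames illustrative):

    <!-- Before the revision: -->
    <script src="myscript-v11.js"></script>
    <!-- After revising the script, bump the filename so no cache can serve the old copy: -->
    <script src="myscript-v12.js"></script>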
If this is just an issue for you personally while developing and revising your site, then just clear your browser cache after you upload new files and then when your browser loads that page, it won't have any version in the cache and will be forced to get the new version from the server.
Modern browsers have a cache system.
Try clearing the cache before you browse your web site.
I have a web app (sencha/phonegap) that includes a feature allowing users to click on buttons that link to Wikipedia articles. This obviously works fine if the device has internet access, but I get numerous requests to make the app work when the app is offline too. To accomplish this, I'd like to give the user the option to download the linked articles/webpages for offline access. When the device does not have internet access, the app would instead display the saved version (which might be stale/out-of-date, but is better than nothing). What are possible ways to accomplish this task?
My first thought was to somehow use the html manifest to cache the pages in the phone's browser, which sounds possible on the Android browser, but iOS apparently has a 5MB browser cache limit - too small.
My next thought was to save the needed html & associated files and bundle them up inside the app. But this seems a rather cumbersome approach, the app becomes much larger than it needs to be, and the webpages are stale back to the date the app was installed.
Using JavaScript, is it possible to download webpages, which I could then save (on the SD card, for example) for access later?
Or is there a more elegant approach?
If anyone could point me in the right direction it would be much appreciated.
In pure JavaScript you can make an Ajax request to download a page. Then you can use the FileWriter to write the responseText to a file on the file system. However, that won't help you when it comes to images. You'll need to use the FileTransfer.download() command to get the binary image files.
If I were you I'd (rough sketch below):
Use AJAX to download the HTML.
Parse the HTML looking for images.
Use FileTransfer.download to get the images.
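Putting those steps together, a rough sketch using the PhoneGap/Cordova File and FileTransfer plugins (URLs, paths, and the saveHtml helper are illustrative, and a real implementation would parse the HTML more robustly than a regex):

    // Download a page's HTML, then fetch each image it references.
    function savePageForOffline(pageUrl) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', pageUrl);
        xhr.onload = function () {
            var html = xhr.responseText;
            saveHtml(pageUrl, html); // would use FileWriter to persist the text
            var tags = html.match(/<img[^>]+src="([^"]+)"/g) || [];
            tags.forEach(function (tag) {
                var src = tag.match(/src="([^"]+)"/)[1];
                var ft = new FileTransfer();
                ft.download(
                    encodeURI(src),
                    cordova.file.dataDirectory + src.split('/').pop(),
                    function (entry) { console.log('saved ' + entry.toURL()); },
                    function (err) { console.log('download failed', err); }
                );
            });
        };
        xhr.send();
    }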
In a web application I'm creating, a lot of media is used. As such, there's quite a strain if this media has to be loaded again each time the page is loaded. (And as the media isn't inside the page, but rather retrieved by means of WebSockets, I doubt the browser will cache it.)
As such, to protect both the server and the client, and to prevent useless strain on networks, I wish to "store" the media locally and then simply load it each time the program is run. Now the big problem is that localStorage (which seems to be made for this) is very, very limited: 5 MB in Firefox. That's nowhere near enough for the media my program will use (around 100 MB); it's also weird to me, as nowadays we have hard drives of several terabytes.
Is there a way to "ask" for more local storage (apart from telling people to fiddle with about:config or similar things)? Or otherwise, can I download this media and then load this local data? (Can I KNOW where the user has downloaded the data without the user manually navigating to the data?)
The newest browsers support, one way or another, an application cache. It is intended to allow offline access to the resources a web-application fetches from a server, and does not use the localStorage memory space.
http://html5doctor.com/go-offline-with-application-cache/