How to create an offline web app using JavaScript

I am looking for a solution for how to create an offline-compatible web app using HTML, JavaScript, and maybe jQuery. I looked into service workers, but they aren't compatible with all mobile devices yet. I also looked at the cache manifest approach; it worked, but it didn't update the files. So now I'm here asking for a solution. I intend this application to be a music website that can work as a web app. I like music and I take it everywhere, so I'm trying to find out how I can save the website files for offline use, so that even if I don't have WiFi, I can listen to my saved music. By the way, the files I'd like to save are:
main.js
Main.css
Index.html
EDIT 1
Also, if you know how to properly use service workers, can you show an example?

For future reference:
1/ Create a service worker file in the app root folder.
Example sw.js:
let cacheName = "core" // Whatever name
// Pass all assets here
// This example uses a folder named "/core" in the root folder
// It is mandatory to add an icon (important for mobile users)
let filesToCache = [
"/",
"/index.html",
"/core/app.css",
"/core/main.js",
"/core/otherlib.js",
"/core/favicon.ico"
]
self.addEventListener("install", function(e) {
e.waitUntil(
caches.open(cacheName).then(function(cache) {
return cache.addAll(filesToCache)
})
)
})
self.addEventListener("fetch", function(e) {
e.respondWith(
caches.match(e.request).then(function(response) {
return response || fetch(e.request)
})
)
})
2/ Add an onload event anywhere in the app:
window.onload = () => {
  "use strict";
  if ("serviceWorker" in navigator && document.URL.split(":")[0] !== "file") {
    navigator.serviceWorker.register("./sw.js");
  }
}
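Optionally, log the outcome so registration failures are visible during development; a minimal sketch using the promise returned by register():
navigator.serviceWorker.register("./sw.js")
  .then(function(reg) { console.log("Service worker registered, scope:", reg.scope); })
  .catch(function(err) { console.error("Service worker registration failed:", err); });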
3/ Create a manifest.json file in the app root folder.
{
  "name": "APP",
  "short_name": "App",
  "lang": "en-US",
  "start_url": "/index.html",
  "display": "standalone"
}
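Since the sw.js above caches an icon for mobile users, note that to be installable on mobile the manifest usually also needs an "icons" member inside the JSON above; a sketch (the paths and sizes are assumptions):
"icons": [
  { "src": "/core/icon-192.png", "sizes": "192x192", "type": "image/png" },
  { "src": "/core/icon-512.png", "sizes": "512x512", "type": "image/png" }
]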
4/ Test
Start a web server from the root folder:
php -S localhost:8090
Visit http://localhost:8090 one time.
Stop the web server with Ctrl + c.
Refresh http://localhost:8090; the page should still respond.
To switch it off while developing, remove the onload event, and in Firefox
visit about:debugging#workers to unregister the service worker.
Newer versions of Firefox show an Application tab directly in the debugger instead; about:debugging#workers is no longer valid.
https://developer.mozilla.org/en-US/docs/Tools/Application/Service_workers
Source for more details
Manifest.json reference

If you need to save settings after the user leaves, you need to use cookies.
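For example, a minimal sketch (the setting name and value are only illustrations):
// Save a setting in a cookie that survives for one year.
document.cookie = "volume=80; max-age=" + (60 * 60 * 24 * 365) + "; path=/";

// Read it back on the next visit.
var match = document.cookie.match(/(?:^|; )volume=([^;]*)/);
var volume = match ? match[1] : null;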
If you need some server data (for Ajax requests, for example), I'm afraid you can't do that offline.
For everything else (as far as I know), if you want it to work offline, you have to make the user's browser download all the code it's going to use, including jQuery, Bootstrap, or any plugin code you want. You have to add them to your website sources and link them internally:
<script src="http://code.jquery.com/jquery-3.3.0.min.js"></script> <!-- Won't work offline. -->
<script src="./js/jquery-3.3.0.min.js"></script> <!-- Will work offline -->
Be careful about plugin dependencies! For example, the Bootstrap 3.3.6 JS plugins need jQuery 1.12.4.
Hope it helps you!

Related

Is there any way to delete my users' cache?

I made a sw.js file that caches my chat website so users can open it in offline mode. However, the service worker file caused a lot of issues, including users not being able to see new messages and a lot of website crashes, so I was forced to delete it. Sadly, none of my current users can delete the cache manually! Note that I kept the sw.js file, but it's now empty, so is there any code I can write to delete all of my current users' caches?
I don't think this is relevant, but my app uses Django.
To delete the cache, you can use the built-in Cache API.
caches.keys().then(cacheNames => {
  cacheNames.forEach(value => {
    caches.delete(value);
  });
})
Removing the content from your sw.js file is not enough. If there is already a service worker installed and running, I would suggest you unregister it as well. You can do so programmatically using the code below.
navigator.serviceWorker.getRegistrations().then(function(registrations) {
  for (let registration of registrations) {
    registration.unregister()
  }
})
Please note that this code only needs to run once in each user's browser.
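For example, a sketch that combines both steps in the page code and guards against repeat runs (the localStorage flag name is hypothetical):
if ('serviceWorker' in navigator && !localStorage.getItem('swCleanedUp')) {
  // Delete every cache, then unregister every service worker, then reload once.
  caches.keys()
    .then(names => Promise.all(names.map(name => caches.delete(name))))
    .then(() => navigator.serviceWorker.getRegistrations())
    .then(registrations => Promise.all(registrations.map(reg => reg.unregister())))
    .then(() => {
      localStorage.setItem('swCleanedUp', 'true'); // hypothetical flag name
      window.location.reload();
    });
}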

The browser won't recache an updated website

I've completely rewritten a website; all its resources have been moved to other folders (the file structure changed), but if I access the site from a device that has cached it, it loads the old HTML file and looks for the old resource paths. I've tried to solve it with meta tags, I have changed the default index.php start file to home.php in .htaccess, and I tried to solve it via JS, but nothing works.
After days of searching, I think that I have found the problem. The webpage was transformed into a PWA and I registered a service worker for it to cache the index.php page. I think that this service worker's cache may be my problem. How can I update it in order to recache the website? The problem is that I can update the index.php file however I want, but the browser still loads the old file.
I am sure that it can be solved somehow, but I don't have any experience with this. Any ideas? Thanks!
var cacheName = 'prisma-pwa';
var filesToCache = [
  '/',
];
/* Start the service worker and cache all of the app's content */
self.addEventListener('install', function(e) {
  e.waitUntil(
    caches.open(cacheName).then(function(cache) {
      return cache.addAll(filesToCache);
    })
  );
});
/* Serve cached content when offline */
self.addEventListener('fetch', function(e) {
  e.respondWith(
    caches.match(e.request).then(function(response) {
      return response || fetch(e.request);
    })
  );
});
In some cases it's not enough to refresh with F5 or even Ctrl+F5; it may be necessary to actually delete the cache in your browser.
If you use Chrome, you can disable the cache while DevTools is open (in the DevTools settings). That helps me sometimes; the small loss of performance doesn't bother me for testing.
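A common fix is to version the cache name and clean up stale caches in an activate handler, so that each deploy invalidates the previous cache. A sketch based on the sw.js above (the version suffix is an assumption):
var cacheName = 'prisma-pwa-v2'; // bump this on every deploy

// Delete any cache whose name doesn't match the current version.
self.addEventListener('activate', function(e) {
  e.waitUntil(
    caches.keys().then(function(names) {
      return Promise.all(
        names.filter(function(name) { return name !== cacheName; })
             .map(function(name) { return caches.delete(name); })
      );
    })
  );
});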

Vue PWA not getting new content after refresh

I'm new to Vue and created a project with the PWA service-worker plugin. After deploying a new version of my app, I get these messages in the console:
After refreshing the page (F5), these messages still appear the same way and the app is still in its old state. I tried everything to clear the cache, but it still won't load the new content.
I haven't changed anything from the default config after creating my project and didn't add any code which interacts with the service worker. What is going wrong? Am I missing something?
As I figured out, this question is really only relevant to beginners in PWA, who don't know that you can (and need to) configure PWA to achieve this. If you feel addressed now (and are using VueJS), remember:
To automatically download the new content, you need to configure PWA. In my case (VueJS) this is done by creating a file vue.config.js in the root directory of my project (On the same level as package.json).
Inside this file you need this:
module.exports = {
  pwa: {
    workboxOptions: {
      skipWaiting: true
    }
  }
}
Which will automatically download your new content if detected.
However, the content won't be displayed to your client yet, since it needs to refresh after downloading the content. I did this by adding window.location.reload(true) to registerServiceWorker.js in my src/ directory:
updated () {
  console.log('New content is available: Please refresh.')
  window.location.reload(true)
},
Now, if the Service Worker detects new content, it will download it automatically and refresh the page afterwards.
I figured out a different approach to this and from what I've seen so far it works fine.
updated() {
  console.log('New content is available; please refresh.');
  caches.keys().then(function(names) {
    for (let name of names) caches.delete(name);
  });
},
What's happening here is that when the updated hook gets called, it goes through and deletes all the caches. This means that your app will start up more slowly if there is an update, but if not, it will serve the cached assets. I like this approach better because service workers can be complicated to understand, and from what I've read, using skipWaiting() isn't recommended unless you know what it does and what side effects it has. This also works with injectManifest mode, which is how I'm currently using it.
Pass the registration argument, then call update() on it.
The argument is a ServiceWorkerRegistration (see the ServiceWorkerRegistration API).
updated (registration) {
  console.log('New content is available; please refresh.')
  registration.update()
},
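Note that registration.update() only checks for a new worker; it does not activate one that is already waiting. A related sketch, assuming a Workbox-generated service worker that listens for a SKIP_WAITING message (as recent Vue CLI / Workbox templates do):
updated (registration) {
  // Tell the waiting service worker to activate immediately.
  if (registration && registration.waiting) {
    registration.waiting.postMessage({ type: 'SKIP_WAITING' })
  }
},

// Elsewhere in the app: reload once the new worker takes control.
navigator.serviceWorker.addEventListener('controllerchange', function () {
  window.location.reload()
})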

How to get Workbox PWA to work with .php file

I am new to PWA and using Workbox. I have this test folder with the following file structure, using localhost as my server (i.e. localhost/test):
index.html
test.css
test.jpg
test.js
sw.js (code shown below):
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.0.0/workbox-sw.js');

if (workbox) {
  console.log(`Yay! Workbox is loaded 🎉`);
} else {
  console.log(`Boo! Workbox didn't load 😬`);
}

// Precache all the site files
workbox.precaching.precacheAndRoute([
  {
    "url": "index.html",
    "revision": "8e0llff09b765727bf6ae49ccbe60"
  },
  {
    "url": "test.css",
    "revision": "1fe106d7b2bedfd2dda77f06479fb676"
  },
  {
    "url": "test.jpg",
    "revision": "1afdsoaigyusga6d9a07sd9gsa867dgs"
  },
  {
    "url": "test.js",
    "revision": "8asdufosdf89ausdf8ausdfasdf98afd"
  }
]);
Everything is working perfectly fine; files are precached and I didn't get the regular offline message when I was in offline mode, as shown in the image below.
So, I copied the exact folder to make a test-2 folder, then renamed my index.html file to index.php, and in my sw.js file I updated the URL to the code below:
{
  "url": "index.php",
  "revision": "8e987fasd5727bf6ae49ccbe60"
},
- Please note that I changed the revision value too.
I did this because I want to implement PWA using Workbox in my own custom-built single page app (but it's in .php format).
Coming to my browser to run localhost/test-2 (normal mode), my files were precached too, including my index.php file (no error messages in my console and the service worker was working perfectly fine); only for me to switch to offline mode in my Sources tab and refresh my browser to test the offline experience, and alas! I got the offline message as shown in the image below :(
I don't know what went wrong, I have no idea what happened, and I have googled for reasons for days, but I don't seem to get any right and corresponding answer. Most of the tutorials out there use .html.
So the question is: how can I implement PWA with a .php file, so that when the user is offline they don't get the normal "You're offline" message, but instead my webpage renders?
Thanks in advance
To elaborate on #pate's answer.
Workbox by default tries to make sure that pretty URLs are supported out of the box.
So in the first example, you cached /test/index.html. So when you request /test/, Workbox precaching actually checks the precache for:
/test/
/test/index.html
If your page was /test/about.html and you visited the page /test/about, precache would append .html and check for that.
When you switched to the .php extension, this logic suddenly no longer works.
There are a few options to get this working:
If you are using any of the Workbox tools to build your manifest, you can use the templatedUrls feature to map to a file (more details here):
templatedUrls: {
  '/test-2/': ['/test-2/index.php']
}
If you are making the precache list yourself server-side, you can just tell it to precache the URL /test-2/ without the index.php, and precaching will simply cache that. Please note that you must ensure the revision changes with any change to index.php (see the sketch below).
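A sketch of that entry (the revision string is a placeholder you would regenerate whenever index.php changes):
workbox.precaching.precacheAndRoute([
  {
    // Precache the directory URL; the server responds with index.php.
    "url": "/test-2/",
    "revision": "d41d8cd98f00b204e9800998ecf8427e" // placeholder, regenerate per deploy
  }
]);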
If you aren't making the precache manifest, you can use the urlManipulation option to tell precaching to check additional URLs (more details here):
workbox.precaching.precacheAndRoute(
  [
    .....
  ],
  {
    urlManipulation: ({url}) => {
      // Check whether the URL ends in a slash and add the
      // matching .php variant to the URLs to look up.
      const altUrl = new URL(url.href);
      altUrl.pathname += altUrl.pathname.endsWith('/') ? 'index.php' : '.php';
      return [altUrl];
    }
  }
);
This is most likely because, in the screenshot showing the error, you're trying to access test-2/ instead of test-2/index.php.
Workbox, in the background, falls back to trying index.html for every route that ends in a slash. For this reason, even if you don't have "/" cached, the SW tries to give you "/" + "index.html", which seems to be cached, and the page works offline.
I bet your page works if you try to access test-2/index.php while offline. Does it?

AngularJS SEO for static webpages (S3 CDN)

I've been looking into ways to improve SEO for AngularJS apps that are hosted on a CDN like Amazon S3 (i.e. simple storage with no backend). Most of the solutions out there, PhantomJS, prerender.io, seo.js, etc., rely on a backend to recognise the ?_escaped_fragment_ URL that the crawler generates and then fetch the relevant page from elsewhere. Even grunt-html-snapshot ultimately needs you to do this, even though you generate the snapshot pages ahead of time.
This solution basically relies on using Cloudflare as a reverse proxy, which seems a bit of a waste given that most of the security apparatus etc. that their service provides is totally redundant for a static site. Setting up a reverse proxy myself as suggested here also seems problematic, given that it would require either i) routing all the AngularJS apps I need static HTML for through one proxy server, which would potentially hamper performance, or ii) setting up a separate proxy server for each app, at which point I may as well set up a backend, which isn't affordable at the scale I am working at.
Is there any way of doing this, or are statically hosted AngularJS apps with great SEO basically impossible until Google updates their crawlers?
Reposted on webmasters following John Conde's comments.
Actually this is a task that is indeed very troublesome, but I have managed to get SEO working nicely with my AngularJS SPA site (hosted on AWS S3) at http://www.jobbies.co/. The main idea is to pre-generate and populate the content into the HTML. The templates will still be loaded when the page loads and the pre-rendered content will be replaced.
You can read more about my solution at http://www.ericluwj.com/2015/11/17/seo-for-angularjs-on-s3.html, but do note that there are a lot of conditions.
Here is a full overview of how to make your app SEO-friendly on a storage service such as S3, with nice URLs (no #) and everything, with a simple grunt command to be performed after the build:
grunt seo
It's still a puzzle of workarounds, but it's working, and it's the best you can do. Thanks to #ericluwj and his blog post, which inspired me.
Overview
The goal & url structure
The goal is to create one HTML file per state in your Angular app. The only major assumption is that you remove the '#' from your URL by using html5history (which you should do!) and that all your paths are absolute or use Angular states. There are plenty of posts explaining how to do so.
URLs end with a trailing slash, like this:
http://yourdomain.com/page1/
Personally I made sure that http://yourdomain.com/page1 (no trailing slash) also reaches its destination, but that's off topic here. I also made sure that every language has a different state and a different URL.
The SEO logic
Our goal is that when someone reaches your website through an HTTP request:
If it's a search engine crawler: keep it on the page, which contains the required HTML. The page also contains the Angular logic (e.g. to start your app), but the crawler cannot read that, so it is intentionally stuck with the HTML you served it and will index that.
For normal humans and intelligent machines: make sure Angular gets activated, erase the generated HTML, and start your app normally.
The grunt tasks
Here we go with the grunt tasks:
//grunt plugins you will need:
grunt.loadNpmTasks('grunt-prerender');
grunt.loadNpmTasks('grunt-replace');
grunt.loadNpmTasks('grunt-wait');
grunt.loadNpmTasks('grunt-aws-s3');

//The grunt tasks in the right order
grunt.registerTask('seo', 'First launch server, then prerender and replace', function (target) {
  grunt.task.run([
    'concurrent:seo' //Step 1: in parallel, launch the server, then perform the so-called seotasks
  ]);
});

grunt.registerTask('seotasks', [
  'http', //This is an API call to get all pages on my website. Skipping this step in this tutorial.
  'wait', //wait 1.5 sec to make sure that the server is launched
  'prerender', //Step 2: create a snapshot of your website
  'replace', //Step 3: clean the mess
  'sitemap', //Create a sitemap of your production environment
  'aws_s3:dev' //Step 4: upload
]);
Step 1: Launch local server with concurrent:seo
We first need to launch a local server (like grunt serve) so that we can take snapshots of our website.
//grunt config
concurrent: {
  seo: [
    'connect:dist:keepalive', //Launching a server and keeping it alive
    'seotasks' //now that we have a running server we can launch the SEO tasks
  ]
}
Step 2: Create a snapshot of your website with grunt prerender
The grunt-prerender plugin allows you to take a snapshot of any website using PhantomJS. In our case we want to take a snapshot of all the pages of the localhost website we just launched.
//grunt config
prerender: {
  options: {
    sitePath: 'http://localhost:9001', //points to the url of the server you just launched. You can also make it point to your production website.
    //As you can see, the source urls allow for multiple languages provided you have different states for different languages (see note below for that)
    urls: ['/', '/projects/', '/portal/', '/en/', '/projects/en/', '/portal/en/', '/fr/', '/projects/fr/', '/portal/fr/'], //this var can be dynamically updated, which is done in my case in the callback of the http task
    hashed: true,
    dest: 'dist/SEO/', //where your static html files will be stored
    timeout: 5000,
    interval: 5000, //taking a snapshot of how the page looks after 5 seconds
    phantomScript: 'basic',
    limit: 7 //number of pages processed simultaneously
  }
}
Step 3: Clean the mess with grunt replace
If you open the pre-rendered files, they will work for crawlers, but not for humans. For humans using Chrome, your directives will load twice. Therefore you need to redirect intelligent browsers to your home page before Angular gets activated (i.e., right after head).
//Add the script tag to redirect if we're not a search bot
replace: {
  dist: {
    options: {
      patterns: [
        {
          match: '<head>',
          //redirect to a clean page if not a bot (to your index.html at the root, basically).
          replacement: '<head><script>if(!/bot|googlebot|crawler|spider|robot|crawling/i.test(navigator.userAgent)) { document.location = "/#" + window.location.pathname; }</script>'
          //note: your hashbang (#) will still work.
        }
      ],
      usePrefix: false
    },
    files: [
      {expand: true, flatten: false, src: ['dist/SEO/*/**/*.html'], dest: ''}
    ]
  }
}
Also make sure you have this code in your index.html on your ui-view element, which clears all the generated html directives BEFORE angular starts.
<div ui-view autoscroll="true" id="ui-view"></div>
<!-- this script is needed to clear ui-view BEFORE angular starts to remove the static html that has been generated for search engines who cannot read angular -->
<script>
if(!/bot|googlebot|crawler|spider|robot|crawling/i.test( navigator.userAgent)) { document.getElementById('ui-view').innerHTML = ""; }
</script>
Step 4: Upload to aws
You first upload your dist folder, which contains your build. Then you overwrite it with the files you prerendered and updated.
aws_s3: {
  options: {
    accessKeyId: "<%= aws.accessKeyId %>", // Use the variables
    secretAccessKey: "<%= aws.secret %>", // You can also use env variables
    region: 'eu-west-1',
    uploadConcurrency: 5 // 5 simultaneous uploads
  },
  dev: {
    options: {
      bucket: 'xxxxxxxx'
    },
    files: [
      {expand: true, cwd: 'dist/', src: ['**'], exclude: 'SEO/**', dest: '', differential: true},
      {expand: true, cwd: 'dist/SEO/', src: ['**'], dest: '', differential: true}
    ]
  }
}
That's it, you have your solution! Both humans and bots will be able to read your web app.
If you use ng-cloak in interesting ways, there could be a good solution.
I haven't tried this myself, but it should work in theory.
The solution is highly dependent on CSS, but it should work perfectly well.
For example, you have three states in your angular app:
- index (pathname: #/)
- about (pathname: #/about)
- contact (pathname: #/contact)
The base case for index can be added in too, but it will be tricky, so I'll leave it out for now.
Make your HTML look like this:
<body>
  <div ng-app="myApp" ng-cloak>
    <!-- Your whole angular app goes here... -->
  </div>
  <div class="static">
    <div id="about" class="static-other">
      <!-- Your whole about content here... -->
    </div>
    <div id="contact" class="static-other">
      <!-- Your whole contact content here... -->
    </div>
    <div id="index" class="static-main">
      <!-- Your whole index content here... -->
    </div>
  </div>
</body>
(It's important that you put your index case last, if you want to make it more awesome.)
Next, make your CSS look something like this:
[ng-cloak], .static { display: none; }
[ng-cloak] ~ .static { display: block; }
Just that will probably work well enough for you anyway.
The ng-cloak directive will keep your Angular app hidden when Angular is not loaded and will show your static content instead. Google will get your static content in the HTML.
As a bonus, end-users can also see well-styled static content while Angular loads.
You can then get more creative if you start using :target pseudo-selectors in your CSS. You can use actual links in your static content, but just make them links to various ids. So in the #index div make sure you have links to #about and #contact. Note the missing '/' in the links. HTML ids can't start with a slash.
Then make your CSS look like this:
[ng-cloak], .static { display: none; }
[ng-cloak] ~ .static { display: block; }
.static-other {display: none;}
.static-other:target {display: block;}
.static-other:target ~ .static-main {display: none;}
You now have a fully functioning static app WITH ROUTING that works before Angular starts up.
As an additional bonus, when Angular starts up it is smart enough to convert #about to #/about automatically, and the experience shouldn't break at all.
Also, not to forget, the SEO problem has totally been solved, of course. I've not used this technique yet, as I've always had a server to configure, but I'm very interested in how this works out for you.
Hope this helps.
As AWS is offering Lambda@Edge as a service, we can handle this issue without grunt or anything else (at least for basic stuff).
I tried Lambda@Edge and it worked as expected. In my case I just had all the routes set to "/" in Lambda@Edge (except for the files that are present in S3, like CSS, images, etc.).
The event that I set the Lambda to is "viewer request", and the following is the code.
'use strict';
exports.handler = (event, context, callback) => {
  console.log("Event received is", JSON.stringify(event));
  console.log("Context received is", context);
  const request = event.Records[0].cf.request;
  if (request.uri.endsWith(".rt")) {
    console.log("URI is matching with .rt, the URI is ", request.uri);
    request.uri = "/";
  } else {
    console.log("URI is not ending with rt so letting it go, URI is", request.uri);
  }
  console.log("Final request URI is", request.uri);
  callback(null, request);
};
Logs in CloudWatch are a little difficult to check, as the logs are populated in the CloudWatch region nearest to the edge location that handles the request.
For example, though this Lambda is deployed/written for us-east, I see the logs in the ap-south region, as I am accessing the CloudFront distribution from Singapore.
I checked it with the 'Fetch as Google' option in Google Webmaster Tools, and the page is rendered and viewed as expected.
I've been looking for days to find a solution for this. As far as I know, there isn't a nice solution to the problem. I hope Firebase will eventually enable user-agent redirects. If you have the money, you could use MaxCDN Enterprise. They offer Edge Rules, which include redirects by user agent.
https://www.maxcdn.com/features/rules/
