I have an HTML-based project that works with media from other websites, embedding images / songs / videos via their direct links. The system works perfectly so far, but I wish to make a change: since many assets are accessed repeatedly by viewers, it would seem more efficient to cache them in a controlled way, so that whenever a given piece of media pops up it doesn't need to be fetched from the origin server again. I've never done this before, so I don't know if and how it can be done.
To use an oversimplification: I have an embedded photo called "image.png" inside an image element, which will show up whenever I open the site. Currently it's simply defined as:
<img src="https://foo.bar/image.png">
Works perfectly! However, I want to make sure that when my site is accessed, the image doesn't have to be fetched from foo.bar each time: after downloading it once, the browser would keep it in a local directory, from which the script can fetch and work with the file independently. For Firefox, for instance, this subdirectory would live inside your ~/.mozilla/firefox/my_profile directory. Ideally it could be defined with a fixed name, so that no matter which URL the website is opened from, it uses the same cache path instead of each mirror of the project generating its own.
First, my script must tell the browser to download https://foo.bar/image.png and store it in this cache subdirectory. After that, it would need to generate a link that embeds the file directly from that subdirectory, so the URL I use would now be something of the following form:
<img src="file://path_to_cache/image.png">
How do I do those two things, in a way that's compatible across popular web browsers? As a bonus, it would be useful to know whether I can limit the size of this cache directory, so that once it reaches, say, 100 MB, the oldest items are removed to stay under that size.
You could alternatively add caching via your server's .htaccess file.
This site explains how: https://www.siteground.com/kb/leverage-browser-caching/
Note, however, that this approach relies on the browser's built-in HTTP cache, so you don't control where the files are stored or when they get evicted.
You could use service workers to cache images on the user's machine.
https://developers.google.com/web/ilt/pwa/lab-caching-files-with-service-worker
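For example, here is a minimal sketch of such a service worker (the cache name and entry cap are assumptions; the Cache API tracks entries rather than bytes, so the 100 MB budget has to be approximated by capping the number of items):

// sw.js - a minimal cache-first service worker (names are illustrative)
var CACHE = "media-cache-v1";
var MAX_ENTRIES = 100; // rough stand-in for a size limit; the Cache API has no byte count

self.addEventListener("fetch", function (event) {
  if (event.request.method !== "GET") return; // only cache GET requests
  event.respondWith(
    caches.open(CACHE).then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        if (cached) return cached; // serve from the local cache when possible
        return fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone()); // store a copy for next time
          trimCache(cache);
          return response;
        });
      });
    })
  );
});

// Delete the oldest (first-inserted) entries once the cap is exceeded.
function trimCache(cache) {
  cache.keys().then(function (keys) {
    if (keys.length > MAX_ENTRIES) {
      cache.delete(keys[0]).then(function () { trimCache(cache); });
    }
  });
}

// On the page itself, register the worker once:
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}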
Hope this helps.
I'm wondering if it's possible for certain JS files to be added to the web extension directory later?
Say I have an app where users can select certain settings from within the app, and those files (JS and HTML files, images or blobs) are somehow added into the extension from the web: some sort of on-demand updater without using any native apps, though upgrades normally seem to be done automatically by the app stores.
I'm reading the files using AJAX and adding them to IndexedDB, but because it could be more than one file, that's getting messy.
Say a user wants a certain feature in the extension, and there's an HTML page, JS files and images: these would get downloaded to a certain folder inside the installed extension.
function download() { // only saves to the downloads directory
    var imgurl = "https://www.google.com.hk/images/srpr/logo11w.png";
    console.log('download');
    browser.downloads.download({ url: imgurl }, function (downloadId) {
        console.log("download begin, the downId is: " + downloadId);
    });
}
I also tried the Chrome download function above, but that only works for the downloads folder, not the extension folder.
Is there any way to make a custom updater? I know we can't save to disk, but is there any leniency or workaround for the extension folder? Even something silly, like making a shell call to some DOS (and Linux/Mac) command that saves the file to the extension folder. I can fetch the files, just not save them.
OK, so I'll put it as an answer. This is the solution I'm leaning towards, which works for my scenario; I've listed some alternatives below:
Have the other files as separate extensions and give the user an install link instead, so they can install that extension. The child extensions talk to the mother extension; they know the addresses of the resources in their own extension folders, so the mother gets just the file locations from the children and loads those assets from there. The child extensions are essentially bundles of the HTML and JS, plus a background script that sends the addresses of these items to the mother.
https://developer.chrome.com/extensions/messaging#external
The drawback is that I'll have to see how that affects the URLs: if I inject the HTML page from the child extension folder into the main interface using AJAX, I can't use relative URLs for any images in it, because those URLs resolve relative to the mother extension folder. I'll have to rewrite the child extension's relative URLs into absolute paths so the injected HTML loads its images and JS from the child extension.
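For reference, a minimal sketch of that handshake using the external messaging API linked above (the mother's extension ID is a placeholder):

// In a child extension's background script: tell the mother where our assets live.
var MOTHER_ID = "mother-extension-id-goes-here"; // placeholder ID
chrome.runtime.sendMessage(MOTHER_ID, {
  baseUrl: chrome.runtime.getURL("") // e.g. chrome-extension://<child-id>/
});

// In the mother extension: collect the children's base URLs.
chrome.runtime.onMessageExternal.addListener(function (message, sender) {
  console.log("assets for " + sender.id + " live under " + message.baseUrl);
});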
Pros:
Cleaner and more persistent than IndexedDB.
Files can be loaded normally from disk.
Cons:
User has to install separate extensions.
The URL structure might be a bit confusing, and URLs need rewriting when loading HTML from a child. However, this only affects image src attributes and where the JavaScript is loaded from, so it's not such a big deal.
Other Possible Solutions:
IndexedDB, which I'm already using, seems to be the preferred way of doing this, but I really do not want to store every HTML asset in IndexedDB (a sketch of this option follows after this list). The upside is that, unlike installing extensions, this method works silently, fetching and adding files without user interaction, and IndexedDB seems to be somewhat persistent. I might still end up using this because it is silent, but having to load each asset from a database sounds like a nightmare.
The File Handle Api might have worked if I was working on Firefox only https://wiki.mozilla.org/WebAPI/FileHandleAPI
I haven't tried the shell copy; maybe I could fetch with AJAX and then save to disk using some DOS function, with different save routines for different operating systems.
The Filesystem API only saves to downloads and doesn't work for extensions anyway, so that's useless.
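For completeness, the IndexedDB option mentioned above might look roughly like this (the database, store and asset names are made up):

// Open (or create) a small asset database.
var open = indexedDB.open("assetCache", 1);
open.onupgradeneeded = function () {
  open.result.createObjectStore("files");
};
open.onsuccess = function () {
  var db = open.result;
  // Fetch an asset and persist it as a blob keyed by name.
  fetch("https://example.com/feature/page.html") // hypothetical asset
    .then(function (response) { return response.blob(); })
    .then(function (blob) {
      db.transaction("files", "readwrite").objectStore("files").put(blob, "page.html");
    });
};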
UPDATE
On Windows there isn't any sudo, but this worked without admin privileges for a subfolder (not on the C:\ root though). It would work very nicely for a Linux-only app. If I just wanted to save a file to a Windows machine, this might work.
The shell copy method would be to grab the contents of the file with AJAX from the local or remote location, then pipe it through the shell to save it to a file on Windows, doing the equivalent for every operating system with a shell exec command (or detecting the OS and running the right command). This way I can even put the files in the exact folder location.
Say I make this sort of command from the contents:
//To append you can use >> instead of >
//folder seems necessary, can't save to root without admin
echo the content I want to save > C:\folder\textfile.txt
I thought of calling it using shell exec, but that only works in Node.js, so I dug through the other answers on:
How to execute shell command in Javascript
// full code to save a file using JavaScript on Windows
// (runs only under Windows Script Host / IE ActiveX, not in Firefox or Chrome)
var shell = WScript.CreateObject("WScript.Shell");
// echo is a cmd.exe builtin, so it has to be invoked through cmd /c,
// and backslashes must be escaped inside the string literal
shell.Run("cmd /c echo content to save > C:\\folder\\textfile.txt");
The shell command doesn't seem to work, and I can't find what it's for. There doesn't seem to be a shell command in regular JavaScript for Windows; it seems to require IE's ActiveX and doesn't work in Firefox or Chrome.
Extensions can't modify their sources because the browser verifies them and resets/disables the extension if they change. Also, in Firefox the extensions aren't even unpacked.
The solution is actually quite trivial: save the code in any storage (localStorage, chrome.storage.local, IndexedDB) as a string, and then add it to your extension page as a standard DOM script element. You'll have to relax the standard CSP a bit for that.
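A rough sketch of that flow (the source URL and storage key are made up; this variant executes the stored string with new Function instead of a script element, which means the extension's CSP needs 'unsafe-eval'):

// Download the code once and persist it as a string.
fetch("https://example.com/feature.js") // hypothetical source
  .then(function (response) { return response.text(); })
  .then(function (code) {
    chrome.storage.local.set({ featureCode: code });
  });

// Later, on an extension page, pull the string back out and run it.
// manifest.json (MV2) must relax the CSP accordingly, e.g.:
// "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
chrome.storage.local.get("featureCode", function (items) {
  new Function(items.featureCode)(); // executes the stored code
});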
I am in developer mode, in the .region file, trying to add a background video with the video tag. I put the mp4 file into the template folder and have been trying to access it through src="video.mp4" to display the video. It doesn't display, and I am not sure why I can't grab it. When I change the source to any http:// video online it works, so it's not the code; it only fails when I try grabbing the video from the local folder. Any leads or help would be appreciated. Thank you!
Files that are directly located in the /template folder are not intended to be accessible via http. Instead, put the file within /template/assets and then reference the file as /assets/video.mp4.
If that doesn't help, ensure that the file is even accessible via http by entering http://yoursite.squarespace.com/assets/video.mp4 in the address bar (using your site's correct URL). If you can access the video file, then it will work as a src attribute of a video element. If you cannot access it, then something else is going on: either you haven't uploaded the file or the file name is incorrect.
Another tip: if using the full URL for a file (as opposed to the relative URL), try using https for the protocol in place of http. The correct protocol depends on your site's settings, of course, and whether you are using your built-in or custom domain.
If using the local development server via Node.js (as opposed to the live server, that is, your actual Squarespace site), try pushing/uploading the files to the live server on Squarespace (via Git or SFTP) and then retesting locally. I've found that sometimes this may be required due to caching in the local environment. This will also reveal whether the file you are uploading is too large (the documentation does claim a 1MB limit which may be true, though it may be as large as 5MB or 20MB if the docs are out of date; I cannot recall whether this has changed).
If the file is too large for the /assets folder, then your only other option besides hosting it via a different service entirely is to use the file storage via the Squarespace Config UI, which allows up to 20MB, and referencing your video via that path. You'd have to get the video down to 20MB by shortening, scaling or further compressing it.
If hosting the file via a different service, Cloudinary may be worth considering; a free account may allow up to a 100MB video file and enough bandwidth (assuming your website's traffic is relatively low).
I've implemented this script on my Squarespace website using the Wexley template to make images in a gallery act as links (Wexley does not support clickthrough URLs natively).
It works fine, but if I add any thumbnails to the gallery it will not work until the browser cache is cleared.
I am wondering if there is a way to fix this? Perhaps through:
1) setting an expiry on the cache? I am not in developer mode so this would have to go into a header injection
2) Versioning? I tried hosting the javascript as a file elsewhere on my site. This worked (it pulled the script from another location), but I still get the same issue, even when I upload a new script file and point to that after updating the page!
You can force the client to download the file again. To accomplish this, you need to make the client's browser think it doesn't have the script in its cache. You can do this by changing the file name (or, as shown below, the URL it is requested under).
Imagine you have this folder structure:
index.html
index.js
If your index.html references the script as src="index.js", you can force clients to re-download it just by appending a query string to the import: src="index.js?0".
Now clients' browsers will check whether that exact URL is in their cache, and since it isn't, they will fetch the file from the server.
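If the tag is added from a header injection, the same trick can be applied in JavaScript (the version value here is a placeholder to bump whenever the script changes):

// Load the script under a versioned URL so updates bypass stale cache entries.
var version = "2"; // placeholder; bump this whenever index.js changes
var script = document.createElement("script");
script.src = "index.js?" + version; // the query string defeats the old cache entry
document.head.appendChild(script);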
Checking the resource loading on my page, I realized that the script was not being cached, so it was something else getting cached that was interfering.
Because I am not in dev mode, I implemented a fix that relies on appending the URL with the date of the update, and then setting up 301 redirects.
The URL and redirects (2 total) would have to be updated when any content is added.
If anyone sees issues with this (relating to SEO or some unknown), I would appreciate your feedback.
I have a PhoneGap application in which I need to download certain images for offline usage and show those inside an iframe. Is this possible and do I need something like CorHTTPD (https://github.com/floatinghotpot/cordova-httpd) to serve the assets locally?
I have been trying to store the files on the file system, but when I try to show them (even outside an iframe), they don't show. They do seem to be loaded (visible in the network console when remote debugging), but (of course) without any headers.
After spending more and more time on this, and setting up GapDebug correctly to remote-debug my application, I was finally able to solve my problem by passing
{responseType: "arraybuffer"}
to AngularJS's $http.get method as the config parameter, as described here. Now I am able to read the images into an ArrayBuffer correctly and from there base64-encode them to be embedded in the HTML stored offline. A suitable solution for my case, at least.
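In outline, the fix looks something like this (AngularJS 1.x assumed; the image URL and element ID are made up):

// Request the image as raw bytes instead of text.
$http.get("https://example.com/photo.png", { responseType: "arraybuffer" })
  .then(function (response) {
    // Base64-encode the bytes into a data URI the offline HTML can embed.
    var bytes = new Uint8Array(response.data);
    var binary = "";
    for (var i = 0; i < bytes.length; i++) {
      binary += String.fromCharCode(bytes[i]);
    }
    document.getElementById("photo").src = "data:image/png;base64," + btoa(binary);
  });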
I have a web app (sencha/phonegap) that includes a feature allowing users to click on buttons that link to Wikipedia articles. This obviously works fine if the device has internet access, but I get numerous requests to make the app work when the app is offline too. To accomplish this, I'd like to give the user the option to download the linked articles/webpages for offline access. When the device does not have internet access, the app would instead display the saved version (which might be stale/out-of-date, but is better than nothing). What are possible ways to accomplish this task?
My first thought was to somehow use the HTML cache manifest to cache the pages in the phone's browser, which sounds possible in the Android browser, but iOS apparently has a 5MB browser cache limit - too small.
My next thought was to save the needed html & associated files and bundle them up inside the app. But this seems a rather cumbersome approach, the app becomes much larger than it needs to be, and the webpages are stale back to the date the app was installed.
Using JavaScript, is it possible to download webpages, which I could then save (on the SD card, for example) for access later?
Or is there a more elegant approach?
If anyone could point me in the right direction it would be much appreciated.
In pure JavaScript you can make an AJAX request to download a page. Then you can use the FileWriter to write the responseText to a file on the file system. However, that won't help you when it comes to images: you'll need to use the FileTransfer.download() command to get the binary image files.
If I were you I'd (see the sketch after this list):
Use AJAX to download the HTML.
Parse the HTML looking for images.
Use FileTransfer.download to get the images.
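A sketch of those three steps with Cordova's file-transfer plugin (the article URL and target directory are illustrative):

var articleUrl = "https://en.wikipedia.org/wiki/Example"; // hypothetical article

// 1. Download the HTML with AJAX.
var xhr = new XMLHttpRequest();
xhr.open("GET", articleUrl);
xhr.onload = function () {
  // 2. Parse the HTML looking for images.
  var doc = new DOMParser().parseFromString(xhr.responseText, "text/html");
  var images = doc.querySelectorAll("img");

  // 3. Fetch each image as a binary file with FileTransfer.download.
  for (var i = 0; i < images.length; i++) {
    var src = images[i].src;
    var target = cordova.file.dataDirectory + src.split("/").pop();
    new FileTransfer().download(src, target,
      function (entry) { console.log("saved " + entry.toURL()); },
      function (error) { console.log("download failed: " + error.code); });
  }
  // The HTML itself can then be written out with a FileWriter as described above.
};
xhr.send();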