I have a PhoneGap application in which I need to download certain images for offline usage and show them inside an iframe. Is this possible, and do I need something like CorHTTPD (https://github.com/floatinghotpot/cordova-httpd) to serve the assets locally?
I have been trying to store the files on the file system, but when I try to show them (even outside an iframe), they don't show up. They do seem to be loaded (they can be seen in the network console when remote debugging), though, but (of course) without any headers.
After spending more and more time on this and setting up GapDebug correctly to remote debug my application, I was finally able to solve my problem by passing
{responseType: "arraybuffer"}
to AngularJS's $http.get method as a config parameter, as described here. Now I am able to get the images into an ArrayBuffer correctly and from there base64-encode them to be added inside the HTML stored offline. A suitable solution for my case, at least.
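For reference, a minimal sketch of that approach: fetch the image as an ArrayBuffer and base64-encode it into a data URI that can be embedded in the offline HTML. The imageUrl variable and the MIME type are placeholders, not from the original code.

// Fetch the image as raw bytes; this is the config parameter mentioned above.
$http.get(imageUrl, { responseType: "arraybuffer" }).then(function (response) {
  var bytes = new Uint8Array(response.data);
  var binary = "";
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  // Build a data URI that works offline, with no extra HTTP request.
  var dataUri = "data:image/png;base64," + window.btoa(binary);
  // dataUri can now be used as an <img src> inside the stored HTML/iframe.
});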
I have an HTML based project that works with media from other websites, which embeds images / songs / videos using their direct links. The system works perfectly so far, but I wish to make a change: As a lot of assets are accessed repeatedly by viewers, it would seem more optimal to cache them in a controlled way, so whenever certain media pops up you don't need to fetch it from the origin server each time. I never did this before so I don't know if and how it can be done.
To use an oversimplification: I have an embedded photo called "image.png" inside an image element, which will show up whenever I open the site. Currently it's simply defined as:
<img scr="https://foo.bar/image.png">
Works perfectly! However I want to make sure that when my site is accessed, you don't need to fetch that image from foo.bar each time: You will keep it in a local directory after downloading it once, from which the script can fetch and work with the file independently. For Firefox for instance, this subdirectory would be inside your ~/.mozilla/firefox/my_profile directory. Ideally it can be defined using a fixed name, so no matter which URL the website is opened from it uses the same cache path instead of each mirror of the project generating its own.
First, my script must tell the browser to download https://foo.bar/image.png and store it into this cache subdirectory. After that, it would need to generate a link to embed it directly from that subdirectory, so the URL I use would now be something of the following form:
<img scr="file://path_to_cache/image.png">
How do I do those two things, in a way that's compatible across popular web browsers? As a bonus, it would be useful to know if I can limit the size of this cache directory, so once it reaches say 100 MB the oldest items will be removed to stay under that size.
You could alternatively add caching headers via your server's .htaccess file.
This site explains how: https://www.siteground.com/kb/leverage-browser-caching/
Note, though, that this relies on the browser's standard HTTP cache and expiry headers; it stops repeat fetches from the origin server, but it doesn't give you a named cache directory you control.
You could use service workers to cache images on the user's machine.
https://developers.google.com/web/ilt/pwa/lab-caching-files-with-service-worker
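As a rough sketch of the service-worker approach, a cache-first fetch handler could look like the following. The cache name media-cache-v1 is just illustrative; register the worker from your page with navigator.serviceWorker.register("/sw.js").

// sw.js
self.addEventListener("fetch", function (event) {
  if (event.request.destination !== "image") return; // only handle images
  event.respondWith(
    caches.open("media-cache-v1").then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        if (cached) return cached; // served locally after the first download
        return fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone()); // store for next time
          return response;
        });
      });
    })
  );
});

Note that the browser decides where this cache lives on disk and may evict it under storage pressure, so a hard 100 MB cap would have to be approximated by tracking what you put in the cache and deleting old entries yourself.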
Hope this helps.
I've implemented this script on my Squarespace website using the Wexley template to make images in a gallery act as links (Wexley does not support clickthrough URLs natively).
It works fine, but if I add any thumbnails to the gallery it will not work until the browser cache is cleared.
I am wondering if there is a way to fix this? Perhaps through:
1) Setting an expiry on the cache? I am not in developer mode, so this would have to go into a header injection.
2) Versioning? I tried hosting the JavaScript as a file elsewhere on my site. This worked (it pulled the script from the other location), but I still get the same issue, even when I upload a new script file and point to it after updating the page!
You can force the client to download the file again. To accomplish this you need to make the client's browser think it doesn't have the script in cache. You can do this by changing the file name.
Imagine you have this folder structure:
index.html
index.js
If in your index.html you reference the script like src="index.js", you can force clients to re-download it just by appending a query string to the import: src="index.js?0"
Now clients' browsers will check whether that exact URL is in cache, and since it isn't, they will fetch the file from the server.
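If you prefer to bump the version in one place, a small loader sketch like this does the same thing (SCRIPT_VERSION is a hypothetical constant, not anything Squarespace provides):

// Load index.js with a cache-busting query string.
var SCRIPT_VERSION = "2"; // bump this on every update
var s = document.createElement("script");
s.src = "index.js?v=" + encodeURIComponent(SCRIPT_VERSION);
document.head.appendChild(s);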
Checking the resource loading on my page, I realized that the script was not being cached, so it was something else getting cached that was interfering.
Because I am not in dev mode, I implemented a fix that relies on appending the date of the update to the URL and then setting up 301 redirects.
The URL and redirects (2 total) would have to be updated when any content is added.
If anyone sees issues with this (relating to SEO or some unknown), I would appreciate your feedback.
Is it possible to blur a remote image using http://www.blurjs.com?
I have our images hosted on a remote CDN and we want to use blurjs to blur an image for a background effect. When we try to use blurjs directly with the remote image, JavaScript cannot read the file and throws an "unable to read image data" error.
The way I'm currently doing it is regenerating the image in PHP and then using blurjs, but it is very slow and consumes a lot of resources.
We've also tried the CSS solution with filters, but the browser runs too slowly when we do.
Does anybody have a solution?
Your problem is that pixel access in canvas is not allowed for images loaded from a different domain than the one the page is hosted on. What you need is a proxy script running on your server that lets your JavaScript load images from other domains via your server. Of course, the downside is that all traffic will also run through your server, and the time to retrieve the image will increase (since the image first has to be loaded to your server and then to the client); unfortunately there is no way around that.
The good news is that this is a problem Flash developers had to face many years ago, so it has been solved many times:
For example, here's a PHP script: http://www.abdulqabiz.com/blog/archives/2007/05/31/php-proxy-script-for-cross-domain-requests/
Here's a more recent implementation in Node.js: http://codelikebozo.com/creating-an-image-proxy-server-in-nodejs
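To give a sense of what such a proxy does, here is a minimal, dependency-free sketch in Node.js. The /proxy route and port are arbitrary, and a real deployment should whitelist allowed hosts; treat this as an illustration, not production code.

// Usage from the page: <img src="/proxy?url=https%3A%2F%2Fcdn.example.com%2Fphoto.jpg">
const http = require("http");
const https = require("https");
const { URL } = require("url");

http.createServer(function (req, res) {
  const target = new URL(req.url, "http://localhost").searchParams.get("url");
  if (!target) { res.writeHead(400); return res.end("missing ?url="); }
  const client = target.indexOf("https") === 0 ? https : http;
  client.get(target, function (upstream) {
    // Pass the content type through so the browser renders the image.
    res.writeHead(upstream.statusCode, {
      "Content-Type": upstream.headers["content-type"] || "application/octet-stream"
    });
    upstream.pipe(res); // stream the image body back to the client
  }).on("error", function () { res.writeHead(502); res.end("upstream error"); });
}).listen(8080);

Because the image now comes from your own origin, canvas pixel access (and therefore blurjs) works on it.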
I have a web app (sencha/phonegap) that includes a feature allowing users to click on buttons that link to Wikipedia articles. This obviously works fine if the device has internet access, but I get numerous requests to make the app work when the app is offline too. To accomplish this, I'd like to give the user the option to download the linked articles/webpages for offline access. When the device does not have internet access, the app would instead display the saved version (which might be stale/out-of-date, but is better than nothing). What are possible ways to accomplish this task?
My first thought was to somehow use the html manifest to cache the pages in the phone's browser, which sounds possible on the Android browser, but iOS apparently has a 5MB browser cache limit - too small.
My next thought was to save the needed HTML and associated files and bundle them up inside the app. But this seems a rather cumbersome approach: the app becomes much larger than it needs to be, and the webpages are stale back to the date the app was installed.
Using javascript, is it possible to download webpages, which I could then save (on the sd card, for example) for access later?
Or is there a more elegant approach?
If anyone could point me in the right direction it would be much appreciated.
In pure JavaScript you can make an Ajax request to download a page, then use FileWriter to write the responseText to a file on the file system. However, that won't help you when it comes to images; you'll need to use the FileTransfer.download() command to get the binary image files.
If I were you I'd:
1) Use AJAX to download the HTML.
2) Parse the HTML looking for images.
3) Use FileTransfer.download to get the images.
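Putting those three steps together, a rough sketch (assuming the cordova-plugin-file and cordova-plugin-file-transfer plugins are installed; articleUrl is a placeholder):

var articleUrl = "https://en.wikipedia.org/wiki/Example";

// 1) Download the HTML with plain XHR.
var xhr = new XMLHttpRequest();
xhr.open("GET", articleUrl);
xhr.onload = function () {
  // 2) Parse the HTML looking for images.
  var doc = new DOMParser().parseFromString(xhr.responseText, "text/html");
  var imageUrls = Array.prototype.slice.call(doc.querySelectorAll("img[src]"))
    .map(function (img) { return img.getAttribute("src"); })
    // Relative URLs would need resolving against articleUrl first.
    .filter(function (src) { return /^https?:/.test(src); });

  // 3) Fetch each image into the app's data directory.
  imageUrls.forEach(function (url, i) {
    var target = cordova.file.dataDirectory + "img_" + i;
    new FileTransfer().download(url, target,
      function (entry) { console.log("saved", entry.toURL()); },
      function (err) { console.log("download failed", err); });
  });
  // Writing the HTML itself to disk with FileWriter is omitted here.
};
xhr.send();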
I know that most of the media in web pages is temporarily stored in a temp folder or the browser cache. Some media is embedded directly in the page, so we can view the source and save it. But how do I save images that are loaded by some other method?
You can see what I am talking about here. Is there any solution to save images from this site's gallery?
Yes, there is a way to save the images, using the following:
1) Mozilla Firefox
2) Firebug
Open the Net console in Firebug and select the tab named Images.
There you can see all the images the page has loaded, and you can save them.
Right-click an image to copy its location, then open that URL to get the image.
Happy coding!
Generally, JS can't hold the image itself, only the src attribute, which is a string. And JS cannot handle files on the client: you can't modify, move, or copy them. So if you want to keep the images from being downloaded again, you can handle an HTTP header like If-Modified-Since on the server side with PHP or Java; then the browser will not load the image again.
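As an illustration of the same idea in JavaScript rather than PHP or Java, here is a minimal Node.js sketch of If-Modified-Since handling (the URL-to-path mapping is naive and just for demonstration):

const http = require("http");
const fs = require("fs");

http.createServer(function (req, res) {
  const path = "." + req.url; // naive mapping, illustration only
  fs.stat(path, function (err, stats) {
    if (err) { res.writeHead(404); return res.end(); }
    const since = req.headers["if-modified-since"];
    if (since && new Date(since) >= new Date(stats.mtime.toUTCString())) {
      res.writeHead(304); // unchanged: the browser reuses its cached copy
      return res.end();
    }
    res.writeHead(200, { "Last-Modified": stats.mtime.toUTCString() });
    fs.createReadStream(path).pipe(res);
  });
}).listen(8080);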
Hope this helps you. Good luck!
You could try using an offline browser.
They save whole webpages, and depending on the software they capture more or less of a site's assets.
Offline Browsers