Force download through markup or JS

Let's assume I have a file on a CDN (Cloud Files from Rackspace) and a static HTML page with a link to that file. Is there any way I can force this file to download (to prevent it from opening in the browser -- for MP3s, for example)?
We could make our server read the file and set the corresponding header to:
header("Content-Type: application/force-download")
but we have about 5 million downloads per month so we would rather let the CDN take care of that.
Any ideas?

There’s no way to do this in HTML or JavaScript. There is now! (Ish. See @BruceAldrige’s answer below.)
The HTTP Content-Disposition header is what tells browsers to download a file, and that’s sent by the server. You have to configure the CDN to send that header with whichever files you want the browser to download instead of display.
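For example, a response header along these lines (the filename here is just illustrative) is what makes the browser offer a save dialog instead of playing the file inline:
Content-Disposition: attachment; filename="some-track.mp3"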
Unhelpfully, I’m entirely unfamiliar with Rackspace’s Cloud Files service, so I don’t know if they allow this, nor how to do it. I just found a page from December 2009 that suggests not, though, sadly:
Cloud Files cannot serve a file with the 'Content-Disposition: attachment' HTTP header. Therefore, a download link that would work perfectly in any other service may result in the browser rendering the file directly. This was confirmed by Rackspace engineers. :-(
http://drupal.org/node/656714
I know that you can with Amazon’s CloudFront service, as it’s backed by S3 (see e.g. http://blog.cloudberrylab.com/2009/06/how-to-set-custom-http-headers-for.html)

You can use the download attribute:
<a href="http..." download></a>
https://stackoverflow.com/a/11024735/21460
However, it’s not currently supported by Safari (7) or IE (11).

Yes, you can do this through the Cloud Files API. The stream method allows you to stream the contents of files while setting your own headers, etc.

A crazy idea: download via XMLHttpRequest and serve a data: URL with the content type you want? :P
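For what it's worth, a rough sketch of that idea (untested; it assumes the CDN allows cross-origin requests, and it uses a Blob/object URL plus the download attribute rather than a literal data: URL; the URL and filename below are placeholders):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://cdn.example.com/song.mp3'); // placeholder CDN URL
xhr.responseType = 'blob';
xhr.onload = function () {
  var url = URL.createObjectURL(xhr.response); // wrap the downloaded bytes in an object URL
  var a = document.createElement('a');
  a.href = url;
  a.download = 'song.mp3'; // filename suggested to the save dialog
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url);
};
xhr.send();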


How to serve static gzipped javascript files in lighttpd?

Background:
I have a small RaspberryPi-like server on Armbian (20.11.6) (precisely, an Odroid XU4).
I use lighttpd to serve pages (including Home Assistant and some statistics and graphs with chartjs).
(the example file here is Chart.bundle.min.js.gz)
Issue:
There seems to be a growing number of JavaScript files, which are becoming larger than the HTML files and the data itself (some numbers for power/gas consumption, etc.). I am used to using mod_compress, mod_deflate, etc. on servers (to compress files on the fly), but this would kill the Odroid (or unnecessarily load the CPU and the pitiful SD card used for caching).
Idea:
Now, the idea is simply to pre-compress the JavaScript (and other static files, like CSS) and serve them as static gzip files, which any modern browser can/should handle.
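(For reference, such files can be produced up front with plain gzip, assuming a gzip recent enough to support -k for keeping the original file:)
gzip -k -9 Chart.bundle.min.js   # produces Chart.bundle.min.js.gz next to the original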
Solution 0:
I just compressed the files and hoped that the browser would understand them...
(The link was simply put in the script tag as-is, so if the browser figured out that .gz means gzip... it might have worked.)
It did not ;)
Solution 1:
I enabled mod_compress (as suggested on multiple pages) and tried to serve the static .js.gz file.
https://www.drupal.org/project/javascript_aggregator/issues/601540
https://www.cyberciti.biz/tips/lighttpd-mod_compress-gzip-compression-tutorial.html
Without success (the browser takes it as a binary gzip, not as the application/javascript type).
(some pages suggested enabling mod_deflate, but it does not seem to exist)
Solution 2:
(With mod_compress kept on) I did the above and started fiddling with Content-Type and Content-Encoding in the HTML (in the script tag). This did not work at all: the Content-Type can be influenced somewhat from HTML, but it seems the Content-Encoding cannot.
https://www.geeksforgeeks.org/http-headers-content-type/
(I do not want to install PHP (which could do it), to save memory, SD card lifetime, etc.)
Solution 3:
I added a "Content-Encoding" => "gzip" line to setenv.add-response-header in the default 10-simple-vhost.conf configuration file. This looked like a dirty, crazy move, but I wanted to check whether the browser would accept my .js.gz file... It did not.
Furthermore, nothing loaded at all.
Question:
What would be an easy way to do this (without PHP)?
Maybe something like .htaccess in Apache?
EDIT 1:
It seems that nginx can do it out-of-the-box:
Serve static gzip files using node.js
http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
I am also digging into the headers story in lighttpd:
https://community.splunk.com/t5/Security/How-to-Disable-http-response-gzip-encoding/m-p/64396
EDIT 2:
Yes... after some thinking, I realized that this file could be cached for a long time anyway, so maybe I should not care so much :)
Your solution (below) to set the response header is a workable one for your situation.
However, I would recommend using lighttpd mod_deflate with deflate.cache-dir (lighttpd 1.4.56 and later)
When configured properly, lighttpd will serve gzipped Content-Encoding to clients which support the compression, and lighttpd will serve plain content to clients which do not support the compression. lighttpd will compress each file as it is served and will save the compressed file in deflate.cache-dir so that lighttpd does not have to re-compress the file the next time the file is requested. lighttpd will detect if the original file has changed and will re-compress it into the cache the next time the file is requested.
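A minimal configuration sketch of that setup (the paths and MIME types here are examples to adapt, not taken from your system):
server.modules += ( "mod_deflate" )
deflate.cache-dir = "/var/cache/lighttpd/compress/"   # must exist and be writable by lighttpd
deflate.mimetypes = ( "text/html", "text/css", "application/javascript" )
deflate.allowed-encodings = ( "gzip" )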
It seems I spent so long writing the question that I was already close to the solution.
I created a module file, 12-static_gzip.conf, with the following content:
$HTTP["url"] =~ ".gz" {
setenv.add-response-header = (
"Content-Encoding" => "gzip"
)
}
I had not found any similar trick documented for lighttpd, so I applied a solution similar to the one I would use for Apache. The expected behavior was that it would just send the Content-Encoding header for the .gz files, without using PHP or any additional modules... and it works!!!
The mod_compress module (and anything else of this kind) is disabled, and no other changes are needed.
Clearly, the HTTP negotiation is more complex than this, so I am not sure whether it will work for all browsers, but it works very nicely in Chrome.
I am also planning to create some ESP32 web servers, where storage and memory are even more critical, so I will try to apply a similar solution there.
Nevertheless, the questions still hold...
Is there a better/cleaner solution?
Are there some caveats to be expected? Browser compatibility, etc.?

How to change filename when downloading file from server using javascript

I am downloading some images from Facebook, just to learn HTML and JS. But I don't want the filename to be some long string (it contains a long run of numbers and characters).
For example, I am using the HTML5 download attribute:
<a href="https://fbcdn-sphotos-g-a.akamaihd.net/hphotos-ak-xlt1/v/t1.0-9/12109181_503948273111743_2421725301227286538_n.jpg?oh=08c71f2236eaacc243ccd36475b4634e&oe=56BAA86C&__gda__=1459095933_f07fc4bb7bf54f48ac0b9286f8bc92c6"
download="imagename.jpg">
Download Image
</a>
Or, here is a JSFiddle of the above code.
When I click this link, the file is downloaded but with a different name. My question is: how do I change the filename to something like images.jpg?
Is it possible? If yes, how should I proceed?
The default filename is sent by the server through the HTTP header:
Content-Disposition: attachment; filename='somefile'
Code that runs on the client has very limited control over files for security reasons. The only fix I can see is to have some server code which downloads the file from the other domain and then sends it back with a new filename. So no, JS can't fix that for you.
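If you do go the server route, a minimal Node.js sketch of that idea might look like this (hypothetical: the remote URL, filename, and port are placeholders, and real code would need proper error handling):
const http = require('http');
const https = require('https');

// Placeholder: the long, generated remote URL you actually want to rename on download
const REMOTE_URL = 'https://example.com/some-very-long-generated-name.jpg';

http.createServer(function (req, res) {
  https.get(REMOTE_URL, function (remote) {
    res.writeHead(200, {
      'Content-Type': remote.headers['content-type'] || 'application/octet-stream',
      'Content-Disposition': 'attachment; filename="imagename.jpg"' // the name the browser will suggest
    });
    remote.pipe(res); // stream the remote body straight through to the client
  }).on('error', function () {
    res.writeHead(502);
    res.end('Could not fetch the remote file');
  });
}).listen(8080);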
I am relatively sure that the download attribute will only rename files that you are hosting, and not remote files.
Your question is similar to this one:
Using download attribute with remote file
There is a workaround solution mentioned in that answer, but it's probably out of scope for a simple learning exercise.

Link and execute external JavaScript file hosted on GitHub

When I try to change the linked reference of a local JavaScript file to a GitHub raw version, my test file stops working. The error is:
Refused to execute script from ... because its MIME type (text/plain) is not executable, and strict MIME type checking is enabled.
Is there a way to disable this behavior or is there a service that allows linking to GitHub raw files?
Working code:
<script src="bootstrap-wysiwyg.js"></script>
Non-working code:
<script src="https://raw.github.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js"></script>
There is now a good workaround for this, using jsdelivr.net.
Steps:
Find your link on GitHub, and click to the "Raw" version.
Copy the URL.
Change raw.githubusercontent.com to cdn.jsdelivr.net
Insert /gh/ before your username.
Remove the branch name.
(Optional) Insert the version you want to link to, as #version (if you do not do this, you will get the latest - which may cause long-term caching)
Examples:
http://raw.githubusercontent.com/<username>/<repo>/<branch>/path/to/file.js
Use this URL to get the latest version:
http://cdn.jsdelivr.net/gh/<username>/<repo>/path/to/file.js
Use this URL to get a specific version or commit hash:
http://cdn.jsdelivr.net/gh/<username>/<repo>#<version or hash>/path/to/file.js
For production environments, consider targeting a specific tag or commit hash rather than the branch. Using the latest link may result in long-term caching of the file, causing your link to not be updated as you push new versions. Linking to a file by commit hash or tag makes the link unique to that version.
Why is this needed?
In 2013, GitHub started using X-Content-Type-Options: nosniff, which instructs more modern browsers to enforce strict MIME type checking. The raw files are then served with the MIME type the server declares (text/plain), preventing the browser from using the file as intended (if the browser honors the setting).
For background on this topic, please refer to this discussion thread.
This is no longer possible. GitHub has explicitly disabled JavaScript
hotlinking, and newer versions of browsers respect that setting.
Heads up: nosniff header support coming to Chrome and Firefox
rawgithub.com redirects to rawgit.com, so the above example would now be:
http://rawgit.com/user/package/master/link.min.js
GitHub Pages is GitHub’s official solution to this problem.
raw.githubusercontent makes all files use the text/plain MIME type, even if the file is a CSS or JavaScript file. So going to https://raw.githubusercontent.com/‹user›/‹repo›/‹branch›/‹filepath› will not be the correct MIME type but instead a plaintext file, and linking it via <link href="..."/> or <script src="..."></script> won’t work—the CSS won’t apply / the JS won’t run.
GitHub Pages hosts your repo at a special URL, so all you have to do is check-in your files and push. Note that in most cases, GitHub Pages requires you to commit to a special branch, gh-pages.
On your new site, which is usually https://‹user›.github.io/‹repo›, every file committed to the gh-pages branch (the most recent commit) is present at this url. So then you can link to your js file via <script src="https://‹user›.github.io/‹repo›/file.js"></script>, and this will be the correct MIME type.
Do you have build files?
Personally, my recommendation is to run this branch parallel to master. On the gh-pages branch, you can edit your .gitignore file to check in all the dist/build files you need for your site (e.g. if you have any minified/compiled files), while keeping them ignored on your master branch. This is useful because you typically don’t want to track changes in build files in your regular repo. Every time you want to update your hosted files, simply merge master into gh-pages, rebuild, commit, and then push.
(protip: you can merge and rebuild in the same commit with these steps:)
$ git checkout gh-pages
$ git merge --no-ff --no-commit master # prepare the merge but don’t commit it (as if there were a merge conflict)
$ npm run build # (or whatever your build process is)
$ git add . # stage the newly built files
$ git merge --continue # commit the merge
$ git push origin gh-pages
https://raw.githack.com/
I found that this site supplies a CDN which:
removes the nosniff HTTP header
fixes the MIME type based on the file extension
There is also this site:
https://rawgit.com/
NOTE: RawGit has reached the end of its useful life
You can also use a browser extension to remove the X-Content-Type-Options response header for raw.githubusercontent.com files. There are a couple of browser extensions to modify response headers.
Requestly: Chrome & Firefox
Modify Header Value: Firefox
Remove X-Content-Type-Options response header using Requestly
Install Requestly for your browser
Open Rules Page
Click create rule & Select Modify Headers
In Source field, enter Url -> Contains -> raw.githubusercontent.com
In Response Headers section, Remove -> X-Content-Type-Options
How to test
I created a simple JSFiddle to test whether we can use raw GitHub files as scripts in our code. Here is the Fiddle, with the following code:
<center id="msg"></center>
<script src="https://raw.githubusercontent.com/sachinjain024/practicebook/master/web-extensions-master/storage/background.js"></script>
<script>
  try {
    if (typeof BG.Methods !== 'undefined') {
      document.getElementById('msg').innerHTML = 'Script evaluated successfully!';
    }
  } catch (e) {
    document.getElementById('msg').innerHTML = 'Problem evaluating script';
  }
</script>
If you see Script evaluated successfully!, it means you are able to use the raw GitHub file in your code.
Otherwise, Problem evaluating script indicates that there was some problem executing the script from the raw GitHub source.
Note: This will only work on your machine, so you won't be able to deploy it to production. This approach just lets you quickly use files from any GitHub repository without much hassle.
Disclaimer: I am the author of Requestly, so you can blame me for anything you don't like.
My use case was to load 'bookmarklets' directly from my Bitbucket account, which has the same restrictions as GitHub. The workaround I came up with was to fetch the script with AJAX and run eval on the response string; the snippet below is based on that approach.
<script>
  var sScriptURL = '<script-URL-here>';
  var oReq = new XMLHttpRequest();
  oReq.addEventListener("load", function fLoad() {
    eval(this.responseText + '\r\n//# sourceURL=' + sScriptURL);
  });
  oReq.open("GET", sScriptURL);
  oReq.send();
  false;
</script>
Note that the sourceURL comment is appended to allow debugging of the script within the browser's developer tools.
To make things clear and short
//raw.githubusercontent.com --> //rawgit.com
Note that this is handled by RawGit's development hosting, not their CDN for production hosting.
When a file is uploaded to GitHub, you can use it as an external source or free hosting. Troy Alford has explained it well above, but to make it easier, here are some simple steps for using a GitHub raw file on your site:
Here is your file's link:
https://raw.githubusercontent.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Now, to execute it, you have to remove https:// and the dot ( . ) between raw and githubusercontent.
Like this:
rawgithubusercontent.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Now, when you visit this link, you will get a link that can be used to call your JavaScript.
Here is the final link:
https://rawgit.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Similarly, if you host a CSS file, you can do it as described above. It is the easiest way to get a simple link to call your external CSS or JavaScript file hosted on GitHub.
I hope this is helpful.
Reference URL: http://101helper.blogspot.com/2015/11/store-blogger-codes-on-github-boost-blogger-speed.html
I found the error was shown due to the comments at the beginning of the file. You can solve this issue by simply creating your own file without the comment and pushing it to Git; it then shows no error.
For proof, you can try these two files with the same easy-pagination code:
without comment
with comment
I had the same issue as you; what I did was change it to
<script type="application/javascript" src="bootstrap-wysiwyg.js"></script>
It works for me.
Example
original
https://raw.githubusercontent.com/antelove19/qrcodejs/master/qrcode.min.js
cdn.jsdelivr.net
https://cdn.jsdelivr.net/gh/antelove19/qrcodejs/qrcode.min.js
Most simple way:
<script type="text/plain" src="http://raw.githubusercontent.com/user/repo/branch/file.js"></script>
Served by GitHub, and very reliable.
With text/plain
Without text/plain
raw.github.com is not truly raw access to the file asset, but a view rendered by Rails.
So accessing raw.github.com is much heavier than necessary.
I don't know why raw.github.com is implemented as a Rails view.
Instead of fixing this route issue, GitHub added an X-Content-Type-Options: nosniff header.
Workaround:
Put the script at user.github.io/repo
Use a third-party CDN like rawgit.com.
Alternatively, if generating your markup server-side, you can just fetch and inject.
For example, in JSTL you could do this:
<script type="text/javascript">
<c:import url="https://raw.github.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js" />
</script>
They don't allow hotlinking for a reason, so it's probably bad form if you want to be a good citizen. I'd suggest you cache that JavaScript and only re-fetch it periodically, as you see fit.

Is there any way to 'simulate' right-click save-as command or force download of file in the browser with JavaScript?

I have this situation where we have media files stored on a global CDN. Our web app is hosted on its own server, and when the media assets are needed they are called from the CDN URL. Recently we had a page where the user can download file attachments; however, some of the file types were opening in the browser instead of downloading (such as MP3). The only way around this was to manually specify the HTTP response to attach the file, but the only way I could achieve that was to download the file from the CDN to my server and then feed it back to the user, which defeats the purpose of having it on the global CDN. Instead, I am wondering if there is some client-side solution for this?
EDIT: Just found this somewhere, though I'm not sure if it will work right in all browsers:
<body>
<script>
function downloadme(x) {
  myTempWindow = window.open(x, '', 'left=10000,screenX=10000');
  myTempWindow.document.execCommand('SaveAs', 'null', 'download.pdf');
  myTempWindow.close();
}
</script>
<a href="javascript:downloadme('/test.pdf');">Download this pdf</a>
</body>
RE-EDIT: Oh well, so much for that idea -> Does execCommand SaveAs work in Firefox?
Does your CDN allow you to specify the HTTP headers? Amazon cloudfront does, for example.
I found an easy solution to this that worked for me. Add a URL parameter to the file name. This will trick the browser into bypassing its built-in file mappings. For example, instead of http://mydomain.com/file.pdf, set your client-side link up to point to http://mydomain.com/file.pdf? (note the added question mark).

Forcing cache expiration from a JavaScript file

I have an old version of a JS file cached on users' browsers, with expiration set to 10 years (since then, I have learned how to set expires headers correctly on my web server). I have made updates to the JS file, and I want my users to benefit from them.
Is there any way my web server can force users' browsers to clear the cache for this one file, short of serving a differently named JS file?
In the future, if expires headers are not set correctly (paranoia), can my JS file automatically expire itself and force a reload after, say, a day has passed since it was cached?
EDIT: Ideally I want to solve this problem without changing HTML markup on the page that hosts the script.
In short... no.
You can add something to the end of the source address of the script tag. Browsers will treat this as a different file to the one they have currently cached.
<script src="/js/something.js?version=2"></script>
Not sure about your other options.
In HTML5 you can use the Application Cache; that way you can control when the cache should expire.
You need to add the path to the manifest in the html tag:
<!DOCTYPE HTML><html manifest="demo.appcache">
In your demo.appcache file you can just place each file that you want to cache
CACHE MANIFEST
# 2013-01-01 v1.0.0
/myjsfile.js
When you want the browser to download a new file you can update the manifest
CACHE MANIFEST
# 2013-02-01 v1.0.1
/myjsfile.js
Just be sure to modify the cache manifest with the publish date or the version (or something else); that way, when the browser sees that the manifest has changed, it will download all the files listed in it.
If the manifest has not changed, the browser will not update the local files, even if a file was modified on the server.
For further information please take a look at HTML5 Application Cache
You could add a dummy parameter to your URLs
<script src='oldscriptname.js?foo=bar'></script>
The main problem is that if you set up the expiration with a simple "Expires" header, then the browsers that have the file cached won't even bother to contact you for it. Even if there were a way for the script to whack the browser in the head and clear the cache, your old script doesn't do that, so you have no way to get that functionality out to the clients.
You can force a reload of a cached document with JavaScript:
window.location.reload(true);
The true argument tells the browser to reload the page without using the cache.
