I am trying to download multiple files that the user has selected. However, the browser cancels all but the last download request. If I increase the delay between the requests to about 1 second, the files do get downloaded, but even then some files are occasionally missed. The files are downloaded from Amazon S3 URLs (i.e., these are cross-origin requests).
I am doing this by creating an anchor element with the URL and then dispatching a click event on it using JavaScript.
downloadFile(url) {
  let a = document.createElement('a');
  a.id = url;
  // a.setAttribute('target', 'blank');
  a.download = '';
  a.href = url;
  // Firefox doesn't support `a.click()` on a detached anchor,
  // so dispatch a synthetic click event instead
  // console.log('downloading ' + url);
  a.dispatchEvent(new MouseEvent('click'));
  a.remove();
}
Then, on the download button's click event, I'm doing this:
let delay = 0;
urlList.forEach(url => {
  setTimeout(downloadFile.bind(null, url), 100 * ++delay);
});
Is there a way to accomplish this?
Why is the browser cancelling the requests?
Why is the browser cancelling the requests?
Because you are essentially "cancelling navigation" by clicking the next link before the browser is fully done with what it has to do for the previous one.
If these weren't downloads but regular links to pages, this is the behavior you'd want: if the user clicks a link to page A first, but then loses patience and clicks a link to page B, the request for page A gets cancelled at that point. There is no need to load two resources when only one can be displayed at a time anyway.
I don't think there is much you can do about this, unless you want to figure out a "magic number" timeout that somehow makes it "work" - especially since you won't know whether it works in general, or just on your machine, with your internet connection, and so on.
I am trying to download multiple files that the user has selected for download.
You could have those selected files wrapped into a container format - for example ZIP (strictly speaking, more than just a "container") - dynamically on the server, so that the user only has to download one file (which they will then have to unpack on their own machine).
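If the backend happens to be Node, a minimal sketch of that idea with Express and the archiver package (the route name and the shape of the request body are assumptions for illustration):

const express = require('express');
const archiver = require('archiver');
const path = require('path');

const app = express();

// Hypothetical route: streams the user's selected files as one ZIP
app.post('/download-zip', express.json(), (req, res) => {
  res.setHeader('Content-Type', 'application/zip');
  res.setHeader('Content-Disposition', 'attachment; filename="files.zip"');

  const archive = archiver('zip');
  archive.pipe(res);

  // req.body.files is assumed to be a list of server-side paths,
  // validated elsewhere so users can't request arbitrary files
  for (const file of req.body.files) {
    archive.file(file, { name: path.basename(file) });
  }
  archive.finalize();
});

app.listen(3000);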
Or you could change your file selection process to begin with. Instead of having the user mark the files they want using checkboxes or something similar, maybe present them with direct links to the files instead? Then they can click each one they want, one after another, and the "normal" download functionality will take place.
@misorude is right about why they're getting cancelled, but a workaround is to use an iframe instead of an anchor tag to download a file.
A TypeScript implementation of such a download function is below:
export function downloadFile(downloadLink: string): void {
  const iframe = document.createElement("iframe");
  iframe.setAttribute("sandbox", "allow-downloads allow-scripts");
  iframe.src = downloadLink;
  iframe.setAttribute("style", "display: none");
  document.body.appendChild(iframe);
}
So it's similar to your implementation - the only thing is, you'll have stray (non-displayed) iframes in your DOM. There's no way to tell when an iframe is done downloading its file, so there isn't an adequate way to know when to remove them. But at least your network requests won't get cancelled.
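If the stray iframes bother you, one pragmatic (if imperfect) workaround is to remove each one after a generous timeout - an assumption that the download has started by then, not a guarantee:

function downloadFile(downloadLink) {
  const iframe = document.createElement("iframe");
  iframe.setAttribute("sandbox", "allow-downloads allow-scripts");
  iframe.src = downloadLink;
  iframe.setAttribute("style", "display: none");
  document.body.appendChild(iframe);

  // Assumption: the download has started within a minute. Removing the
  // iframe afterwards only cleans up the DOM node; it should not abort
  // a download that has already been handed to the download manager.
  setTimeout(() => iframe.remove(), 60000);
}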
Related
The anchor tag click automates the download. Are there events attached to anchor tags that I can listen for?
downloadMyFile(){
  const link = document.createElement('a');
  link.setAttribute('href', 'abc.net/files/test.ino');
  link.setAttribute('download', 'products.csv');
  document.body.appendChild(link);
  link.click();
  link.remove();
}
You will not be able to get notified on the client when the download completes this way.
You have 2 possible solutions, which mostly depend on the size of the file.
Option 1: Use an ajax call instead. You can stream the whole file into memory and then make the browser save it to a file (which will be instant). This gives you full visibility into the different download events.
// Get the content somehow, e.g. with fetch...
fetch(url)
  .then(response => response.blob())
  // ...and then make the browser download the in-memory copy
  .then(content => { window.location.href = window.URL.createObjectURL(content); });
Option 2: Monitor it with the server. I'd suggest you add a UID to the request like:
link.setAttribute('href', 'abc.net/files/test.ino?uid='+myUID);
Have the server keep track of that request and store details under the UID; you can then have another route report the status of the request when provided the UID. The problem with this is that you'd have to poll every now and then to know whether the download is finished.
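A minimal client-side sketch of that polling, assuming a hypothetical /download-status route that returns JSON like { done: true } for a given UID:

function waitForDownload(uid, callback) {
  const timer = setInterval(() => {
    fetch('/download-status?uid=' + encodeURIComponent(uid))
      .then(response => response.json())
      .then(status => {
        // `done` is whatever flag your status route reports
        if (status.done) {
          clearInterval(timer);
          callback();
        }
      });
  }, 2000); // poll every 2 seconds
}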
Since we do not know exactly what your request is for, it is hard to tell if there are other possibilities. But IMHO there is no real use for you to know when the file has completed downloading; I cannot figure out why you'd want that in the beginning. I see it is a CSV file; those usually are not that big, and the download should be really quick... Unless it takes a long time to start because the file has to be generated first? In that case I suggest you see a popular question/answer I made a while back: How to display a loading animation while file is generated for download?
Edit: Since the file may be big and not fit in memory, you could write the file to disk using the FileSystem API. This way you will be able to pipe the stream coming from your request directly to the filesystem. Since a stream has a close event, you'll be able to know when it is all done.
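The FileSystem API referenced above never became a cross-browser standard; its closest modern relative is the File System Access API (Chromium-only at the time of writing). A sketch of the stream-to-disk idea using it:

async function downloadToDisk(url) {
  // Ask the user where to save (File System Access API, Chromium-only)
  const handle = await window.showSaveFilePicker({ suggestedName: 'export.csv' });
  const writable = await handle.createWritable();

  const response = await fetch(url);
  // Pipe the network stream straight to disk; the promise resolves
  // once everything is written and closed - your "done" signal
  await response.body.pipeTo(writable);
}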
In my AngularJS (1.3) app there is a place where the user can download a PDF file.
This is done in a controller using:
$window.location.href = 'pdf/123456';
which saves the file on the user's computer. The URL in the browser never really changes; the user stays on the same page in the Angular app.
When I set the location this way, however, ongoing requests get cancelled in Firefox. In Chrome there is no problem. The only solution I've come up with is to hold off on other requests until the PDF is downloaded, but since that can take some time I would like to start them before the download is complete.
Is there any way of fixing this? Can I download the file in any other way? I don't want to open a popup window.
Another way to download a file is to create an <a> tag with the file as target and simulate a click on it. Like this:
var a = document.createElement('a');
a.href = 'pdf/123456';
a.download = 'document_name';
a.target = '_blank';
a.click();
I am trying to use window.location.href in a loop to download multiple files.
I have a table in which I can select files; I then loop over the selection and navigate to each file path to download the files.
I keep only getting the last file to download.
I think it's because window.location.href only takes effect after my JavaScript finishes, not as the code runs.
Even with a breakpoint on the window.location.href line, it still only downloads the last file, and only once I let the code run through.
Is there a better way to initiate multiple downloads from a JavaScript loop?
$("#btnDownload").click(function () {
var table = $('#DocuTable').DataTable();
var rows_selected = table.rows('.selected').data();
$.each(rows_selected, function (i, v) {
window.location.href = v.FilePath;
});
});
Some browsers (at least Google Chrome) support the following:
$("<a download/>").attr("href", "https://code.jquery.com/jquery-3.1.0.min.js").get(0).click();
$("<a download/>").attr("href", "https://code.jquery.com/jquery-3.1.0.min.js").get(0).click();
$("<a download/>").attr("href", "https://code.jquery.com/jquery-3.1.0.min.js").get(0).click();
JSFiddle: https://jsfiddle.net/padk08zc/
I would make use of iframes and a script to force the download of the files as Joe Enos and cmizzi have suggested.
The answer here will help with JavaScript for opening multiple iframes for each file:
Download multiple files with a single action
The answers for popular languages will help with forcing downloads if the URL is actually something that can be served correctly over the web:
PHP: How to force file download with PHP
.Net: Force download of a file on web server - ASP .NET C#
NodeJS: Download a file from NodeJS Server using Express
Ruby: Force browser to download file instead of opening it
Ensure you change the links to point to your download script and also make sure you add the appropriate security checks. You wouldn't want to allow anyone to abuse your script.
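For the multiple-download case above, a sketch combining the two ideas - one hidden iframe per selected file, reusing the DataTables selection from the question (v.FilePath is assumed to point at a URL served with download-forcing headers):

$("#btnDownload").click(function () {
  var rows_selected = $('#DocuTable').DataTable().rows('.selected').data();
  $.each(rows_selected, function (i, v) {
    // One hidden iframe per file; each request proceeds independently,
    // so later downloads don't cancel earlier ones
    var iframe = $('<iframe>', { src: v.FilePath, style: 'display:none' });
    $('body').append(iframe);
  });
});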
This looks like an old post, but I stumbled on it while trying to solve a similar issue, so I'm giving a solution which might help. I was able to download the files, though not in the same tab. You can just replace your event handler with the download function provided below. The urls argument is an array of presigned S3 URLs.
The entire code looks like below:
download(urls: any) {
  var self = this;
  var url = urls.pop();
  setTimeout(function () {
    var a = document.createElement('a');
    a.setAttribute('href', url);
    a.setAttribute('download', '');
    a.setAttribute('target', '_blank');
    document.body.appendChild(a);
    a.click();
    a.remove();
    // Recurse until every URL has been handled, one download per second
    if (urls.length > 0) {
      self.download(urls);
    }
  }, 1000);
}
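Called with the full list, it then drains one URL per second. A usage sketch (the variable names are made up):

// e.g. in a button handler on the same component
this.download([presignedUrl1, presignedUrl2, presignedUrl3]);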
I am using angular and ASP.NET Web API to allow users to download files that are generated on the server.
HTML Markup for download link:
<img src="/content/images/table_excel.png">
<a ng-click="exportToExcel(report.Id)">Excel Model</a>
<a id="report_{{report.Id}}" target="_self"></a>
The last anchor tag is there to serve as a placeholder for an automatic click event. The visible anchor calls the exportToExcel method to initiate the call to the server and begin creating the file.
$scope.exportToExcel = function (reportId) {
  reportService.excelExport(reportId, function (result) {
    var url = "/files/report_" + reportId + "/" + result.data.Model.fileName;
    var dLink = document.getElementById("report_" + reportId);
    dLink.href = url;
    dLink.setAttribute('download', result.data.Model.fileName);
    dLink.click();
  });
}
The Web API code creates an Excel file. The file on the server is about 279k, but when it is downloaded on the client it is only 7k. My first thought was that the automatic click might be happening before the file is completely written, so I added a 10-second $timeout around the click event as a test. It failed with the same result.
This seems to only be happening on our remote QA server. On my local development server I always get the entire file back. I am at a loss as to why this might be happening. We have similar functionality where files are constructed from a database blob and saved to the local disk for download. The same method is employed for the client side download and that seems to work fine. I am wondering if anyone else has run into a similar issue.
Update
After the comment by SilentTremor, we think it may actually be IIS or some sort of server issue. Originally we didn't think it could be, but after some digging it may be. The instance of the client code is only allowing 7k of data to be downloaded; no matter what we try to download, the result is always the same.
It turns out the API application was writing the file to a different instance of our application. The client code had no idea and was trying to download a file that did not exist. So the file behind the download link was empty, hence the small file size.
I've been using the Microsoft TechNet site, where you can download ISO files by clicking a link on the page. The element is like this:
<a href="javascript:void(0)" onmouseout="HideToolTip()"
onmouseover="ShowToolTip(event,'Click here to download.')"
onclick="javascript:RunDownload('39010^313^164',event)"
class="detailsLink">Download</a>
I wasn't able to find the RunDownload() method in the scripts, and I wondered what it is likely to do. I mean, usually when I provide a link for someone to download, I provide an anchor to it:
<a href="...">download</a>
But this works differently; what is the script doing? Even when I ran Fiddler I wasn't able to see the actual download location.
There's no such thing as a "JavaScript download" link. JavaScript can open a new window, or simulate a click on a link.
What you have to find is which URL the function triggered by this click will lead to.
Here's an example of how to do it:
Suppose we have:
<a id="download">download Here §§§</a>
then this jQuery code:
$('#download').click(function () {
  window.location.href = 'http://example.org/download/ISO.ISO';
});
will redirect to the URL http://example.org/download/ISO.ISO. Whether this URL starts a download or not depends on the HTTP headers and your browser, not on what JavaScript does.
The download location can be a URL-rewritten path. This means that some parameters may be passed with an HTTP POST, and an HTTP handler in the web server or web application may read arguments from the HTTP request and write the file bytes to the HTTP response, which completely hides where the file is located in the actual server's file system.
Maybe this is what's behind the scenes and prevents you from knowing the file location.
For example, we can have this:
http://mypage.com/downloads/1223893893
And you requested an executable like "whatever.exe" to download to your hard disk. Where is "http://mypage.com/downloads/whatever.exe"? Actually, it doesn't exist. It's a byte array saved in some database record, and the "mypage" web application handles a request for a file identified as "1223893893", which can be a combination of an identifier, a timestamp, or whichever arguments.
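For illustration, a sketch of such a handler in Node/Express - the route, the lookupFileById helper, and the field names are all assumptions:

const express = require('express');
const app = express();

// Hypothetical handler: looks the file up by its opaque id and writes
// the bytes to the response, so the real storage location never leaks
app.get('/downloads/:id', async (req, res) => {
  const file = await lookupFileById(req.params.id); // assumed DB helper
  if (!file) return res.sendStatus(404);

  // These headers are what actually make the browser download the bytes
  res.setHeader('Content-Type', 'application/octet-stream');
  res.setHeader('Content-Disposition', 'attachment; filename="' + file.name + '"');
  res.send(file.bytes); // a Buffer pulled from the database
});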
What I think the RunDownload function might do: it might inform the server with a GET request that another download is about to happen, or it might run the download in the background by pointing the target attribute at an iframe, so the user doesn't need to open another tab and the file downloads on the same page.
HTML
<a href="filelocation.exe" onclick="runDownload(event)">Download</a>
JS
var runDownload = function (e) {
  e.preventDefault();
  // report the download to the server before navigating
  increaseDownloadCountOnTheServer(location);
  window.location.href = "filelocation.exe";
}
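increaseDownloadCountOnTheServer is left undefined in the answer; one plausible sketch, assuming a /downloads/count endpoint on the server that accepts POSTs:

function increaseDownloadCountOnTheServer(location) {
  // sendBeacon queues a small POST that survives the page navigating away,
  // which suits fire-and-forget bookkeeping like a download counter
  navigator.sendBeacon('/downloads/count', String(location.href));
}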