My app creates an Excel file, server side, from a database extraction.
A POST request sends parameters to the server, which the server then uses to query the database.
The server uses these parameters to extract the data, converts it to an Excel file (.xlsx), and saves the file under a file name derived from the parameters sent to the server.
The server responds to the POST request by sending the file name to the browser.
The browser then builds a link from the file name and other predefined parameters and downloads the file with the following instructions:
var link = 'http://host-name/path-to-file/excel-file.xlsx'; // the link that is created by the js in the browser
window.location = link; // the file is downloaded
This works in Chrome, Firefox, Opera and Safari; in these browsers the file downloads with no problem.
However, when running in Microsoft Edge, the file is not downloaded and garbled content appears in the page instead.
Someone was facing a similar issue in some versions of IE and had to set the Cache-Control header to make the download work properly:
response.Cache.SetCacheability(HttpCacheability.Private);
Source
The issue here is that this method is not actually downloading the file. I was using JavaScript to instruct the browser to open the Excel file (window.location = link;), which tells the browser: go to that link and open whatever you find at that address. That is normally an HTML page, but it can also be a .pdf or another sort of file that modern web browsers are able to interpret and render.
Now, the reason this mostly worked is that browsers like Chrome and Firefox are smart enough to know that they cannot interpret and display Excel files, so they download them instead. Pretty smart, right? However, Microsoft Edge is not as clever as its more proven compatriots. It tries to interpret and render the file, which of course it cannot, and the result is a grand display of nonsense, as you can see from the screen grab in my question above.
My problem here was actually a deeper-rooted issue of technology mismatch. I have since migrated to a more modern stack, replacing my plain Node.js server with Express and moving the front end out of a cross-origin Tomcat Java-container application-server model (which was causing most of my headaches on a daily basis, since I was writing JavaScript) into a same-origin environment using webpack along with Express.
And as you might know, using webpack brings a whole new dimension to the front end that was not available before, when we were using the 'old approach' to web development.
Most of the improvement comes from webpack's ability to bring Node.js-style modules to the front end.
It has made my life as a dev 150% easier, and the type of problem described in my question above is now a thing of the past. JavaScript for the win! The moral for me here is that sometimes there aren't any quick fixes, and you just have to do things properly.
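For completeness, here is a minimal sketch (not the fix I actually applied) of serving the generated file with a Content-Disposition: attachment header, so that every browser, Edge included, downloads it instead of trying to render it. It assumes the Express server I mention above; the route and directory names are placeholders.

var express = require('express');
var path = require('path');
var app = express();

// The browser still navigates to the file URL (window.location = link),
// but res.download() adds Content-Disposition: attachment, so the file
// is saved rather than opened in the page.
app.get('/path-to-file/:name', function (req, res) {
    var fileName = path.basename(req.params.name); // avoid path traversal
    var filePath = path.join(__dirname, 'exports', fileName); // placeholder directory
    res.download(filePath, fileName, function (err) {
        if (err) res.status(500).end();
    });
});

app.listen(3000);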
A JavaScript file which I'm currently changing frequently gets loaded corrupted by the browser (Chrome, Firefox). First of all, the actual file loaded is an old one and not the currently saved one. Second, the file frequently seems to be only partly loaded (e.g. the last few characters don't appear), or I get an Unexpected token ILLEGAL error message.
During development I'm disabling caching, so that's not the reason for the 'old' JavaScript version. Also, 'Empty Cache and Hard Reload' in Chrome doesn't change anything either.
After looking at Javascript files getting corrupted automatically, I've ensured the file is UTF-8 encoded.
Any help or tip would be greatly appreciated!
If you're sure the client side isn't doing any caching, then what remains are the server side and whatever is in between:
Is there a proxy? Those pieces of software can sometimes create big problems because of their interpretation of caching policies, or just because of bugs.
What server is serving the files? How is the script updated on the server? You can often run into problems if the server clock and the client uploading the file are not perfectly synchronized, because server-side caching may think the file didn't actually change. Problems may also happen if, when uploading the file, you're also uploading metadata like the modification datetime instead of having the server set the modification time equal to the upload time.
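One quick way to rule out every cache in between while debugging (a sketch, not part of the answer above; the script path is a placeholder) is to cache-bust the script URL so each request looks like a brand-new resource:

// Appending a timestamp makes the browser and any proxy treat every
// request for the script as a different URL, bypassing cached copies.
var script = document.createElement('script');
script.src = '/js/app.js?v=' + Date.now(); // '/js/app.js' is a placeholder path
document.head.appendChild(script);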
I would like to download a file (of any kind) in IE9 without any redirection. By redirection I mean assigning the URL of the resource I am trying to download to the current document's location, which forces a download when the MIME type is not a document type.
So I am left with getting the data using the XHR object and find a way to save it on disk. Since I am using IE9, I can't use any File API provided in IE10+.
So forget about:
using Blob
using FileSaver (https://github.com/eligrey/FileSaver.js/)
using Blob and typed-array polyfills (they need debugging and I can't make them work)
Right now I am getting the data after the REST call and trying to write it into a document like in this post: Javascript Save CSV file in IE 8/IE 9 without using window.open()
But the problem is that document.write() seems to encode anything written to it in UCS-2, so binaries sent from the back end are reinterpreted and the file gets corrupted. I am guessing that only text-based files can be saved this way.
Last but not least, I should NOT use Flash.
Does anyone have an idea for resolving the encoding issue, or another technique to do the download?
If it helps, I am using AngularJS as the front-end JS framework.
Given the limitations we had, passing the token as a parameter to the downloadable file resource was the solution. When we uplift IE, we shall change the solution, though.
It implies removing any security chain filter and doing a manual validation of the token in the backend.
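A minimal sketch of what that can look like on the client; the endpoint, parameter name and storage key below are made up:

// Hypothetical endpoint and token name; adjust to your backend.
var token = sessionStorage.getItem('authToken');
var url = '/api/reports/export?format=xlsx&access_token=' + encodeURIComponent(token);

// Navigating to the URL lets IE9 handle the download natively,
// so no Blob or FileSaver support is required.
window.location.href = url;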
I'm trying to do the following:
A grid with a lot of files is shown to the user
The user selects as many files as he wants
The user should be prompted for each file for the target location
Each file should be downloaded one after another
I can't find a good solution for this because:
I need a cross-browser solution (no plugins), but I can rely on IE10+ and HTML5
The files should not be downloaded as a zip file or any other archive
Using document.write for inserting multiple iframes feels bad and is discouraged by most browsers
I managed to build a possible example of downloading multiple files using the HTML FileSystem API. I ran into a few problems while building this, which I'll note down below. Beware that this is just an example and could be improved a lot (code-wise and feature-wise).
I stopped developing because I was unable to transfer binary files, but maybe someone can give me a clue on how to do this. (I struggle with binary AJAX transfers and JSON at the moment; I can't say if it's possible to transfer images/binaries over AJAX at all.)
Published Sourcecode on GitHub:
https://github.com/posixpascal/FileSystem-API-Example
A few things to note:
Your users have to click 'Allow this webpage to download multiple files' as soon as the popup is visible. Otherwise it won't work.
This uses heavy I/O operations on the server side (at least with my code). One should rewrite that before using this script.
Be aware of this issue: https://code.google.com/p/chromium/issues/detail?id=94314
Users with non-latin characters in their Windows Username aren't able to download the files.
You can't resume the download if you're using TEMPORARY FileSystem storage. (Chrome throws an error on my machine when I try to access the downloaded files twice.)
Also be aware of loops, because they can screw up other people's browsers.
Youtube Example: https://www.youtube.com/watch?v=F9T4i4qrYtc&list=UUi1sRIczZxhsuWPPUK7xxTA
Live Example: http://pascalraszyk.de/_broken_do_not_use/
All I can say is that this is not a solution at the moment and the API isn't ready for the masses. You can add further support by using Flash and other utilities to compensate for the lack of FileSystem API support.
How it works:
As soon as the user clicks on the download link, my script gathers information about the files using a server-side PHP script. After that it requests a few chunks until the file size of the locally stored file matches the one sent by the PHP script.
As soon as the file is ready, I create an invisible a tag, set its href to "filesystem:myurl.de/theFile" and trigger a click event on that link. I also add the 'download' attribute so the browser is forced to download .txt files as well.
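That last step looks roughly like this; a sketch only, and the filesystem: URL and file name are illustrative:

// Invisible-link trick: point an anchor at the locally stored copy
// and click it programmatically so the browser saves the file.
var fileUrl = 'filesystem:http://myurl.de/temporary/theFile.txt'; // illustrative URL
var a = document.createElement('a');
a.href = fileUrl;
a.download = 'theFile.txt'; // forces a download even for .txt files
a.style.display = 'none';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);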
This is not a full solution to your problem, but you can check the source code and hopefully build something to suit your needs. I guess you have already moved on to a different approach to downloading multiple files.
I found a solution that works (for me) in all browsers. It does not feel that good on the code side (at first), but it seems pretty stable to me across different browsers and different machines.
Chrome will ask the user to allow the site to download multiple files. IE doesn't care at all.
var onDownload = function () {
    var docs = module.getSelectedElements();
    for (var i = 0; i < docs.length; i++) {
        // The IIFE captures the current doc; the timeouts are staggered by
        // 500 ms so each file is loaded into the hidden iframe in turn.
        (function () {
            var doc = docs[i];
            window.setTimeout(function () {
                $jq("#downloadIframe").attr("src", doc.url);
            }, i * 500);
        })();
    }
};
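For this to work, the page also needs the hidden iframe that the #downloadIframe selector refers to; a sketch of creating it once, assuming it is not already in the markup:

// One hidden iframe reused for every download request.
var frame = document.createElement('iframe');
frame.id = 'downloadIframe';
frame.style.display = 'none';
document.body.appendChild(frame);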
I am working on a project for a desktop application. I am using Qt controls with Visual C++.
I am loading an HTML file in the QWebView as:
m_pWebView->load(QUrl("../../../demo/index_Splash_Screen.html"));
Now, what I want is: say I have some .zip files in the location "c:\demo"; I want a list (or array of file names) of the files present in that directory.
How can I do this through JavaScript?
PS: I went through this link, but it didn't match my requirement. I have not worked much with HTML, JavaScript and jQuery. Please help me.
I'm afraid you cannot access local files or directories using JavaScript, due to security issues.
Edit: I hadn't thought about the File API, so for a moment I thought this might not be true, but without some user input to give permission this still cannot be done.
This question has a good response from PhilNicholas:
I'm afraid I may be the bearer of bad news for your design: The action you are requesting expressly violates the security model as specified in the File API spec. The client implementation of FileReader() must make sure that "all files that are being read by FileReader objects have first been selected by the user." (W3C File API, 13. Security Considerations: http://www.w3.org/TR/FileAPI/#security-discussion). It would be a huge security risk if browser scripts could just arbitrarily open and read any file from a path without any user interaction. No browser manufacturer would allow unfettered access to the entire file system like that.
Thinking about it, however: if it is all being run locally, you could use AJAX to query a server-side script that returns the directory listing you request.
If it is a Windows application, then you could access the local filesystem by using ActiveX objects. You might have a look at this link: Reading a txt file from Javascript.
Note that ActiveX usage is possible only when using IE as the browser/engine; I needed it a while ago for developing an HTML application (.hta files).
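Since this only works where ActiveX is available (IE or an HTA, not QWebView), here is a rough sketch of listing a directory with the Scripting.FileSystemObject; the C:\demo path mirrors the question:

// IE / HTA only: enumerate the files in a folder via ActiveX.
function listZipFiles(dirPath) {
    var fso = new ActiveXObject("Scripting.FileSystemObject");
    var folder = fso.GetFolder(dirPath);
    var names = [];
    for (var e = new Enumerator(folder.Files); !e.atEnd(); e.moveNext()) {
        var file = e.item();
        if (/\.zip$/i.test(file.Name)) { // keep only the .zip files
            names.push(file.Name);
        }
    }
    return names;
}

var zips = listZipFiles("C:\\demo");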
I am all too aware of the fact that even with the new FileAPI it's not possible to access the local path of a file added using a file input field or drag-and-drop. Whether or not this is good, bad or ugly is not the issue here. According to the FileAPI specs local file access is not to be implemented, and so I'm not holding my breath.
But let's just pretend I'm in a situation with the following fixed parameters:
Developing an HTML5 application only to be used internally at a company
.NET used for backend (needed due to interop with APIs)
Can specify/control exactly which browser and version should be used with the application
Need to access files that are usually located on a network share, but possibly also locally at a user's workstation
And by access I don't mean access the file data, but rather be able to relay a file drag-and-drop/select event to some other API by feeding that third party the file's local path, so that the third party can pick up the file and do some sort of work on it. This can be likened to using an input[type=file] field the way you would an OpenFileDialog in .NET - i.e. the point is to feed the application a file path, not an actual file.
I realise that out of the box this is probably not possible. But I also think that there must be some sort of solution to the problem.
Some ideas I've been toying with are:
Using browser specific methods for allowing "secure features"
Not sure if possible - tried using some of these features to no avail
Would limit the app to a specific version of a browser as the functionality could potentially be removed in the future
Something like a Chrome extension could possibly do the trick
Using some sort of companion application installed locally on a clients computer that takes care of all on-disk file handling, possibly communicating with the HTML5 client using websockets or the like.
A potentially pretty messy solution
Would probably confuse the users a bit at first
Submitting the selected file data to the server, storing it at specific path and sending this new path to the third party.
Would constitute a lot of sending files over the company network, some 100+ MB in size
Would not be able to do any in-place changes to a file a user has selected
... and that's about it.
Any snazzy suggestions? Wise words? Helpful links? Snarky comments?
Thanks.
Edit: For anyone curious about it, this was very simple using Silverlight as per jgauffin's suggestion below.
From the Silverlight codebehind (using elevated privileges):
private void fileBtn_Click(object sender, RoutedEventArgs e)
{
    // prompt file select dialog in Silverlight:
    var dlg = new OpenFileDialog();
    dlg.ShowDialog();

    // call JavaScript method and feed it the file path:
    HtmlPage.Window.Invoke("onFileSelected", dlg.File.FullName);
}
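On the page side, a minimal sketch of the JavaScript handler that the HtmlPage.Window.Invoke call above reaches; what the handler does with the path is up to you:

// Receives the full local path chosen in the Silverlight OpenFileDialog.
function onFileSelected(fullPath) {
    // e.g. pass the path on to the backend so the third-party API can pick the file up
    console.log("User selected: " + fullPath);
}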
You'll probably have to use something that runs in the browser, like Flash or Silverlight.
Since it's an internal app, I would use Silverlight, as everything else is in .NET. It should be enough to implement only the file-access part in the plugin.
Here is an article about local file access: https://www.wintellect.com/silverlight-4-s-new-local-file-system-support/
Does the server hosting the site have access to the network of PCs?
You could just list all the files that way: build a small AJAX script, like a file dialog, that has PHP or whatever sending back the directory structure.
No plugins needed, works in all browsers... :)
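A rough sketch of the client side of that idea; the /list-files.php endpoint and its JSON response shape are made up:

// Hypothetical endpoint returning a JSON array of file names, e.g. ["a.zip", "b.zip"].
var xhr = new XMLHttpRequest();
xhr.open("GET", "/list-files.php?dir=" + encodeURIComponent("c:/demo"), true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var fileNames = JSON.parse(xhr.responseText);
        // build the file-dialog-like listing from fileNames here
        console.log(fileNames);
    }
};
xhr.send();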