I need to create a small browser-based application that helps users download/save, and possibly print to the default printer, a large number of files from a webserver we have no control over (but we have all the URIs beforehand).
These files are hidden behind a "single sign-on" (SSO) that can only be performed via a browser and requires a human user. Hence it must be a browser-based solution that piggybacks on the session established by the SSO.
The users' platform is Windows 7.
The point is to save the users from going through a lot of clicks per file (download, where to save, etc.) when they need to perform this operation (daily).
At this point all the files are PDF, but that might change in the future.
A browser-agnostic solution is preferred (and I assume more robust when it comes to future browser updates).
But we can base it on a particular browser if needed.
How would you do this from Javascript?
As the comments to my question say, this isn't really allowed by the browsers for security reasons.
My workaround for now (only tested using IE11) is to manually change the security settings of the user's browser, then download the files as a blob into a javascript variable using AJAX, followed by an upload of the same blob to my own server, again using AJAX.
"My own server" is a Django site created for this purpose; it also knows which files to download for the day and provides the javascript needed. The user goes to this site to initiate the daily download after performing the SSO in a separate browser tab.
On the server I can then perform whatever operations needed for said files.
Many thanks to this post https://stackoverflow.com/a/13887220/833320 for the handling of binary data in AJAX.
1) In IE, add the involved sites to the "Local Intranet Zone", and enable "Access data sources across domains" for this zone to overcome the CORS protection otherwise preventing this.
Of course, consider the security consequences involved in this...
2) In javascript (browser), download the file as a blob and POST the resulting data to my own server:
var x = new XMLHttpRequest();
x.onload = function() {
    // Create a form
    var fd = new FormData();
    fd.append('csrfmiddlewaretoken', '{{ csrf_token }}'); // Needed by Django
    fd.append('file', x.response); // x.response is a Blob object

    // Upload to your server
    var y = new XMLHttpRequest();
    y.onload = function() {
        alert('File uploaded!');
    };
    y.open('POST', '/django/upload/');
    y.send(fd);
};
x.open('GET', 'https://external.url', true);
x.responseType = 'blob'; // <-- This is necessary!
x.send();
3) Finally (in Django view for '/django/upload/'), receive the uploaded data and save as file - or whatever...
filedata = request.FILES['file'].read()
with open('filename', 'wb') as f:
    f.write(filedata)
Thanks all, for your comments.
And yes, the real solution would be to overcome the SSO (which requires the user), so it all could be done by the server itself.
But at least I learned a little about getting/posting binary data using modern XMLHttpRequests. :)
Actually, I had a similar problem: I wanted to download a binary file (an image), store it, and then use it when I needed it. So I decided to download it with a Fetch API GET call:
const imageAddress = 'an-address-to-my-image.jpg'; // sample address
fetch(imageAddress)
    .then(res => res.blob()) // <-- This is necessary!
    .then(blobToBase64)
    .then(base64FinalAnswer => console.log(base64FinalAnswer));
The blobToBase64 is a helper function that converts a blob binary file to a base64 data string:
const blobToBase64 = blob => {
    const reader = new FileReader();
    reader.readAsDataURL(blob);
    return new Promise(resolve => {
        reader.onloadend = () => {
            resolve(reader.result);
        };
    });
};
In the end, I have the base64FinalAnswer and I can do anything with it.
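For example, here is a minimal sketch of one way to use that result (the localStorage key and the #preview selector are illustrative assumptions, not part of the original answer): a base64 data URL can be cached and reused directly as an image source.
fetch(imageAddress)
    .then(res => res.blob())
    .then(blobToBase64)
    .then(dataUrl => {
        // Cache the data URL so it can be reused later without another network call
        localStorage.setItem('cachedImage', dataUrl);
        // A base64 data URL can be used directly as an image source
        document.querySelector('#preview').src = dataUrl; // assumes an <img id="preview"> exists
    });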
I'm writing a webapp in Angular where authentication is handled by a JWT token, meaning that every request has an "Authentication" header with all the necessary information.
This works nicely for REST calls, but I don't understand how I should handle download links for files hosted on the backend (the files reside on the same server where the webservices are hosted).
I can't use regular <a href='...'/> links since they won't carry any header and the authentication will fail. Same for the various incantations of window.open(...).
Some solutions I thought of:
Generate a temporary unsecured download link on the server
Pass the authentication information as an url parameter and manually handle the case
Get the data through XHR and save the file client side.
All of the above are less than satisfactory.
1 is the solution I am using right now. I don't like it for two reasons: first, it is not ideal security-wise; second, it works but requires quite a lot of work, especially on the server: to download something I need to call a service that generates a new "random" url, stores it somewhere (possibly in the DB) for some time, and returns it to the client. The client gets the url and uses window.open or similar with it. When requested, the new url should check whether it is still valid and then return the data.
2 seems at least as much work.
3 seems a lot of work, even using available libraries, and lot of potential issues. (I would need to provide my own download status bar, load the whole file in memory and then ask the user to save the file locally).
The task seems a pretty basic one though, so I'm wondering if there is anything much simpler that I can use.
I'm not necessarily looking for a solution "the Angular way". Regular Javascript would be fine.
Here's a way to download it on the client using the download attribute, the fetch API, and URL.createObjectURL. You would fetch the file using your JWT, convert the payload into a blob, put the blob into an object URL, set the href of an anchor tag to that object URL, and click that anchor in javascript.
let anchor = document.createElement("a");
document.body.appendChild(anchor);
let file = 'https://www.example.com/some-file.pdf';

let headers = new Headers();
headers.append('Authorization', 'Bearer MY-TOKEN');

fetch(file, { headers })
    .then(response => response.blob())
    .then(blobby => {
        let objectUrl = window.URL.createObjectURL(blobby);

        anchor.href = objectUrl;
        anchor.download = 'some-file.pdf';
        anchor.click();

        window.URL.revokeObjectURL(objectUrl);
    });
The value of the download attribute will be the eventual file name. If desired, you can mine an intended filename out of the content disposition response header as described in other answers.
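As a rough sketch of mining that filename (the regex here is a simplified assumption; real-world headers may use the RFC 5987 filename*= form and need more careful parsing):
function filenameFromDisposition(response, fallback) {
    // e.g. Content-Disposition: attachment; filename="some-file.pdf"
    const disposition = response.headers.get('Content-Disposition') || '';
    const match = disposition.match(/filename\s*=\s*"?([^";]+)"?/i);
    return match ? match[1] : fallback;
}
The result could then be assigned to anchor.download instead of the hard-coded 'some-file.pdf'.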
Technique
Based on this advice from Matias Woloski of Auth0, a well-known JWT evangelist, I solved it by generating a signed request with Hawk.
Quoting Woloski:
The way you solve this is by generating a signed request like AWS does, for example.
Here you have an example of this technique, used for activation links.
backend
I created an API to sign my download urls:
Request:
POST /api/sign
Content-Type: application/json
Authorization: Bearer...
{"url": "https://path.to/protected.file"}
Response:
{"url": "https://path.to/protected.file?bewit=NTUzMDYzZTQ2NDYxNzQwMGFlMDMwMDAwXDE0NTU2MzU5OThcZDBIeEplRHJLVVFRWTY0OWFFZUVEaGpMOWJlVTk2czA0cmN6UU4zZndTOD1c"}
With a signed URL, we can get the file
Request:
GET https://path.to/protected.file?bewit=NTUzMDYzZTQ2NDYxNzQwMGFlMDMwMDAwXDE0NTU2MzU5OThcZDBIeEplRHJLVVFRWTY0OWFFZUVEaGpMOWJlVTk2czA0cmN6UU4zZndTOD1c
Response:
Content-Type: multipart/mixed; charset="UTF-8"
Content-Disposition: attachment; filename=protected.file
{BLOB}
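For reference, a minimal sketch of such a signing endpoint using the hawk npm package (the Express wiring and the credentials object are my own assumptions for illustration, not part of the original answer):
const express = require('express');
const Hawk = require('hawk');

const app = express();
app.use(express.json());

// Illustrative credentials; in practice these would come from configuration
const credentials = { id: 'download-service', key: 'some-long-secret', algorithm: 'sha256' };

// Assumes your usual JWT middleware has already authenticated the caller
app.post('/api/sign', (req, res) => {
    // getBewit produces a token that is appended to the URL as ?bewit=...
    const bewit = Hawk.uri.getBewit(req.body.url, { credentials: credentials, ttlSec: 300 });
    res.json({ url: req.body.url + '?bewit=' + bewit });
});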
frontend (by jojoyuji)
This way you can do it all on a single user click:
function clickedOnDownloadButton() {
postToSignWithAuthorizationHeader({
url: 'https://path.to/protected.file'
}).then(function(signed) {
window.location = signed.url;
});
}
An alternative to the existing "fetch/createObjectURL" and "download-token" approaches already mentioned is a standard Form POST that targets a new window. Once the browser reads the attachment header on the server response, it will close the new tab and begin the download. This same approach also happens to work nicely for displaying a resource like a PDF in a new tab.
This has better support for older browsers and avoids having to manage a new type of token. This will also have better long-term support than basic auth on the URL, since support for username/password on the url is being removed by browsers.
On the client-side we use target="_blank" to avoid navigation even in failure cases, which is particularly important for SPAs (single page apps).
The major caveat is that the server-side JWT validation has to get the token from the POST data and not from the header. If your framework manages access to route handlers automatically using the Authentication header, you may need to mark your handler as unauthenticated/anonymous so that you can manually validate the JWT to ensure proper authorization.
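As a rough illustration of that caveat (Express and the jsonwebtoken package are assumptions here, not something mandated by the answer), the route is registered without the usual auth middleware and validates the token it finds in the POST body:
const jwt = require('jsonwebtoken');

// Registered without the normal Authorization-header middleware, because the token
// arrives as the "jwtToken" form field posted by the form below
app.post('/files/:id/download', express.urlencoded({ extended: false }), (req, res) => {
    try {
        const payload = jwt.verify(req.body.jwtToken, process.env.JWT_SECRET);
        // ...check that `payload` is allowed to access req.params.id, then stream the file...
        res.setHeader('Content-Disposition', 'attachment; filename="report.pdf"');
        res.sendFile('/path/to/report.pdf'); // placeholder path
    } catch (err) {
        res.status(401).send('Invalid or expired token');
    }
});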
The form can be dynamically created and immediately destroyed so that it is properly cleaned up (note: this can be done in plain JS, but JQuery is used here for clarity) -
function DownloadWithJwtViaFormPost(url, id, token) {
    var jwtInput = $('<input type="hidden" name="jwtToken">').val(token);
    var idInput = $('<input type="hidden" name="id">').val(id);

    $('<form method="post" target="_blank"></form>')
        .attr("action", url)
        .append(jwtInput)
        .append(idInput)
        .appendTo('body')
        .submit()
        .remove();
}
Just add any extra data you need to submit as hidden inputs and make sure they are appended to the form.
Pure JS version of James' answer
function downloadFile (url, token) {
    let form = document.createElement('form')
    form.method = 'post'
    form.target = '_blank'
    form.action = url
    form.innerHTML = '<input type="hidden" name="jwtToken" value="' + token + '">'

    console.log('form:', form)

    document.body.appendChild(form)
    form.submit()
    document.body.removeChild(form)
}
I would generate tokens for download.
Within Angular, make an authenticated request to obtain a temporary token (say, valid for an hour), then add it to the url as a GET parameter. This way you can download files in any way you like (window.open ...).
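A minimal sketch of that flow (the endpoint paths and the { token } response shape are assumptions for illustration):
// 1) Use the normal JWT header to ask the backend for a short-lived download token
// 2) Open the download URL with that token as a GET parameter
async function openProtectedDownload(fileId, jwt) {
    const res = await fetch('/api/download-token?fileId=' + encodeURIComponent(fileId), {
        headers: { Authorization: 'Bearer ' + jwt }
    });
    const { token } = await res.json(); // assumed response shape: { token: "..." }
    window.open('/api/files/' + encodeURIComponent(fileId) + '?token=' + encodeURIComponent(token));
}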
There are two ways I can upload files using Ajax (XHR2). First, I can read the file content as an array buffer or a binary string and then simply stream it using the XHR send method, for example as shown here:
function uploadFile(img, file) {
    const reader = new FileReader();
    const xhr = new XMLHttpRequest();

    xhr.upload.addEventListener("progress", function(e) {
        if (e.lengthComputable) {
            const percentage = Math.round((e.loaded * 100) / e.total);
            // Do something with percentage
        }
    });
    xhr.upload.addEventListener("load", (e) => console.log('Do something more'));

    xhr.open("POST", "some-url");
    xhr.overrideMimeType('text/plain; charset=x-user-defined-binary');

    reader.onload = function(evt) {
        xhr.send(evt.target.result);
    };
    reader.readAsBinaryString(file);
}
Second, I can use FormData to upload my file as shown here:
var formData = new FormData();
// HTML file input, chosen by user
formData.append("userfile", fileInputElement.files[0]);
var request = new XMLHttpRequest();
request.open("POST", "some-url");
request.send(formData);
Are the two methods equivalent? Is there any advantage of using FileReader instead of FormData? Is one more performant than the other?
First, there is a third option you omitted which is to send the File directly through xhr.send(file) just like you did with the ArrayBuffer.
That being said, there is no advantage to first reading the file into memory through a FileReader.
When doing a file upload from a File on disk, the browser doesn't load the full file in memory but streams it through the request. This is how you can upload gigs of data even though it wouldn't fit in memory. This also is more friendly with the HDD since it allows for other processes to access it between each chunk instead of locking it.
When reading the File through a FileReader you are asking the browser to read the full file to memory, and then when you send it through XHR the data from memory is being used. You are thus limited by the memory available, bloating it for no good reasons, and even asking the CPU to work here while the data could have gone from the disk to the network card almost directly.
As to what's the difference between formdata.append(file); xhr.send(formdata); and xhr.send(file), basically only request headers. The former will wrap the request as a multipart/form-data enctype request, while the latter will send it as is.
So you'd handle both requests differently on the receiving end.
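To make the comparison concrete, here is a minimal sketch of the two requests (the URL and field name are placeholders):
const file = fileInputElement.files[0];

// Option A: send the File as the raw request body; the body is just the file's bytes
// and the Content-Type defaults to the file's own type
const xhrRaw = new XMLHttpRequest();
xhrRaw.open("POST", "some-url");
xhrRaw.send(file);

// Option B: wrap it in FormData; the request becomes multipart/form-data
// with the file as one named part
const xhrForm = new XMLHttpRequest();
const fd = new FormData();
fd.append("userfile", file);
xhrForm.open("POST", "some-url");
xhrForm.send(fd);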
Short version
I have a webapp using Magnolia. I need to upload a comment with the possibility of multiple attached files, and I want to use AJAX. Before saving the files as assets I want to be able to check the user's permission, so I figured I need a custom Java-based REST endpoint. I made it work, but I have issues saving "jcr:data" into an asset.
Long version
I have a webapp, I have registered users (using PUR), I have different roles for users (for simplicity let's say User and Editor) and I have a Post and a Comment content types. Every User can create a Post and add files, every Post holds the creator's UUID, array of Comment UUIDs and array of file UUIDs (from Assets app), every Post has a comment section and every Comment can have files attached (zero or multiple) to it. Every Editor can post a comment to every Post, but Users can only post comments to their own Posts.
My form for comments looks something like this:
<form action="" method="post" id="comment-form" enctype="multipart/form-data">
<input type="file" name="file" id="file" multiple />
<textarea name="text"></textarea>
<button type="button" onclick="sendComment();">Send</button>
</form>
I tried using a Javascript model to process the data, and I was able to save an asset correctly, but only one; I couldn't access the other files in the model.
I tried solving it (and improving user experience) by using AJAX and a REST endpoint. I opted not to use the Nodes endpoint API, because I didn't know how to solve the permission issue. I know I can restrict access to REST for each role, but not based on ownership of the Post. So I created my own Java-based endpoint (copied from documentation).
In the sendComment() function in Javascript I create an object with properties like name, extension, mimeType, ..., and data. I read in the documentation that you should send the data using the Base64 format, so I used FileReader() to accomplish that:
var fileObject = {
    // properties like name, extension, mimeType, ...
};

var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
    // this part is easy
};
xhttp.open("PUT", "http://localhost:8080/myApp/.rest/assets/v1/saveAsset", true);
xhttp.setRequestHeader("Content-type", "application/json");

var reader = new FileReader();
reader.onload = function() {
    fileObject.data = reader.result;
    // I also tried without the 'data:image/png;base64,' part by reader.result.split(",")[1];
    xhttp.send(JSON.stringify(fileObject));
};
reader.readAsDataURL(file); // where file is the value of the input element.files[i]
In Java I created a POJO class that has the same properties as the javascript object. Including a String data.
The code for the endpoint looks like this:
public Response saveAsset(Asset file) {
    // Asset is my custom POJO class
    Session damSession;
    Node asset;
    Node resource;
    try {
        damSession = MgnlContext.getJCRSession("dam");
        asset = damSession.getRootNode().addNode(file.getName(), "mgnl:asset");
        asset.setProperty("name", file.getName());
        asset.setProperty("type", file.getExtension());

        resource = asset.addNode("jcr:content", "mgnl:resource");
        InputStream dataStream = new ByteArrayInputStream(file.getData().getBytes());
        ValueFactory vf = damSession.getValueFactory();
        Binary dataBinary = vf.createBinary(dataStream);
        resource.setProperty("jcr:data", dataBinary);
        resource.setProperty("fileName", file.getName());
        resource.setProperty("extension", file.getExtension());
        resource.setProperty("size", file.getSize());
        resource.setProperty("jcr:mimeType", file.getMimeType());

        damSession.save();
        return Response.ok(LinkUtil.createLink(asset)).build();
    } catch (RepositoryException e) {
        return Response.ok(e.getMessage()).build(); // I know it's not ok, but that's not important at the moment
    }
}
The asset gets created and the properties get saved, apart from jcr:data. If I upload an image and then download it, either via the link I get as a response or directly from the Assets app, it cannot be opened; I get a "format is not supported" message. The size is 0 and the image doesn't show in the Assets app; it seems like the data is simply not there, or it's in the wrong format.
How can I send the file or the file data to the Java endpoint? And how should I receive it? Does anybody know what I am missing? I honestly don't know what else to try.
Thank you
The input stream had to be decoded from Base64.
InputStream dataStream = new ByteArrayInputStream(Base64.decodeBase64(file.getData().getBytes()));
...and it only took me less than 3 months.
I noticed, after going through the source code of the REST module, that a Binary value has to be Base64-encoded for unmarshalling, so I tried decoding it and it started to work.
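On the client side, a complementary tweak (just a sketch, building on the comment already present in the upload code above) is to strip the data-URL prefix that FileReader.readAsDataURL adds, so that only the Base64 payload is sent:
var reader = new FileReader();
reader.onload = function() {
    // reader.result looks like "data:image/png;base64,AAAA..."
    // keep only the Base64 payload after the comma
    fileObject.data = reader.result.split(",")[1];
    xhttp.send(JSON.stringify(fileObject));
};
reader.readAsDataURL(file);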
In my Vue app I receive a PDF as a blob, and want to display it using the browser's PDF viewer.
I convert it to a file, and generate an object url:
const blobFile = new File([blob], `my-file-name.pdf`, { type: 'application/pdf' })
this.invoiceUrl = window.URL.createObjectURL(blobFile)
Then I display it by setting that URL as the data attribute of an object element.
<object
:data="invoiceUrl"
type="application/pdf"
width="100%"
style="height: 100vh;">
</object>
The browser then displays the PDF using the PDF viewer. However, in Chrome, the file name that I provide (here, my-file-name.pdf) is not used: I see a hash in the title bar of the PDF viewer, and when I download the file using either 'right click -> Save as...' or the viewer's controls, it saves the file with the blob's hash (cda675a6-10af-42f3-aa68-8795aa8c377d or similar).
The viewer and file name work as I'd hoped in Firefox; it's only Chrome in which the file name is not used.
Is there any way, using native Javascript (including ES6, but no 3rd party dependencies other than Vue), to set the filename for a blob / object element in Chrome?
[edit] If it helps, the response has the following relevant headers:
Content-Type: application/pdf; charset=utf-8
Transfer-Encoding: chunked
Content-Disposition: attachment; filename*=utf-8''Invoice%2016246.pdf;
Content-Description: File Transfer
Content-Encoding: gzip
Chrome's PDF viewer extension seems to rely on the resource name set in the URI, i.e. the file.ext in protocol://domain/path/file.ext.
So if your original URI contains that filename, the easiest might be to simply make your <object>'s data the URI you fetched the pdf from directly, instead of going through a Blob.
Now, there are cases where that can't be done, and for these there is a convoluted way, which might not work in future versions of Chrome, and probably not in other browsers, and which requires setting up a Service Worker.
As we said above, Chrome parses the URI in search of a filename, so what we have to do is have a URI, with this filename, pointing to our blob:// URI.
To do so, we can use the Cache API, store our File as Request in there using our URL, and then retrieve that File from the Cache in the ServiceWorker.
Or in code,
From the main page
// register our ServiceWorker
navigator.serviceWorker.register('/sw.js')
    .then(...
...

async function displayRenamedPDF(file, filename) {
    // we use a hard-coded fake path
    // to not interfere with legit requests
    const reg_path = "/name-forcer/";
    const url = reg_path + filename;

    // store our File in the Cache
    const store = await caches.open("name-forcer");
    await store.put(url, new Response(file));

    const frame = document.createElement("iframe");
    frame.width = 400;
    frame.height = 500;
    document.body.append(frame);

    // makes the request to the File we just cached
    frame.src = url;
    // not needed anymore
    frame.onload = (evt) => store.delete(url);
}
In the ServiceWorker sw.js
self.addEventListener('fetch', (event) => {
    event.respondWith((async () => {
        const store = await caches.open("name-forcer");
        const req = event.request;
        const cached = await store.match(req);
        return cached || fetch(req);
    })());
});
Live example (source)
Edit: This actually doesn't work in Chrome...
While it does set correctly the filename in the dialog, they seem to be unable to retrieve the file when saving it to the disk...
They don't seem to perform a Network request (and thus our SW isn't catching anything), and I don't really know where to look now.
Still this may be a good ground for future work on this.
Another solution, which I didn't take the time to check myself, would be to run your own pdf viewer.
Mozilla has made its js-based plugin pdf.js available, so from there we should be able to set the filename (even though, once again, I haven't dug into it yet).
And as final note, Firefox is able to use the name property of a File Object a blobURI points to.
So even though it's not what OP asked for, in FF all it requires is
const file = new File([blob], filename);
const url = URL.createObjectURL(file);
object.data = url;
In Chrome, the filename is derived from the URL, so as long as you are using a blob URL, the short answer is "No, you cannot set the filename of a PDF object displayed in Chrome." You have no control over the UUID assigned to the blob URL and no way to override that as the name of the page using the object element. It is possible that inside the PDF a title is specified, and that will appear in the PDF viewer as the document name, but you still get the hash name when downloading.
This appears to be a security precaution, but I cannot say for sure.
Of course, if you have control over the URL, you can easily set the PDF filename by changing the URL.
I believe Kaiido's answer expresses, briefly, the best solution here:
"if your original URI contains that filename, the easiest might be to simply make your object's data to the URI you fetched the pdf from directly"
For those coming from this similar question, it would have helped me to have a more detailed description of a specific implementation (working for PDFs) that gives the best user experience, especially when serving files that are generated on the fly.
The trick here is using a two-step process that perfectly mimics a normal link or button click. The client must (step 1) request the file be generated and stored server-side long enough for the client to (step 2) request the file itself. This requires you have some mechanism supporting unique identification of the file on disk or in a cache.
Without this process, the user will just see a blank tab while file-generation is in-progress and if it fails, then they'll just get the browser's ERR_TIMED_OUT page. Even if it succeeds, they'll have a hash in the title bar of the PDF viewer tab, and the save dialog will have the same hash as the suggested filename.
Here's the play-by-play to do better:
You can use an anchor tag or a button for the "download" or "view in browser" elements
Step 1 of 2 on the client: that element's click event can make a request for the file to be generated only (not transmitted).
Step 1 of 2 on the server: generate the file and hold on to it. Return only the filename to the client.
Step 2 of 2 on the client:
If viewing the file in the browser, use the filename returned from the generate request to then invoke window.open('view_file/<filename>?fileId=1'). That is the only way to indirectly control the name of the file as shown in the tab title and in any subsequent save dialog.
If downloading, just invoke window.open('download_file?fileId=1').
Step 2 of 2 on the server:
view_file(filename, fileId) handler just needs to serve the file using the fileId and ignore the filename parameter. In .NET, you can use a FileContentResult like File(bytes, contentType);
download_file(fileId) must set the filename via the Content-Disposition header as shown here. In .NET, that's return File(bytes, contentType, desiredFilename);
client-side download example:
download_link_clicked() {
    // show spinner
    ajaxGet(generate_file_url,
        {},
        (response) => {
            // success!
            // the server-side is responsible for setting the name
            // of the file when it is being downloaded
            window.open('download_file?fileId=1', "_blank");
            // hide spinner
        },
        () => { // failure
            // hide spinner
            // problem, notify pattern
        },
        null
    );
}
client-side view example:
view_link_clicked() {
    // show spinner
    ajaxGet(generate_file_url,
        {},
        (response) => {
            // success!
            let filename = response.filename;
            // simplest, reliable method I know of for controlling
            // the filename of the PDF when viewed in the browser
            window.open('view_file/' + filename + '?fileId=1');
            // hide spinner
        },
        () => { // failure
            // hide spinner
            // problem, notify pattern
        },
        null
    );
}
I'm using the library pdf-lib, you can click here to learn more about the library.
I solved part of this problem by using the API Document.setTitle("Some title text you want"). The browser then displayed my title correctly, but when clicking the download button, the file name was still the previous UUID. Perhaps there is another API in the library that allows you to modify the download file name.
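A minimal sketch of that approach with pdf-lib (assuming the PDF bytes are already available as an ArrayBuffer or Uint8Array; the element selector is a placeholder):
import { PDFDocument } from 'pdf-lib';

async function showPdfWithTitle(pdfBytes, title) {
    // Load the fetched PDF, set a document title, and display the modified copy
    const pdfDoc = await PDFDocument.load(pdfBytes);
    pdfDoc.setTitle(title); // shown in the viewer's title bar, but not used as the download name
    const modifiedBytes = await pdfDoc.save();

    const blob = new Blob([modifiedBytes], { type: 'application/pdf' });
    document.querySelector('#pdf-viewer').data = URL.createObjectURL(blob); // assumes <object id="pdf-viewer">
}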
The JavaScript process generates a lot of data (200-300MB). I would like to save this data for further analysis, but the best I have found so far is saving it using this example http://jsfiddle.net/c2U2T/, which is not an option for me because it seems to require all the data to be available before the download starts. But what I need is something like:
var saver = new Saver();
saver.save(); // The Save As ... dialog appears
saver.onaccepted = function () { // user accepted saving
    for (var i = 0; i < 1000000; i++) {
        saver.write(Math.random());
    }
};
Of course, instead of Math.random() there will be some meaningful data.
#dader - I would build upon dader's example.
Use the HTML5 FileSystem API - but instead of writing each and every line to the file (more IO than it is worth), you can batch some of the lines in memory in a javascript object/array/string, and only write them to the file when they reach a certain threshold. You are thus appending to a local file as the process chugs along (which makes it easy to pause/restart/stop, etc.).
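A rough sketch of that batching idea (the threshold is arbitrary, and a real implementation would also wait for the writer's writeend event before issuing the next write):
var pending = [];
var FLUSH_THRESHOLD = 5000; // arbitrary number of lines to buffer before writing

function appendLine(fileWriter, line) {
    pending.push(line);
    if (pending.length >= FLUSH_THRESHOLD) {
        flush(fileWriter);
    }
}

function flush(fileWriter) {
    if (pending.length === 0) return;
    fileWriter.seek(fileWriter.length); // append at the end of the sandboxed file
    fileWriter.write(new Blob([pending.join('\n') + '\n'], { type: 'text/plain' }));
    pending = [];
}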
Of note is the following, which is an example of how you can spawn the dialog to request the amount of storage you would need (it sounds large). Tested in Chrome:
navigator.persistentStorage.queryUsageAndQuota(
    function (usage, quota) {
        var availableSpace = quota - usage;
        var requestingQuota = args.size + usage;
        if (availableSpace >= args.size) {
            window.requestFileSystem(PERSISTENT, availableSpace, persistentStorageGranted, persistentStorageDenied);
        } else {
            navigator.persistentStorage.requestQuota(
                requestingQuota, function (grantedQuota) {
                    window.requestFileSystem(PERSISTENT, grantedQuota - usage, persistentStorageGranted, persistentStorageDenied);
                }, errorCb
            );
        }
    }, errorCb);
When you are done, you can use Javascript to open a new window with the URL of the file you saved, which you can retrieve via fileEntry.toURL().
Or, when it is done crunching, you can just display that URL in an html link, so the user can right-click it and do whatever Save Link As they want.
But this is something that is new and cool that you can do entirely in the browser without needing to involve a server in any way at all. Side note, 200-300MB of data generated by a Javascript Process sounds absolutely huge... that would be a concern for whether you are storing the "right" data...
What you are actually trying to do is a kind of streaming, and the File API is not well suited for the task. Instead, I can suggest two options:
The first is to use the XHR facility, i.e. ajax, by splitting your data into several chunks which are sent to the server sequentially, each chunk in its own request along with an id (for identifying the stream) and a position index (for identifying the chunk position). I won't recommend that, since it adds work to break up and reassemble the data, and since there's a better solution.
The second way of achieving this is to use the WebSocket API. It allows you to send data sequentially to the server as it is generated, following a usual streaming API. I think you definitely need this.
This page may be a good place to start at : http://binaryjs.com/
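A minimal sketch of streaming generated data over a plain WebSocket (the endpoint URL and the chunking scheme are assumptions; BinaryJS wraps a similar idea in a stream-like API):
const ws = new WebSocket('wss://example.com/ingest'); // placeholder endpoint

ws.onopen = () => {
    let chunk = '';
    for (let i = 0; i < 1000000; i++) {
        chunk += Math.random() + '\n';
        if (chunk.length >= 64 * 1024) { // send roughly 64 KB at a time
            ws.send(chunk);
            chunk = '';
        }
    }
    if (chunk) ws.send(chunk);
    // In practice, wait for ws.bufferedAmount to drain before closing
    ws.close();
};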
That's all folks !
EDIT considering your comment :
I'm not sure I perfectly get your point, but what about HTML5's FileSystem API?
There are a couple of examples here: http://www.html5rocks.com/en/tutorials/file/filesystem/ among which this sample that allows you to append data to an existing file. You can also create a new file, etc.:
function onInitFs(fs) {
    fs.root.getFile('log.txt', {create: false}, function(fileEntry) {
        // Create a FileWriter object for our FileEntry (log.txt).
        fileEntry.createWriter(function(fileWriter) {
            fileWriter.seek(fileWriter.length); // Start write position at EOF.

            // Create a new Blob and write it to log.txt.
            var blob = new Blob(['Hello World'], {type: 'text/plain'});
            fileWriter.write(blob);
        }, errorHandler);
    }, errorHandler);
}
EDIT 2:
What you're trying to do is not possible using javascript, as said on SO here. The author nonetheless suggests using a Java Applet to achieve the needed behaviour.
To put it in a nutshell, the HTML5 Filesystem API only provides a sandboxed filesystem, i.e. one located in some hidden directory of the browser. So if you want to access the true filesystem, using Java would be just fine considering your use case. I guess there is an interface between Java and javascript here.
But if you want to make your data only available from the browser ( constrained by same origin policy ), use FileSystem API.