I'm getting a PDF file from an external web resource using ajax.
I want to store this PDF in Google Drive using the Google Apps Script API from within Google Docs.
Sample of that ajax call:
$.ajax({
  url: "<fancyurl>",
  contentType: 'application/octet-stream',
  responsetype: 'blob',
  type: 'GET',
  success: function(response) {
    // or converted response.. etc..
    google.script.run.withSuccessHandler(yay).withUserObject(this).createFile(response);
  }
});
The response from the web resource is an octet-stream.
I've tried passing the original response, the response converted to a blob, a Uint8 blob, and the base64 string. All end up in errors or corrupt files.
var blob = Utilities.newBlob(data, 'application/pdf' ,'asdf.pdf');
// blob.setName('asdf.pdf');
// blob.setContentTypeFromExtension();
DriveApp.createFile(blob);
Where data is the response or converted response.
Does anybody know how to solve this or what google script expects as a valid input?
In my experience, when binary data is retrieved using ajax, it is converted to text. Because of this, I'm worried that a responseType of blob or arraybuffer cannot be used, and I suspect this is the reason for your issue. So, in this answer, I would like to propose using XMLHttpRequest instead of $.ajax.
Modified script:
From:
$.ajax({
  url: "<fancyurl>",
  contentType: 'application/octet-stream',
  responsetype: 'blob',
  type: 'GET',
  success: function(response) {
    // or converted response.. etc..
    google.script.run.withSuccessHandler(yay).withUserObject(this).createFile(response);
  }
});
To:
const xhr = new XMLHttpRequest();
xhr.open('GET', "<fancyurl>", true);
xhr.responseType = 'arraybuffer';
xhr.onload = function(e) {
google.script.run.withSuccessHandler(yay).withUserObject(this).createFile([...new Int8Array(this.response)]);
};
xhr.send();
In this modification, the binary data is retrieved as an ArrayBuffer and converted to an Int8Array, so it can be passed to Google Apps Script as a byte array.
Although I cannot see your whole Google Apps Script project, you can use a server-side function like the following.
function createFile(data) {
  var blob = Utilities.newBlob(data, 'application/pdf', 'asdf.pdf');
  DriveApp.createFile(blob);
}
Related
I'm trying to post a base64-encoded PDF file to a Zendesk file upload API endpoint but the file URL returned from the API shows that the file is corrupted.
First I receive the PDF as a base64-encoded string from a separate API call. Let's call it base64String.
If I do window.open("data:application/pdf;base64," + base64String) I can view the PDF in my browser.
Now I am trying to follow the documentation here for uploading files via the API. I can successfully complete a cURL call as shown in the example. However, the jQuery AJAX call will corrupt the PDF file.
client.request({
  url: '/api/v2/uploads.json?filename=test.pdf',
  type: 'POST',
  data: atob(base64String),
  contentType: 'application/binary'
}).then(function(data) {
  window.open(data.upload.attachment.content_url); // corrupt file
}, function(response) {
  console.log("Failed to upload file to Zendesk.");
  console.log(response);
});
Like I said, this will succeed, but when I visit the content_url the PDF does not display. I am quite sure the file is being corrupted in the POST request.
I have tried uploading the file as a base64 string (without decoding it with atob()), among other things, with no luck.
UPDATE
I'm still not able to view the PDF after converting the base64 string to blob.
var blob = base64ToBlob(base64String);
console.log(blob); // Blob {size:39574, type: "application/pdf"}
client.request({
url: '/api/v2/uploads.json?filename=test.pdf',
type: 'POST',
data: blob,
processData: false,
contentType: 'application/pdf'
}).then(function(data) {
window.open(data.upload.attachment.content_url); // corrupt file
}, function(response) {
console.log("Failed to upload file to Zendesk.");
console.log(response);
});
function base64ToBlob(byteString) {
  // write the bytes of the string to an ArrayBuffer
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  // write the ArrayBuffer to a blob, and you're done
  var blob = new Blob([ab], {type: 'application/pdf'});
  return blob;
}
I learned that the Zendesk app framework uses a jQuery AJAX wrapper for requests and the arraybuffer type is unsupported, so the file was getting corrupted. The app framework team has fixed the issue.
I'm trying to chain together the ImageOptim API with the OCR.space API.
Both great APIs by the way, I highly recommend them! The issue at hand, though, is that the OCR API does not accept images over 1 MB or 2600x2600 px on the free tier, so many sources need to be optimised before being sent.
I'm running this jQuery ajax call to ImageOptim from a Cordova-wrapped HTML file:
var compress = function(image) {
  console.log("starting compress");
  $.ajax({
    url: "https://im2.io/eX4mp1E4pI/2600x2600,quality=low",
    method: "POST",
    data: {
      file: image
    },
    processData: false,
    contentType: false,
    crossDomain: true
  }).done(function(res) {
    window.compressedImg = res;
    formData.append("file", res);
    runOCR();
  }).fail(function(jqXHR, textStatus) {
    console.log("Request failed: " + textStatus);
  });
};
Please note:
this (in my experience) will fail in the browser, because cross-domain calls are blocked there, but not from Cordova.
OCR-compatible compression is not added in yet (it would require a file size argument as well as a dimension argument).
The output from this call is a raw PNG as a string, i.e. what you get when you open a .png file in a text editor. I've tried loads of ways to handle this but cannot understand how to use this data in the next ajax call (below). Does it need to be saved to disk and then uploaded, and if so, how? (I tried writing it to localStorage, but it would still be treated as a string.)
The OCR.space call;
var formData = new FormData();
formData.append("language", "MYLANGUAGE");
formData.append("apikey", "MYAPIKEY");
formData.append("isOverlayRequired", false);
function runOCR2() {
  jQuery.ajax({
    url: 'https://api.ocr.space/parse/image',
    data: formData,
    dataType: 'form/multipart',
    cache: false,
    contentType: false,
    processData: false,
    method: 'POST',
    success: function(ocrParsedResult) {
      console.log(ocrParsedResult);
    }
  });
}
Please note; Vars are not set here but I keep them together in this question for clarity.
The response from this call is:
responseText: "{\"ParsedResults\":null,\"OCRExitCode\":99,\"IsErroredOnProcessing\":true,\"ErrorMessage\":\"No file uploaded or UR…"
i.e. the call works but the image parameter is not a valid image.
Any ideas on how to treat the returned string so that it is readable as an image for the next API call?
Usually when you upload files using FormData you just pass a file reference, like
form.append('myfile', $("#fileInput")[0].files[0]), and the browser handles the encoding behind the scenes. It converts the file to a byte stream and prepares the appropriate boundary to help the server distinguish where the image begins and ends.
But here the scenario is different: the file isn't bound to any physical file control; instead it's created dynamically, and you get a byte stream of it. To account for this, you need to tell the browser explicitly that it's independent raw binary data and should be treated as such.
A Blob object represents a file-like object of immutable, raw data. Blobs represent data that isn't necessarily in a JavaScript-native format.
var blob = new Blob([res], {type: 'image/png'}); // res is the converted image from the ImageOptim API
var formData = new FormData();
var fileName = 'myimage.png'; //filename that server will see it as
formData.append('anything', blob, fileName);
formData.append("language", "MYLANGUAGE");
formData.append("apikey", "MYAPIKEY");
formData.append("isOverlayRequired", false);
function runOCR2() {
  $.ajax({
    url: "https://api.ocr.space/parse/image",
    type: "POST",
    cache: false,
    contentType: false,
    processData: false,
    data: formData,
    success: function(response) { alert(response); }
  });
}
I'm testing FileSaver saveAs function. Here's my code to fetch report data with post request
$.ajax({
  type: "POST",
  url: '/rest/report/test',
  contentType: 'application/json',
  async: false,
  data: JSON.stringify({"date": "11.11.2015"}),
  success: function (response) {
    console.log(response);
    saveAs(response, "test.xlsx");
  }
});
That fails with error: Uncaught TypeError: Failed to execute 'createObjectURL' on 'URL': No function was found that matched the signature provided.
But I can see the result of console.log(response); it shows the file content. Is it possible to make my code download the file?
saveAs tries to execute createObjectURL on your text, and fails.
The reason is that saveAs does not accept plain text as argument. It only accepts Blob objects.
Text files
If your server returns text, you can create a Blob from your text using the new Blob() constructor.
Here is the working example:
document.getElementById('download').onclick = function() {
  var text = "Hello world!";
  var blob = new Blob([text], {
    type: "text/plain;charset=utf-8"
  });
  saveAs(blob, "result.txt");
};
<script src="http://eligrey.com/demos/FileSaver.js/FileSaver.js"></script>
<button id="download">Download</button>
Binary files
As long as you have a binary file, you can use a native XMLHttpRequest with responseType = 'blob'.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/rest/report/test/', true);
xhr.responseType = 'blob';
xhr.onload = function(e) {
  if (this.status == 200) {
    var blob = this.response;
    saveAs(blob, 'download.xlsx');
  }
};
xhr.send();
A bit late for an answer, but useful for other programmers.
jQuery doesn't support downloading binary files, which is what is needed here. Your call, without a dataType, probably defaults to text. Creating a blob from a text string can introduce artifacts that corrupt the file, because the binary data was initially interpreted as text.
As proposed in Yeldar's answer, you can call your own XMLHttpRequest, but that doesn't let you keep your code clean. jQuery allows you to create Ajax transport plugins, so you can get the binary data contained in a blob and bypass the costly conversion by calling saveAs directly on your response.
http://www.henryalgus.com/reading-binary-files-using-jquery-ajax/
Github code: https://github.com/henrya/js-jquery/tree/master/BinaryTransport
I'm using FileSaver.js and Blob.js into an Angular JS application to save a PDF returned by a REST service (which returns an array of bytes representing the file).
var headers = {headers: {"Authorization": "Bearer " + token, "Accept": "application/pdf"}};
$http.get(URL, headers)
  .success(function (data) {
    var blob = new Blob([data], {type: 'application/pdf'});
    saveAs(blob, 'contract.pdf');
  });
The file gets saved with the right type and the number of pages is correct, but it's totally blank.
Opening it with an editor, it turned out that it contains only the first part of the data returned by the server, as if it were truncated.
Thank everyone for helping out!
$http.get probably isn't handling the binary data correctly. Try $http({method: "GET", url: URL, responseType: "arraybuffer", ...}) to get a binary object you can use as data.
You can also use responseType: "blob" so you don't even have to create var blob, but I think that responseType has less browser support.
Adding a response type to the config argument worked for me. Try:
var config = {
  responseType: 'blob',
  headers: {"Authorization": "Bearer " + token, "Accept": "application/pdf"}
};
$http.get(URL, config)
  .success(function (data) {
    var blob = new Blob([data], {type: 'application/pdf'});
    saveAs(blob, 'contract.pdf');
  });
My app uses many blob references to local image files created with createObjectURL.
I need to convert these blob URL references into what is essentially a JavaScript File object in order to upload them.
To retrieve the blobs I use jQuery's ajax function as shown below:
var bloburl = 'blob:3c9230a9-55ea-4357-a2d3-db97673a1947';
$.ajax({
  url: bloburl, // blob URL reference
  type: 'GET',
  processData: false,
  contentType: false,
  success: function(file) {
    // Here I am attempting to build an object that can be read by the FileReader API
    var blob = new Blob([file], {type: 'image/jpeg'});
    self.cache.filestoupload.push(blob);
  }
});
Once the self.cache.filestoupload array has been populated, the app begins to process the first file in the array, slicing the first few bytes, then reading them as a binary string via the FileReader API. This slice of the file is then passed into a web worker and the image EXIF data is read. Finally, it begins to upload the full file.
var binaryReader = new FileReader();
binaryReader.onload = function (e) {
  worker.postMessage({
    guid: fileindex,
    binary_string: binaryReader.result
  });
};
binaryReader.readAsBinaryString(filePart);
Everything works perfectly with files retrieved in the standard manner from an HTML <input> element.
However, the very same file, when referenced via a blob URL, retrieved via an ajax request, and then sliced and passed into the FileReader API, fails. No EXIF data is returned, and although the file uploads to the server, the very same PHP script that handles the other uploads fine also fails.
It appears that I am not retrieving the file correctly. Where am I going wrong?
To summarize, I need to be able to retrieve files referenced with createObjectURL and pass them into the FileReader in the same format as a standard JavaScript File object.
UPDATE:
Okay, I have made it work using a standard XHR request as follows:
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.responseType = 'blob';
xhr.onreadystatechange = function(e) {
  if (xhr.readyState == 4) {
    var myBlob = this.response;
    self.cache.filestoupload.push(myBlob);
  }
};
xhr.send();
How can I do the same using jQuery's $.ajax method?
You cannot currently do this in jQuery 1.x or 2.x. See: https://github.com/jquery/jquery/pull/1525