Send file via POST using the new Firefox Add-on SDK - javascript

I'm trying to send a zip file to a server using the "Request" class from the new Firefox Add-on SDK. This is my code:
var { Cu } = require("chrome");
Cu.import("resource://gre/modules/FileUtils.jsm"); // provides FileUtils
var Request = require("sdk/request").Request;

var file = new FileUtils.File(pathToZipFile);
Request({
  url: serverURL,
  content: file,
  onComplete: function (response) {
    for (var headerName in response.headers) {
      console.log(headerName + " : " + response.headers[headerName]);
    }
    console.log("Response " + response.text);
  }
}).post();
But the error is:
[Exception... "Component returned failure code: 0x80520009 (NS_ERROR_FILE_INVALID_PATH) [nsILocalFile.target]" nsresult: "0x80520009 (NS_ERROR_FILE_INVALID_PATH)" location: "JS frame :: resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/commonjs/sdk/querystring.js :: stringify/< :: line 70" data: no]
I have done some checks:
The server is up and receives normal GET and POST requests without files.
The zip file is present and the path is right.
Do you see any errors?
Thanks a lot

The only way to do it with the Request module is to pass a base64-encoded string to the content key. If you don't use the Request module, then you can send data such as a Blob or DOMFile (new File()) instance.
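For that second route, a minimal sketch, assuming the SDK's sdk/net/xhr module and a Blob you have already built by other means:

var { XMLHttpRequest } = require("sdk/net/xhr");

function postBlob(serverURL, blob) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", serverURL);
  xhr.onload = function () {
    console.log("Response " + xhr.responseText);
  };
  xhr.send(blob); // a Blob/File body is sent as raw bytes, no stringify involved
}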
But as we see in the SDK code, the request module sends the data variable on request (if it's not a HEAD or GET request).
https://github.com/mozilla/addon-sdk/blob/master/lib/sdk/request.js#L110
The data var is made by running stringify on anything passed to the content key:
https://github.com/mozilla/addon-sdk/blob/master/lib/sdk/request.js#L76
Stringify makes it a string:
https://github.com/mozilla/addon-sdk/blob/f5fab7b242121dccfa4e55ac80489899bb9f2a41/lib/sdk/querystring.js#L30
So you have to send a base64-encoded string, or a binary string. Which sucks.
You can use the sdk/io module to read the file as an ArrayBuffer and then turn that ArrayBuffer into a base64 string or binary string.
This shows how to get a binary string: https://stackoverflow.com/a/16365505/1828637
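Putting that together, a rough sketch of the workaround (assuming the SDK's sdk/io/file and sdk/base64 modules, and a server that base64-decodes the file field):

var Request = require("sdk/request").Request;
var fileIO = require("sdk/io/file");
var base64 = require("sdk/base64");

function postZipAsBase64(pathToZipFile, serverURL) {
  // open(path, "rb") returns a byte reader; read() yields a binary string
  var reader = fileIO.open(pathToZipFile, "rb");
  var binary = reader.read();
  reader.close();

  Request({
    url: serverURL,
    // base64-encode so stringify() doesn't mangle the raw bytes
    content: { file: base64.encode(binary) },
    onComplete: function (response) {
      console.log("Response " + response.text);
    }
  }).post();
}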

Related

How to extract the Content-Type of a file sent via multipart/form-data

I receive a Request in my Cloudflare Worker and want to upload the data to Google Cloud Storage. My problem is that I can't extract the content type from the multipart/form-data I receive, in order to upload the file with the correct content type to GCS.
When I read the request with await req.formData(), I can call get('file') on the formData and it returns the raw file data that I need for GCS, but I can't seem to find the file's content type anywhere (I can see it only when looking at the raw Request body).
Here is my (stripped-down) code:
event.respondWith((async () => {
  const req = event.request
  const formData = await req.formData()
  const file = formData.get('file')
  const filename = formData.get('filename')
  const oauth = await getGoogleOAuth()

  const gcsOptions = {
    method: 'PUT',
    headers: {
      Authorization: oauth.token_type + ' ' + oauth.access_token,
      'Content-Type': 'application/octet-stream' // this should be `'Content-Type': file.type`
    },
    body: file,
  }
  const gcsRes = await fetch(
    `https://storage.googleapis.com/***-media/${filename}`,
    gcsOptions,
  )

  if (gcsRes.status === 200) {
    return new Response(JSON.stringify({filename}), gcsRes)
  } else {
    return new Response('Internal Server Error', {status: 500, statusText: 'Internal Server Error'})
  }
})())
Reminder: this code is part of our Cloudflare Worker.
It seems to me that determining the type of a file extracted from multipart/form-data should be straightforward.
Am I missing something?
Unfortunately, as of this writing, the Cloudflare Workers implementation of FormData is incomplete and does not permit extracting the Content-Type. In fact, it appears our implementation currently interprets all entries as text and returns strings, which means binary content will be corrupted. This is a bug that will require care to fix, since we don't want to break already-deployed scripts that might rely on the buggy behavior.
Thanks Kenton for your response.
What I ended up doing:
As Cloudflare Workers don't support multipart/form-data entries of type Blob (or any type other than String), I used the raw bytes in the ArrayBuffer data type. After converting it to a Uint8Array, I parsed it byte by byte to determine the file type and the start and end indexes of the file data. Once I found the start and end of the transferred file, I was able to create an array of the file data, add it to the request and send it to GCS as shown above.
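A rough sketch of that manual parsing, for illustration only: it assumes a single file part and a simple unquoted boundary, and a real parser would need to handle more edge cases:

async function extractFilePart(request) {
  // The multipart boundary comes from the request's own Content-Type header.
  const contentType = request.headers.get('Content-Type') || ''
  const boundary = '--' + contentType.split('boundary=')[1]

  const bytes = new Uint8Array(await request.arrayBuffer())

  // Byte-transparent string view, so indexOf results map 1:1 to byte offsets.
  let text = ''
  for (let i = 0; i < bytes.length; i++) text += String.fromCharCode(bytes[i])

  // The part headers sit between the first boundary and the first blank line.
  const headerStart = text.indexOf(boundary) + boundary.length + 2 // skip CRLF
  const headerEnd = text.indexOf('\r\n\r\n', headerStart)
  const partHeaders = text.slice(headerStart, headerEnd)
  const typeMatch = partHeaders.match(/Content-Type:\s*([^\r\n]+)/i)

  // The file data runs from just after the blank line to the closing boundary.
  const dataStart = headerEnd + 4
  const dataEnd = text.indexOf('\r\n' + boundary, dataStart)

  return {
    type: typeMatch ? typeMatch[1] : 'application/octet-stream',
    data: bytes.slice(dataStart, dataEnd), // Uint8Array of the raw file bytes
  }
}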

Node.js Request module returns truncated data

I'm using request on my Node.js server to call an external JSON REST service.
This is a simplified example of my code:
var request = require("request");
request("http://www.sitepoint.com", function(error, response, body) {
  var myJson = eval('(' + body + ')');
});
It works well 90% of the time, but sometimes I get this error:
Uncaught Syntax Error: Unexpected Token ILLEGAL
This error never refers to the same character in the received JSON, so my understanding is that the stream sent back by the REST service is truncated and cannot be parsed as JSON.
How can I make sure that the request has completed and the data is complete?
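A hedged sketch of one way to check this: compare the received byte count against the Content-Length header when the server sends one, and use JSON.parse instead of eval so a malformed body fails cleanly rather than being executed:

var request = require("request");

request("http://www.sitepoint.com", function(error, response, body) {
  if (error) return console.error("request failed:", error);

  // If the server sent a Content-Length, compare it with what we received.
  var expected = parseInt(response.headers["content-length"], 10);
  if (!isNaN(expected) && Buffer.byteLength(body) !== expected) {
    return console.error("body truncated: got " +
      Buffer.byteLength(body) + " of " + expected + " bytes");
  }

  // JSON.parse throws on malformed input rather than executing it like eval.
  try {
    var myJson = JSON.parse(body);
    console.log(myJson);
  } catch (e) {
    console.error("invalid JSON:", e.message);
  }
});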

How to read raw JSON in AngularJS from an HTTP GET method, when the response type is arraybuffer?

I am trying to read a byte array to show a PDF from Java in AngularJS, using:
method: 'GET',
url: '',
cache: isCache || false,
responseType: 'arraybuffer'
This works fine when everything is okay.
But when I throw an exception with some proper JSON and mark the HTTP status as Bad Request, I can't read the JSON response, even after changing the config to response.config.responseType = 'application/json'.
It shows only an empty ArrayBuffer {} in response.data.
The important thing here is that I can see the JSON response in the Google Chrome developer tools.
I googled and searched on Stack Overflow but didn't find much.
Edit (added later): I am attaching pictures of the response object and the data received, from the Chrome network tab.
First image: the response object from the AngularJS error function.
Second image: the server returning a proper JSON message.
Third image: the request and response headers.
The only problem I am facing is that I am not able to read the response data, because I set the response type to arraybuffer.
Instead of expecting arraybuffer, why not expect application/json all the time? Then, when you return the data that's supposed to create your PDF, base64-encode it, put it in a JSON object and return that to the client.
Even when you throw an exception you still expect JSON. Your response from the server side could be something like:
{
  responseCode: ...,    // your response code, according to whether the request succeeded
  responseMessage: ..., // success or fail
  data: ...             // your base64-encoded PDF data (returned if the request is
                        // successful), note it will be returned as a normal string
}
// I'm hoping you know how to base64 encode data
Then on your client side (JavaScript), perform an if:
if (data.responseCode == /* errorCode */) {
  alert(data.responseMessage);
} else {
  // Unpack data
  var myPdfData = /* base64 decode(data.data) */
  // Do logic to create and open/download the file
}
You don't even need to do anything else with the decoded data: just put it in a Blob object, then trigger a download from the Blob.
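A rough sketch of that download step, assuming data.data holds the base64 string from the JSON above (the function and file names are placeholders; atob supplies the raw bytes the Blob needs):

function downloadPdfFromBase64(base64String) {
  var binary = atob(base64String);           // base64 -> binary string
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);         // binary string -> raw bytes
  }
  var blob = new Blob([bytes], { type: 'application/pdf' });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'document.pdf';            // placeholder file name
  link.click();
}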
If the responseType is arraybuffer, then to view it you should convert it into a Blob, create an object URL from that Blob, and then download it or open it in a new tab/browser window to view it.
JS:
$http.get('your/api/url').then(function(response) {
  var blob = new Blob([response.data], { type: 'application/pdf' });
  var downloadUrl = URL.createObjectURL(blob);
  $timeout(function () {
    var link = document.createElement('a');
    link.download = 'SOME_FILE_NAME' + '.pdf';
    link.href = downloadUrl;
    link.click();
  }, 100);
}, function(errorResponse) {
  alert(errorResponse.error.message);
});

cordova-plugin-file-transfer: How do you upload a file to S3 using a signed URL?

I am able to upload to S3 using a file picker and regular XMLHttpRequest (which I was using to test the S3 setup), but cannot figure out how to do it successfully using the cordova file transfer plugin.
I believe it is either to do with the plugin not constructing the correct signable request, or not liking the local file URI given. I have tried playing with every single parameter, from headers to URI types, but the docs aren't much help and the plugin source is bolognese.
The string-to-sign that the request needs to match looks like:
PUT
1391784394
x-amz-acl:public-read
/the-app/317fdf654f9e3299f238d97d39f10fb1
Any ideas, or possibly a working code example?
A bit late, but I just spent a couple of days struggling with this, so in case anybody else is having problems, this is how I managed to upload an image using the JavaScript version of the AWS SDK to create the presigned URL.
The key to solving the problem is in the StringToSign element of the XML SignatureDoesNotMatch error that comes back from Amazon. In my case it looked something like this:
<StringToSign>
PUT\n\nmultipart/form-data; boundary=+++++org.apache.cordova.formBoundary\n1481366396\n/bucketName/fileName.jpg
</StringToSign>
When you use the aws-sdk to generate a presigned URL for upload to S3, internally it builds a string based on various elements of the request you want to make, then creates an HMAC-SHA1 hash of it using your AWS secret. This hash is the signature that gets appended to the URL as a parameter, and it is what doesn't match when you get the SignatureDoesNotMatch error.
So you've created your presigned URL, and passed it to cordova-plugin-file-transfer to make your HTTP request to upload a file. When that request hits Amazon's server, the server will itself build a string based on the request headers etc, hash it and compare that hash to the signature on the URL. If the hashes don't match then it returns the dreaded...
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The contents of the StringToSign element I mentioned above is the string that the server builds and hashes to compare against the signature on the presigned URL. So to avoid getting the error, you need to make sure that the string built by the aws-sdk is the same as the one built by the server.
After some digging about, I eventually found the code responsible for creating the string to hash in the aws-sdk. It is located (as of version 2.7.12) in:
node_modules/aws-sdk/lib/signers/s3.js
Down at the bottom, at line 168, there is a sign method:
sign: function sign(secret, string) {
  return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}
If you put a console.log in there, string is what you're after. Once you make the string that gets passed into this method the same as the contents of StringToSign in the error message coming back from Amazon, the heavens will open and your files will flow effortlessly into your bucket.
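For example, a throwaway debug version of that method (my own sketch, not part of the SDK; remove it once the strings match) might look like this:

sign: function sign(secret, string) {
  // debug only: compare this with the <StringToSign> element Amazon returns
  console.log('client StringToSign:\n' + string);
  return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}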
On my server running Node.js, I originally created my presigned URL like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: 'https://s3-eu-west-1.amazonaws.com',
  accessKeyId: "ACCESS_KEY",
  secretAccessKey: "SECRET_KEY"
});
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60
};
var signedUrl = s3.getSignedUrl('putObject', params);
// return signedUrl
This produced a signing string like this, similar to the OP's:
PUT
1481366396
/bucketName/fileName.jpg
On the client side, I used this presigned URL with cordova-plugin-file-transfer like so (I'm using Ionic 2 so the plugin is wrapped in their native wrapper):
let success = (result: any): void => {
  console.log("upload success");
}

let failed = (err: any): void => {
  let code = err.code;
  alert("upload error - " + code);
}

let ft = new Transfer();

var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
};

ft.upload(localDataURI, presignedUrlFromServer, options, false)
  .then((result: any) => {
    success(result);
  }).catch((error: any) => {
    failed(error);
  });
Running the code produced the signature doesn't match error, and the string in the <StringToSign> element looks like this:
PUT
multipart/form-data; boundary=+++++org.apache.cordova.formBoundary
1481366396
/bucketName/fileName.jpg
So we can see that cordova-plugin-file-transfer has added in its own Content-Type header which has caused a discrepancy in the signing strings. In the docs relating to the options object that get passed into the upload method it says:
headers: A map of header name/header values. Use an array to specify more than one value. On iOS, FireOS, and Android, if a header named Content-Type is present, multipart form data will NOT be used. (Object)
so basically, if no Content-Type header is set it will default to multipart form data.
Ok so now we know the cause of the problem, it's a pretty simple fix. On the server side I added a ContentType to the params object passed to the S3 getSignedUrl method:
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60,
  ContentType: 'image/jpeg' // <---- content type added here
};
and on the client added a headers object to the options passed to cordova-plugin-file-transfer's upload method:
var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
  headers: { // <----- headers object added here
    'Content-Type': 'image/jpeg',
  }
};
and hey presto! The uploads now work as expected.
I ran into such issues with this plugin.
The only working way I found to upload a file with a signature is the method by Christophe Coenraets: http://coenraets.org/blog/2013/09/how-to-upload-pictures-from-a-phonegap-app-to-amazon-s3/
With this method you will be able to upload your files using cordova-plugin-file-transfer.
First, I wanted to use the aws-sdk on my server to sign with getSignedUrl().
It returns the signed link, and you only have to upload to it.
But using the plugin, it always ends with 403: signatures don't match.
It may be related to the content-length parameter, but so far I haven't found a working solution with the aws-sdk and the plugin.

Trouble decoding a URL in JavaScript

I am URL-encoding a URL string using PHP and then passing it via curl to a PhantomJS script, where I am trying to decode it using JavaScript.
I am starting with:
localhost:7788/hi there/how are you/
which gets turned into:
localhost:7788/hi+there%2Fhow+are+you
on the PHP side by the urlencode() function.
On the PhantomJS side, I have:
// Create server and listen on port
server.listen(port, function(request, response) {
  function urldecode(str) {
    return decodeURIComponent((str + '').replace(/\+/g, '%20'));
  }
  // Print some information, just for debugging
  console.log("We got some request!!!");
  console.log("request method: ", request.method); // request.method POST or GET
  console.log("Get params: ", request.url);
  var url = urldecode(request.url);
  // -- Split requested params
  var requestparams = request.url.split('/');
  console.log(urldecode(requestparams[1]));
  console.log(urldecode(requestparams[2]));
});
The output at the console is:
.....
request method: GET
Get params: /hi%2Bthere/how%2Bare%2Byou
hi+there
how+are+you
Why are the '+' signs not replaced with spaces? I'm trying to get rid of them, and it looks to me like the urldecode function should do this.
You should use rawurlencode() instead of urlencode() on the PHP side, so spaces are encoded as %20 and not as '+' signs; then JavaScript can decode them correctly with decodeURIComponent().
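To see why the '+' survives, note from the debug output above that the '+' arrives percent-encoded as %2B, so the replace step finds nothing to replace (a small illustration):

// The incoming URL contains %2B, not a literal '+', so
// replace(/\+/g, '%20') does nothing, and decodeURIComponent
// then turns %2B back into a literal '+'.
var received = '/hi%2Bthere/how%2Bare%2Byou';
console.log(decodeURIComponent(received.replace(/\+/g, '%20')));
// -> "/hi+there/how+are+you"

// With rawurlencode() on the PHP side, spaces arrive as %20 instead:
var raw = '/hi%20there/how%20are%20you';
console.log(decodeURIComponent(raw));
// -> "/hi there/how are you"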
