Creating signed S3 and CloudFront URLs via the AWS SDK - javascript

Has anyone successfully used the AWS SDK to generate signed URLs to objects in an S3 bucket which also work over CloudFront? I'm using the JavaScript AWS SDK and it's really simple to generate signed URLs for S3 objects. I just created a private bucket and use the following code to generate the URL:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
// Note: the option is "Expires" (in seconds), not "Expiration".
var params = { Bucket: 'my-bucket', Key: 'path/to/key', Expires: 20 };

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
});
This works great, but I also want to expose a CloudFront URL to my users so they can get the increased download speeds of using the CDN. I set up a CloudFront distribution and let it modify the bucket policy to allow access. However, after doing this, any file could be accessed via the CloudFront URL and Amazon appeared to ignore the signature in my link. After reading some more on this I've seen that people generate a .pem file to get signed URLs working with CloudFront, but why is this not necessary for S3? It seems like the getSignedUrl method simply does the signing with the AWS Secret Key and AWS Access Key. Has anyone gotten a setup like this working before?
Update:
After further research it appears that CloudFront handles URL signatures completely differently from S3 [link]. However, I'm still unclear as to how to create a signed CloudFront URL in JavaScript.

Update: I moved the signing functionality from the example code below into the aws-cloudfront-sign package on NPM. That way you can just require this package and call getSignedUrl().
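For example, a minimal sketch of using that package (option names are per the package README at the time; double-check against the current version):
var cfsign = require('aws-cloudfront-sign');
var signingParams = {
  keypairId: '<your-cloudfront-key-pair-id>',
  privateKeyPath: '/path/to/cloudfront-private-key.pem',
  // expireTime accepts a Date or an epoch timestamp in milliseconds
  expireTime: Date.now() + 60 * 1000
};
var signedUrl = cfsign.getSignedUrl('http://<your-cloudfront-domain-name>/path/to/object', signingParams);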
After some further investigation I found a solution which is sort of a combo between this answer and a method I found in the Boto library. It is true that S3 URL signatures are handled differently than CloudFront URL signatures. If you just need to sign an S3 link, then the example code in my initial question will work just fine for you. However, it gets a little more complicated if you want to generate signed URLs that utilize your CloudFront distribution. This is because CloudFront URL signatures are not currently supported in the AWS SDK, so you have to create the signature on your own. In case you also need to do this, here are the basic steps. I'll assume you already have an S3 bucket set up:
Configure CloudFront
Create a CloudFront distribution
Configure your origin with the following settings
Origin Domain Name: {your-s3-bucket}
Restrict Bucket Access: Yes
Grant Read Permissions on Bucket: Yes, Update Bucket Policy
Create a CloudFront key pair. You should be able to do this here.
Create Signed CloudFront URL
To create a signed CloudFront URL you just need to sign your policy using RSA-SHA1 and include it as a query param. You can find more on custom policies here, but I've included a basic one in the sample code below that should get you up and running. The sample code is for Node.js but the process could be applied to any language.
var crypto = require('crypto');
var fs = require('fs');
var util = require('util');
var moment = require('moment');
var urlParse = require('url');

var cloudfrontAccessKey = '<your-cloudfront-key-pair-id>';
var expiration = moment().add(30, 'seconds').unix(); // epoch expiration time, in seconds

// Define your policy. The Resource must exactly match the URL you hand out,
// and the expiration must match the Expires query param below.
var policy = {
  'Statement': [{
    'Resource': 'http://<your-cloudfront-domain-name>/path/to/object',
    'Condition': {
      'DateLessThan': { 'AWS:EpochTime': expiration }
    }
  }]
};

// Now that you have your policy defined you can sign it like this:
var sign = crypto.createSign('RSA-SHA1');
var pem = fs.readFileSync('<path-to-cloudfront-private-key>');
var key = pem.toString('ascii');
sign.update(JSON.stringify(policy));
var signature = sign.sign(key, 'base64');

// CloudFront requires base64 values in the URL to be made URL-safe:
// '+' becomes '-', '=' becomes '_' and '/' becomes '~'.
signature = signature.replace(/\+/g, '-').replace(/=/g, '_').replace(/\//g, '~');

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
};
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature
];
var signedUrl = util.format('%s?%s', urlParse.format(url), params.join('&'));
return signedUrl;

For my code to work with Jason Sims's code, I also had to convert policy to base64 and add it to the final signedUrl, like this:
sign.update(JSON.stringify(policy));
var signature = sign.sign(key, 'base64');
var policy_64 = Buffer.from(JSON.stringify(policy)).toString('base64'); // ADDED
// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
};
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature,
  'Policy=' + policy_64 // ADDED
];
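One gotcha worth adding here (my addition, based on the CloudFront custom-policy documentation): both policy_64 and signature are base64 strings and can contain +, = and /, which are invalid in a query string. CloudFront expects them replaced with -, _ and ~ respectively, and with a custom policy the documented query parameters are Policy, Signature and Key-Pair-Id (the expiry lives inside the policy). A sketch of the final assembly:
// Make a base64 value safe for use in a CloudFront URL.
function urlSafe(value) {
  return value.replace(/\+/g, '-').replace(/=/g, '_').replace(/\//g, '~');
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Policy=' + urlSafe(policy_64),
  'Signature=' + urlSafe(signature)
];
var signedUrl = util.format('%s?%s', urlParse.format(url), params.join('&'));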

AWS includes some built-in classes and structures to assist in the creation of signed URLs and cookies for CloudFront. I utilized these alongside the excellent answer by Jason Sims to get it working in a slightly different pattern (which appears to be very similar to the NPM package he created).
Namely, the AWS.CloudFront.Signer type, which abstracts the process of creating signed URLs and cookies:
export class Signer {
  /**
   * A signer object can be used to generate signed URLs and cookies for granting access to content on restricted CloudFront distributions.
   *
   * @param {string} keyPairId - The ID of the CloudFront key pair being used.
   * @param {string} privateKey - A private key in RSA format.
   */
  constructor(keyPairId: string, privateKey: string);
  // ...
}
And either an options object with a policy JSON string, or one with a url and an expiration time:
export interface SignerOptionsWithPolicy {
  /**
   * A CloudFront JSON policy. Required unless you pass in a url and an expiry time.
   */
  policy: string;
}
export interface SignerOptionsWithoutPolicy {
  /**
   * The URL to which the signature will grant access. Required unless you pass in a full policy.
   */
  url: string;
  /**
   * A Unix UTC timestamp indicating when the signature should expire. Required unless you pass in a full policy.
   */
  expires: number;
}
Sample implementation:
import { CloudFront } from 'aws-sdk';

export async function getSignedUrl() {
  // e.g. https://abc.cloudfront.net/my-resource.jpg
  const url = '<cloudfront-url-to-resource>';
  // Create signer object - requires a key pair ID and the private key value
  const signer = new CloudFront.Signer('<public-key-id>', '<private-key>');
  // Set up the expiration time (one hour in the future, in this case).
  // Note that "expires" is a Unix timestamp in seconds, so convert from milliseconds.
  const expiration = new Date();
  expiration.setTime(expiration.getTime() + 1000 * 60 * 60);
  const expirationEpoch = Math.round(expiration.getTime() / 1000);
  // Set options (without a policy in this example, but a JSON policy string can be substituted)
  const options = {
    url: url,
    expires: expirationEpoch
  };
  return new Promise((resolve, reject) => {
    // Call getSignedUrl passing in options; it can be handled either by
    // callback (as here) or synchronously without a callback.
    signer.getSignedUrl(options, (err, signedUrl) => {
      if (err) {
        console.error(err.stack);
        return reject(err);
      }
      resolve(signedUrl);
    });
  });
}
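For the with-policy variant, a sketch might look like this (reusing url and signer from above; the policy shape mirrors the custom policy from the earlier answer, and getSignedUrl is called synchronously here, without a callback):
const policy = JSON.stringify({
  Statement: [{
    Resource: url,
    Condition: {
      // Unix timestamp in seconds
      DateLessThan: { 'AWS:EpochTime': Math.round(Date.now() / 1000) + 60 * 60 }
    }
  }]
});
// Without a callback, getSignedUrl returns the signed URL directly.
const signedUrlWithPolicy = signer.getSignedUrl({ policy });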

Related

S3 pre-signed URL JavaScript code not returning full URL

I am trying to generate pre-signed URLs for files in my S3 bucket so that users of my site will not have links to the actual files. I have been trying to use the code below:
const AWS = require('aws-sdk');
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'AKIAVXSBEXAMPLE', secretAccessKey: 'EXAMPLE5ig8MDGZD8p8iTj7t3KEXAMPLE'})
// Tried with and without this. Since s3 is not region-specific, I don't
// think it should be necessary.
AWS.config.update({region: 'eu-west-2'})
const myBucket = 'bucketexample'
const myKey = 'example.png'
const signedUrlExpireSeconds = 60 * 5
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
setTimeout(function(){ console.log("url", url); }, 3000);
console.log("url:", url)
However all it returns is this: "https://s3.amazonaws.com/"
I have also tried using this code:
const AWS = require('aws-sdk');
var s3 = new AWS.S3();
var params = {Bucket: 'bucketexample', Key: 'example.png'};
s3.getSignedUrl('putObject', params, function (err, url) {
console.log('The URL is', url);
});
Which does not return anything. Does anyone know why they are not returning working URLs?
I've had a similar issue. When the AWS SDK returns https://s3.amazonaws.com/, it is because the machine does not have the proper permissions. This can be frustrating, and I think that AWS should return a descriptive error message instead of just returning the wrong URL.
I would recommend that you configure your AWS credentials for your machine or give it a role in AWS. Although you should be able to put in your credentials via code like you did in your snippet, that didn't work for me either, for whatever reason.
What worked for me was updating my machine's default AWS credentials or adding proper roles for the deployed servers.
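For example, with the v2 SDK you can point the client at the machine's shared credentials file explicitly (a sketch; the profile name is an assumption):
const AWS = require('aws-sdk');
// Reads keys from ~/.aws/credentials (the [default] profile here).
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'default' });
// Create the client only after credentials are configured.
const s3 = new AWS.S3({ region: 'eu-west-2' });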
I had a similar issue where I was getting an incomplete signed URL when I used:
let url = await s3.getSignedUrl("getObject", {
  Bucket: bucket,
  Key: s3FileLocation,
  Expires: ttlInSeconds,
});
Then I used the promise-based method instead:
// I used it inside an async function
let url = await s3.getSignedUrlPromise("getObject", {
  Bucket: bucket,
  Key: s3FileLocation,
  Expires: ttlInSeconds,
});
And it returned the complete signed URL.
Note:
In case you receive this message when trying to access the object using the signed URL:
<Error>
  <Code>InvalidArgument</Code>
  <Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
</Error>
Then use this while initializing s3:
const s3 = new AWS.S3({ signatureVersion: 'v4' });

How to use Kinesis Video Stream WebRTC SDK in the browser without providing credentials?

I want to use the Kinesis Video Streams WebRTC JavaScript SDK for producing a video stream from a web page.
The SDK readme says I need to supply accessKeyId and secretAccessKey:
signalingClient = new KVSWebRTC.SignalingClient({
  channelARN,
  channelEndpoint: endpointsByProtocol.WSS,
  clientId,
  role: KVSWebRTC.Role.VIEWER,
  region,
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  systemClockOffset: kinesisVideoClient.config.systemClockOffset,
});
Is there a way to make this more secure and avoid supplying the secret access key inside the javascript code?
Doesn't it mean anyone viewing my web page source can take these credentials from the web page and use them to access the signaling channel?
Can I use amplify-js Auth class to use the signaling client with an authenticated user?
Turns out I can use credentials inside the backend, and send a presigned link to the client using the class SigV4RequestSigner.
There's no need to supply credentials on the client side.
Found it in the documentation:
This is a useful class to use in a NodeJS backend to sign requests and send them back to a client so that the client does not need to have AWS credentials.
When creating the SignalingClient you can either specify the credentials or a requestSigner that returns a Promise<string>, see:
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js/blob/master/README.md#class-signalingclient
credentials {object} Must be provided unless a requestSigner is provided.
Be aware that when not using credentials in the browser, you will also need to run the KinesisVideoSignalingChannels-related code on the server side, because that class does not support a request signer.
For Kinesis, one of the possibilities is to implement a URL-signing function in your NodeJS backend:
const endpointsByProtocol = getSignalingChannelEndpointResponse.ResourceEndpointList.reduce((endpoints, endpoint) => {
  endpoints[endpoint.Protocol] = endpoint.ResourceEndpoint;
  return endpoints;
}, {});
console.log('[VIEWER] Endpoints: ', endpointsByProtocol);

const region = "us-west-2";
const credentials = {
  accessKeyId: "XAXAXAXAXAX",
  secretAccessKey: "SECRETSECRET"
};

const queryParams = {
  'X-Amz-ChannelARN': channelARN,
  'X-Amz-ClientId': formValues.clientId
};

const signer = new SigV4RequestSigner(region, credentials);
const url = await signer.getSignedURL(endpointsByProtocol.WSS, queryParams);
console.log(url);
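Wiring that into a backend endpoint might look something like this (a sketch using Express; the route name and request shape are my assumptions, and region, credentials, channelARN and endpointsByProtocol come from the snippet above):
const express = require('express');
const { SigV4RequestSigner } = require('amazon-kinesis-video-streams-webrtc');

const app = express();

app.get('/signed-channel-url', async (req, res) => {
  try {
    const signer = new SigV4RequestSigner(region, credentials);
    const signedUrl = await signer.getSignedURL(endpointsByProtocol.WSS, {
      'X-Amz-ChannelARN': channelARN,
      'X-Amz-ClientId': req.query.clientId
    });
    // The browser connects with this URL, so no AWS credentials ever reach the client.
    res.json({ url: signedUrl });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});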

Getting 403 (Forbidden) when uploading to S3 with a signed URL

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
  // for dev purposes
  accessKeyId: 'MY-ACCESS-KEY-ID',
  secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});
let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60,
  ContentType: req.body.fileType,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});
This part seems to work fine. I can see the URL if I log it, and it's passed to the front end. Then on the front end, I try to upload the file with axios and the signed URL:
.then(res => {
  var options = { headers: { 'Content-Type': fileType } };
  return axios.put(res.data.url, fileFromFileInput, options);
}).then(res => {
  console.log(res);
}).catch(err => {
  console.log(err);
});
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
...etc
Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
Receiving a 403 Forbidden error for a pre-signed S3 PUT upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed PUT URL using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed PUT URL with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the URL, you have to omit the content type when uploading. Be aware that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your API endpoint, and use that content type when generating the pre-signed URL that you return to your client.
For example, generating a pre-signed URL from your API:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')

async function getSignedUrl(contentType) {
  const s3 = new AWS.S3({
    accessKeyId: process.env.AWS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY
  })
  const signedUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'mybucket',
    Key: `uploads/${uuid()}`,
    ContentType: contentType
  })
  return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

this.uppy = Uppy({
  restrictions: {
    allowedFileTypes: ['image/*'],
    maxFileSize: 5242880, // 5 Megabytes
    maxNumberOfFiles: 5
  }
}).use(AwsS3, {
  getUploadParameters(file) {
    async function _getUploadParameters() {
      let signedUrl = await getSignedUrl(file.type)
      return {
        method: 'PUT',
        url: signedUrl
      }
    }
    return _getUploadParameters()
  }
})
For further reference, also see these two Stack Overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type
If you're trying to use an ACL, make sure that your Lambda IAM role has s3:PutObjectAcl for the given bucket, and also that your bucket allows s3:PutObjectAcl for the uploading principal (the user/IAM role/account that's uploading).
This is what fixed it for me after double checking all my headers and everything else.
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
  signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
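That is, for the test request the server-side params would shrink to just this (a sketch, using the question's values):
let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60
};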
Had the same issue; here is how you need to solve it:
Extract the filename portion of the signed URL.
Print it to verify that you are extracting the filename portion, with its query string parameters, correctly. This is critical.
URI-encode the filename together with its query string parameters.
Return the URL with the encoded filename, along with the other path parts, from your Lambda or from your Node service.
Now post from axios with that URL; it will work.
EDIT 1:
Your signature will also be invalid if you pass in the wrong content type.
Please ensure that the content type you use when creating the pre-signed URL is the same as the one you use for the PUT.
Hope it helps.
As others have pointed out, the solution is to add the signatureVersion:
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4'
});
There is a very detailed discussion around the same issue; take a look: https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know.
TLDR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely cover the real solution, but my issue actually stemmed from S3 returning a signed URL to a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed that it must be the upload that was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be the correct one, but has since been moved.
Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of those existed anymore.
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]
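If you'd rather apply this from code than from the console, the v2 SDK exposes putBucketCors (a sketch; the bucket name is a placeholder):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Applies the same CORS rules as the JSON above.
s3.putBucketCors({
  Bucket: 'my-bucket',
  CORSConfiguration: {
    CORSRules: [{
      AllowedHeaders: ['*'],
      AllowedMethods: ['PUT'],
      AllowedOrigins: ['*'],
      ExposeHeaders: []
    }]
  }
}, (err) => {
  if (err) console.error(err);
});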
I encountered the same error twice, with different root causes / solutions:
The first time, I was using generate_presigned_url.
The solution for me was switching to generate_presigned_post (doc), which returns a host of essential information, such as:
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Add these fields to your POST request's form data (not its headers), and don't forget to keep the file field last!
The second time, I forgot to give proper permissions to the Lambda. Interestingly, Lambda can create good-looking signed upload URLs which you won't have permission to use. The solution is to enrich the policy with S3 actions:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-own-bucket/*"
  ]
}
Using Python boto3, the permissions are private by default when you upload a file. You can make the object public using ACL='public-read':
s3.put_object_acl(
Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
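Since the rest of this thread is JavaScript, the same call with the JS v2 SDK would be (a sketch, assuming an s3 client as in the earlier answers):
// Same effect as the boto3 call above.
s3.putObjectAcl({
  Bucket: 'gid-requests',
  Key: 'potholes.csv',
  ACL: 'public-read'
}, (err) => {
  if (err) console.error(err);
});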
I did all that's mentioned here and additionally allowed the required permissions for it to work.

JavaScript: Hosting User Images in an S3 Bucket for an Application

The "Background" outlines the problem in depth- I added it to make this question a good guide for using S3 to host images(like a profile image)
You can skip right to "HERE I'M HAVING TROUBLE" to help directly.
-----------------------------------------------Background------------------------------------------------------------------
Note: Feel free to critique my assumptions on how to host images properly, for future readers.
So, for a quick prototype, I'm hosting user avatar images in an AWS S3 bucket, but I want to model roughly how it is done in production.
Here are my 3 assumptions on how to model industry-standard image hosting (based off sites I've studied):
For Reading - you can use public endpoints (no tokens needed)
To Secure Reading Access - use hashing to store the resources (images). The application will give the hashed URL to users with access.
For Example 2 hashes (1 for file path and 1 for image):
https://myapp.com/images/1024523452345233232000/f567457645674242455c32cbc7.jpg
^The above can be done with S3 using a hashed "Prefix" and a hashed file name.
Write Permissions - The user should be logged into the app and be given temp credentials to write to the storage (i.e. add an image).
So with those 3 assumptions this is what I'm doing and the problem:
(A Simple Write - Use Credentials)
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';
  var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
  ...
  var params = {Key: file.name, ContentType: file.type, Body: file};
  bucket.upload(params, function (err, data) {
    //Do Something Amazing!!
  });
</script>
---------------------------------HERE I'M HAVING TROUBLE---------------------------------
(A Simple Read - Give The User a Signed URL) Error 403 Permissions
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';
  var s3 = new AWS.S3();
  var params = {Bucket: 'myBucket', Key: 'myKey.jpg'};
  document.addEventListener('DOMContentLoaded', function () {
    s3.getSignedUrl('getObject', params, function (err, url) {
      // Getting a 403 Permissions Error!!!
    });
  });
</script>
I figure the signed URL isn't needed, but I thought it would get me around the permission error; instead, I have to set the permission manually to public to read the image.
QUESTION:
So how should I make the endpoints completely public (readable) for those who have been given the URL, but only writable when the user has credentials?
To make an object publicly downloadable when you are uploading it, apply the canned (predefined) ACL called "public-read" with the putObject request.
var params = {
  Key: file.name,
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
};
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
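Once the object is uploaded with the public-read ACL, it can be read with a plain, unsigned URL (a sketch; the exact host format depends on the bucket's region and name):
// Anyone with this URL can GET the object; writes still require credentials.
var publicUrl = 'https://myBucket.s3.amazonaws.com/' + encodeURIComponent(file.name);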

cordova-plugin-file-transfer: How do you upload a file to S3 using a signed URL?

I am able to upload to S3 using a file picker and regular XMLHttpRequest (which I was using to test the S3 setup), but cannot figure out how to do it successfully using the cordova file transfer plugin.
I believe it is either to do with the plugin not constructing the correct signable request, or not liking the local file URI given. I have tried playing with every single parameter, from headers to URI types, but the docs aren't much help and the plugin source is bolognese.
The string-to-sign that the request needs to match looks like:
PUT
1391784394
x-amz-acl:public-read
/the-app/317fdf654f9e3299f238d97d39f10fb1
Any ideas, or possibly a working code example?
A bit late, but I just spent a couple of days struggling with this, so in case anybody else is having problems, this is how I managed to upload an image using the JavaScript version of the AWS SDK to create the presigned URL.
The key to solving the problem is in the StringToSign element of the XML SignatureDoesNotMatch error that comes back from Amazon. In my case it looked something like this:
<StringToSign>
PUT\n\nmultipart/form-data; boundary=+++++org.apache.cordova.formBoundary\n1481366396\n/bucketName/fileName.jpg
</StringToSign>
When you use the aws-sdk to generate a presigned URL for upload to S3, internally it will build a string based on various elements of the request you want to make, then create an HMAC-SHA1 hash of it keyed with your AWS secret. This hash is the signature that gets appended to the URL as a parameter, and it is what doesn't match when you get the SignatureDoesNotMatch error.
So you've created your presigned URL and passed it to cordova-plugin-file-transfer to make your HTTP request to upload a file. When that request hits Amazon's server, the server will itself build a string based on the request headers etc., hash it, and compare that hash to the signature on the URL. If the hashes don't match then it returns the dreaded...
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The contents of the StringToSign element I mentioned above is the string that the server builds and hashes to compare against the signature on the presigned URL. So to avoid getting the error, you need to make sure that the string built by the aws-sdk is the same as the one built by the server.
After some digging about, I eventually found the code responsible for creating the string to hash in the aws-sdk. It is located (as of version 2.7.12) in:
node_modules/aws-sdk/lib/signers/s3.js
Down at the bottom, at line 168, there is a sign method:
sign: function sign(secret, string) {
return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}
If you put a console.log in there, string is what you're after. Once you make the string that gets passed into this method the same as the contents of StringToSign in the error message coming back from Amazon, the heavens will open and your files will flow effortlessly into your bucket.
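For example, the temporary logging (my addition, not part of the SDK) could be as simple as:
sign: function sign(secret, string) {
  console.log('StringToSign:\n' + string); // temporary debug output
  return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}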
On my server running Node.js, I originally created my presigned URL like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: 'https://s3-eu-west-1.amazonaws.com',
  accessKeyId: "ACCESS_KEY",
  secretAccessKey: "SECRET_KEY"
});
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60
};
var signedUrl = s3.getSignedUrl('putObject', params);
//return signedUrl
This produced a signing string like this, similar to the OP's:
PUT
1481366396
/bucketName/fileName.jpg
On the client side, I used this presigned URL with cordova-plugin-file-transfer like so (I'm using Ionic 2 so the plugin is wrapped in their native wrapper):
let success = (result: any): void => {
  console.log("upload success");
}
let failed = (err: any): void => {
  let code = err.code;
  alert("upload error - " + code);
}
let ft = new Transfer();
var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
};
ft.upload(localDataURI, presignedUrlFromServer, options, false)
  .then((result: any) => {
    success(result);
  }).catch((error: any) => {
    failed(error);
  });
Running the code produced the SignatureDoesNotMatch error, and the string in the <StringToSign> element looked like this:
PUT
multipart/form-data; boundary=+++++org.apache.cordova.formBoundary
1481366396
/bucketName/fileName.jpg
So we can see that cordova-plugin-file-transfer has added its own Content-Type header, which has caused a discrepancy in the signing strings. In the docs relating to the options object that gets passed into the upload method, it says:
headers: A map of header name/header values. Use an array to specify more than one value. On iOS, FireOS, and Android, if a header named Content-Type is present, multipart form data will NOT be used. (Object)
So basically, if no Content-Type header is set, it will default to multipart form data.
OK, so now that we know the cause of the problem, it's a pretty simple fix. On the server side I added a ContentType to the params object passed to the S3 getSignedUrl method:
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60,
  ContentType: 'image/jpeg' // <---- content type added here
};
and on the client, added a headers object to the options passed to cordova-plugin-file-transfer's upload method:
var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
  headers: { // <----- headers object added here
    'Content-Type': 'image/jpeg',
  }
};
and hey presto! The uploads now work as expected.
I ran into such issues with this plugin.
The only working way I found to upload a file with a signature is the method by Christophe Coenraets: http://coenraets.org/blog/2013/09/how-to-upload-pictures-from-a-phonegap-app-to-amazon-s3/
With this method you will be able to upload your files using the cordova-plugin-file-transfer.
First, I wanted to use the aws-sdk on my server to sign with getSignedUrl().
It returns the signed link and you only have to upload to it.
But using the plugin it always ended with 403: signatures don't match.
It may be related to the content length parameter, but I haven't found a working solution so far with the aws-sdk and the plugin.
