For the Amazon S3 JavaScript SDK, do the request build events (see the link below, for PutObject) also work for managed uploads?
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html#build-event
I need to add a custom header. I have tried the following test in Angular:
var bucket = new AWS.S3.ManagedUpload({
  partSize: 5 * 1024 * 1024,
  params: { Bucket: 'test', Key: key, Body: body, ContentType: contentType }
});

bucket
  .on('build', () => {
    console.log('build');
  })
  .on('httpUploadProgress', (progress) => {
    console.log('Progress', progress);
  })
  .send((err, data) => {
  });
But I am not getting a console log on build. If build doesn't apply, is there any other way to attach custom headers to managed uploads? Many thanks in advance.
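For context, one workaround that is consistent with the SDK's design (a sketch, assuming aws-sdk v2): a ManagedUpload issues several underlying requests (CreateMultipartUpload, UploadPart, and so on) rather than emitting build itself, so the hook can go on the S3 service via customizeRequests. The helper name and header below are illustrative, not from the SDK docs.

```javascript
// Sketch: register a 'build' listener on every request the S3 client
// makes, so the header is added to each request a ManagedUpload issues.
// addHeaderToAllRequests is an illustrative name, not an SDK API.
function addHeaderToAllRequests(s3, name, value) {
  s3.customizeRequests(function (req) {
    req.on('build', function () {
      req.httpRequest.headers[name] = value;
    });
  });
  return s3;
}
```

The customized client would then be passed to the upload, e.g. `new AWS.S3.ManagedUpload({ service: addHeaderToAllRequests(new AWS.S3(), 'x-amz-meta-source', 'web'), params: ... })`.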
So, first time posting, I usually just find the answer I need by looking through similar questions but this time I'm stumped.
First off, I'm self-taught and about on par with an entry-level developer at absolute best (for reference my highest score on CodeSignal for javascript is 725).
Here is my problem:
I'm working on an SSG eCommerce website using the Nuxt.js framework. The products are digital, so they need to be fulfilled by providing a time-limited download link when a customer makes a purchase. I have the product files stored in a private Amazon S3 bucket. I also have a Netlify serverless function that, when called with a GET request, generates and returns a pre-signed URL for the file. (At the moment there is only one product, but ideally it should generate pre-signed URLs based on a filename sent as a JSON event body key string, since more products are planned in the near future; I can figure that out once the whole thing is working.)
The website is set up to generate dynamic routes based on the user's order number so they can view their previous orders (/pages/account/orders/_id.vue). I have placed a download button, nested in an element on this page, so that each order has a button to download the files. The idea is that a button press calls a function I defined in the methods object. The function makes an XMLHttpRequest to the endpoint URL of the Netlify function, the Netlify function returns the pre-signed URL to the function, and the function returns the pre-signed URL to the href property so that the file can be downloaded by the user.
But no matter what I try, it fails to download the file. When the page loads it successfully calls the Netlify function and I get a response code 200, but the href property remains blank. Am I going about this the wrong way? There is clearly something I'm not understanding correctly; any input is greatly appreciated.
Here is my code.
The download button:
<a
  :download=<<MY_PRODUCT_NAME>>
  :href="getmyurl()"
>
  <BaseButton
    v-if="order.status === 'complete'"
    fit="auto"
    appearance="light"
    label="Download"
  />
</a>
The function that the button calls:
methods: {
  getmyurl() {
    let myurl = "";
    const funcurl = <<MY_NETLIFY_FUNCTION_URL>>;
    let xhr = new XMLHttpRequest();
    xhr.open('GET', funcurl);
    xhr.send();
    xhr.onload = function() {
      if (xhr.status != 200) {
        alert(`Error ${xhr.status}: ${xhr.statusText}`);
      } else {
        myurl = xhr.response.Geturl
      };
    };
    return myurl
  },
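For what it's worth, the blank href is consistent with XMLHttpRequest being asynchronous: `return myurl` runs before onload fires, so the binding always receives the empty string (note also that the Netlify function responds with a getUrl key, while the handler reads Geturl). A promise-based sketch of the same call; getDownloadUrl and the injectable fetchImpl parameter are illustrative names:

```javascript
// Sketch: resolve the URL asynchronously instead of returning before
// the response arrives. fetchImpl defaults to the global fetch and is
// injectable for testing. The response key is "getUrl", not "Geturl".
async function getDownloadUrl(funcUrl, fetchImpl = fetch) {
  const res = await fetchImpl(funcUrl);
  if (!res.ok) {
    throw new Error(`Error ${res.status}: ${res.statusText}`);
  }
  const body = await res.json();
  return body.getUrl;
}
```

In the component this would be awaited in a click handler that stores the result in a data property bound to :href, rather than being called synchronously from the template.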
Netlify function:
require( "dotenv" ).config();
const AWS = require('aws-sdk');

let s3 = new AWS.S3({
  accessKeyId: process.env.MY_AWS_ACCESS_KEY,
  secretAccessKey: process.env.MY_AWS_SECRET_KEY,
  region: process.env.MY_AWS_REGION,
  signatureVersion: 'v4',
});

exports.handler = function( event, context, callback ) {
  var headers = {
    "Access-Control-Allow-Origin" : "*",
    "Access-Control-Allow-Headers": "Content-Type"
  };

  if ( event.httpMethod === "OPTIONS" ) {
    callback(
      null,
      {
        statusCode: 200,
        headers: headers,
        body: JSON.stringify( "OK" )
      }
    );
    return;
  }

  try {
    var resourceKey = process.env.MY_FILE_NAME;
    var getParams = {
      Bucket: process.env.MY_S3_BUCKET,
      Key: resourceKey,
      Expires: ( 60 * 60 ),
      ResponseCacheControl: "max-age=604800"
    };
    var getUrl = s3.getSignedUrl( "getObject", getParams );
    var response = {
      statusCode: 200,
      headers: headers,
      body: JSON.stringify({
        getUrl: getUrl
      })
    };
  } catch ( error ) {
    console.error( error );
    var response = {
      statusCode: 400,
      headers: headers,
      body: JSON.stringify({
        message: "Request could not be processed."
      })
    };
  }

  callback( null, response );
}
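On the filename-from-JSON-body idea mentioned in the question, a sketch of just the key-selection step (all names here are illustrative, and the allow-list is an assumption to keep the function from signing arbitrary object keys):

```javascript
// Sketch: pick the S3 key from a JSON request body, falling back to the
// configured default. An allow-list guards against signing URLs for
// arbitrary keys in the bucket.
function resolveKey(eventBody, allowedKeys, fallbackKey) {
  try {
    const { fileName } = JSON.parse(eventBody || '{}');
    if (fileName && allowedKeys.includes(fileName)) {
      return fileName;
    }
  } catch (err) {
    // malformed JSON: fall through to the default
  }
  return fallbackKey;
}
```

The handler above could then use `resolveKey(event.body, allowedKeys, process.env.MY_FILE_NAME)` in place of the fixed resourceKey.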
I have the following template.yaml from a SAM application:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
  image-resizing-lambda-js

  Sample SAM Template for image-resizing-lambda-js

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    MemorySize: 1536

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs10.x
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: post
  imagemagicklambdalayer:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer
        SemanticVersion: 1.0.0

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
With the code shown below. Now, I have read that to use ImageMagick with Node on AWS Lambda I need to do the following:
Install the custom layer https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~image-magick-lambda-layer
Link any lambda function using ImageMagick with that custom layer
It is the "link any Lambda function using ImageMagick with the custom layer" part that I am confused about. Do I need to do something different in my app.js code that points the ImageMagick call to the layer somehow? I am not entirely sure what a layer is, but my understanding is that it is needed for ImageMagick to work.
Any help would be greatly appreciated
const axios = require("axios");
// const url = 'http://checkip.amazonaws.com/';
// const sharp = require("sharp");
const gm = require("gm");
const imageMagick = gm.subClass({ imageMagick: true });

let response;

/**
 *
 * Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format
 * @param {Object} event - API Gateway Lambda Proxy Input Format
 *
 * Context doc: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html
 * @param {Object} context
 *
 * Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html
 * @returns {Object} object - API Gateway Lambda Proxy Output Format
 *
 */
exports.lambdaHandler = async (event, context) => {
  try {
    // const ret = await axios(url);
    const parsedBody = JSON.parse(event.body);
    response = {
      statusCode: 200,
      body: JSON.stringify({
        message: parsedBody.imagePath,
        // location: ret.data.trim()
      }),
    };

    const WEB_WIDTH_MAX = 420;
    const WEB_Q_MAX = 85;
    const url = "https://miro.medium.com/max/512/1*V395S0MUwmZo8dX2aezpMg.png";
    const data = imageMagick(url)
      .resize(WEB_WIDTH_MAX)
      .quality(WEB_Q_MAX)
      // .gravity('Center')
      .strip()
      // .crop(WEB_WIDTH_MAX, WEB_HEIGHT_MAX)
      .toBuffer("png", (err, buffer) => {
        if (err) {
          console.log("An error occurred while saving IM to buffer: ", err);
          return false; /* stop the remaining sequence and prevent sending an empty or invalid buffer to AWS */
        } else {
          console.log("buffer", buffer);
        }
      });
    // gmToBuffer(data).then(console.log);
  } catch (err) {
    console.log(err);
    return err;
  }

  return response;
};
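One thing worth noting about the handler above: toBuffer is callback-based, so the async handler can return before the conversion finishes. A small Promise wrapper (the name imToBuffer is mine) would let the handler await the result:

```javascript
// Sketch: wrap gm's callback-style toBuffer in a Promise so an async
// Lambda handler can `await imToBuffer(img, 'png')` instead of
// returning before the conversion completes.
function imToBuffer(img, format) {
  return new Promise((resolve, reject) => {
    img.toBuffer(format, (err, buffer) => {
      if (err) reject(err);
      else resolve(buffer);
    });
  });
}
```

With this, the handler body becomes `const buffer = await imToBuffer(imageMagick(url).resize(WEB_WIDTH_MAX).quality(WEB_Q_MAX).strip(), 'png');`.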
Currently I get the following error when running sam build:
Plugin 'ServerlessAppPlugin' raised an exception: 'NoneType' object has no attribute 'get'
Before adding the imagemagicklambdalayer section I was able to run sam build, and ImageMagick ran with the following error under sam local start-api after hitting the endpoint:
An error occurred while saving IM to buffer: Error: Stream yields empty buffer
I have got it to build with the following as the layer in the template.yaml
Layers:
- "arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer"
Now I get an error like the following when running the function:
File "/usr/local/Cellar/aws-sam-cli/1.46.0/libexec/lib/python3.8/site-packages/botocore/regions.py", line 230, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
Not sure where I am supposed to specify this region.
UPDATE:
OK, two things: I set the region in my AWS config file, and I set the ARN to the deployed layer. I now have something that hasn't built yet after about 5 minutes, so we shall see if it ever does.
It tries to build the layer when the function is invoked
Invoking app.lambdaHandler (nodejs12.x)
arn:aws:lambda:us-east-1:396621406187:layer:image-magick:1 is already cached. Skipping download
Image was not found.
Building image...................................................................
UPDATE:
So it seemed to finally build, but the module for ImageMagick could not be found. I have seen other examples use this layer and call ImageMagick from a module called gm, which is what I am doing in my code.
Does including this layer not give access to the gm module so I can use ImageMagick?
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'gm'\nRequire stack:\n- /var/task/app.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'gm'","Require stack:","- /var/task/app.js","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
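For future readers: as far as I can tell, the layer only ships the ImageMagick binaries (under /opt); the gm npm module still has to be packaged with the function itself. Declaring it in hello-world/package.json so that sam build bundles it into /var/task/node_modules should resolve this Runtime.ImportModuleError. A sketch (the version numbers are illustrative):

```json
{
  "name": "hello-world",
  "dependencies": {
    "axios": "^0.21.1",
    "gm": "^1.23.1"
  }
}
```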
I'm trying to generate a presigned post to give the browser privileges to upload / delete a specific file from a bucket, but it seems createPresignedPost is not generating some of the required fields; getSignedUrl works.
const signedUrl = await new Promise<PresignedPost>((resolve, reject) => {
  this.s3.createPresignedPost({
    Bucket: this.env.config.s3.buckets.images,
    Fields: { key },
    Conditions: [
      ["content-length-range", 0, 10 * 1024 * 1024]
    ],
    Expires: 3600,
  }, (err, preSigned) => { if (err) { reject(err) } else { resolve(preSigned) } });
});
// This works, but doesn't allow the object to be deleted, and does not allow setting a maximum file size
//
// const rawUrl = new URL(await this.s3.getSignedUrlPromise('putObject', {
// Bucket: this.env.config.s3.buckets.images,
// Key: key,
// Expires: 3600,
// }));
//
// const signedUrl = {
// url: rawUrl.origin + rawUrl.pathname,
// fields: Object.fromEntries(Array.from(rawUrl.searchParams.entries()))
// };
The createPresignedPost generates:
{
url: 'https://s3.eu-west-3.amazonaws.com/xxx-images',
fields: {
key: 'incoming/ae83pfxu7kf4dfdv4hbvorsxq31hadtjcp97ehwt30ds5',
bucket: 'xxx-images',
'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
'X-Amz-Credential': 'xxx',
'X-Amz-Date': '20200509T145014Z',
Policy:
'eyJleHBpcmF0aW9uIjoiMjAyMC0wNS0wOVQxNTo1MDoxNFoiLCJjb25kaXRpb25zIjpbWyJjb250ZW50LWxlbmd0aC1yYW5nZSIsMCwxMDQ4NTc2MF0seyJrZXkiOiJpbmNvbWluZy9hZTgzcGZ4dTdrZjRkZmR2NGhidm9yc3hxMzFoYWR0amNwOTdlaHd0MzBkczUifSx7ImJ1Y2tldCI6InByZWZsaWdodGVtYWlsLWltYWdlcyJ9LHsiWC1BbXotQWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsiWC1BbXotQ3JlZGVudGlhbCI6IkFLSUE1RE5UN0lOWjJKTU5TQVhILzIwMjAwNTA5L2V1LXdlc3QtMy9zMy9hd3M0X3JlcXVlc3QifSx7IlgtQW16LURhdGUiOiIyMDIwMDUwOVQxNDUwMTRaIn1dfQ==',
'X-Amz-Signature': 'xxx' } }
Trying to PUT a file with those parameters gives:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
<RequestId>xxx</RequestId>
<HostId>xxx</HostId>
</Error>
The older API call generates the missing 'X-Amz-SignedHeaders' and 'X-Amz-Expires' parameters too.
Can anyone tell me what I am doing wrong?
You should use POST instead of PUT, since you are using createPresignedPost to generate the URL. A presigned POST is meant to be submitted as a multipart/form-data form; the X-Amz-SignedHeaders and X-Amz-Expires parameters named in the error belong to presigned (query-string) URLs like the ones getSignedUrl produces, which is why the PUT attempt fails with that message.
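To make that concrete, here is a sketch of the browser-side submit (assuming a Fetch/FormData environment; the fetchImpl parameter is only there so the helper can be exercised without a network):

```javascript
// Sketch: a presigned POST is a form upload, not a PUT. Every field from
// the createPresignedPost response goes into the form body, and the file
// itself must be the last field appended.
async function uploadWithPresignedPost(preSigned, file, fetchImpl = fetch) {
  const form = new FormData();
  for (const [name, value] of Object.entries(preSigned.fields)) {
    form.append(name, value);
  }
  form.append('file', file); // must come after the policy fields
  const res = await fetchImpl(preSigned.url, { method: 'POST', body: form });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
  return res;
}
```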
My goal is to make sure that all videos being uploaded to my application are in the right format and that they are formatted to fit a minimum size.
I did this before using ffmpeg; however, I have recently moved my application to an Amazon server.
This gives me the option to use Amazon Elastic Transcoder.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
For this I have been looking at their SDK / API references, but I am not quite sure how to use that in my application.
My question is: has anyone successfully started transcoding jobs in Node.js, and do you know how to convert videos from one format to another and/or lower the bitrate? I would really appreciate it if someone could point me in the right direction with some examples of how this might work.
However by the looks of it from the interface i am unable to set up
automatic jobs that look for video or audio files and converts them.
The console doesn't support it, but you can do the following: if you store the videos in S3 (if not, move them to S3, because Elastic Transcoder reads from and writes to S3), you can run a Lambda function triggered by S3 putObject events.
http://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
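A sketch of the piece that connects the two, following the S3 event format in that doc (keyFromS3Event is my name): the triggered Lambda reads the uploaded object's key from the event record and can then hand it to ElasticTranscoder.createJob.

```javascript
// Sketch: extract the uploaded object's key from an S3-trigger event.
// Keys arrive URL-encoded, with spaces encoded as '+'.
function keyFromS3Event(event) {
  const record = event.Records && event.Records[0];
  if (!record || !record.s3) {
    throw new Error('Not an S3 event');
  }
  return decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
}
```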
My question is has anyone successfully started transcoding jobs in
node.js and know how to convert videos from one format to another and
/ or down set the bitrate? I would really appreciate it if someone
could point me in the right direction with some examples of how this
might work.
We used AWS for video transcoding with Node without any problem. It was time-consuming to find out every parameter, but I hope these few lines can help you:
const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: config.AWS.accessKeyId,
  secretAccessKey: config.AWS.secretAccessKey,
  region: config.AWS.region
});

var transcoder = new aws.ElasticTranscoder();

let transcodeVideo = function (key, callback) {
  // presets: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/system-presets.html
  let params = {
    PipelineId: config.AWS.transcode.video.pipelineId, // specifies output/input buckets in S3
    Input: {
      Key: key,
    },
    OutputKeyPrefix: config.AWS.transcode.video.outputKeyPrefix,
    Outputs: config.AWS.transcode.video.presets.map(p => {
      return {Key: `${key}${p.suffix}`, PresetId: p.presetId};
    })
  };
  params.Outputs[0].ThumbnailPattern = `${key}-{count}`;

  transcoder.createJob(params, function (err, data) {
    if (!!err) {
      logger.err(err);
      return;
    }
    let jobId = data.Job.Id;
    logger.info('AWS transcoder job created (' + jobId + ')');
    transcoder.waitFor('jobComplete', {Id: jobId}, callback);
  });
};
An example configuration file:
let config = {
  accessKeyId: '',
  secretAccessKey: '',
  region: '',
  videoBucket: 'blabla-media',
  transcode: {
    video: {
      pipelineId: '1450364128039-xcv57g',
      outputKeyPrefix: 'transcoded/', // put the video into the transcoded folder
      presets: [ // Comes from AWS console
        {presetId: '1351620000001-000040', suffix: '_360'},
        {presetId: '1351620000001-000020', suffix: '_480'}
      ]
    }
  }
};
If you want to generate a master playlist, you can do it like this. The ".ts" segment files are not playable by HLS players on their own; generate an ".m3u8" playlist:
async function transcodeVideo(mp4Location, outputLocation) {
  let params = {
    PipelineId: elasticTranscoderPipelineId,
    Input: {
      Key: mp4Location,
      AspectRatio: 'auto',
      FrameRate: 'auto',
      Resolution: 'auto',
      Container: 'auto',
      Interlaced: 'auto'
    },
    OutputKeyPrefix: outputLocation + "/",
    Outputs: [
      {
        Key: "hls2000",
        PresetId: "1351620000001-200010",
        SegmentDuration: "10"
      },
      {
        Key: "hls1500",
        PresetId: "1351620000001-200020",
        SegmentDuration: "10"
      }
    ],
    Playlists: [
      {
        Format: 'HLSv3',
        Name: 'hls',
        OutputKeys: [
          "hls2000",
          "hls1500"
        ]
      },
    ],
  };

  let jobData = await createJob(params);
  return jobData.Job.Id;
}

async function createJob(params) {
  return new Promise((resolve, reject) => {
    transcoder.createJob(params, function (err, data) {
      if (err) return reject("err: " + err);
      if (data) {
        return resolve(data);
      }
    });
  });
}
I'm trying to upload files to my Amazon S3 bucket. S3 and Amazon are set up.
This is the error message from Amazon:
Conflicting query string parameters: acl, policy
The policy and signature are encoded with the crypto module for Node.js:
var crypto=Npm.require("crypto");
I'm trying to build the POST request with Meteor's HTTP.post method. This could be wrong as well.
var BucketName="mybucket";
var AWSAccessKeyId="MY_ACCES_KEY";
var AWSSecretKey="MY_SECRET_KEY";

//create policy
var POLICY_JSON={
  "expiration": "2009-01-01T00:00:00Z",
  "conditions": [
    {"bucket": BucketName},
    ["starts-with", "$key", "uploads/"],
    {"acl": 'public-read'},
    ["starts-with", "$Content-Type", ""],
    ["content-length-range", 0, 1048576],
  ]
}
var policyBase64=encodePolicy(POLICY_JSON);

//create signature
var SIGNATURE = encodeSignature(policyBase64,AWSSecretKey);
console.log('signature: ', SIGNATURE);
This is the POST request I'm using with Meteor:
//Send data----------
var options={
  "params":{
    "key":file.name,
    'AWSAccessKeyId':AWSAccessKeyId,
    'acl':'public-read',
    'policy':policyBase64,
    'signature':SIGNATURE,
    'Content-Type':file.type,
    'file':file,
    "enctype":"multipart/form-data",
  }
}

HTTP.call('POST','https://'+BucketName+'.s3.amazonaws.com/',options,function(error,result){
  if(error){
    console.log("and HTTP ERROR:",error);
  }else{
    console.log("result:",result);
  }
});
And here I'm encoding the policy and the signature:
encodePolicy=function(jsonPolicy){
  // stringify the policy, store it in a NodeJS Buffer object
  var buffer=new Buffer(JSON.stringify(jsonPolicy));
  // convert it to base64
  var policy=buffer.toString("base64");
  // replace "/" and "+" so that it is URL-safe.
  return policy.replace(/\//g,"_").replace(/\+/g,"-");
}

encodeSignature=function(policy,secret){
  var hmac=crypto.createHmac("sha256",secret);
  hmac.update(policy);
  return hmac.digest("hex");
}
I can't figure out what's going on. There might already be a problem in the POST method, or in the encryption, because I don't know these methods too well. If someone could point me in the right direction on how to encode, or how to send the POST request to Amazon S3 properly, it would help a lot.
(I don't want to use filepicker.io, because I don't want to force the client to sign up there as well.)
Thanks in advance!!!
For direct uploads to S3 you can use the slingshot package:
meteor add edgee:slingshot
On the server side declare your directive:
Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
  bucket: "mybucket",
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  acl: "public-read",
  authorize: function () {
    //You can add user restrictions here
    return true;
  },
  key: function (file) {
    return file.name;
  }
});
This directive will generate policy and signature automatically.
And then just upload it like this:
var uploader = new Slingshot.Upload("myFileUploads");

uploader.send(document.getElementById('input').files[0], function (error, url) {
  Meteor.users.update(Meteor.userId(), {$push: {"profile.files": url}});
});
Why don't you use the aws-sdk package? It packs all the needed methods for you. For example, here's a simple function for adding a file to a bucket:
s3.putObject({
  Bucket: ...,
  ACL: ...,
  Key: ...,
  Metadata: ...,
  ContentType: ...,
  Body: ...,
}, function(err, data) {
  ...
});
Check out the S3 meteor package. The readme has a very comprehensive walkthrough of how to get started.
First, add the package for S3 file upload.
For installation, add the AWS SDK smart package:
$ meteor add peerlibrary:aws-sdk
1. Create a directive upload.js and paste this code:
angular.module('techno')
  .directive("fileupload", [function () {
    return {
      scope: {
        fileupload: "="
      },
      link: function(scope, element, attributes) {
        $('.button-collapse').sideNav();
        element.bind("change", function (event) {
          scope.$apply(function () {
            scope.fileupload = event.target.files[0];
          });
        })
      }
    };
  }]);
2. Get your access key and secret key and paste them into your fileUpload.js file:
AWS.config.update({
  accessKeyId: 'AKIAJ2TLJBEUO6IJLKMN',
  secretAccessKey: 'lqGE9o4WkovRi0hCFPToG0B6w9Okg/hUfpVr6K6g'
});
AWS.config.region = 'us-east-1';
let bucket = new AWS.S3();
3. Now put this upload code in your directive fileUpload.js:
vm.upload = (Obj) => {
  vm.loadingButton = true;
  let name = Obj.name;
  let params = {
    Bucket: 'technodheeraj',
    Key: name,
    ContentType: 'application/pdf',
    Body: Obj,
    ServerSideEncryption: 'AES256'
  };

  bucket.putObject(params, (err, data) => {
    if (err) {
      console.log('---err------->', err);
    }
    else {
      vm.fileObject = {
        userId: Meteor.userId(),
        eventId: id,
        fileName: name,
        fileSize: Obj.size,
      };
      vm.call("saveFile", vm.fileObject, (error, result) => {
        if (!error) {
          console.log('File saved successfully');
        }
      })
    }
  })
};
4.Now in “saveFile” method paste this code
saveFile: function(file) {
  if (file) {
    return Files.insert(file);
  }
},
5. In your HTML paste this code:
<input type="file" name="file" fileupload="file">
<button type="button" class="btn btn-info " ng-click="vm.upload(file)"> Upload File</button>