I'm trying to create a user in an AWS Cognito User Pool from an AWS Lambda function.
I tried this script, taken from what seems to be the official JavaScript SDK documentation for AWS, but I can't get it working: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminCreateUser-property
I keep getting this error: TypeError: cognitoidentityserviceprovider.adminCreateUser is not a function
var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider({apiVersion: '2016-04-18'});
var params = {
  UserPoolId: 'eu-west-1_XXXXXXXX', /* required */
  Username: 'me@example.com', /* required */
  DesiredDeliveryMediums: [
    'EMAIL'
  ],
  ForceAliasCreation: false,
  MessageAction: 'SUPPRESS',
  TemporaryPassword: 'tempPassword1',
  UserAttributes: [
    {
      Name: 'email', /* required */
      Value: 'me@example.com'
    },
    {
      Name: 'name', /* required */
      Value: 'Me'
    },
    {
      Name: 'last_name', /* required */
      Value: 'lastme'
    }
    /* more items */
  ]
};
cognitoidentityserviceprovider.adminCreateUser(params, function(err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
    return callback(err);
  }
  console.log(data); // successful response
  callback(null, data);
});
For adminCreateUser, you basically require the AWS SDK, configure credentials, instantiate the client, and call the specific operation:
var aws = require('aws-sdk');
aws.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});
var CognitoIdentityServiceProvider = aws.CognitoIdentityServiceProvider;
var client = new CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });
// your code goes here
Note that there are different ways in which you can configure the AWS credentials for the call. You do need credentials, as this is an authenticated operation. Other admin operations are similar; you just need to pass the appropriate parameters as JSON in the call.
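For example, inside a Lambda function you typically don't pass keys at all: the SDK picks up temporary credentials from the function's execution role, so it is enough to grant that role the cognito-idp:AdminCreateUser permission. A minimal handler sketch under that assumption (the pool id and the incoming event shape are placeholders):
var AWS = require('aws-sdk');
var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider({apiVersion: '2016-04-18'});

exports.handler = function(event, context, callback) {
  var params = {
    UserPoolId: 'eu-west-1_XXXXXXXX',      // placeholder pool id
    Username: event.email,                 // e-mail taken from the incoming event (placeholder shape)
    DesiredDeliveryMediums: ['EMAIL'],
    TemporaryPassword: 'tempPassword1',
    UserAttributes: [
      { Name: 'email', Value: event.email }
    ]
  };
  // Credentials come from the Lambda execution role, so no aws.config.update() is needed here
  cognitoidentityserviceprovider.adminCreateUser(params, function(err, data) {
    if (err) return callback(err);
    callback(null, data);
  });
};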
According to the list of supported Lambda runtimes below, the AWS SDK for JavaScript version 2.7.25, which contains the adminCreateUser operation, should be available:
http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
Try bundling the latest aws-sdk into your uploaded package instead of relying on the one made available by default.
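If you're unsure which SDK the function is actually loading, a quick sanity check is to log the SDK's version constant from inside the handler:
// Log the aws-sdk version the Lambda runtime actually resolved;
// adminCreateUser needs 2.7.25 or newer
var AWS = require('aws-sdk');
console.log('aws-sdk version:', AWS.VERSION);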
Source: AWS Cognito adminCreateUser from Lambda
Related
My AWS Cognito setup uses CUSTOM_AUTH. I have already attached the Lambda challenge triggers in the settings. The parameter shape of AdminInitiateAuth says that if I want to start custom auth with password verification, I need to specify ChallengeName: SRP_A and SRP_A: (the SRP_A value).
Which I did, as shown below:
const command = new AdminInitiateAuthCommand({
  AuthFlow: 'CUSTOM_AUTH',
  AuthParameters: {
    USERNAME: username,
    CHALLENGE_NAME: 'SRP_A',
    SRP_A: '{SRP_A value}',
    SECRET_HASH: secretHash,
  },
  ClientId: clientId,
  UserPoolId: poolId,
});
Note that the docs say ChallengeName: SRP_A, but that throws an error; it should be CHALLENGE_NAME: 'SRP_A'.
Also, if I change the AuthFlow to ADMIN_NO_SRP_AUTH and put PASSWORD in the params, the auth succeeds, but what I need is CUSTOM_AUTH.
I am using the @aws-sdk/client-cognito-identity-provider library, if that matters.
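For comparison, the working ADMIN_NO_SRP_AUTH variant looks roughly like this (client here stands for the CognitoIdentityProviderClient instance, not shown above; only the flow and parameters differ from the CUSTOM_AUTH call):
const workingCommand = new AdminInitiateAuthCommand({
  AuthFlow: 'ADMIN_NO_SRP_AUTH',
  AuthParameters: {
    USERNAME: username,
    PASSWORD: password,
    SECRET_HASH: secretHash,
  },
  ClientId: clientId,
  UserPoolId: poolId,
});
const response = await client.send(workingCommand);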
I am trying to create a Dataflow job to index a BigQuery table into Elasticsearch with the Node package @google-cloud/dataflow (v1beta3).
The job works fine when it is created and launched from the Google Cloud console, but I get the following error when I try it from Node:
Error: 3 INVALID_ARGUMENT: (b69ddc3a5ef1c40b): Cannot set worker pool zone. Please check whether the worker_region experiments flag is valid. Causes: (b69ddc3a5ef1cd76): An internal service error occurred.
I tried to specify the experiments param in various ways, but I always end up with the same error.
Has anyone managed to get a similar Dataflow job working? Or do you have information about Dataflow experiments?
Here is the code:
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3
const dataflowClient = new JobsV1Beta3Client()
const response = await dataflowClient.createJob({
  projectId: 'myGoogleCloudProjectId',
  location: 'europe-west1',
  job: {
    launch_parameter: {
      jobName: 'indexation-job',
      containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
      parameters: {
        inputTableSpec: 'bigQuery-table-gs-adress',
        connectionUrl: 'elastic-endpoint-url',
        index: 'elastic-index',
        elasticsearchUsername: 'username',
        elasticsearchPassword: 'password'
      }
    },
    environment: {
      experiments: ['worker_region']
    }
  }
})
Thank you very much for your help.
After many attempts, I managed yesterday to find how to specify the worker region.
It looks like this:
await dataflowClient.createJob({
  projectId,
  location,
  job: {
    name: 'jobName',
    type: 'Batch',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    pipelineDescription: {
      inputTableSpec: 'bigquery-table',
      connectionUrl: 'elastic-url',
      index: 'elastic-index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      project: projectId,
      appName: 'BigQueryToElasticsearch'
    },
    environment: {
      workerPools: [
        { region: 'europe-west1' }
      ]
    }
  }
})
It's not working yet; I still need to find the correct way to provide the other parameters, but the Dataflow job now shows up in the Google Cloud console.
For anyone who is struggling with this issue, I finally found how to launch a Dataflow job from a template.
There is a function launchFlexTemplate that works the same way as the job creation in the Google Cloud console.
Here is the final function, working correctly:
const { FlexTemplatesServiceClient } = require('@google-cloud/dataflow').v1beta3
const dataflowClient = new FlexTemplatesServiceClient()
const response = await dataflowClient.launchFlexTemplate({
  projectId: 'google-project-id',
  location: 'europe-west1',
  launchParameter: {
    jobName: 'job-name',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    parameters: {
      apiKey: 'elastic-api-key', // mandatory but not used if you provide username and password
      connectionUrl: 'elasticsearch endpoint',
      index: 'elasticsearch index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      inputTableSpec: 'bigquery source table', // projectId:datasetId.table
      // parameters to upsert the Elasticsearch index
      propertyAsId: 'table column used for the elastic _id',
      usePartialUpdate: true,
      bulkInsertMethod: 'INDEX'
    }
  }
})
My goal is to make sure that all videos being uploaded to my application are in the right format and that they are formatted to fit a minimum size.
I did this before using ffmpeg, however I have recently moved my application to an Amazon server.
This gives me the option to use Amazon Elastic Transcoder.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
For this I have been looking at their SDK / API references, but I am not quite sure how to use them in my application.
My question is: has anyone successfully started transcoding jobs in Node.js and knows how to convert videos from one format to another and/or lower the bitrate? I would really appreciate it if someone could point me in the right direction with some examples of how this might work.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
The Node.js SDK doesn't support that directly, but you can do the following: if you store the videos in S3 (if not, move them there, because Elastic Transcoder works with S3), you can run a Lambda function triggered by the S3 putObject event.
http://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
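A rough sketch of such a trigger handler (the pipeline id and the output preset are placeholders; the idea is just to read the uploaded key from the S3 event and hand it to Elastic Transcoder):
var aws = require('aws-sdk');
var transcoder = new aws.ElasticTranscoder();

exports.handler = function(event, context, callback) {
  // Key of the object that triggered the event (S3 keys arrive URL-encoded)
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  var params = {
    PipelineId: 'your-pipeline-id', // placeholder: the pipeline defines the input/output buckets
    Input: { Key: key },
    Outputs: [{ Key: key + '-720p', PresetId: '1351620000001-000010' }] // generic 720p system preset
  };

  transcoder.createJob(params, function(err, data) {
    if (err) return callback(err);
    callback(null, data.Job.Id);
  });
};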
My question is: has anyone successfully started transcoding jobs in Node.js and knows how to convert videos from one format to another and/or lower the bitrate? I would really appreciate it if someone could point me in the right direction with some examples of how this might work.
We used AWS for video transcoding with Node without any problem. It was time-consuming to find out every parameter, but I hope these few lines can help you:
const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: config.AWS.accessKeyId,
  secretAccessKey: config.AWS.secretAccessKey,
  region: config.AWS.region
});

var transcoder = new aws.ElasticTranscoder();

let transcodeVideo = function (key, callback) {
  // presets: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/system-presets.html
  let params = {
    PipelineId: config.AWS.transcode.video.pipelineId, // specifies output/input buckets in S3
    Input: {
      Key: key,
    },
    OutputKeyPrefix: config.AWS.transcode.video.outputKeyPrefix,
    Outputs: config.AWS.transcode.video.presets.map(p => {
      return {Key: `${key}${p.suffix}`, PresetId: p.presetId};
    })
  };
  params.Outputs[0].ThumbnailPattern = `${key}-{count}`;
  transcoder.createJob(params, function (err, data) {
    if (!!err) {
      logger.err(err);
      return;
    }
    let jobId = data.Job.Id;
    logger.info('AWS transcoder job created (' + jobId + ')');
    transcoder.waitFor('jobComplete', {Id: jobId}, callback);
  });
};
An example configuration file:
let config = {
  AWS: {
    accessKeyId: '',
    secretAccessKey: '',
    region: '',
    videoBucket: 'blabla-media',
    transcode: {
      video: {
        pipelineId: '1450364128039-xcv57g',
        outputKeyPrefix: 'transcoded/', // put the video into the transcoded folder
        presets: [ // comes from the AWS console
          {presetId: '1351620000001-000040', suffix: '_360'},
          {presetId: '1351620000001-000020', suffix: '_480'}
        ]
      }
    }
  }
};
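With that config in place, a hypothetical call might look like this (the key is whatever was used when uploading the source video to the pipeline's input bucket):
// Hypothetical usage: transcode an uploaded video and log the outputs once the job completes
transcodeVideo('uploads/my-video.mp4', function (err, data) {
  if (err) {
    return logger.err(err);
  }
  logger.info('Transcoding finished: ' + JSON.stringify(data.Job.Outputs));
});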
If you want to generate a master playlist, you can do it like this. The ".ts" segment files are not playable on their own in HLS players; generate an ".m3u8" playlist file:
async function transcodeVideo(mp4Location, outputLocation) {
  let params = {
    PipelineId: elasticTranscoderPipelineId,
    Input: {
      Key: mp4Location,
      AspectRatio: 'auto',
      FrameRate: 'auto',
      Resolution: 'auto',
      Container: 'auto',
      Interlaced: 'auto'
    },
    OutputKeyPrefix: outputLocation + "/",
    Outputs: [
      {
        Key: "hls2000",
        PresetId: "1351620000001-200010",
        SegmentDuration: "10"
      },
      {
        Key: "hls1500",
        PresetId: "1351620000001-200020",
        SegmentDuration: "10"
      }
    ],
    Playlists: [
      {
        Format: 'HLSv3',
        Name: 'hls',
        OutputKeys: [
          "hls2000",
          "hls1500"
        ]
      },
    ],
  };
  let jobData = await createJob(params);
  return jobData.Job.Id;
}

async function createJob(params) {
  return new Promise((resolve, reject) => {
    transcoder.createJob(params, function (err, data) {
      if (err) return reject("err: " + err);
      if (data) {
        return resolve(data);
      }
    });
  });
}
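Calling it from an async context is then just (paths are placeholders; the returned id can be used later to poll the job status with readJob):
// Hypothetical usage (inside an async function): start the HLS job and keep the id
const jobId = await transcodeVideo('videos/input.mp4', 'videos/output/input');
console.log('Elastic Transcoder job started:', jobId);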
I have an application using Node and the AWS-SDK package. I am copying objects from one bucket to another using the copyObject method. I'm getting an error that says SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I've been able to successfully run the code on my local machine and it copies the files from one bucket to another. The error occurs on our AWS server, which I deployed the application to. The full error is:
{ [SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your key and signing method.]
message: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.',
code: 'SignatureDoesNotMatch',
region: null,
time: Mon Jul 11 2016 12:11:36 GMT-0400 (EDT),
requestId: <requestId>,
extendedRequestId: <extendedRequestId>,
cfId: undefined,
statusCode: 403,
retryable: false,
retryDelay: 66.48076744750142 }
Also, I'm able to perform the listObjects command. The error is only happening on copyObject.
So far, I've tried
setting correctClockSkew to true
checking the server's time (same as my local computer)
checking the key/secret (loaded from a config file; working locally)
checking the file names (no strange characters: only alphanumeric, '.', '-' and '/')
Here is the code causing the problem:
AWS.config.update({
  accessKeyId: <accessKeyId>,
  secretAccessKey: <secretAccessKey>,
  correctClockSkew: true
});

var s3 = new AWS.S3();

var params = {
  Bucket: <bucket>,
  Prefix: <prefix>
};

s3.listObjects(params, function(err, data) {
  if (data.Contents.length) {
    async.each(data.Contents, function(file, cb) {
      var file_name = file.Key.substr(file.Key.indexOf('/') + 1);
      var copy_params = {
        Bucket: <bucket2>,
        CopySource: <bucket> + '/' + file.Key,
        Key: file_name,
        ACL: 'public-read'
      };
      s3.copyObject(copy_params, function(copyErr, copyData) {
        if (copyErr) {
          console.log('Error:', copyErr);
        }
        else {
          cb();
        }
      });
    }, function(err) {
      ...
    });
  } else {
    ...
  }
});
Not sure if you've found a solution to this or not, but this was an issue raised on GitHub, and the solution seems to be to simply URL-encode your CopySource parameter with encodeURI():
https://github.com/aws/aws-sdk-js/issues/1949
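Applied to the snippet above, that would just mean wrapping the CopySource value, for example:
var copy_params = {
  Bucket: <bucket2>,
  // Encode the source path so keys containing spaces or special characters sign correctly
  CopySource: encodeURI(<bucket> + '/' + file.Key),
  Key: file_name,
  ACL: 'public-read'
};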
I'm creating an app in Node.js to send an email using MailChimp. I tried to use https://apidocs.mailchimp.com/sts/1.0/sendemail.func.php but changed it to use the 3.0 API because 1.0 seems to no longer work (big surprise). I've set up my app with:
var apiKey = '<<apiKey>>',
    toEmail = '<<emailAddress>>',
    toNames = '<<myName>>',
    message = {
      'html': 'Yo, this is the <b>html</b> portion',
      'text': 'Yo, this is the *text* portion',
      'subject': 'This is the subject',
      'from_name': 'Me!',
      'from_email': '',
      'to_email': toEmail,
      'to_name': toNames
    },
    tags = ['HelloWorld'],
    params = {
      'apikey': apiKey,
      'message': message,
      'track_opens': true,
      'track_clicks': false,
      'tags': tags
    },
    url = 'https://us13.api.mailchimp.com/3.0/SendEmail';

needle.post(url, params, function(err, headers) {
  if (err) {
    console.error(err);
  }
  console.log(headers);
});
I keep getting a 401 response (not authorized, presumably because I'm not sending the API key properly).
I have to use needle due to the constraints on the server.
There is no "SendEmail" endpoint in API v3.0. MailChimp's STS was a precursor to its Mandrill transactional service and may only still work for user accounts that have existing STS campaigns; no new STS campaigns can be created. If you have a monthly, paid MailChimp account, you should look into Mandrill. If not, I've had good luck with Mailgun.
You should use HTTP Basic authentication in MailChimp API 3.0.
needle.get('https://<dc>.api.mailchimp.com/3.0/<endpoint>',
  { username: 'anystring', password: 'your_apikey' },
  function(err, resp) {
    // your code here...
  });
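The <dc> part is the data center at the end of your API key (us13 in your URL). If you would rather derive it than hard-code it, something like this works:
// Derive the data center ("us13", "us19", ...) from the API key suffix
var dc = apiKey.split('-')[1];
var baseUrl = 'https://' + dc + '.api.mailchimp.com/3.0/';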
EDIT
@TooMuchPete is right, the SendEmail endpoint is not valid in MailChimp API v3.0. I didn't notice that and I've edited my answer.