I have the following template.yaml from a SAM application
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
  image-resizing-lambda-js
  Sample SAM Template for image-resizing-lambda-js

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    MemorySize: 1536

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs10.x
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: post

  imagemagicklambdalayer:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer
        SemanticVersion: 1.0.0

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
I have read that, to use ImageMagick with Node on AWS Lambda, I need to do the following:

1. Install the custom layer: https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~image-magick-lambda-layer
2. Link any Lambda function that uses ImageMagick with that custom layer

It is the "link any Lambda function using ImageMagick with that custom layer" part I am confused about. Do I need to do something different in my app.js code so that the ImageMagick calls point to the layer somehow? I am not entirely sure what a layer is, but my understanding is that it is needed for ImageMagick to work.

Any help would be greatly appreciated. My app.js code follows.
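For reference, "linking" a function to a layer in SAM is declarative: the function's Layers property lists layer version ARNs, and Lambda unpacks their contents under /opt at runtime, so no change to app.js is needed for the binary to be found. A minimal sketch of what that could look like here, assuming the nested serverlessrepo application exposes its layer ARN through an output named LayerVersion (the actual output name is whatever that application declares):

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ...existing properties as above...
      Layers:
        # Hypothetical output name; check the SAR application's Outputs
        - !GetAtt imagemagicklambdalayer.Outputs.LayerVersion

With that in place, the gm/imageMagick calls in app.js find the convert binary on the PATH at runtime.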
const axios = require("axios");
// const url = 'http://checkip.amazonaws.com/';
// const sharp = require("sharp");
const gm = require("gm");
const imageMagick = gm.subClass({ imageMagick: true });

let response;

/**
 * Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format
 * @param {Object} event - API Gateway Lambda Proxy Input Format
 *
 * Context doc: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html
 * @param {Object} context
 *
 * Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html
 * @returns {Object} object - API Gateway Lambda Proxy Output Format
 */
exports.lambdaHandler = async (event, context) => {
  try {
    // const ret = await axios(url);
    const parsedBody = JSON.parse(event.body);
    response = {
      statusCode: 200,
      body: JSON.stringify({
        message: parsedBody.imagePath,
        // location: ret.data.trim()
      }),
    };
    const WEB_WIDTH_MAX = 420;
    const WEB_Q_MAX = 85;
    const url = "https://miro.medium.com/max/512/1*V395S0MUwmZo8dX2aezpMg.png";
    const data = imageMagick(url)
      .resize(WEB_WIDTH_MAX)
      .quality(WEB_Q_MAX)
      // .gravity('Center')
      .strip()
      // .crop(WEB_WIDTH_MAX, WEB_HEIGHT_MAX)
      .toBuffer("png", (err, buffer) => {
        if (err) {
          console.log("An error occurred while saving IM to buffer: ", err);
          return false; /* stop the remaining sequence and prevent sending an empty or invalid buffer to AWS */
        } else {
          console.log("buffer", buffer);
        }
      });
    // gmToBuffer(data).then(console.log);
  } catch (err) {
    console.log(err);
    return err;
  }
  return response;
};
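One thing worth noting about the handler above: it is async, but toBuffer delivers its result through a callback, so the function can return response before the buffer exists. A minimal sketch of wrapping the call in a Promise so the handler can await it (same imageMagick subclass as above):

// Sketch: promisify toBuffer so the async handler actually waits for the image.
const resizeToBuffer = (url, width, quality) =>
  new Promise((resolve, reject) => {
    imageMagick(url)
      .resize(width)
      .quality(quality)
      .strip()
      .toBuffer("png", (err, buffer) => (err ? reject(err) : resolve(buffer)));
  });

// Inside lambdaHandler:
// const buffer = await resizeToBuffer(url, WEB_WIDTH_MAX, WEB_Q_MAX);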
Currently I get the following error when running sam build
Plugin 'ServerlessAppPlugin' raised an exception: 'NoneType' object has no attribute 'get'
Before adding the imagemagicklambdalayer section, I was able to run sam build and have ImageMagick run with the following error under sam local start-api after hitting the endpoint:
An error occurred while saving IM to buffer: Error: Stream yields empty buffer
I have got it to build with the following as the layer in the template.yaml:

Layers:
  - "arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer"
Now, when running the function, I get an error like the following:
File "/usr/local/Cellar/aws-sam-cli/1.46.0/libexec/lib/python3.8/site-packages/botocore/regions.py", line 230, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
I am not sure where I am supposed to specify this region.
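For what it's worth, sam local can also pick up a region without touching the AWS config file; two common options (us-east-1 here is just an example):

# Either per invocation:
sam local start-api --region us-east-1

# Or via the environment:
AWS_DEFAULT_REGION=us-east-1 sam local start-api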
UPDATE:
OK, two things: one, I set the region in my AWS config file; two, I set the ARN to the deployed layer. It tries to build the layer when the function is invoked; after about 5 minutes it still hadn't finished, so we shall see if it ever does:
Invoking app.lambdaHandler (nodejs12.x)
arn:aws:lambda:us-east-1:396621406187:layer:image-magick:1 is already cached. Skipping download
Image was not found.
Building image...................................................................
UPDATE:
So it seems to have finally built, but the module for ImageMagick could not be found. I have seen other examples use this layer and call ImageMagick from a module called gm, which is what I am doing in my code. Does including this layer not give access to the gm module so I can use ImageMagick?
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'gm'\nRequire stack:\n- /var/task/app.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'gm'","Require stack:","- /var/task/app.js","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
Getting an error at apigatewayv2-integrations (Property 'LambdaProxyIntegration' does not exist on type 'typeof'). I am new to AWS cloud services, am trying to create a sample project for learning, and have been stuck on this for two days.
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as lambda from '@aws-cdk/aws-lambda';
import * as rds from '@aws-cdk/aws-rds';
import * as apigw from '@aws-cdk/aws-apigatewayv2';
import * as integrations from '@aws-cdk/aws-apigatewayv2-integrations';

export class SampleStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create the VPC needed for the Aurora Serverless DB cluster
    const vpc = new ec2.Vpc(this, 'AuroraVPC');

    // Create the Serverless Aurora DB cluster; set the engine to Postgres
    const cluster = new rds.ServerlessCluster(this, 'AuroraTestCluster', {
      engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
      parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
      defaultDatabaseName: 'TestDB',
      vpc,
      scaling: { autoPause: cdk.Duration.seconds(0) } // Optional. If not set, then instance will pause after 5 minutes
    });

    // Create the Lambda function that will map GraphQL operations into Postgres
    const postFn = new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.NODEJS_12_X,
      code: new lambda.AssetCode('lambda-functions'),
      handler: 'index.handler',
      memorySize: 1024,
      environment: {
        CLUSTER_ARN: cluster.clusterArn,
        SECRET_ARN: cluster.secret?.secretArn || '',
        DB_NAME: 'TestDB'
      },
    });

    // Grant access to the cluster from the Lambda function
    cluster.grantDataApiAccess(postFn);

    // Create the API Gateway with one method and path
    const api = new apigw.HttpApi(this, 'Endpoint', {
      defaultIntegration: new integrations.LambdaProxyIntegration({
        handler: postFn
      }),
    });

    new cdk.CfnOutput(this, "HTTP API URL", {
      value: api.url ?? "Something went wrong with the deploy",
    });
  }
}
As of 29 Nov 2021, LambdaProxyIntegration no longer exists; it was renamed to HttpLambdaIntegration.
Git commit: https://github.com/aws/aws-cdk/commit/29039e8bd13a4fdb7f84254038b3331c179273fd
Check your package versions and try changing it to:

new integrations.HttpLambdaIntegration('IntegrationId', handler, { /* props */ }),
Docs: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-apigatewayv2-integrations.HttpLambdaIntegration.html
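Applied to the stack above, that would look something like the following (the construct ID is arbitrary, and this assumes the v1 @aws-cdk/aws-apigatewayv2-integrations package at a version after the rename):

const api = new apigw.HttpApi(this, 'Endpoint', {
  defaultIntegration: new integrations.HttpLambdaIntegration('DefaultIntegration', postFn),
});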
Can the tinylicious server be launched at a port other than 3000? I've tried something like "PORT=4100 tinylicious" and I can see the terminal log saying:
@federation/shell-app: [1] info: Listening on port 4100 {"label":"winston","timestamp":"2021-03-08T19:23:37.861Z"}
but later it fails within my code, indicating something went wrong with the service call:
main.js:15815 ERROR TypeError: Cannot read property 'shapeClicked' of undefined
at Layer.onClick [as zzClickFunc] (collabmap.component.js:45)
at JS:24817
at Array.<anonymous> (JS:8190)
at window.<computed> (JS:1111)
at Object.<anonymous> (JS:51778)
at j (JS:51777)
and indeed, the Network tab reveals it's still posting via 3000:
Request URL: http://localhost:3000/documents/tinylicious
Referrer Policy: strict-origin-when-cross-origin
I know tinylicious is not the full Fluid server and it's just for testing purposes, so it might have been hardwired to 3000, but maybe someone has an idea how to launch it on a different port.
Tinylicious server port is definitely configurable.
If you override its libraries, you can run your app on any port you like.
You must have noticed this function:
getTinyliciousContainer();
Within its supporting libraries, get-tinylicious-container and tinylicious-driver, you will find a file in tinylicious-driver called insecureTinyliciousUrlResolver.ts, in which every host:port is hardcoded to localhost:3000.
Therefore, just copy the code from getTinyliciousContainer and tinylicious-driver and make your own version of getTinyliciousContainer. You will need to copy this code anyway to configure for Routerlicious in the future, as Tinylicious is very lightweight and recommended just for testing purposes.
The file you need to modify in @fluidframework/tinylicious-driver is insecureTinyliciousUrlResolver.ts:
// Imports added for completeness (package paths as of the Fluid Framework 0.x era;
// adjust to your installed version):
import { IRequest } from "@fluidframework/core-interfaces";
import { IFluidResolvedUrl, IResolvedUrl, IUrlResolver } from "@fluidframework/driver-definitions";
import { ITokenClaims } from "@fluidframework/protocol-definitions";
import { v4 as uuid } from "uuid";
import * as jsrsasign from "jsrsasign";

// Declared at module scope so both resolve() and getAbsoluteUrl() can see them
// (in the original snippet they were declared after first use, which throws).
const serviceHostName = "YOUR-PREFERRED-HOST-NAME";
const servicePort = "YOUR-PREFERRED-PORT";

export class InsecureTinyliciousUrlResolver implements IUrlResolver {
  public async resolve(request: IRequest): Promise<IResolvedUrl> {
    const url = request.url.replace(`http://${serviceHostName}:${servicePort}/`, "");
    const documentId = url.split("/")[0];
    const encodedDocId = encodeURIComponent(documentId);
    const documentRelativePath = url.slice(documentId.length);
    const documentUrl = `fluid://${serviceHostName}:${servicePort}/tinylicious/${encodedDocId}${documentRelativePath}`;
    const deltaStorageUrl = `http://${serviceHostName}:${servicePort}/deltas/tinylicious/${encodedDocId}`;
    const storageUrl = `http://${serviceHostName}:${servicePort}/repos/tinylicious`;
    const response: IFluidResolvedUrl = {
      endpoints: {
        deltaStorageUrl,
        ordererUrl: `http://${serviceHostName}:${servicePort}`,
        storageUrl,
      },
      tokens: { jwt: this.auth(documentId) },
      type: "fluid",
      url: documentUrl,
    };
    return response;
  }

  public async getAbsoluteUrl(resolvedUrl: IFluidResolvedUrl, relativeUrl: string): Promise<string> {
    const documentId = decodeURIComponent(
      resolvedUrl.url.replace(`fluid://${serviceHostName}:${servicePort}/tinylicious/`, ""),
    );
    /*
     * The detached container flow will ultimately call getAbsoluteUrl() with the resolved.url produced by
     * resolve(). The container expects getAbsoluteUrl's return value to be a URL that can then be roundtripped
     * back through resolve() again, and get the same result again. So we'll return a "URL" with the same format
     * described above.
     */
    return `${documentId}/${relativeUrl}`;
  }

  private auth(documentId: string) {
    const claims: ITokenClaims = {
      documentId,
      scopes: ["doc:read", "doc:write", "summary:write"],
      tenantId: "tinylicious",
      user: { id: uuid() },
      // @ts-ignore
      iat: Math.round(new Date().getTime() / 1000),
      exp: Math.round(new Date().getTime() / 1000) + 60 * 60, // 1 hour expiration
      ver: "1.0",
    };
    const utf8Key = { utf8: "12345" };
    return jsrsasign.jws.JWS.sign(null, JSON.stringify({ alg: "HS256", typ: "JWT" }), claims, utf8Key);
  }
}
export const createTinyliciousCreateNewRequest =
  (documentId: string): IRequest => ({
    url: documentId,
    headers: {
      createNew: true,
    },
  });
Then, you just run this React app standalone instead of concurrently, and without the built-in Tinylicious server.
Go to their GitHub, clone their Tinylicious in the FluidFramework/server repo, and run it in whatever port you want.
And there you go: now you can run Tinylicious on any host and any port you want.
The Tinylicious port is now configurable; more details at https://github.com/microsoft/FluidFramework/issues/5415
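A sketch of what the configurable path can look like, assuming a recent @fluidframework/tinylicious-driver in which the resolver accepts the port as a constructor argument (check the signature in your installed version):

# Start the server on a custom port:
PORT=4100 npx tinylicious

// Point the client driver at the same port (assumed constructor parameter):
import { InsecureTinyliciousUrlResolver } from "@fluidframework/tinylicious-driver";
const urlResolver = new InsecureTinyliciousUrlResolver(4100);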
I have been using Cypress.io to run end-to-end tests, and recently I have been using it to run unit tests as well. However, I have some issues with some small helper functions that I have built with NodeJs.
I have created a file called utils.spec.js in the path <my-project-name>/cypress/integration/unit/utils.spec.js and have written the following tests:
File: utils.spec.js
// Path to utils.js which holds the regular helper javascript functions
import { getTicketBySummary } from '../../path/to/utils';

describe('Unit Tests for utils.js methods', () => {
  /**
   * Array of objects mocking Tickets Object response
   */
  const mockedTickets = {
    data: {
      issues: [
        {
          id: 1,
          key: 'ticket-key-1',
          fields: {
            summary: 'This is ticket number 1',
          },
        },
        {
          id: 2,
          key: 'ticket-key-2',
          fields: {
            summary: 'This is ticket number 2',
          },
        },
      ],
    },
  };
  const mockedEmptyTicketsArray = [];

  it('returns an array containing a found ticket summary', () => {
    expect(
      getTicketBySummary({
        issues: mockedTickets,
        summaryTitle: 'This is ticket number 1',
      })
    ).eq(mockedTickets.data.issues[0]);
  });

  it('returns an empty array, when no ticket summary was found', () => {
    expect(
      getTicketBySummary({
        issues: mockedTickets,
        summaryTitle: 'This is ticket number 3',
      })
    ).eq(mockedEmptyTicketsArray);
  });
});
File: utils.js
const fs = require('fs');

/**
 * Method to search for an issue by its title
 * Saving the search result into an array
 *
 * @param {array} issues - an array containing all existing issues.
 * @param {string} summaryTitle - used to search an issue by title
 */
const getTicketBySummary = ({ issues, summaryTitle }) =>
  issues.data.issues.filter(issueData => {
    return issueData.fields.summary === summaryTitle ? issueData : null;
  });

/**
 * Method to read a file's content
 * Returns another string representing the file's content
 *
 * @param {str} file - a passed string representing the file's path
 */
const readFileContent = file => {
  return new Promise((resolve, reject) => {
    fs.readFile(file, 'utf8', (err, data) => {
      if (err) return reject(err);
      return resolve(data);
    });
  });
};

module.exports = { getTicketBySummary, readFileContent };
However, when I run the command npx cypress run --spec=\"cypress/integration/unit/utils.spec.js\" --env mode=terminal, I get the error: Module not found: Error: Can't resolve 'fs'.
Also, if I comment out the fs import and its function, I get another error:
1) An uncaught error was detected outside of a test:
TypeError: The following error originated from your test code, not from Cypress.
> Cannot assign to read only property 'exports' of object '#<Object>'
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure
I did some digging on the second error, and it seems the describe method is defined. How can I fix both issues? What am I doing wrong?
You can use tasks to execute node code. In your plugins.js create the task with the arguments you need, returning the calculated value:
// cypress/plugins/index.js
module.exports = (on, config) => {
  on('task', {
    // you can define and require this elsewhere
    getTicketBySummary({ issues, summaryTitle }) {
      return issues.data.issues.filter(
        issueData => issueData.fields.summary === summaryTitle
      );
    },
  });
};
In your test, execute the task via cy.task:
it('returns an array containing a found ticket summary', () => {
  cy.task('getTicketBySummary', {
    issues: mockedTickets,
    summaryTitle: 'This is ticket number 1',
  }).then(result => {
    expect(result).eq(mockedTickets.data.issues[0]);
  });
});
That being said, getTicketBySummary looks like a pure function that doesn't depend on fs. Perhaps separate out the helper functions that actually need Node, as that could avoid needing cy.task. If you want to be able to import CommonJS (require) modules via ES6 import/export, you would usually need to set up build tools (babel/rollup/etc.) to resolve that effectively.
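To make that last point concrete: since getTicketBySummary is pure, once the fs-dependent helper is moved to its own file the spec can import and call it directly, no cy.task needed. A sketch, reusing mockedTickets from the spec above (note deep.eq, since filter returns a fresh array that will never be reference-equal to an existing one):

import { getTicketBySummary } from '../../path/to/utils';

it('returns an array containing the found ticket', () => {
  const result = getTicketBySummary({
    issues: mockedTickets,
    summaryTitle: 'This is ticket number 1',
  });
  // Compare by value, not identity
  expect(result).to.deep.eq([mockedTickets.data.issues[0]]);
});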
Hopefully that helps!
Sentry by default has an integration for console.log that makes its output part of breadcrumbs (import name: Sentry.Integrations.Console).
How can we make it work for the bunyan logger as well? For example:
const koa = require('koa');
const app = new koa();

const bunyan = require('bunyan');
const log = bunyan.createLogger({
  name: 'app',
  // ...other settings go here...
});

const Sentry = require('@sentry/node');
Sentry.init({
  dsn: MY_DSN_HERE,
  integrations: integrations => {
    // should anything be handled here & how?
    return [...integrations];
  },
  release: 'xxxx-xx-xx'
});

app.on('error', (err) => {
  Sentry.captureException(err);
});

// I am trying for all of these to be part of sentry breadcrumbs,
// but only console.log('foo') is working
console.log('foo');
log.info('bar');
log.warn('baz');
log.debug('any');
log.error('many');

throw new Error('help!');
P.S. I have already tried bunyan-sentry-stream but had no success with @sentry/node; it just pushes entries instead of treating them as breadcrumbs.
Bunyan supports custom streams, and those streams are just function calls. See https://github.com/trentm/node-bunyan#streams
Below is an example custom stream that simply writes to the console. It would be straightforward to adapt this example to instead write to the Sentry module, likely calling Sentry.addBreadcrumb({}) or a similar function.
Please note, though, that the variable record in my example below is a JSON string, so you would likely want to parse it to get the log level, message, and other data out of it for submission to Sentry.
{
  level: 'debug',
  stream: (function () {
    return {
      write: function (record) {
        console.log('Hello: ' + record);
      }
    };
  })()
}
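To wire that up to Sentry specifically, the write function can parse the record and forward it as a breadcrumb. A minimal sketch, assuming Sentry.init has already run and using bunyan's numeric levels (30 = info, 40 = warn, and so on) for a rough mapping:

const Sentry = require('@sentry/node');

const sentryBreadcrumbStream = {
  write(record) {
    const data = JSON.parse(record); // bunyan hands non-raw streams a JSON string
    Sentry.addBreadcrumb({
      category: 'bunyan',
      message: data.msg,
      level: data.level >= 40 ? 'warning' : 'info', // rough level mapping
    });
  },
};

const log = bunyan.createLogger({
  name: 'app',
  streams: [{ level: 'debug', stream: sentryBreadcrumbStream }],
});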
I am attempting to update an entity in my datastore kind using sample code from here https://cloud.google.com/datastore/docs/reference/libraries. The actual code is something like this:
// Imports the Google Cloud client library
const Datastore = require('@google-cloud/datastore');

// Your Google Cloud Platform project ID
const projectId = 'YOUR_PROJECT_ID';

// Creates a client
const datastore = new Datastore({
  projectId: projectId,
});

// The kind for the new entity
const kind = 'Task';
// The name/ID for the new entity
const name = 'sampletask1';
// The Cloud Datastore key for the new entity
const taskKey = datastore.key([kind, name]);

// Prepares the new entity
const task = {
  key: taskKey,
  data: {
    description: 'Buy milk',
  },
};

// Saves the entity
datastore
  .save(task)
  .then(() => {
    console.log(`Saved ${task.key.name}: ${task.data.description}`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
I tried to create a new entity using this code, but when I ran it and checked the Datastore console, there were no entities created. Also, I am unable to update an existing entity. What could be the reason for this?
I am writing the code in Google Cloud Functions. This is the log when I run this function:
{
  insertId: "-ft02akcfpq"
  logName: "projects/test-66600/logs/cloudaudit.googleapis.com%2Factivity"
  operation: {…}
  protoPayload: {…}
  receiveTimestamp: "2018-06-15T09:36:13.760751077Z"
  resource: {…}
  severity: "NOTICE"
  timestamp: "2018-06-15T09:36:13.436Z"
}
{
  insertId: "000000-ab6c5ad2-3371-429a-bea2-87f8f7e36bcf"
  labels: {…}
  logName: "projects/test-66600/logs/cloudfunctions.googleapis.com%2Fcloud-functions"
  receiveTimestamp: "2018-06-15T09:36:17.865654673Z"
  resource: {…}
  severity: "ERROR"
  textPayload: "Warning, estimating Firebase Config based on GCLOUD_PROJECT. Intializing firebase-admin may fail"
  timestamp: "2018-06-15T09:36:09.434Z"
}
I have tried the same code and it works for me. However, I have noticed that there was a delay before the entities appeared in Datastore. In order to update and overwrite existing entities, use .upsert(task) instead of .save(task) (link to GCP documentation). You can also use .insert(task) instead of .save(task) to store new entities.
Also check that the project id is correct and that you are inspecting the entities for the right kind.
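One more thing worth checking in a Cloud Function: make sure the function does not finish before the write completes, or the runtime may freeze the process mid-request and the entity never appears. A minimal sketch using upsert in an HTTP-triggered function (function and entity names here are illustrative):

exports.saveTask = async (req, res) => {
  const task = {
    key: datastore.key(['Task', 'sampletask1']),
    data: { description: 'Buy milk' },
  };
  // upsert overwrites the entity if it exists and creates it otherwise
  await datastore.upsert(task);
  res.send(`Saved ${task.key.name}`);
};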