CloudWatch RUM and API Gateway v2 / Lambda X-Ray tracing - JavaScript

I have a web application instrumented with CloudWatch RUM, which sends an x-amzn-trace-id header in all XHR requests. The AWS docs give no further instructions on how to connect client-side traces with backend services, so I assumed there is automatic instrumentation. However, the CloudWatch service map is unable to connect the full flow, and it seems that Lambda produces its own trace id. API Gateway v2 does not have any X-Ray tracing, and I am unable to find a setting or CLI command to enable it. Has anyone successfully done this with v2 of API Gateway?

I reached out to AWS Support and got a link to the following documentation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html#:~:text=%E2%9C%93-,Monitoring,-API%20Gateway%20supports
It looks like v2 of API Gateway does not yet support X-Ray tracing or execution logging at all.
FOLLOW UP:
Can I manually set the trace id in my Lambda function using the SDK and the header sent from RUM?

The Lambda service auto-generates a trace id before invoking a function when the trace header is not present in the invoke request headers. Manually overwriting this trace id within the function would likely result in a broken trace between the Lambda and Lambda::Function nodes in the service graph.
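A minimal sketch of what you can observe inside the function (a generic Node.js handler; nothing here is specific to the RUM or X-Ray SDKs): log the trace header forwarded from the browser next to the trace id the Lambda service generated, and the two will not match.

// Hypothetical handler: compare the client-side trace header with the trace
// id the Lambda service generated for this invocation.
exports.handler = async (event) => {
  // Header forwarded by API Gateway from the browser (set by CloudWatch RUM).
  const headers = event.headers || {};
  const clientTraceHeader = headers['x-amzn-trace-id'] || headers['X-Amzn-Trace-Id'];

  // Trace header the Lambda runtime exposes for this invocation.
  const lambdaTraceHeader = process.env._X_AMZN_TRACE_ID;

  console.log({ clientTraceHeader, lambdaTraceHeader }); // these will differ

  return {
    statusCode: 200,
    body: JSON.stringify({ clientTraceHeader, lambdaTraceHeader }),
  };
};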

Related

Kibana UI fails to load resources (javascript and css)

We have an Elasticsearch (7.17.5) / Kibana (7.17.5) pair running within our Kubernetes cluster (1.21.7). When accessing the Kibana UI via the cluster's API Gateway (Broadcom API Gateway 10.1.00), all of the associated resource files come in garbled (instead of the expected UTF-8 content, it looks like strange Unicode sequences).
Anyway, all of the resources are found (200 OK on their GET requests), and the Kibana and Gateway logs look fine with regard to the requests and their content, but the browser console shows an "Uncaught Syntax Error: Illegal Character U+001B at position 0" for all of the CSS and JavaScript files downloaded.
I can use kubectl port-forward directly to the Kibana service, and the page loads fine. I can also use curl to request the various resources, and they come down containing the standard UTF-8 JS/CSS as expected.
I'm at a loss. If it were just the API Gateway, then using curl to access the resources through the Gateway should hit the same issues. If it were just the Kibana UI, then kubectl port-forward should fail.
Has anyone seen something like this?
Additional data point: we have a large collection of web applications (HTML/CSS/JavaScript collections retrieved via the API Gateway) that thus far have not been adulterated into strange Unicode sequences.
Putting the answer out there for posterity: the UI requests generated in the browser had an "Accept-Encoding: gzip, deflate, br" header, and the Broadcom API Gateway doesn't cope with the br (Brotli) encoding.
Resolved by having all UI requests through the API Gateway strip "br" from that header; the CSS/JS resources once again loaded as expected.
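The actual fix was a header-rewrite rule configured inside the Broadcom API Gateway (that configuration isn't shown here). Purely as an illustration of the idea, here is a tiny Node.js reverse-proxy sketch that removes "br" from Accept-Encoding before forwarding to a hypothetical Kibana upstream:

// Illustrative only: strip "br" from the Accept-Encoding header so the
// upstream never responds with Brotli-compressed content.
const http = require('http');

const KIBANA_HOST = 'kibana.example.internal'; // hypothetical upstream host
const KIBANA_PORT = 5601;

http.createServer((clientReq, clientRes) => {
  const headers = { ...clientReq.headers };
  if (headers['accept-encoding']) {
    headers['accept-encoding'] = headers['accept-encoding']
      .split(',')
      .map((enc) => enc.trim())
      .filter((enc) => enc !== 'br')
      .join(', ');
  }

  const proxyReq = http.request(
    { host: KIBANA_HOST, port: KIBANA_PORT, path: clientReq.url, method: clientReq.method, headers },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  clientReq.pipe(proxyReq);
}).listen(8080);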

Authenticate a Google Kubernetes cluster certificate over https?

I'm a little lost trying to get access to a Kubernetes cluster hosted on Google Kubernetes Engine.
I would like to use a cluster certificate to authenticate to the provided kubernetes endpoint, so that I can run API requests to the kubernetes api, like creating deployments for example.
I'm trying to create deployments from an external API (from a NodeJS app, hosted on google app engine), where I will automatically create deployments on various different kubernetes clusters, not necessarily in the same google project.
I used to use basic auth to authenticate to the Kubernetes API, which was trivial: all I needed was the username and password for a cluster, then base64 encode the two and put them in an Authorization header. I did this using axios and had no problems at all.
Now I'd like to switch over to using client certificates and I think I lack some understanding.
I guess I need to get the provided endpoint ip of the cluster, download the cluster certificate provided by google... that looks something like this:
...possibly base64 encode it and save it as a .crt, .cert, or (??) .pem file and point axios at the file using an httpsAgent? (I tried saving the raw data as a .crt and a .cert file, setting it as the httpsAgent, and this unsurprisingly didn't work.)
Do I need some kind of client/server key pair for the certificate, or maybe an API key?
I also read something about setting a Bearer token as an Authorization header, I guess this needs to be paired with the certificate, but I'm unsure where I can find/generate this token?
If there's anyone who can help with this obscure issue I'd be very grateful,
Thanks in advance!
P.S. I've been trying to decipher the K8s docs and I think I'm close, but I'm still not sure I'm looking at the right docs: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
I've resolved the issue and I thought I'd post an answer in case someone is in my situation and needs a reference.
Alberto's answer is a great one, and I think this is the most secure mindset to have; it's definitely preferable to keep everything internal to the Google environment. But in my case, even if just for testing, I really needed to interface with the Kubernetes API over HTTPS from another domain.
It turns out that it was simpler than I anticipated: instead of using the CA certificate, what I needed to do was fetch a service account token from the Google Cloud Shell and place it in the Authorization header (very similar to basic auth). (In addition, I should verify the Kubernetes endpoint using the certificate from the service account's secret, but I did not do this.)
Once in the Google Cloud Shell and authenticated to the relevant cluster, run the following:
1. kubectl get serviceaccount
2. kubectl get serviceaccount default -o yaml
3. kubectl get secret default-token-<some_id_string> -o yaml
4. kubectl create clusterrolebinding default-serviceaccount-admin --clusterrole=cluster-admin --serviceaccount=default:default
Point 1 shows whether there is in fact a default service account present; this should be the case for a cluster created on Google Kubernetes Engine.
Point 2 prints out the default service account's details, including the name of its token secret, something like "default-token-abcd"; use that name in point 3.
Point 3 prints out the token as well as its corresponding certificate. The token is printed as a base64-encoded string and needs to be decoded before it can be used in an HTTP request (see the snippet below).
Point 4 assigns the default service account a cluster role (part of the Kubernetes RBAC mechanism) so that it has the correct permissions to create and delete deployments in the cluster.
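For reference, decoding the token from point 3 is a one-liner in Node.js (the same can be done in a shell with base64 --decode); the value shown is just a placeholder:

// Decode the base64 token copied from the secret's "token:" field.
const tokenBase64 = '<token value from the secret YAML>';
const token = Buffer.from(tokenBase64, 'base64').toString('utf8');
console.log(token); // use this decoded value as the Bearer token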
Then, in Postman or wherever you're making the Kubernetes API call, set an Authorization header as follows:
Authorization: Bearer <base64 decoded token received in point 3>
Now you can make calls directly to the Kubernetes API, for example to create or delete deployments.
As mentioned above, the certificate (also provided as a base64-encoded string in point 3) can be used to verify the Kubernetes endpoint, but I haven't tested this; it is definitely the more secure way of doing things.
In addition, I was making the HTTP call in a Node.js script and had to set the environment variable process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; to override the certificate check. That is definitely not a safe way to go and is not recommended.
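For anyone who wants a concrete starting point, here is a minimal Node.js/axios sketch of such a call (the endpoint, file names and namespace are placeholders); it verifies the endpoint with the cluster CA certificate instead of disabling TLS checks:

// Sketch: list deployments via the Kubernetes API using the decoded
// service-account token and the cluster CA certificate.
const axios = require('axios');
const https = require('https');
const fs = require('fs');

const K8S_ENDPOINT = 'https://<cluster-endpoint-ip>';               // from the GKE console
const TOKEN = fs.readFileSync('./sa-token.txt', 'utf8').trim();     // decoded token from point 3
const agent = new https.Agent({ ca: fs.readFileSync('./ca.crt') }); // decoded ca.crt from the same secret

async function listDeployments() {
  const res = await axios.get(
    `${K8S_ENDPOINT}/apis/apps/v1/namespaces/default/deployments`,
    {
      httpsAgent: agent, // verifies the endpoint; no need for NODE_TLS_REJECT_UNAUTHORIZED
      headers: { Authorization: `Bearer ${TOKEN}` },
    }
  );
  console.log(res.data.items.map((d) => d.metadata.name));
}

listDeployments().catch(console.error);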
I'm not sure manual authentication is the best approach for this case, as one of the good features of Kubernetes Engine is that it provides the authentication configuration for kubectl for you automatically, via the 'gcloud container clusters get-credentials' command.
There are also some resources (like connecting to another cluster) that are controlled via GCP IAM permissions instead of inside Kubernetes.
I would advise you to take a deep look at this page:
https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
This should also handle the certificate for you.
In general I would advise you to check the GKE documentation before the Kubernetes one, as GKE is not just plain Kubernetes; it also makes managing some things much easier.

How do I run an HTTP request from a dynamic-IP webpage through an AWS static IP, when the endpoint API requires requests from a whitelisted IP?

I am working on a project where I have a Wix site (the important thing is that through Wix I am forced to use a dynamic IP).
The API that I want to send requests to only accepts whitelisted IPs, and I can easily add an IP to the list. In fact, I've already got an AWS EC2 server running Node/fetch that has a whitelisted Elastic IP and is getting good responses.
The problem is (and it may be a noob question): how do I connect my Wix back end through AWS, via my Elastic IP, to the external API, and get info back?
back-end --> AWS --> API endpoint
API endpoint --> AWS --> back-end
I've done my due diligence, and have even asked a similar question myself without receiving answers. I need an HTTP guru. Thanks in advance.
I've not worked on the Wix-to-AWS part yet, but as for the AWS-to-API part: I have created an EC2 server, connected through SSH, installed Node, installed fetch, and used FileZilla to drop a JS file I wrote on my local machine onto the instance. I hooked up an Elastic IP, whitelisted for the API endpoint, to the EC2 instance. I ran the JS file and I am able to get auth, add data, and pull data back.
I don't know how to integrate all of the AWS services, however. I am new to AWS, and while I am amazed at what I can do sometimes, at other times I get overwhelmed by all the connections and products.
I've successfully created an SQS queue and pushed to Lambda functions. I get 200 responses from the API, but they are not the typical "not whitelisted IP" responses; they refer to 127.0.0.1:443. I researched that port and saw it is usually used for HTTPS.
I've made requests with API Gateway and I get the "not whitelisted IP" response, which is expected since the IPs change dynamically on API Gateway.
My vision is that I'll need to incorporate
-SQS
-Lambda
-API Gateway
-VPC
-and probably abandon my EC2
(The JS file is just a basic node-fetch request.)
The solution I discovered is to use Wix Corvid to send a message to an SQS queue in AWS, then write code and host it on an EC2 instance with a whitelisted Elastic IP. The Node script on the EC2 polls the SQS queue and does a fetch with the message it received forwarded on. Since you can assign a static IP to an EC2 instance using an Elastic IP in AWS, this solves the problem: the EC2 and SQS (AWS services) act as a proxy which passes all the info on to the endpoint.
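A rough sketch of the EC2-side worker described above (queue URL, region and API URL are placeholders, and the real script may differ): poll the SQS queue that the Wix back end writes to and forward each message to the whitelisted API from the instance's Elastic IP.

// Sketch: SQS-to-API forwarder running on the EC2 instance with the Elastic IP.
const AWS = require('aws-sdk');       // AWS SDK for JavaScript v2
const fetch = require('node-fetch');  // node-fetch v2 (CommonJS)

const sqs = new AWS.SQS({ region: 'us-east-1' });
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/<account-id>/<queue-name>';
const API_URL = 'https://api.example.com/endpoint';   // the IP-whitelisted API

async function pollOnce() {
  const { Messages = [] } = await sqs
    .receiveMessage({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
    .promise();

  for (const msg of Messages) {
    // The outbound request leaves via the instance's Elastic IP.
    const res = await fetch(API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: msg.Body,
    });
    console.log('API responded with', res.status);

    // Delete the message once it has been forwarded.
    await sqs.deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: msg.ReceiptHandle }).promise();
  }
}

(async () => {
  for (;;) {
    await pollOnce().catch(console.error);
  }
})();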

How to get the content of an attachment with the JS SDK for S4/HANA?

So I am currently in the process of learning the new JavaScript Cloud SDK. Of course there is also a package for attachments and document info records, but I am still facing some problems.
So mainly I just want to get an attachment which is attached to a document info record and save it to my local file system. I am working with the JS Cloud SDK, so this is a Node application.
When working with the API directly (testing via Postman) I can get the media_src of the attachment simply by adding '$value' to the request path. When I try to access this URL outside of Postman with a simple Node https.get request I get a SAML 2.0 Error (SAML2 Service not accessible). I guess that is because I cannot access those URLs via browser and therefore I should use the SDK for that.
So the final problem I am facing is that I cannot find anything about getting the file itself in the JSDoc of the SDK.
The same goes for creating an attachment: should I use the 'builder()' method for that and pass a JSON object, or how does a POST or PUT request work with that SDK? I cannot find any blogs etc. because they only cover simple 'Hello World' programs or GET requests for some data.
Thanks for reaching out to us!
Currently, we do not support OData media streams in the JS SDK's VDM yet.
If this functionality is critical for you, you can consider using the Java version of the SDK. Alternatively, you can open an issue here.
Regarding the SAML error I cannot comment, since I don't know how your Postman is configured or how your system is set up.

Accessing data on Amazon's DynamoDB via JavaScript

1) Client access: is there any way to perform CRUD operations on DynamoDB using client-side JavaScript (REST/Ajax/jQuery)?
I know Amazon has support for .NET and Java.
2) Server access: is there any way we can access DynamoDB using server-side JavaScript (Node.js) without having to install Java/.NET on the server?
Update 2012-12-05
There is now an official AWS SDK for Node.js, see the introductory post AWS SDK for Node.js - Now Available in Preview Form for details, here are the initially supported services:
The SDK supports Amazon S3, Amazon EC2, Amazon DynamoDB, and the
Amazon Simple Workflow Service, with support for additional services
on the drawing board. [emphasis mine]
Update 2012-02-27
Wantworthy implemented a Node.js module for accessing Amazon DynamoDB a week after its launch date, thus covering 2) as well; see dynode:
Dynode is designed to be a simple and easy way to work with Amazon's
DynamoDB service. Amazon's http api is complicated and non obvious how
to interact with it. This client aims to offer a simplified more
obvious way of working with DynamoDB, but without getting in your way
or limiting what you can do with DynamoDB.
Update 2012-02-11
Peng Xie implemented a Node.js module for accessing Amazon DynamoDB basically at its launch date, thus covering 2) already; see dynamoDB:
DynamoDB uses JSON for communication. [...] This module wraps up the request
and takes care of authentication. The user will be responsible for
crafting the request and consuming the result.
Unfortunately there is no official/complete JavaScript SDK for AWS as of today (see AWS Software Development Kits and boto [Python] for the available offerings).
Fortunately, decent coverage for several AWS services in JavaScript is already provided by the Node.js library aws-lib, which would be a good starting point for adding DynamoDB accordingly. An as-of-today unresolved feature request to Add support for DynamoDB has already been filed as well.
Further, AWS forum user gmlvsk3 has recently implemented a dedicated JavaScript interface for DynamoDB, but supposedly you need a Java runtime to run it, because it is based on the Mozilla Rhino JavaScript engine. I haven't reviewed the code in detail yet (at first sight it looks a bit immature in comparison to e.g. aws-lib, but it may cover your needs regardless, of course), so you should check it out yourself.
Finally, you can implement JavaScript HTTP Requests to Amazon DynamoDB yourself of course (see the API Reference for Amazon DynamoDB for details):
If you don't use one of the AWS SDKs, you can perform Amazon DynamoDB
operations over HTTP using the POST request method. The POST method
requires you to specify the operation in the header of the request and
provide the data for the operation in JSON format in the body of the
request.
I created a module called Dino to make it easier to work with the AWS SDK in web applications. You can use something like Restify to expose your data to jQuery via a REST interface.
Suppose you wanted to display pages of blog posts for a user. Using Dino and Restify, you would do the following:
// Restify route: return a page of posts for the given user, fetched via Dino.
server.get('/posts/:user_id', function (req, res, next) {
  Post.find({
    match: {
      user_id: req.params.user_id  // only posts belonging to this user
    },
    skip: req.params.skip || 0,    // paging offset
    take: req.params.take || 10    // page size
  }, function (err, posts) {
    return res.send(posts.toJSON());
  });
});
Regarding 1), there is now the AWS SDK for JavaScript in the Browser that allows you to access services including DynamoDB.
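A minimal browser sketch with that SDK (v2) might look like the following; the identity pool id, table name and key are placeholders, and the SDK is assumed to be loaded via its script tag:

// Sketch: read an item from DynamoDB directly from the browser using
// Cognito-issued credentials (both the pool id and the table are hypothetical).
AWS.config.update({
  region: 'us-east-1',
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
  }),
});

const docClient = new AWS.DynamoDB.DocumentClient();

docClient.get(
  { TableName: 'Posts', Key: { user_id: '42' } },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.Item);
  }
);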
As for 2), we've been working on a client as well since the DynamoDB launch date. Its key features are simplicity/performance and how closely it follows (retry behavior, etc.) Amazon's official Java/PHP libraries:
https://github.com/teleportd/node-dynamodb
It's successfully used in production in various places with 100+ writes/s (at teleportd). Additionally, we're working on a mocked version to enable efficient testing of the library's client code.
