Building Chainlink External Adapter for Binance Withdrawals - javascript

I have already made an attempt to build the external adapter but unfortunately, I keep running into the error: "You are not authorized to execute this request"
If you want to know exactly how to build one, there are the following resources (basically the same material in different formats):
https://blog.chain.link/build-and-use-external-adapters/ (blogpost)
https://youtu.be/65NhO5xxSZc (video)
In the resources above a template is used in NodeJS:
https://github.com/PatrickAlphaC/CL-EA-NodeJS-Template
And this is the repo with my own attempt (including the error). I only modified the index.js file:
https://github.com/gvandriel/CL-EA-NodeJS-Template
Then start the server with
yarn (install dependencies)
yarn start
Open up another terminal and paste the following to post a withdrawal request:
curl -X POST -H "content-type:application/json" "http://localhost:8080/" --data '{ "id": 0, "data": { "asset": "USDT", "address": "0xe66273cC443F774653E885496f76b486F956B47F", "amount": 10 } }'
Please note that since you are doing a withdrawal from Binance, you need to enable withdrawals in your account's API settings and set the restricted IP address. Moreover, I believe you can only withdraw funds to an address that you have previously withdrawn to. Also, don't forget to update the .env_sample file with your own keys.
What does work in the code?
We know that the totalstring at line 58 is working, since we tested it outside the external adapter. Moreover, we also know that the header with X-MBX-APIKEY is working. Thus we believe the error lies in the following:
Requester.request(config, customError)
  .then((response) => {
    // It's common practice to store the desired value at the top-level
    // result key. This allows different adapters to be compatible with
    // one another.
    response.data.result = Requester.validateResultNumber(response.data, [
      "msg",
    ]);
    callback(response.status, Requester.success(jobRunID, response));
  })
  .catch((error) => {
    callback(500, Requester.errored(jobRunID, error));
  });
The bug might also be somewhere else in the code so make sure to check the modified index.js file here:
https://github.com/gvandriel/CL-EA-NodeJS-Template

In your index.js file, you are setting the params and config objects:
const params = {
  asset,
  address,
  amount,
  recvWindow,
  timestamp,
  signature,
};
const config = {
  method: "post",
  url,
  headers: {
    "X-MBX-APIKEY": process.env.API_key,
  },
};
However, params is never included in config. Also, it seems you want to send these params as POST data. Consider renaming the params object to data (see the Axios docs) and adding data to your config:
const config = {
  method: "post",
  url,
  headers: {
    "X-MBX-APIKEY": process.env.API_key,
  },
  data,
};
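For reference, here is a minimal sketch of how the signed payload and the config might fit together, assuming the usual Binance HMAC-SHA256 signature over the parameter string. The endpoint path, the API_secret env variable name, and the parameter values are illustrative assumptions, not taken from the template:

const crypto = require("crypto");

// Illustrative values; in the adapter these come from the job request.
const asset = "USDT";
const address = "0xe66273cC443F774653E885496f76b486F956B47F";
const amount = 10;
const recvWindow = 5000;
const timestamp = Date.now();

// Binance expects an HMAC-SHA256 signature computed over the parameter string.
const totalstring =
  `asset=${asset}&address=${address}&amount=${amount}` +
  `&recvWindow=${recvWindow}&timestamp=${timestamp}`;
const signature = crypto
  .createHmac("sha256", process.env.API_secret) // env var name is an assumption
  .update(totalstring)
  .digest("hex");

const data = { asset, address, amount, recvWindow, timestamp, signature };

const url = "https://api.binance.com/sapi/v1/capital/withdraw/apply"; // path assumed
const config = {
  method: "post",
  url,
  headers: {
    "X-MBX-APIKEY": process.env.API_key,
  },
  data,
};

Note that Binance may expect these parameters URL-encoded in the query string rather than as a JSON body, so it is worth double-checking the withdraw endpoint's documentation once the authorization error is resolved.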


How to read parameters of Azure function HTTP request and respond with a file from blob storage?

I'm building an Azure Function app and have had some issues trying to proxy requests (for data capture purposes) and still respond with the requested files. Essentially what I'm trying to accomplish is:
Client requests file via GET parameter on an Azure Function endpoint (not its blob storage location)
Function logs some metadata to table storage (e.g. IP address, timestamp, file name, etc.)
Function locates the desired file in blob storage and forwards it to the client as if Step 2 didn't occur
I've tried an approach using Q (outlined here) with no luck, and I haven't been able to narrow the issue down (beyond standard 500 errors).
The tutorial above basically goes as follows:
const rawFile = await q.nfcall(fs.readFile, blobUrl);
const fileBuffer = Buffer.from(rawFile, 'base64');
context.res = {
  status: 202,
  body: fileBuffer,
  headers: {
    "Content-Disposition": "attachment; examplefile.mp3;"
  }
};
context.done();
I've hit a bit of a wall and I'm struggling to find a solution to what I thought would be a common problem (i.e. simply logging download metadata to a table). I'm new to Azure and so far I'm finding things a bit tedious... Is there a simple way of doing this?
Edit: Based on the response I get from context.bindings.myFile.length I think I've been able to retrieve a file from blob storage, but I haven't yet been able to send it back in the response. I've tried the following:
context.res = {
  status: 202,
  body: context.bindings.myFile,
  headers: {
    'Content-Type': 'audio/mpeg',
    'Content-Disposition': 'attachment;filename=' + fileName,
    'Content-Length': context.bindings.myFile.length
  }
};
Edit 2: I think this is pretty much solved - I overlooked the methods and route in the HTTP input part of the answer below. Looks like I'm able to dynamically retrieve blobs based on the GET request and use them as inputs, and I've been able to get them sent back to the client as well. My HTTP response now looks like this:
context.res = {
  status: 200,
  headers: {
    'Content-Length': context.bindings.myFile.length,
    'Content-Type': 'audio/mpeg'
  },
  body: context.bindings.myFile,
  isRaw: true
};
Let me paraphrase what you asked: you just want to retrieve a blob storage object upon an HTTP request.
I think it is worth looking into bindings. They simplify the integration between Function Apps and other Azure services - Storage Accounts, Service Bus, Twilio, and so on.
In your case it should be an input binding for blob storage. One way to achieve it is to customize the route in the httpTrigger part of function.json as follows: file/{fileName}. Then you use {fileName} in the input binding definition in the same function.json.
I think function.json should look like this:
{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "direction": "in",
      "methods": [ "get" ],
      "route": "file/{fileName}"
    },
    {
      "name": "myFile",
      "type": "blob",
      "path": "your-container-name/{fileName}",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "in"
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    }
  ]
}
With your index.js as follows:
module.exports = function (context, req) {
  context.log('Node.js Queue trigger function processed', context.bindings.myFile);
  const fileToReturn = context.bindings.myFile;
  // you return it here with context.res = ...
  context.done();
};
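Filling in that last step, the response can be shaped the same way as in your Edit 2 (a sketch; the content type and binding name follow the example above):

module.exports = function (context, req) {
  const fileToReturn = context.bindings.myFile;

  context.res = {
    status: 200,
    headers: {
      'Content-Type': 'audio/mpeg',
      'Content-Length': fileToReturn.length
    },
    body: fileToReturn,
    isRaw: true
  };
  context.done();
};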
You also briefly mentioned logging, which is not strictly part of your question. I recommend looking into Azure Application Insights; it might serve that purpose.

Accessing 3rd party API from wix

I am trying to communicate with a 3rd party API. I wrote the API in python. I want to update the name column in the database from the Wix web page using a user form and text box. The database updates and all of the endpoints are responsive using postman to test. I think the problem resides in my JavaScript on the Wix end.
I modeled the JavaScript from the Wix example at:
https://support.wix.com/en/article/calling-server-side-code-from-the-front-end-with-web-modules
I have a back end module called placeOrder stored in orderplaced.jsw that should post the variable 'name' to the api.
import { fetch } from 'wix-fetch';
// wix-fetch is the API we provide to make https calls in the backend

export function placeOrder(name) {
  return fetch("https://reliableeparts.pythonanywhere.com/user", {
    method: 'post',
    name: JSON.stringify({ name })
  }).then(function (response) {
    if (response.status >= 200 && response.status < 300) {
      console.log(JSON.stringify({ name }));
      return response.text();
    }
    console.log(Error(response.statusText));
    return Error(response.statusText);
  });
}
The front end module waits for a button click and stores the text box in the name variable.
{
  import { placeOrder } from 'backend/orderplaced.jsw';

  export function button1_click(event, $w) {
    placeOrder($w("#input1").value)
      .then(function () {
        console.log("Form submitted to backend.");
      });
  }
}
The code appears to be reaching the back end. I believe the problem is in my placeOrder function as I am not very familiar with JavaScript.
Your code seems legit. The problem is with the server. When I tried to send a POST request to that address I got a 500 Internal Server Error.
You may check this curl and test the service yourself:
curl -i -X POST -H "Content-Type:application/json" https://reliableeparts.pythonanywhere.com/user -d '{"name":"test123"}'
You are probably missing the correct object structure the server is expecting, or the proper headers for POSTing to the server (or both...).
Make sure you're following the API this server allows.
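For reference, a standard wix-fetch POST normally puts the JSON payload in body and declares its content type in headers. A sketch of that shape (not tested against this particular API, whose expected payload you still need to confirm):

import { fetch } from 'wix-fetch';

export function placeOrder(name) {
  return fetch("https://reliableeparts.pythonanywhere.com/user", {
    method: 'post',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name })
  }).then((response) => {
    if (response.ok) {
      return response.text();
    }
    return Promise.reject(new Error(response.statusText));
  });
}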

Getting 403 (Forbidden) when uploading to S3 with a signed URL

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
  // for dev purposes
  accessKeyId: 'MY-ACCESS-KEY-ID',
  secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});
let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60,
  ContentType: req.body.fileType,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});
This part seems to work fine. I can see the URL if I log it and it's passing it to the front-end. Then on the front end, I'm trying to upload the file with axios and the signed URL:
.then(res => {
  var options = { headers: { 'Content-Type': fileType } };
  return axios.put(res.data.url, fileFromFileInput, options);
}).then(res => {
  console.log(res);
}).catch(err => {
  console.log(err);
});
}
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your key and signing method.
  </Message>
  ...etc
Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
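In context, the front-end call from the question would then look something like this (a sketch using the same variable names as the question):

.then(res => {
  var options = {
    headers: {
      'Content-Type': fileType,
      'x-amz-acl': 'public-read' // matches the ACL included in the signature
    }
  };
  return axios.put(res.data.url, fileFromFileInput, options);
})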
Receiving a 403 Forbidden error for a pre-signed s3 put upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.
For example, generating a pre-signed url from your api:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')

async function getSignedUrl(contentType) {
  const s3 = new AWS.S3({
    accessKeyId: process.env.AWS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY
  })
  const signedUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'mybucket',
    Key: `uploads/${uuid()}`,
    ContentType: contentType
  })
  return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

this.uppy = Uppy({
  restrictions: {
    allowedFileTypes: ['image/*'],
    maxFileSize: 5242880, // 5 Megabytes
    maxNumberOfFiles: 5
  }
}).use(AwsS3, {
  getUploadParameters(file) {
    async function _getUploadParameters() {
      let signedUrl = await getSignedUrl(file.type)
      return {
        method: 'PUT',
        url: signedUrl
      }
    }
    return _getUploadParameters()
  }
})
For further reference also see these two stack overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type
If you're trying to use an ACL, make sure that your Lambda IAM role has the s3:PutObjectAcl for the given Bucket and also that your bucket allows for the s3:PutObjectAcl for the uploading Principal (user/iam/account that's uploading).
This is what fixed it for me after double checking all my headers and everything else.
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
  signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
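For example, the stripped-down test signing might look like this (a sketch based on the question's own server code, with ContentType and ACL omitted):

let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60
};
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});
// front end: axios.put(url, fileFromFileInput) with no custom headers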
Had the same issue; here is how you need to solve it:
Extract the filename portion of the signed URL.
Print it to verify that you are extracting the filename portion, with its query string parameters, correctly. This is critical.
URI-encode the filename together with its query string parameters.
Return the URL with the encoded filename (along with the rest of the path) from your Lambda or from your Node service.
Now post from axios with that URL, and it will work.
EDIT1:
Your signature will also be invalid if you pass in the wrong content type.
Please ensure that the content type you use when you create the pre-signed URL is the same as the one you use for the PUT.
Hope it helps.
As others have pointed out, the solution is to add the signatureVersion.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4'
});
There is a very detailed discussion around the same issue; take a look at https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know.
TLDR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely are the real solution, but my issue actually stemmed from S3 returning a Signed URL to a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed that it must be the upload that was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be the correct one but has since been moved.
Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of those still existed.
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
I encountered the same error twice with different root causes / solutions:
I was using generate_presigned_url.
The solution for me was switching to generate_presigned_post (doc) which returns a host of essential information such as
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Add these fields to your upload request as form fields, and don't forget to keep the file field last!
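On the browser side, a pre-signed POST upload is typically sent as multipart form data with the returned fields copied in and the file appended last. A sketch (presigned is the response object shown above and file is the File/Blob to upload; both names are illustrative):

const form = new FormData();
// Copy every field returned with the pre-signed POST into the form.
Object.entries(presigned.fields).forEach(([key, value]) => form.append(key, value));
form.append('file', file); // the file must be the last field

fetch(presigned.url, { method: 'POST', body: form })
  .then((res) => {
    if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
  });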
The second time, I had forgotten to give proper permissions to the Lambda. Interestingly, Lambda can create good-looking signed upload URLs that you won't have permission to use. The solution is to enrich the policy with S3 actions:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-own-bucket/*"
  ]
}
Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':
s3.put_object_acl(
    Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
I did all that's mentioned here and additionally allowed the necessary S3 permissions for it to work.

Can't fetch data, CORS issue, trying to hack it with JSONP, still not working

I'm trying to fetch data from http://www.recipepuppy.com/api/?q=onion&p=1. (Sample query)
It works in a browser, but I was trying to fetch it inside my React app and I'm encountering the "No 'Access-Control-Allow-Origin' header is present on the requested resource" error.
So I changed my strategy and now I'm trying to use JSONP (https://github.com/mzabriskie/axios/blob/master/COOKBOOK.md#jsonp).
But I can't make it work. I'm getting this error all the time. Can someone please help me with my issue?
Error:
Uncaught ReferenceError: jp0 is not defined
at ?q=onion&p=1&callback=__jp0:1
My Code:
import jsonp from 'jsonp'

export const FETCH_RECIPES = 'FETCH_RECIPE'
export const SHOW_INFO = 'SHOW_INFO'

export function fetchRecipes (searchTermToDOoooooooooo) {
  const request = jsonp('http://www.recipepuppy.com/api/?q=onion&p=1', null, function (err, data) {
    if (err) {
      console.error(err.message)
    } else {
      console.log(data)
    }
  })
  return (dispatch) => {
    /*
    request.then(({ data: data1 }) => {
      dispatch({ type: FETCH_RECIPES, payload: data1 })
    })
    */
  }
}
export function showInfo (info) {
  return {
    type: SHOW_INFO,
    payload: info
  }
}
You can't do it with client-only code, at least not with JSONP+Axios (Axios doesn't (natively) support JSONP; the "jsonp" library is different from Axios), because it's the server you're getting information from that's in violation of the cross-origin rules. In this case, it's Recipe Puppy that isn't set up for Access-Control-Allow-Origin headers.
One option is to use a server-side proxy, as @Pointy mentions.
Your flow would then shift to:
Client calls server-side proxy for information.
Proxy calls Recipe Puppy's API and translates or passes through information as needed.
Proxy relays that information to the client-side code for further processing.
As for your current shift to jsonp, it appears the jsonp library is not exporting jp0 properly for some reason. This could be an error with your build tool. You'll want to double-check your setup and make sure your build tool is picking up the jsonp library and actually putting it into the compiled source.
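If you go the proxy route described above, a minimal server-side sketch could look like this (Express and node-fetch are assumptions here, not part of the original setup):

const express = require('express');
const fetch = require('node-fetch');

const app = express();

app.get('/api/recipes', async (req, res) => {
  const { q = '', p = 1 } = req.query;
  const upstream = `http://www.recipepuppy.com/api/?q=${encodeURIComponent(q)}&p=${p}`;
  try {
    const response = await fetch(upstream);
    const data = await response.json();
    res.json(data); // served same-origin, so the browser raises no CORS error
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(3000);

Your React app would then call /api/recipes?q=onion&p=1 on your own server instead of Recipe Puppy directly.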

Empty response from Google Vision API with an image stored in Google Cloud Storage

I am trying to use the cloud vision API and I am able to make a successful request, but my response comes back empty, even with the test image provided on the API docs.
Request Body:
const imagePath = `gs://[bucket_name]/faulkner.jpg`;
const requestObject = {
  requests: [
    {
      image: {
        source: {
          gcsImageUri: imagePath
        }
      },
      features: [
        {
          type: 'LABEL_DETECTION',
          maxResults: 100
        }
      ]
    }
  ]
};
Response Body:
{
  "responses": [{}]
}
I have even tried using the Cloud API console and copying the request fields, and that too does not work:
const apiKey = 'myAPIKey';
const fields = `fields=responses(labelAnnotations)&`;
const visionAPI = `https://vision.googleapis.com/v1/images:annotate?${fields}key=${apiKey}`;
Any help would be greatly appreciated.
I was having this exact issue, this is what worked for me...
You do not need OAuth, just an API key.
This is what I was doing wrong...
In my HTTP call I needed to wrap my request in a new object literal as in
{data: requestBody }
To clarify,
// My old call
HTTP.call("POST", "https://vision.googleapis.com/v1/images:annotate?key=myAPIKey", requestBody, myCallback);
// To my new call
HTTP.call("POST", "https://vision.googleapis.com/v1/images:annotate?key=myAPIKey", {data: requestBody}, myCallback);
// requestBody example
{
  "requests": [
    {
      "features": [
        {
          "type": "LABEL_DETECTION"
        }
      ],
      "image": {
        "source": {
          "gcsImageUri": "gs://myBucketNameHere/myDemoImageNameHere.jpg"
        }
      }
    }
  ]
}
NOTE: A few things that need to be done.
Image is in your Google Cloud Platform Storage Bucket.
The image name is exact in the call as it is in storage.
The image must have something to detect i.e. if using FACE_DETECTION the image must have a human face.
The image in Google Cloud Platform Storage MUST be checked to Share publicly.
I am using the very same call above with my image named demo-image.jpg and everything works now that I wrapped the requestBody.
Are you doing OAuth2 with the appropriate token? For using the Vision API with GCS images, we cannot just use the API key.
Have you tried making the request using an oauth2 access key? There's a quick-and-dirty way to test this on the command line if you have the gcloud tool:
Create and download a service account json key
Set gcloud to use that service account:
gcloud auth activate-service-account --key-file <service-account-file.json>
Get an access token using gcloud auth print-access-token and perform a curl request with it:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "content-type: application/json" \
--data-binary '{"requests": [{"image": {"source": {"gcs_image_uri": "gs://your-bucket/your-object.jpg"}}, "features": [{"type": "LABEL_DETECTION", "maxResults": 100}]}]}' \
"https://vision.googleapis.com/v1/images:annotate?alt=json"
For production use, though, you'll want to explicitly use the oauth2 flow to get your access tokens, since they're short-lived and require refreshing.
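As a sketch of that flow in Node (assuming the google-auth-library package and GOOGLE_APPLICATION_CREDENTIALS pointing at the service account key downloaded above):

const { GoogleAuth } = require('google-auth-library');

async function annotateImage(gcsImageUri) {
  // Uses application-default credentials (the service account key).
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
  });
  const client = await auth.getClient();

  // The authenticated client attaches and refreshes the OAuth2 token for us.
  const res = await client.request({
    url: 'https://vision.googleapis.com/v1/images:annotate',
    method: 'POST',
    data: {
      requests: [
        {
          image: { source: { gcsImageUri } },
          features: [{ type: 'LABEL_DETECTION', maxResults: 100 }],
        },
      ],
    },
  });
  return res.data;
}

// Example: annotateImage('gs://your-bucket/your-object.jpg').then(console.log);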
