I have a problem with a fetch request inside Firebase Functions for the Autodesk Forge token.
Here is the error that shows up in the function logs:
FetchError: request to https://developer.api.autodesk.com/authentication/v1/authenticate failed,
reason: getaddrinfo EAI_AGAIN developer.api.autodesk.com:443
at ClientRequest.<anonymous> (/srv/node_modules/node-fetch/lib/index.js:1455:11)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at TLSSocket.socketErrorListener (_http_client.js:401:9)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:66:8)
at _combinedTickCallback (internal/process/next_tick.js:139:11)
at process._tickDomainCallback (internal/process/next_tick.js:219:9)
I had already tried calling the Forge API directly from my React project and figured out that it was a CORS problem.
const snapshot = change.after;
console.log(snapshot);
const api = "https://developer.api.autodesk.com/authentication/v1/authenticate";
const search = () =>
    fetch(api, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded'
        },
        body: JSON.stringify(`client_id=${process.env.REACT_APP_FORGE_CLIENT_ID}&client_secret=${process.env.REACT_APP_FORGE_CLIENT_SECRET}&grant_type=client_credentials&scope=data:read`)
    }).then(res => res.json());
search().then((res) => {
    const data = res;
    return snapshot.ref.parent.child('token').set(data);
});
Since Firebase Functions runs on the GCP backend, CORS does not really come into play here.
You must be on the free plan: getaddrinfo EAI_AGAIN indicates a DNS lookup timeout, and it is due to the limitations of the free tier, where outbound networking is limited to Google services. Upgrade your plan to Flame or Blaze.
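Once you're on a paid plan, a token fetch along these lines should work. This is a minimal sketch, assuming node-fetch and environment variable names of my own choosing; note also that the request body must be a plain URL-encoded string, not passed through JSON.stringify as in the snippet above:
const fetch = require('node-fetch');

const getForgeToken = () =>
    fetch('https://developer.api.autodesk.com/authentication/v1/authenticate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        // URLSearchParams produces a correctly URL-encoded body
        body: new URLSearchParams({
            client_id: process.env.FORGE_CLIENT_ID,         // assumed variable name
            client_secret: process.env.FORGE_CLIENT_SECRET, // assumed variable name
            grant_type: 'client_credentials',
            scope: 'data:read'
        }).toString()
    }).then(res => res.json());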
I'm trying to send a curl request using the npm package "request-promise". The curl command I want to send is as follows:
curl \
-H "Content-Type: multipart/form-data" \
-F "original=@./${parent_path}" \
-F "modified=@./${version_path}" \
-o "${out_path}" \
${URI}
My Node code is:
BIMFile.findOne({ _id: responseDB.parent_id })
    .then(parent => {
        parent_path = parsePath(parent.path);
        version_path = parsePath(responseDB.path);
        console.log("PARENT!", parent_path, version_path);
        const URI = `${protocol}://${host_img_diff}:${port_img_diff}/diff`;
        out_path = version_path + '.tmp.jpg';
        request.post({
            url: URI,
            formData: {
                file: fs.createReadStream(parent_path),
                file: fs.createReadStream(version_path)
            }
        }).then((apiResponse) => {
            console.log('apiUPDATEResponse', apiResponse);
        });
    });
The result is:
Unhandled rejection StatusCodeError: 400 - "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>400 Bad Request</title>\n<h1>Bad Request</h1>\n<p>The browser (or proxy) sent a request that this server could not understand.</p>\n"
at new StatusCodeError (/backend/node_modules/request-promise-core/lib/errors.js:32:15)
at Request.plumbing.callback (/backend/node_modules/request-promise-core/lib/plumbing.js:104:33)
at Request.RP$callback [as _callback] (/backend/node_modules/request-promise-core/lib/plumbing.js:46:31)
at Request.self.callback (/backend/node_modules/request/request.js:185:22)
at emitTwo (events.js:126:13)
at Request.emit (events.js:214:7)
at Request.<anonymous> (/backend/node_modules/request/request.js:1161:10)
at emitOne (events.js:116:13)
at Request.emit (events.js:211:7)
at IncomingMessage.<anonymous> (/backend/node_modules/request/request.js:1083:12)
at Object.onceWrapper (events.js:313:30)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1055:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
The server returns the following message:
xxx.xx.x.xx - - [05/Apr/2019 10:00:15] "POST /diff HTTP/1.1" 400 -
As you can see, the server couldn't understand the POST request. Does anyone know how to attach the files correctly?
You can send files using formData as you're doing, but one of the issues is that you're setting both files on the same property, so only the last one is kept; a JavaScript object cannot hold two values under the same key:
console.log({ file: 1, file: 2 }); // logs { file: 2 }: the duplicate key overwrites the first value
So if file can receive multiple files, you need to use an array:
const formData = {
file: [
fs.createReadStream(parent_path),
fs.createReadStream(version_path)
]
}
If you need additional metadata, the request module provides a way too. From its README:
Pass optional meta-data with an 'options' object with style: {value: DATA, options: OPTIONS}. Use case: for some types of streams, you'll need to provide "file"-related information manually.
See the form-data README for more information about options:
https://github.com/form-data/form-data
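For example, a sketch of what that could look like here; the filenames and content type are assumptions, so adjust them to whatever your diff server expects:
const formData = {
    file: [
        {
            value: fs.createReadStream(parent_path),
            options: { filename: 'original.jpg', contentType: 'image/jpeg' } // hypothetical metadata
        },
        {
            value: fs.createReadStream(version_path),
            options: { filename: 'modified.jpg', contentType: 'image/jpeg' } // hypothetical metadata
        }
    ]
};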
Sometimes when I download multiple files from an S3 bucket using the Node SDK, the request times out for one of the downloads. I would like the request to simply retry and attempt the download again.
The JSON error response says retryable: false.
Is there a way I can configure it to be true?
Here is the error:
{ TimeoutError: Connection timed out after 480000ms
at ClientRequest.<anonymous> (node_modules/aws-sdk/lib/http/node.js:83:34)
at Object.onceWrapper (events.js:272:13)
at ClientRequest.emit (events.js:180:13)
at ClientRequest.emit (domain.js:421:20)
at TLSSocket.emitTimeout (_http_client.js:703:34)
at Object.onceWrapper (events.js:272:13)
at TLSSocket.emit (events.js:180:13)
at TLSSocket.emit (domain.js:421:20)
at TLSSocket.Socket._onTimeout (net.js:396:8)
at ontimeout (timers.js:466:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:267:5)
message: 'Connection timed out after 480000ms',
code: 'TimeoutError',
time: 2019-04-01T17:58:41.010Z,
region: 'us-west-2',
hostname: 'bucket-name',
retryable: false,
statusCode: 200,
retryDelay: 129.76727762396757 }
I am not sure how you are using the resources, as you didn't share your code, but this might help you.
// setting retries
var s3 = new AWS.S3({ apiVersion: '2006-03-01', maxRetries: 10 });
S3 sdk documentation - maxRetries
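Since the failure here is a connection timeout rather than a retryable service error, it may also help to tune the HTTP timeouts and retry backoff. A minimal sketch, assuming the AWS SDK for JavaScript v2; the values themselves are placeholders to adjust:
var AWS = require('aws-sdk');

var s3 = new AWS.S3({
    apiVersion: '2006-03-01',
    maxRetries: 10,                   // retry up to 10 times on retryable errors
    retryDelayOptions: { base: 300 }, // base delay in ms for exponential backoff
    httpOptions: {
        connectTimeout: 5000,         // fail fast when the socket cannot be opened
        timeout: 120000               // inactivity timeout per request, in ms
    }
});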
I'm trying to get the blocked countries for a YouTube video by its ID, using the JSON API provided by unblockvideos.com, in a Node.js environment. I'm using the same syntax to get YouTube video metadata with the YouTube API v3 and that works just fine, so I don't know why this one delivers an error.
var jsdom = require("jsdom");
const { JSDOM } = jsdom;
const { window } = new JSDOM();
const { document } = (new JSDOM('')).window;
global.document = document;
var $ = jQuery = require('jquery')(window); // bind jQuery to the jsdom window
$.get('https://api.unblockvideos.com/youtube_restrictions?id=vMHZdfRWF94', function(ubjson) {
//code
});
Here is my console output:
Error: Cross origin null forbidden
at dispatchError (/home/nodeworkspace/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:60:19)
at Object.validCORSHeaders (/home/nodeworkspace/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:72:5)
at receiveResponse (/home/nodeworkspace/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:845:21)
at Request.client.on.res (/home/nodeworkspace/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:677:38)
at emitOne (events.js:116:13)
at Request.emit (events.js:211:7)
at Request.onRequestResponse (/home/nodeworkspace/node_modules/jsdom/node_modules/request/request.js:1066:10)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:544:21) undefined
You're running into a CORS issue: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Likely, the API is refusing to serve content to your "browser" because its origin (null, since jsdom has no real page loaded) isn't accepted by unblockvideos.com.
You'd have to contact unblockvideos.com to have them whitelist your domain.
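Note that CORS is enforced by browsers, and jsdom emulates that enforcement. Since this code already runs in Node, a plain HTTP request without jsdom/jQuery is not subject to the check; a minimal sketch using Node's built-in https module:
const https = require('https');

const url = 'https://api.unblockvideos.com/youtube_restrictions?id=vMHZdfRWF94';
https.get(url, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        const ubjson = JSON.parse(body); // assumes the endpoint returns JSON
        console.log(ubjson);
    });
}).on('error', console.error);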
I have installed the npm azure-storage package.
On Azure I have created a Storage Account and a container.
I then try to create an Append Blob:
const azure = require('azure-storage');
const service = azure.createBlobService("[ACCOUNT]", "[KEY]");
service.createAppendBlobFromText("[CONTAINER]",
"some-blob-name",
"some-text",
{},
(err, result) => {
console.log('err ->',err);
console.log('result ->',result);
});
The result of calling this is:
err -> { Error
at Function.StorageServiceClient._normalizeError (/[REMOVED]/node_modules/azure-storage/lib/common/services/storageserviceclient.js:1191:23)
at BlobService.StorageServiceClient._processResponse (/[REMOVED]/node_modules/azure-storage/lib/common/services/storageserviceclient.js:738:50)
at Request.processResponseCallback [as _callback] (/[REMOVED]/node_modules/azure-storage/lib/common/services/storageserviceclient.js:311:37)
at Request.self.callback (/[REMOVED]/node_modules/request/request.js:186:22)
at emitTwo (events.js:125:13)
at Request.emit (events.js:213:7)
at Request.<anonymous> (/[REMOVED]/node_modules/request/request.js:1163:10)
at emitOne (events.js:115:13)
at Request.emit (events.js:210:7)
at IncomingMessage.<anonymous> (/[REMOVED]/node_modules/request/request.js:1085:12)
at Object.onceWrapper (events.js:314:30)
at emitNone (events.js:110:20)
at IncomingMessage.emit (events.js:207:7)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
name: 'StorageError',
message: 'Append blobs are not supported.\nRequestId:ed1777f4-601c-00cf-19a0-bb77ba000000\nTime:2018-03-14T14:25:50.8138962Z',
code: 'BlobTypeNotSupported',
statusCode: 400,
requestId: 'ed1777f4-601c-00cf-19a0-bb77ba000000' }
result -> null
I have not been able to find anything when searching for the error.
Am I missing something here?
Please check the redundancy kind of the storage account in which you're trying to create this blob.
Blob type support varies by the storage account's redundancy kind.
For example, a ZRS Classic storage account supports only block blobs, while a Premium LRS storage account supports only page blobs.
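If changing the account kind isn't an option, one workaround is to write a block blob instead, which the same package supports. A minimal sketch reusing the placeholders from the question; note that a block blob is overwritten wholesale rather than appended to:
const azure = require('azure-storage');
const service = azure.createBlobService("[ACCOUNT]", "[KEY]");

// createBlockBlobFromText replaces the blob's entire contents on each call
service.createBlockBlobFromText("[CONTAINER]",
    "some-blob-name",
    "some-text",
    {},
    (err, result) => {
        console.log('err ->', err);
        console.log('result ->', result);
    });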
If I have a record in /etc/postgresql/9.4/main/pg_hba.conf which specifically trusts my user
# TYPE  DATABASE  USER    ADDRESS  METHOD
local   all       myuser           trust
Since I'm on Debian, I restart PostgreSQL like this:
sudo /etc/init.d/postgresql restart
Here is my entire source file testing this out:
const pg = require('pg');
const connectionString = "postgres://myuser:mypassword@localhost/mydbname";
const client = new pg.Client(connectionString);
client.connect();
const query = client.query('SELECT * FROM USERS');
query.on('end', () => { client.end(); });
and this is the error I consistently get:
error: password authentication failed for user "myuser"
at Connection.parseE (/home/myuser/webserver/node_modules/pg/lib/connection.js:539:11)
at Connection.parseMessage (/home/myuser/webserver/node_modules/pg/lib/connection.js:366:17)
at Socket.<anonymous> (/home/myuser/webserver/node_modules/pg/lib/connection.js:105:22)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at Socket.Readable.push (_stream_readable.js:134:10)
at TCP.onread (net.js:551:20)
It's also worth noting that doing the following works:
psql -h localhost -U myuser mydb
What am I doing wrong here?
As the documentation states, local is only for UNIX socket connections, while you are establishing a TCP connection to localhost.
Use a line like this:
host all myuser 127.0.0.1/32 trust
to trust all connections from localhost using IPv4 (use the address ::1/128 for IPv6).
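Put together, and matching the column layout from the question, the entries would look like this (remember to restart or reload PostgreSQL afterwards, as you're already doing):
# TYPE  DATABASE  USER    ADDRESS       METHOD
host    all       myuser  127.0.0.1/32  trust
host    all       myuser  ::1/128       trust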