TestCafe requestHook response body is not giving text - javascript

I am trying to use TestCafe's request hooks to get the body of a response. I can log the body of the request and see its XML with no problem, but for the response I am getting gibberish. I suspect some kind of SSL issue, but I'm not sure. It seems strange because I am getting a 200 status code and can see the headers of the response; if it were an SSL problem, I don't think I would be able to see the headers.
Anyway, here is my code.
The custom RequestHook:
import { RequestHook } from 'testcafe';

export default class AdultHomeScreenHook extends RequestHook {
  constructor(requestFilter: any, responseOptions: any) {
    super(requestFilter, responseOptions);
  }

  onRequest(event: any) {
    console.log('========================');
    console.log('Request Body');
    const buf = event._requestContext.reqBody as Buffer;
    console.log(buf.toString());
  }

  onResponse(event: any) {
    console.log('========================');
    const buf = event.body as Buffer;
    console.log(event);
  }
}
These are the important parts of the test fixture:
import AdultHomeHook from '../requestHooks/adultHomeScreenHook';

const adultHomeHook = new AdultHomeHook(
  { url: 'https://url.com/login?language=en', method: 'post' },
  { includeHeaders: true, includeBody: true }
);

fixture.only`Adult Home Screen Tests`
  .page`localhost:8080`
  .requestHooks(adultHomeHook);
And here is the code that launches the web app and starts the tests:
const fs = require('fs');
const selfSigned = require('openssl-self-signed-certificate');
const createTestCafe = require('testcafe');

let testcafe = null;

const options = {
  key: selfSigned.key,
  cert: selfSigned.cert
};

createTestCafe('localhost', 1337, 1338, options)
  .then(tc => {
    testcafe = tc;
    const runner = testcafe.createRunner();
    return runner
      .startApp('node scripts/run start', 45000)
      .src([
        './testcafe/tests/testsAccountDetails.ts'
      ])
      .browsers('chrome --allow-insecure-localhost')
      .run({
        selectorTimeout: 30000
      });
  })
  .then(failedCount => {
    console.log('Tests failed: ' + failedCount);
    testcafe.close();
  });
I have tried a couple of different things for the SSL options object, including a self-signed cert and the web app's own cert, plus a good number of other things, all to no avail.
When I run everything, I can see the body of the request as expected:
<device><type>web</type><deviceId>547564be-fd2d-6ea8-76db-77c1f3d05e3e</deviceId></device>
but the response body is not right; it looks something like this:
U�Os�0��~n� ��锶3m��������$h�Z�=���{ﷇ��2��.۾���]I=�!/ylƴ�4p%��P�G�����~��0�jݧ�NUn��(���IQ�
=2�
I can also see the headers for both the request and the response with no problem.

It turns out this was not an SSL issue at all. The response body from the server was coming back gzipped. I had to unzip the response body buffer and could then run .toString() on the unzipped Buffer:
import zlib from 'zlib'; // at the top of the hook module

onResponse(event: any) {
  console.log('========================');
  const buf = event.body as Buffer;
  const unzippedBody = zlib.gunzipSync(buf);
  console.log(unzippedBody.toString());
}
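The server is not guaranteed to gzip every response, so a slightly more defensive version of the hook (a sketch, assuming the hook was constructed with includeHeaders: true as above so that event.headers is populated) checks the content-encoding header before decompressing:
import zlib from 'zlib';

onResponse(event: any) {
  const buf = event.body as Buffer;
  const encoding = event.headers && event.headers['content-encoding'];

  // Only gunzip when the server actually compressed the body;
  // otherwise log the buffer as plain text.
  const text = encoding === 'gzip'
    ? zlib.gunzipSync(buf).toString()
    : buf.toString();

  console.log(text);
}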

Related

Collect hundreds of json files from url and combine into one json file in JavaScript

I am trying to 1) retrieve hundreds of separate json files from https://bioguide.congress.gov/, a website that contains data on U.S. legislators, 2) process them, and 3) combine them into one big json that contains all the individual records.
Some of the files I am working with (each individual legislator has a different url containing their data in json format) can be found at these urls:
https://bioguide.congress.gov/search/bio/F000061.json
https://bioguide.congress.gov/search/bio/F000062.json
https://bioguide.congress.gov/search/bio/F000063.json
https://bioguide.congress.gov/search/bio/F000064.json
https://bioguide.congress.gov/search/bio/F000091.json
https://bioguide.congress.gov/search/bio/F000092.json
My approach is to create a for loop to loop over the different ids and combine all the records in an array of objects. Unfortunately, I am stuck trying to access the data.
So far, I have tried the following methods but I am getting a CORS error.
Using fetch:
url = "https://bioguide.congress.gov/search/bio/F000061.json"
fetch(url)
.then((res) => res.text())
.then((text) => {
console.log(text);
})
.catch((err) => console.log(err));
Using the no-cors mode in fetch and getting an empty response:
url = "https://bioguide.congress.gov/search/bio/F000061.json"
const data = await fetch(url, { mode: "no-cors" })
Using d3:
url = "https://bioguide.congress.gov/search/bio/F000061.json"
const data = d3.json(url);
With all of them I am getting a CORS-related error: blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I would appreciate any suggestions and advice to work around this issue. Thanks.
Following on from what @code says in their answer, here's a contrived (but tested) NodeJS example that fetches the range of data (60-69) from the server once a second and compiles it into one JSON file.
import express from 'express';
import fetch from 'node-fetch';
import { writeFile } from 'fs/promises';

const app = express();
const port = process.env.PORT || 4000;

let dataset;
let dataLoadComplete;

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

function getData() {
  return new Promise((res, rej) => {
    // Initialise the data array
    let arr = [];
    dataLoadComplete = false;

    // Initialise the page number
    async function loop(page = 0) {
      try {
        // Use the incremented page number in the url
        const uri = `https://bioguide.congress.gov/search/bio/F00006${page}.json`;

        // Get the data, parse it, and add it to the
        // array we set up to capture all of the data
        const response = await fetch(uri);
        const data = await response.json();
        arr = [...arr, data];
        console.log(`Loading page: ${page}`);

        // Call the function again to get the next
        // set of data if we've not reached the end of the range (pages 0-9),
        // or return the finalised data in the promise response
        if (page < 9) {
          setTimeout(loop, 1000, ++page);
        } else {
          console.log('API calls complete');
          res(arr);
        }
      } catch (err) {
        rej(err);
      }
    }

    loop();
  });
}

// Call the looping function and, once complete,
// write the JSON to a file
async function main() {
  const completed = await getData();
  dataset = completed;
  dataLoadComplete = true;
  writeFile('data.json', JSON.stringify(dataset, null, 2), 'utf8');
}

main();
Well, you're getting a CORS (Cross-Origin Resource Sharing) error because the website you're sending an AJAX request to (bioguide.congress.gov) has not explicitly enabled CORS, which means you can't send AJAX requests to it from the client side, for security reasons.
If you want to send a request to that site, you must send it from the server side (PHP, Node, Python, etc.).
More on the subject
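To make that concrete, here is a rough sketch (not from the original answer) of a tiny Node/Express proxy endpoint the browser could call instead of hitting bioguide.congress.gov directly; the route name /api/bio/:id and the port are made up for illustration:
import express from 'express';
import fetch from 'node-fetch';

const app = express();

// The browser calls this endpoint; the server makes the cross-origin
// request, which is not subject to the browser's CORS policy.
app.get('/api/bio/:id', async (req, res) => {
  try {
    const upstream = await fetch(`https://bioguide.congress.gov/search/bio/${req.params.id}.json`);
    const data = await upstream.json();
    res.json(data);
  } catch (err) {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

app.listen(4000, () => console.log('Proxy listening on port 4000'));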

How to upload a file into Firebase Storage from a callable https cloud function

I have been trying to upload a file to Firebase Storage using a callable Firebase Cloud Function.
All I am doing is fetching an image from a URL using axios and trying to upload it to Storage.
The problem I am facing is that I don't know how to save the axios response and upload it to Storage.
First, how do I save the received file in the temp directory that os.tmpdir() creates?
Then, how do I upload it into Storage?
Here I am receiving the data as an arraybuffer, converting it to a Blob, and trying to upload that.
Here is my code. I think I am missing a major part.
If there is a better way, please recommend it. I've been looking through a lot of documentation and ended up with no clear solution. Please guide me. Thanks in advance.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const axios = require('axios');
const path = require('path');
const os = require('os');
const fs = require('fs');

const bucket = admin.storage().bucket();

module.exports = functions.https.onCall((data, context) => {
  try {
    return new Promise((resolve, reject) => {
      const {
        imageFiles,
        companyPIN,
        projectId
      } = data;

      const filename = imageFiles[0].replace(/^.*[\\\/]/, '');
      const filePath = `ProjectPlans/${companyPIN}/${projectId}/images/${filename}`; // Path I am trying to upload to in Firebase Storage
      const tempFilePath = path.join(os.tmpdir(), filename);
      const metadata = {
        contentType: 'application/image'
      };

      axios
        .get(imageFiles[0], { // URL for the image
          responseType: 'arraybuffer',
          headers: {
            accept: 'application/image'
          }
        })
        .then(response => {
          console.log(response);
          const blobObj = new Blob([response.data], {
            type: 'application/image'
          });
          return blobObj;
        })
        .then(async blobObj => {
          return bucket.upload(blobObj, {
            destination: tempFilePath // Here I am wrong... How to set the path of the downloaded blob file?
          });
        })
        .then(buffer => {
          resolve({ result: 'success' });
        })
        .catch(ex => {
          console.error(ex);
        });
    });
  } catch (error) {
    // unknown: 500 Internal Server Error
    throw new functions.https.HttpsError('unknown', 'Unknown error occurred. Contact the administrator.');
  }
});
I'd take a slightly different approach and avoid using the local filesystem at all, since it's just tmpfs and will cost you memory your function is already using to hold the buffer/blob. It's simpler to skip it and write directly from that buffer to GCS using the save method on the GCS File object.
Here's an example. I've simplified out a lot of your setup, and I am using an HTTP function instead of a callable. Likewise, I'm using a public Stack Overflow image rather than your original URLs. In any case, you should be able to use this template and modify it back to what you need (e.g. change the prototype, remove the HTTP response, and replace it with the return value you need):
const functions = require('firebase-functions');
const axios = require('axios');
const admin = require('firebase-admin');

admin.initializeApp();

exports.doIt = functions.https.onRequest((request, response) => {
  const bucket = admin.storage().bucket();
  const IMAGE_URL = 'https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.svg';
  const MIME_TYPE = 'image/svg+xml';

  return axios.get(IMAGE_URL, { // URL for the image
    responseType: 'arraybuffer',
    headers: {
      accept: MIME_TYPE
    }
  }).then(response => {
    console.log(response); // only to show we got the data, for debugging
    const destinationFile = bucket.file('my-stackoverflow-logo.svg');
    return destinationFile.save(response.data).then(() => { // note: defaults to resumable upload
      return destinationFile.setMetadata({ contentType: MIME_TYPE });
    });
  }).then(() => { response.send('ok'); })
    .catch((err) => { console.log(err); });
});
As a commenter noted, in the above example the axios request itself makes an external network access, so you will need to be on the Blaze or Flame plan for that. However, that alone doesn't appear to be your current problem.
Likewise, this also defaults to a resumable upload, which the documentation does not recommend when you are uploading large numbers of small (<10MB) files, as there is some overhead.
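If that applies to your case, the save call in the example above can be told to skip the resumable-upload handshake (a small sketch based on the Cloud Storage Node client's save options):
// Drop-in replacement for the save call above: skip resumable uploads for small files.
return destinationFile.save(response.data, { resumable: false }).then(() => {
  return destinationFile.setMetadata({ contentType: MIME_TYPE });
});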
You asked how this might be used to download multiple files. Here is one approach. First, let's assume you have a function that returns a promise that downloads a single file given its filename (I've abridged this from the above, but it's basically identical apart from changing IMAGE_URL to filename). Note that it does not return a final result such as response.send(), and there's an implicit assumption that all the files share the same MIME_TYPE:
function downloadOneFile(filename) {
  const bucket = admin.storage().bucket();
  const MIME_TYPE = 'image/svg+xml';

  return axios.get(filename, ...)
    .then(response => {
      const destinationFile = ...
    });
}
Then, you just need to iteratively build a promise chain from the list of files. Let's say they are in imageUrls. Once built, return the entire chain:
let finalPromise = Promise.resolve();
imageUrls.forEach((item) => {
  finalPromise = finalPromise.then(() => downloadOneFile(item));
});

// if needed, add a final .then() section for the actual function result
return finalPromise.catch((err) => { console.log(err); });
Note that you could also build an array of the promises and pass them to Promise.all(); that would likely be faster, since you would get some parallelism, but I wouldn't recommend it unless you are very sure all of the data will fit inside your function's memory at once. Even with this approach, you need to make sure the downloads can all complete within your function's timeout.
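For reference, a minimal sketch of that Promise.all() variant, reusing the downloadOneFile helper and imageUrls list from above (same caveats about memory and the function timeout):
// Kick off every download at once and wait for all of them to settle.
const downloads = imageUrls.map((item) => downloadOneFile(item));

return Promise.all(downloads)
  .then(() => { /* add the actual function result here if needed */ })
  .catch((err) => { console.log(err); });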

Outputting file details using ffprobe in ffmpeg AWS Lambda layer

I am trying to output the details of an audio file with ffmpeg using the ffprobe option, but at the moment it is just returning null. I have added the ffmpeg layer in Lambda. Can anyone spot why this is not working?
const { spawnSync } = require("child_process");
const { readFileSync, writeFileSync, unlinkSync } = require("fs");
const util = require('util');
var fs = require('fs');
let path = require("path");
exports.handler = (event, context, callback) => {
spawnSync(
"/opt/bin/ffprobe",
[
`var/task/myaudio.flac`
],
{ stdio: "inherit" }
);
};
This is the official AWS Lambda layer I am using; it is a great project but a little lacking in documentation:
https://github.com/serverlesspub/ffmpeg-aws-lambda-layer
First of all, I would recommend using NodeJS 8.10 over NodeJS 6.10 (which will soon be EOL, although AWS is unclear on how long it will remain supported).
Also, I would not use the old-style handler with a callback.
A working example is below. Since it downloads a file from the internet (I couldn't be bothered to create a deployment package with the file included), give it a bit more time to run.
const { spawnSync } = require('child_process');
const util = require('util');
var fs = require('fs');
let path = require('path');
const https = require('https');

exports.handler = async (event) => {
  const source_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b2/Bell-ring.flac';
  const target_path = '/tmp/test.flac';

  async function downloadFile() {
    return new Promise((resolve, reject) => {
      const file = fs.createWriteStream(target_path);
      const request = https.get(source_url, function(response) {
        const stream = response.pipe(file);
        stream.on('finish', () => { resolve(); });
      });
    });
  }

  await downloadFile();

  const test = spawnSync('/opt/bin/ffprobe', [
    target_path
  ]);

  console.log(test.output.toString('utf8'));

  const response = {
    statusCode: 200,
    body: JSON.stringify([test.output.toString('utf8')]),
  };
  return response;
};
NB! In production, be sure to generate a unique temporary file, as the instances your Lambda function runs on are often shared from invocation to invocation, and you don't want multiple invocations stepping on each other's files. When done, delete the temporary file; otherwise you might run out of free space on the instance executing your functions. The /tmp folder can hold 512MB, so it can fill up fast if you work with many large flac files.
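A rough sketch of that idea, assuming the same download-then-probe flow as the example above (the naming scheme here is just one possibility):
const os = require('os');
const path = require('path');
const fs = require('fs');

// Build a per-invocation file name so warm or concurrent invocations don't collide.
const target_path = path.join(
  os.tmpdir(),
  `probe-${Date.now()}-${Math.floor(Math.random() * 1e9)}.flac`
);

// ... download to target_path and run ffprobe exactly as in the example above ...

// Free the space in /tmp before the invocation ends.
fs.unlinkSync(target_path);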
I'm not fully familiar with this layer; however, from looking at the git repo of the thumbnail-builder, it looks like child_process is a promise, so you should be waiting for its result using .then(); otherwise it returns null because it doesn't wait for the result.
So try something like:
return spawnSync(
  "/opt/bin/ffprobe",
  [
    `var/task/myaudio.flac`
  ],
  { stdio: "inherit" }
).then(result => {
  return result;
})
.catch(error => {
  // handle error
});

AWS Lambda w/ Google Vision API throwing PEM_read_bio:no start line or Errno::ENAMETOOLONG

The Goal: User uploads to S3, Lambda is triggered to take the file and send to Google Vision API for analysis, returning the results.
According to this, google-cloud requires native libraries and must be compiled against the OS that Lambda runs on. Using lambda-packager threw an error, but some internet searching turned up using an EC2 instance with Node and NPM to run the install instead. In the spirit of hacking through this, that's what I did to get it mostly working*. At least Lambda stopped giving me ELF header errors.
My current problem is that there are 2 ways to call the Vision API; neither works, and each returns a different error (mostly).
The Common Code: This code is always the same; it's at the top of the function, and I'm separating it out to keep the later code blocks focused on the issue.
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const Bucket = 'my-awesome-bucket';

const gCloudConfig = {
  projectId: 'myCoolApp',
  credentials: {
    client_email: 'your.serviceapi#project.email.com',
    private_key: 'yourServiceApiPrivateKey'
  }
}
const gCloud = require('google-cloud')(gCloudConfig);
const gVision = gCloud.vision();
Using detect(): This code always returns the error Error: error:0906D06C:PEM routines:PEM_read_bio:no start line. Theoretically it should work because the URL is public. From searching on the error, I considered that it might be an HTTPS thing, so I even tried a variation where I replaced HTTPS with HTTP, but I got the same error.
exports.handler = (event, context, callback) => {
  const params = {
    Bucket,
    Key: event.Records[0].s3.object.key
  }
  const img = S3.getSignedUrl('getObject', params);

  gVision.detect(img, ['labels', 'text'], function(err, image) {
    if (err) {
      console.log('vision error', err);
    }
    console.log('vision result:', JSON.stringify(image, true, 2));
  });
}
Using detectLabels(): This code always returns Error: ENAMETOOLONG: name too long, open ....[the image in base64].... On a suggestion, I tried passing the method the public path instead of the base64 image, which would explain why it says the name is too long (a base64 image is quite the URL). Unfortunately, that gives the PEM error from above. I've also tried skipping the base64 encoding and passing the object buffer directly from AWS, but that resulted in a PEM error too.
exports.handler = (event, context, callback) => {
  const params = {
    Bucket,
    Key: event.Records[0].s3.object.key
  }

  S3.getObject(params, function(err, data) {
    const img = data.Body.toString('base64');
    gVision.detectLabels(img, function(err, labels) {
      if (err) {
        console.log('vision error', err);
      }
      console.log('vision result:', labels);
    });
  });
}
According to Best Practices, the image should be base64 encoded.
From the API docs and examples and whatever else, it seems that I'm using these correctly. I feel like I've read all those docs a million times.
I'm not sure what to make of the NAMETOOLONG error if it's expecting base64 stuff. These images aren't more than 1MB.
*The PEM error seems to be related to credentials, and given my shaky understanding of how all these credentials work and how the modules are compiled on EC2 (which doesn't have any kind of PEM files), that might be my problem. Maybe I need to set up some credentials before running npm install, in the same vein as needing to install on a Linux box? This is starting to be outside my range of understanding, so I'm hoping someone here knows.
Ideally, using detect would be better because I can specify what I want detected, but just getting any valid response from Google would be awesome. Any clues you all can provide would be greatly appreciated.
So, a conversation with another colleague pointed me to consider abandoning the whole approach of loading the google-cloud module and instead trying the Cloud REST API via curl to see if it could work that way.
Long story short, making an HTTP request and using the REST API for Google Cloud was how I solved this issue.
Here is the working Lambda function I have now. It probably still needs tweaks, but it is working.
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const Bucket = 'yourBucket';
const fs = require('fs');
const https = require('https');

const APIKey = 'AIza...your.api.key...kIVc';
const options = {
  method: 'POST',
  host: `vision.googleapis.com`,
  path: `/v1/images:annotate?key=${APIKey}`,
  headers: {
    'Content-Type': 'application/json'
  }
}

exports.handler = (event, context, callback) => {
  const req = https.request(options, res => {
    const body = [];
    res.setEncoding('utf8');
    res.on('data', chunk => {
      body.push(chunk);
    });
    res.on('end', () => {
      console.log('results', body.join(''));
      callback(null, body.join(''));
    });
  });

  req.on('error', err => {
    console.log('problem with request:', err.message);
  });

  const params = {
    Bucket,
    Key: event.Records[0].s3.object.key
  }

  S3.getObject(params, function(err, data) {
    const payload = {
      "requests": [{
        "image": {
          "content": data.Body.toString('base64')
        },
        "features": [{
          "type": "LABEL_DETECTION",
          "maxResults": 10
        }, {
          "type": "TEXT_DETECTION",
          "maxResults": 10
        }]
      }]
    };

    req.write(JSON.stringify(payload));
    req.end();
  });
}
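One small follow-up in the same spirit as the "probably still needs tweaks" caveat: rather than hard-coding the key in the source, it could be read from the Lambda environment. A sketch, assuming an environment variable named GOOGLE_VISION_API_KEY (a name chosen here purely for illustration):
// Hypothetical: expects GOOGLE_VISION_API_KEY to be configured on the Lambda function.
const APIKey = process.env.GOOGLE_VISION_API_KEY;
if (!APIKey) {
  throw new Error('GOOGLE_VISION_API_KEY environment variable is not set');
}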

Getting 204 using koa2 and koa-router for REST api - response body not being passed

I'm coming from Express and trying to learn Koa2 for a new project I'm working on, but I'm struggling to get the most basic GET operation working for my app.
On the server side I have a route set up that hits an authorization server (Etrade), which returns an HTML link that the user will need to use to authorize the app.
I can use Postman to hit the route and see that I get the link back from Etrade through my console.log() call, but it is not coming back to Postman in the response body.
When I wired it up to the client app, I got a response status code of 204, which means my response body is empty, if I'm understanding this correctly.
I need to figure out how to get the response body passed along as well as improve my understanding of Koa2.
I've currently set up my server.js as follows:
import Koa from 'koa';
import convert from 'koa-convert';
import proxy from 'koa-proxy';
import logger from 'koa-logger';
import body from 'koa-better-body';
import api from '../config/router/router';
import historyApiFallback from 'koa-connect-history-api-fallback';
import config from '../config/base.config';

const port = config.server_port;
const host = config.server_host;
const app = new Koa();

app.use(logger());
app.use(body());
app.use(api.routes());
app.use(api.allowedMethods());

// enable koa-proxy if it has been enabled in the config
if (config.proxy && config.proxy.enabled) {
  app.use(convert(proxy(config.proxy.options)));
}

app.use(convert(historyApiFallback({
  verbose: false
})));

app.listen(port);
console.log(`Server is now running at http://${host}:${port}.`);
My router.js is set up as follows:
import Router from 'koa-router';
import etradeVerification from '../../server/api/etrade/verification';

const api = new Router({
  prefix: '/api'
});

etradeVerification(api);

export default api;
Finally, the logic for the route, minus the key and secret stuff:
import Etrade from 'node-etrade-api';

const myKey = '';
const mySecret = '';
const configuration = {
  useSandbox: true,
  key: myKey,
  secret: mySecret
};

const et = new Etrade(configuration);

export default function(router) {
  router.get('/etrade', getEtradeUrl);
}

async function getEtradeUrl(ctx, next) {
  // Isn't this how I send the response back to the client?
  // This isn't coming through as a response body when using Postman or the client app
  ctx.body = await et.getRequestToken(receiveVerificationUrl, failedToGetUrl);
}

function receiveVerificationUrl(url) {
  console.log(url); // This works and displays the response from etrade
  return url;
}

function failedToGetUrl(error) {
  console.log('Error encountered while attempting to retrieve a request token: ', error);
}
Thanks for your help and guidance!
ctx.body = await et.getRequestToken(receiveVerificationUrl, failedToGetUrl);
This call to et.getRequestToken does not return anything, so when the await fires, it will just get undefined. Normally I'd suggest using es6-promisify, but this is not the standard Node callback interface either (a single callback with err and value arguments). Very disappointing!
Perhaps create a function like the following to promisify it:
function getRequestToken(et) {
  return new Promise(function(resolve, reject) {
    et.getRequestToken(resolve, reject);
  });
}
Then you can ctx.body = await getRequestToken(et).
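Put together, the route handler from the question could then look roughly like this (a sketch; it assumes the et instance and the promisified getRequestToken helper above):
async function getEtradeUrl(ctx) {
  // getRequestToken resolves with the URL that node-etrade-api passes to its
  // success callback, so awaiting it yields the link for the response body.
  ctx.body = await getRequestToken(et);
}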
