HTTP requests being dropped in Chrome extension - javascript

Summary:
I've built a Chrome extension that reaches out to an external API to fetch some data. Sometimes that data returns quickly, sometimes it takes 4 seconds or so. I'm often making about 5-10 requests in rapid succession (this is a scraping tool).
Previously, a lot of requests were dropped because the Manifest V3 service worker randomly shuts down. I thought I had resolved that. Then I realized there was a race condition because local storage doesn't have a proper queue.
Current error: even with all these fixes, requests are still being dropped. The external API returns the correct data successfully, but it seems like the extension never receives it. Hoping someone can point me in the right direction.
The relevant code is attached; I imagine it will help anyone dealing with these queue and service worker issues.
Local Storage queue
let writing: Map<string, Promise<any>> = new Map();

let updateUnsynchronized = async (ks: string[], f: Function) => {
  let m = await new Promise((resolve, reject) => {
    chrome.storage.local.get(ks, res => {
      let m = {};
      for (let k of ks) {
        m[k] = res[k];
      }
      maybeResolveLocalStorage(resolve, reject, m);
    });
  });
  // Guaranteed to have not changed in the meantime
  let updated = await new Promise((resolve, reject) => {
    let updateMap = f(m);
    chrome.storage.local.set(updateMap, () => {
      maybeResolveLocalStorage(resolve, reject, updateMap);
    });
  });
  console.log(ks, 'Updated', updated);
  return updated;
};
export async function update(ks: string[], f: Function) {
  let ret = null;
  // Global lock for now
  await navigator.locks.request('global-storage-lock', async lock => {
    ret = await updateUnsynchronized(ks, f);
  });
  return ret;
}
Here's the main function
export async function appendStoredScrapes(
  scrape: any,
  fromHTTPResponse: boolean
) {
  let updated = await update(['urlType', 'scrapes'], storage => {
    const urlType = storage.urlType;
    const scrapes = storage.scrapes;
    const {url} = scrape;
    if (fromHTTPResponse) {
      // We want to make sure that the url type at time of scrape, not time of return, is used
      scrapes[url] = {...scrapes[url], ...scrape};
    } else {
      scrapes[url] = {...scrapes[url], ...scrape, urlType};
    }
    return {scrapes};
  });
  chrome.action.setBadgeText({text: `${Object.keys(updated['scrapes']).length}`});
}
Keeping the service worker alive
let defaultKeepAliveInterval = 20000;

// To avoid GC
let channel;

// To be run in content scripts
export function contentKeepAlive(name : string) {
  channel = chrome.runtime.connect({ name });
  channel.onDisconnect.addListener(() => contentKeepAlive(name));
  channel.onMessage.addListener(msg => { });
}

let deleteTimer = (chan : any) => {
  if (chan._timer) {
    clearTimeout(chan._timer);
    delete chan._timer;
  }
}

let backgroundForceReconnect = (chan : chrome.runtime.Port) => {
  deleteTimer(chan);
  chan.disconnect();
}

// To be run in background scripts
export function backgroundKeepAlive(name : string) {
  chrome.runtime.onConnect.addListener(chan => {
    if (chan.name === name) {
      channel = chan;
      channel.onMessage.addListener((msg, chan) => { });
      channel.onDisconnect.addListener(deleteTimer);
      channel._timer = setTimeout(backgroundForceReconnect, defaultKeepAliveInterval, channel);
    }
  });
}

// "Always call sendResponse() in your chrome.runtime.onMessage listener even if you don't need
// the response. This is a bug in MV3." — https://stackoverflow.com/questions/66618136/persistent-service-worker-in-chrome-extension
export function defaultSendResponse (sendResponse : Function) {
  sendResponse({ farewell: 'goodbye' });
}
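For reference, the content-script side presumably just calls the exported helper with the same channel name that background.ts registers below; a minimal sketch (the import path is a placeholder):

// content script (hypothetical import path for the keep-alive module above)
import { contentKeepAlive } from './keepalive';

contentKeepAlive('extension-background');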
Relevant parts of background.ts
backgroundKeepAlive('extension-background');

let listen = async (request, sender, sendResponse) => {
  try {
    if (request.message === 'SEND_URL_DETAIL') {
      const {url, website, urlType} = request;
      await appendStoredScrapes({url}, false);
      let data = await fetchPageData(url, website, urlType);
      console.log(data, url, 'fetch data returned background');
      await appendStoredScrapes(data, true);
      defaultSendResponse(sendResponse);
    } else if (request.message === 'KEEPALIVE') {
      sendResponse({isAlive: true});
    } else {
      defaultSendResponse(sendResponse);
    }
  } catch (e) {
    console.error('background listener error', e);
  }
};

chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  listen(request, sender, sendResponse);
});
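One thing worth double-checking, though not necessarily the fix here: a chrome.runtime.onMessage listener that responds asynchronously has to return true synchronously, otherwise Chrome closes the message channel before sendResponse is called and the response is dropped. A minimal sketch of the same registration with that change:

// sketch only: return true so the sendResponse channel stays open
// while the async listen() work (fetch + storage writes) completes
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  listen(request, sender, sendResponse);
  return true; // signal that sendResponse will be called asynchronously
});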

Related

How to get HTTP responseBody using Selenium, CDP, JavaScript

I am an automation QA (AQA) engineer testing an app. According to the test, after the button is clicked, I need to get the responseBody returned from the server, like we have in the devtools Network tab. I have tried multiple Java and Python code examples found here and tried to translate them to JavaScript, but nothing worked for me. I've been trying something like this:
try {
  const url = 'http://someUrl';
  const driver = await new Builder().forBrowser('chrome').build();
  const cdpConnection = await driver.createCDPConnection('page');
  await cdpConnection.execute('Network.responseReceived()', response => {
    // Network.getResponseBody(), etc.
    const res = response.getResponse();
    console.log(res);
  });
  await driver.get(url);
  await driver.quit();
} catch (error) {
  console.log(error);
}
Network.responseReceived is an Event. So you have to listen to the message from the underlying CDP connection.
wsConnection.on('message', message => {
  let data = JSON.parse(message);
  if (data.method === 'Network.loadingFinished') {
    // ... load response body here
  }
});
I use the Network.loadingFinished event instead of Network.responseReceived, since by then the response is completely loaded.
The problem is that the CDPConnection class is not properly implemented yet: CDPConnection.js#L18. It doesn't return any promise. The communication is message-based, and although it adds a message ID so the response message can later be retrieved from the WebSocket, it doesn't handle that response message here: webdriver.js#L1239.
Until that is implemented, you can use a custom CDPConnection class. Here is a TypeScript implementation.
let ID = 0;

type TAwaiter = {
  id: number
  resolve: (value: any) => void
  reject: (reason?: any) => void
};

export class BiDiCDPConnection {
  private requests: Map<number, TAwaiter> = new Map();

  constructor(private wsConnection, private sessionId: string) {
    wsConnection.on('message', this.onMessage.bind(this));
    wsConnection.on('close', this.onClose.bind(this));
    wsConnection.on('error', this.rejectAll.bind(this));
  }

  execute <T = any> (method, params, onMessageSent: (err) => any = null): Promise<T> {
    let message = {
      sessionId: this.sessionId,
      method,
      params,
      id: ++ID,
    };
    let listener = {
      id: message.id,
      resolve: null,
      reject: null,
    };
    let promise = new Promise<T>((resolve, reject) => {
      listener.resolve = resolve;
      listener.reject = reject;
    });
    this.requests.set(listener.id, listener);
    this.wsConnection.send(JSON.stringify(message), onMessageSent);
    return promise;
  }

  private onMessage (message: Buffer) {
    let params = JSON.parse(message.toString());
    let { id, result } = params;
    if (id != null && this.requests.has(id)) {
      this.requests.get(id)?.resolve?.(result);
      this.requests.delete(id);
    }
  }

  private onClose () {
    this.rejectAll(new Error(`CDPConnection: The underlying connection was closed`));
  }

  private rejectAll(error: Error) {
    let awaiters = this.requests.values();
    this.requests = new Map();
    for (let awaiter of awaiters) {
      awaiter.reject(error);
    }
  }
}
Then you initialize the class and use it for your calls after creating the inner CDP connection, since createCDPConnection establishes the WebSocket connection.
const cdpConnection = await driver.createCDPConnection('page');
const wsConnection = driver._wsConnection;
const bidiCdpConnection = new BiDiCDPConnection(wsConnection, driver.sessionId);

wsConnection.on('message', async message => {
  let data = JSON.parse(message);
  if (data.method === 'Network.loadingFinished') {
    let response = await bidiCdpConnection.execute('Network.getResponseBody', {
      requestId: data.params.requestId,
    });
    console.log(response);
  }
});
I use this to monitor (selenium-query/BrowserNetworkMonitor.ts) and intercept (selenium-query/BrowserNetworkInterceptor.ts) requests. You can take and modify those classes for your initial needs.
I am close to getting this to work... but not quite there. See my code here: https://github.com/SeleniumHQ/seleniumhq.github.io/issues/1155
If anyone can figure out the last step I'm missing, that'd be so amazing.
i.e.:
let test = await cdpConnection.execute('Fetch.getResponseBody', {
  requestId: obj.params.requestId,
});
console.log(test); // ------> THIS RETURNS UNDEFINED !!!!
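Given the earlier point that the stock CDPConnection.execute doesn't return a promise, an undefined result from awaiting it is expected. A sketch, assuming the custom BiDiCDPConnection above is wrapped around driver._wsConnection, would route the same call through that wrapper instead:

// sketch: reuse the BiDiCDPConnection wrapper so execute() actually resolves
// note: Fetch.getResponseBody only succeeds while the request is paused (Fetch.requestPaused)
const bidi = new BiDiCDPConnection(driver._wsConnection, driver.sessionId);
let test = await bidi.execute('Fetch.getResponseBody', {
  requestId: obj.params.requestId,
});
console.log(test); // expected shape per the CDP docs: { body, base64Encoded }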

Puppeteer - Wait for network requests to complete after page.select()

Is there a way to wait for network requests to resolve after performing an action on a page, before performing a new action in Puppeteer?
I need to interact with a select menu on the page using page.select() which causes dynamic images and fonts to load into the page. I need to wait for these requests to complete before executing the next action.
--
Caveats:
I cannot reload the page or go to a new url.
I do not know what the request types might be, or how many
--
// launch puppeteer
const browser = await puppeteer.launch({});

// load new page
const page = await browser.newPage();

// go to URL and wait for initial requests to resolve
await page.goto(pageUrl, {
  waitUntil: "networkidle0"
});

// START LOOP
for (let value of lotsOfValues) {
  // interact with select menu
  await page.select('select', value);

  // wait for network requests to complete (images, fonts)
  ??

  // screenshot page with new content
  await pageElement.screenshot({
    type: "jpeg",
    quality: 100
  });
} // END LOOP

// close
await browser.close();
The answer to this lies in using page.setRequestInterception(true) and monitoring subsequent requests, waiting for them to resolve before moving on to the next task (thanks @Guarev for pointing me in the right direction).
This module (https://github.com/jtassin/pending-xhr-puppeteer) does exactly that, but for XHR requests. I modified it to look for 'image' and 'font' types.
Final code looks something like this:
// launch puppeteer
const browser = await puppeteer.launch({});

// load new page
const page = await browser.newPage();

// go to URL and wait for initial requests to resolve
await page.goto(pageUrl, {
  waitUntil: "networkidle0"
});

// enable this here because we don't want to watch the initial page asset requests (which page.goto above triggers)
await page.setRequestInterception(true);

// custom version of pending-xhr-puppeteer module
let monitorRequests = new PuppeteerNetworkMonitor(page);

// START LOOP
for (let value of lotsOfValues) {
  // interact with select menu
  await page.select('select', value);

  // wait for network requests to complete (images, fonts)
  await monitorRequests.waitForAllRequests();

  // screenshot page with new content
  await pageElement.screenshot({
    type: "jpeg",
    quality: 100
  });
} // END LOOP

// close
await browser.close();
NPM Module
class PuppeteerNetworkMonitor {
  constructor(page) {
    this.promisees = [];
    this.page = page;
    this.resourceType = ['image'];
    this.pendingRequests = new Set();
    this.finishedRequestsWithSuccess = new Set();
    this.finishedRequestsWithErrors = new Set();
    page.on('request', (request) => {
      request.continue();
      if (this.resourceType.includes(request.resourceType())) {
        this.pendingRequests.add(request);
        this.promisees.push(
          new Promise(resolve => {
            request.resolver = resolve;
          }),
        );
      }
    });
    page.on('requestfailed', (request) => {
      if (this.resourceType.includes(request.resourceType())) {
        this.pendingRequests.delete(request);
        this.finishedRequestsWithErrors.add(request);
        if (request.resolver) {
          request.resolver();
          delete request.resolver;
        }
      }
    });
    page.on('requestfinished', (request) => {
      if (this.resourceType.includes(request.resourceType())) {
        this.pendingRequests.delete(request);
        this.finishedRequestsWithSuccess.add(request);
        if (request.resolver) {
          request.resolver();
          delete request.resolver;
        }
      }
    });
  }

  async waitForAllRequests() {
    if (this.pendingRequestCount() === 0) {
      return;
    }
    await Promise.all(this.promisees);
  }

  pendingRequestCount() {
    return this.pendingRequests.size;
  }
}

module.exports = PuppeteerNetworkMonitor;
For anyone still interested in the solution @danlong posted above but who wants it in a more modern form, here is the TypeScript version:
import { HTTPRequest, Page, ResourceType } from "puppeteer";

export class PuppeteerNetworkMonitor {
  page: Page;
  resourceType: ResourceType[] = [];
  promises: Promise<unknown>[] = [];
  pendingRequests = new Set();
  finishedRequestsWithSuccess = new Set();
  finishedRequestsWithErrors = new Set();

  constructor(page: Page, resourceType: ResourceType[]) {
    this.page = page;
    this.resourceType = resourceType;
    this.finishedRequestsWithSuccess = new Set();
    this.finishedRequestsWithErrors = new Set();
    page.on(
      "request",
      async (
        request: HTTPRequest & { resolver?: (value?: unknown) => void },
      ) => {
        await request.continue();
        if (this.resourceType.includes(request.resourceType())) {
          this.pendingRequests.add(request);
          this.promises.push(
            new Promise((resolve) => {
              request.resolver = resolve;
            }),
          );
        }
      },
    );
    page.on(
      "requestfailed",
      (request: HTTPRequest & { resolver?: (value?: unknown) => void }) => {
        if (this.resourceType.includes(request.resourceType())) {
          this.pendingRequests.delete(request);
          this.finishedRequestsWithErrors.add(request);
          if (request.resolver) {
            request.resolver();
            delete request.resolver;
          }
        }
      },
    );
    page.on(
      "requestfinished",
      (request: HTTPRequest & { resolver?: (value?: unknown) => void }) => {
        if (this.resourceType.includes(request.resourceType())) {
          this.pendingRequests.delete(request);
          this.finishedRequestsWithSuccess.add(request);
          if (request.resolver) {
            request.resolver();
            delete request.resolver;
          }
        }
      },
    );
  }

  async waitForAllRequests() {
    if (this.pendingRequestCount() === 0) {
      return;
    }
    await Promise.all(this.promises);
  }

  pendingRequestCount() {
    return this.pendingRequests.size;
  }
}
I did change one thing: instead of hard-coding which resource types to look for in the network requests, I pass them as a constructor argument. That should make this class more generic.
I've tested this code with my API that uses Puppeteer, and it works great.
Usage of this class is similar to what @danlong posted above:
// other necessary puppeteer code here...
const monitorNetworkRequests = new PuppeteerNetworkMonitor(page, ["image"]);
await monitorNetworkRequests.waitForAllRequests();
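Note that, like the original class, this one calls request.continue(), so request interception must already be enabled on the page; a minimal usage sketch under that assumption (value comes from the loop in the question):

// enable interception first, otherwise request.continue() throws
await page.setRequestInterception(true);
const monitor = new PuppeteerNetworkMonitor(page, ["image", "font"]);

await page.select("select", value);  // triggers the dynamic image/font loads
await monitor.waitForAllRequests();  // resolves once the tracked requests settle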

Using async/await instead of callback when using Worklet

I'm writing a wrapper class hiding the internals of working with AudioWorklet. Working with a worklet involves communication between a node and a processor through message ports.
As soon as the code running in the node reaches port.postMessage(), script execution in the node ends. When node.port.onmessage fires (triggered by processor.port.postMessage), the code in the node can resume execution.
I can get it to work by using a callback function. See the code below.
class HelloWorklet {
  constructor(audioContext) {
    audioContext.audioWorklet.addModule('helloprocessor.js').then(() => {
      this.awNode = new AudioWorkletNode(audioContext, 'hello-processor');
      this.awNode.port.onmessage = (event) => {
        switch (event.data.action) {
          case 'response message':
            this.respondMessage(event.data);
            break;
        }
      }
    });
  }

  requestMessage = (callback) => {
    this.awNode.port.postMessage({action: 'request message'});
    this.callback = callback;
  }

  respondMessage = (data) => {
    // some time consuming processing
    let msg = data.msg + '!';
    this.callback(msg);
  }
}
let audioCtx = new AudioContext();
let helloNode = new HelloWorklet(audioCtx);
const showMessage = (msg) => {
  // additional processing
  console.log(msg);
}

const requestMessage = () => {
  helloNode.requestMessage(showMessage);
}
and the processor
class HelloProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = (event) => {
      switch (event.data.action) {
        case 'request message':
          this.port.postMessage({action: 'response message', msg: 'Hello world'});
          break;
      }
    }
  }

  process(inputs, outputs, parameters) {
    // required method, but irrelevant for this question
    return true;
  }
}

registerProcessor('hello-processor', HelloProcessor);
Calling requestMessage() causes Hello world! to be printed to the console. Since using callbacks sometimes decreases the readability of the code, I'd like to rewrite the code using await, like so:
const requestMessage = async () => {
  let msg = await helloNode.requestMessage;
  // additional processing
  console.log(msg);
}
While trying to rewrite HelloWorklet.requestMessage, I cannot figure out how to glue the Promise's resolve to this.awNode.port.onmessage. To me it appears as if the interruption of the code between this.awNode.port.postMessage and this.awNode.port.onmessage goes beyond ordinary asynchronicity.
As using the AudioWorklet already breaks any backwards compatibility, the latest ECMAScript features can be used.
Edit:
Thanks to part 3 of Khaled Osman's answer, I was able to rewrite the class as follows:
class HelloWorklet {
  constructor(audioContext) {
    audioContext.audioWorklet.addModule('helloprocessor.js').then(() => {
      this.awNode = new AudioWorkletNode(audioContext, 'hello-processor');
      this.awNode.port.onmessage = (event) => {
        switch (event.data.action) {
          case 'response message':
            this.respondMessage(event.data);
            break;
        }
      }
    });
  }

  requestMessage = () => {
    return new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
      this.awNode.port.postMessage({action: 'request message'});
    })
  }

  respondMessage = (data) => {
    // some time consuming processing
    let msg = data.msg + '!';
    this.resolve(msg);
  }
}
let audioCtx = new AudioContext();
let helloNode = new HelloWorklet(audioCtx);
async function requestMessage() {
  let msg = await helloNode.requestMessage();
  // additional processing
  console.log(msg);
}
I think there are three things that might help you:
Promises don't return multiple values, so something like a message request cannot be fired again once it's fulfilled/resolved; a plain Promise therefore isn't suitable for requesting/posting multiple messages. For that you can use Observables or RxJS (a small sketch of this appears after the third point below).
You can use util.promisify to convert Node.js callback-style functions to promises, like so:
const { readFile } = require('fs')
const { promisify } = require('util')
const readFilePromise = promisify(readFile)
readFilePromise('test.txt').then(console.log)
or manually create wrapper functions around them that return promises which resolve/reject inside the callbacks.
To resolve a promise outside of the promise's executor block, you can save the resolve/reject functions as variables and call them later, like so:
class MyClass {
  requestSomething() {
    return new Promise((resolve, reject) => {
      this.resolve = resolve
      this.reject = reject
    })
  }

  onSomethingReturned(something) {
    this.resolve(something)
  }
}
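For point 1, here is a minimal sketch (assuming RxJS is available and awNode is the AudioWorkletNode from the question) of exposing repeated port messages as an Observable instead of a one-shot Promise:

import { Subject } from 'rxjs';

const messages = new Subject();
// every message from the processor is pushed to subscribers,
// unlike a Promise, which can only settle once
awNode.port.onmessage = (event) => messages.next(event.data);
messages.subscribe((data) => console.log('from processor:', data));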

ES7 promises and awaiting async function which loops forever in background

This might be a special case:
I want to read from a queue (AWS SQS), which is done by making a call that waits a few seconds for messages and then resolves, and calling it again and again in a loop for as long as you want to process that queue (it checks a flag every time).
This means I have a consume function that keeps running as long as the app is active and the queue is still flagged.
I also have a subscribe function used for subscribing to a queue, which is supposed to resolve as soon as it knows the consumer is able to connect to the queue, even though this function calls the consumer, which keeps running and does not return until the queue is unflagged.
This gives me some challenges. Do you have any tips on how to solve it with modern JS and async/await promises? Keep in mind this code runs in a React web app, not in Node.js.
I basically just want the await subscribe(QUEUE) call (which comes from the GUI) to resolve as soon as it's sure that it can read from that queue. But if it cannot, I want it to throw an error that propagates to the origin of the subscribe call, which means I have to await consume(QUEUE), right?
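For illustration only (the solution posted below ends up with a similar shape): subscribe can return a promise that settles after the first receive attempt, while the polling loop keeps running without being awaited. receiveOnce, handleMessages and consumeForever are hypothetical helpers here:

// sketch: resolve or throw on the first poll, keep consuming in the background
async function subscribe(queueUrl) {
  const firstResponse = await receiveOnce(queueUrl) // throws if the queue is unreachable
  handleMessages(firstResponse.Messages, queueUrl)
  consumeForever(queueUrl) // deliberately not awaited, runs in the background
}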
Update:
Some untested draft code has been added (I don't want to spend more time making it work if I'm not taking the right approach). I thought about passing success and failure callbacks to the consuming function, so it can report success as soon as it gets the first valid (but possibly empty) response from the queue, which makes it store the queue URL as a subscription, and unsubscribe if the queue poll fails.
Since I'm setting up several queue consumers, they should not block anything but just work in the background.
let subscribedQueueURLs = []

async function consumeQueue(
  url: QueueURL,
  success: () => mixed,
  failure: (error: Error) => mixed
) {
  const sqs = new AWS.SQS()
  const params = {
    QueueUrl: url,
    WaitTimeSeconds: 20,
  }
  try {
    do {
      // eslint-disable-next-line no-await-in-loop
      const receivedData = await sqs.receiveMessage(params).promise()
      if (!subscribedQueueURLs.includes(url)) {
        success()
      }
      // eslint-disable-next-line no-restricted-syntax
      for (const message of receivedData.Messages) {
        console.log({ message })
        // eslint-disable-next-line no-await-in-loop
        eventHandler && (await eventHandler.message(message, url))
        const deleteParams = {
          QueueUrl: url,
          ReceiptHandle: message.ReceiptHandle,
        }
        // eslint-disable-next-line no-await-in-loop
        const deleteResult = await sqs.deleteMessage(deleteParams).promise()
        console.log({ deleteResult })
      }
    } while (subscribedQueueURLs.includes(url))
  } catch (error) {
    failure(error)
  }
}
export const subscribe = async (entityType: EntityType, entityId: EntityId) => {
  const url = generateQueueURL(entityType, entityId)
  consumeQueue(
    url,
    () => {
      subscribedQueueURLs.push(url)
      eventHandler && eventHandler.subscribe(url)
    },
    error => {
      console.error(error)
      unsubscribe(entityType, entityId)
    }
  )
}
I ended up solving it like this - maybe not the most elegant solution though...
let eventHandler: ?EventHandler
let awsOptions: ?AWSOptions
let subscribedQueueUrls = []
let sqs = null
let sns = null

export function setup(handler: EventHandler) {
  eventHandler = handler
}

export async function login(
  { awsKey, awsSecret, awsRegion }: AWSCredentials,
  autoReconnect: boolean
) {
  const credentials = new AWS.Credentials(awsKey, awsSecret)
  AWS.config.update({ region: awsRegion, credentials })
  sqs = new AWS.SQS({ apiVersion: '2012-11-05' })
  sns = new AWS.SNS({ apiVersion: '2010-03-31' })
  const sts = new AWS.STS({ apiVersion: '2011-06-15' })
  const { Account } = await sts.getCallerIdentity().promise()
  awsOptions = { accountId: Account, region: awsRegion }
  eventHandler && eventHandler.login({ awsRegion, awsKey, awsSecret }, autoReconnect)
}
async function handleQueueMessages(messages, queueUrl) {
  if (!sqs) {
    throw new Error(
      'Attempt to subscribe before SQS client is ready (i.e. authenticated).'
    )
  }
  // eslint-disable-next-line no-restricted-syntax
  for (const message of messages) {
    if (!eventHandler) {
      return
    }
    // eslint-disable-next-line no-await-in-loop
    await eventHandler.message({
      content: message,
      queueUrl,
      timestamp: new Date().toISOString(),
    })
    const deleteParams = {
      QueueUrl: queueUrl,
      ReceiptHandle: message.ReceiptHandle,
    }
    // eslint-disable-next-line no-await-in-loop
    await sqs.deleteMessage(deleteParams).promise()
  }
}
export async function subscribe(queueUrl: QueueUrl) {
  if (!sqs) {
    throw new Error(
      'Attempt to subscribe before SQS client is ready (i.e. authenticated).'
    )
  }
  const initialParams = {
    QueueUrl: queueUrl,
    WaitTimeSeconds: 0,
    MessageAttributeNames: ['All'],
    AttributeNames: ['All'],
  }
  const longPollParams = {
    ...initialParams,
    WaitTimeSeconds: 20,
  }
  // Attempt to consume the queue, and handle any pending messages.
  const firstResponse = await sqs.receiveMessage(initialParams).promise()
  if (!subscribedQueueUrls.includes(queueUrl)) {
    subscribedQueueUrls.push(queueUrl)
    eventHandler && eventHandler.subscribe(queueUrl)
  }
  handleQueueMessages(firstResponse.Messages, queueUrl)
  // Keep on polling the queue afterwards.
  setImmediate(async () => {
    if (!sqs) {
      throw new Error(
        'Attempt to subscribe before SQS client is ready (i.e. authenticated).'
      )
    }
    try {
      do {
        // eslint-disable-next-line no-await-in-loop
        const received = await sqs.receiveMessage(longPollParams).promise()
        handleQueueMessages(received.Messages, queueUrl)
      } while (sqs && subscribedQueueUrls.includes(queueUrl))
    } catch (error) {
      eventHandler && eventHandler.disconnect()
      throw error
    }
  })
}
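Usage of these exports would then look something like this (myEventHandler, the credentials and queueUrl are placeholders):

// hypothetical wiring from the React app
setup(myEventHandler)
await login({ awsKey, awsSecret, awsRegion }, true)
await subscribe(queueUrl) // resolves after the first successful receive, keeps polling afterwards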

Proper way of creating a EventEmitter that works with Promises in the background

I'm creating a "class" that emits events such as error, data, downloadFile and initialize. Each event is fired after a request is made, and each event is fired by a method that has the same name:
class MyClass extends EventEmitter {
  constructor(data) {
    super()
    this.data = data
    this.initialize()
      .then(this.downloadFile)
      .then(this.data)
      .catch(this.error)
  }

  async initialize() {
    const req = superagent.post('url...')
    req.send(this.data)
    const res = await req // this will actually fire the request
    this.emit('initialize')
    this.url = res.body
    return res
  }

  async downloadFile() {
    const req = superagent.put(this.url)
    req.on('progress', (progress) => this.emit('downloadFile', progress))
    const res = await req // this will actually fire the request
    //
    // save to disk
    //
    return res
  }

  data() {
    // Next in the sequence. And will fire the 'data' event: this.emit('data', data)
  }

  error(err) {
    this.emit('error', err)
  }
}
After that, the data method is called. My question is: is there a design pattern for calling the events in sequence without using Promises? Currently I'm using chaining, but I feel this isn't the best approach; maybe I'm wrong.
this.initialize()
  .then(this.downloadFile)
  .then(this.data)
  .catch(this.error)
But I feel there could be a better approach.
Answers for bergi's questions:
a) Why are you using class syntax?
Because it's easier to inherit from EventEmitter, and personally I think it's more readable than using a constructor function, e.g.:
function Transformation(data) {
  this.data = data
}

// Prototype stuff here
b) How is this code going to be used?
I'm creating a client to interact with my API. The idea is that the user can see what is happening in the background. E.g.:
const data = {
  data: {},
  format: 'xls',
  saveTo: 'path/to/save/xls/file.xls'
}
const transformation = new Transformation(data)

// Events
transformation.on('initialize', () => {
  // Here the user knows that the transformation already started
})

transformation.on('fileDownloaded', () => {
  // Here the file has been downloaded to disk
})

transformation.on('data', (data) => {
  // Here the user can see details of the transformation -
  // name,
  // id,
  // size,
  // the original object,
  // etc
})

transformation.on('error', () => {
  // Here is self explanatory, if something bad happens, this event will be fired
})
c) What is it supposed to do?
The user will be able to transform an object with data into an Excel file.
It sounds like the transformation object you are creating is used by the caller solely for listening to the events. The user does not need a class instance with properties to get or methods to call. So don't make one. KISS (keep it super simple).
function transform(data) {
  const out = new EventEmitter();

  async function run() {
    try {
      const url = await initialise();
      const data = await downloadFile(url);
      out.emit('data', data);
    } catch(err) {
      out.emit('error', err);
    }
  }

  async function initialise() {
    const req = superagent.post('url...')
    req.send(data)
    const res = await req // this will actually fire the request
    out.emit('initialize')
    return res.body
  }

  async function downloadFile(url) {
    const req = superagent.put(url)
    req.on('progress', (progress) => out.emit('downloadFile', progress))
    const res = await req; // this will actually fire the request
    //
    // save to disk
    //
    return data;
  }

  run();
  return out;
}
It might be even simpler to leave out the (once-only?) data and error events and just return a promise, alongside the event emitter for progress notifications:
return {
  promise: run(), // basically just `initialise().then(downloadFile)`
  events: out
};
If you want another way to call the events in sequence, and if you're using a Node.js version that supports ES7, you can do the following:
class MyClass extends EventEmitter {
  constructor(data) {
    super();
    this.data = data;
    this.launcher();
  }

  async launcher() {
    try {
      await this.initialize();
      await this.downloadFile();
      await this.data();
    }
    catch(err) {
      this.error(err);
    }
  }

  initialize() {
    const req = superagent.post('url...');
    req.send(this.data);
    this.emit('initialize');
    this.url = req.body;
    return req;
  }

  downloadFile() {
    const req = superagent.put(this.url);
    req.on('progress', (progress) => this.emit('downloadFile', progress));
    //
    // save to disk
    //
    return req;
  }

  data() {
    // Next in the sequence. And will fire the 'data' event: this.emit('data', data)
  }

  error(err) {
    this.emit('error', err)
  }
}
Explanation: instead of awaiting your Promises inside your functions, just return the Promises and await them at the root level.
