Trouble with multiple firebase image upload with progress - javascript

I'm trying to upload multiple images to Firebase at once. The uploads themselves work, but the array of cloud image URLs comes back too late: the post has already been submitted with an empty array. Here is my code:
// uploading media files using promises
async uploadMedia(mediaFile: string){
  const extension = mediaFile.split('.')[mediaFile.split('.').length - 1];
  const mediaFileName = `${Math.round(Math.random()*100000000000)}.${extension}`;
  this.uploadProgress = 0;
  const response = await fetch(mediaFile);
  const blob = await response.blob();
  const storageRef = storage.ref(`${mediaFileName}`).put(blob);
  return storageRef.on(`state_changed`, snapshot => {
    this.uploadProgress = (snapshot.bytesTransferred / snapshot.totalBytes);
  }, error => {
    this.error = error.message;
    this.submitting = false;
    this.uploadingMedia = false;
    return;
  },
  async () => {
    // check whether the media is an image or a video and add to correct arrays
    if (extension == "png" || extension == "jpg") {
      return storageRef.snapshot.ref.getDownloadURL().then(async (url) => {
        this.firebaseImageUrls = [...this.firebaseImageUrls, url];
        return;
      });
    }
    else {
      return storageRef.snapshot.ref.getDownloadURL().then(async (url) => {
        this.firebaseVideoUrls = [...this.firebaseVideoUrls, url];
        return;
      });
    }
  });
}
Where everything is being called:
await Promise.all(this.props.store.selectedImagesArray.map(async (file: string) => {
  await this.uploadMedia(file);
}));
this.submitPost(); // this submits everything with the firebaseImageUrls
any help is appreciated

The problem seems to be that storageRef.on() does not return a promise; it just registers the handlers, so your Promise.all resolves before any upload has actually finished. I'm not an expert on Firebase, but maybe the put(blob) call returns a promise that you can use.
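For what it's worth, a minimal sketch of that idea, assuming the namespaced (v8-style) SDK where put() returns an UploadTask that can be awaited; storage, uploadProgress, and the random file-name scheme are carried over from the question:

async uploadMedia(mediaFile: string) {
  const extension = mediaFile.split('.').pop();
  const mediaFileName = `${Math.round(Math.random() * 100000000000)}.${extension}`;
  const response = await fetch(mediaFile);
  const blob = await response.blob();

  const task = storage.ref(mediaFileName).put(blob);
  // progress reporting only; completion is handled by awaiting the task itself
  task.on('state_changed', snapshot => {
    this.uploadProgress = snapshot.bytesTransferred / snapshot.totalBytes;
  });

  await task; // resolves once the upload has finished
  return task.snapshot.ref.getDownloadURL();
}

Because the method now returns the download URL, the Promise.all in the calling code could collect the URLs directly.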

Figured it out. I had to wrap each upload task in a promise, resolve that promise when the upload completes, and loop through all the files doing this. Once the loop has finished and every file is completely uploaded, I can submit the post with the URLs that are in firebaseImageUrls.
async uploadMedia(mediaFile: string){
  return new Promise(async (resolve, reject) => {
    // making the uploading task for one file
    const extension = mediaFile.split('.')[mediaFile.split('.').length - 1];
    const mediaFileName = `${Math.round(Math.random()*100000000000)}.${extension}`;
    const response = await fetch(mediaFile);
    const blob = await response.blob();
    const storageRef = storage.ref(`${mediaFileName}`);
    const task = storageRef.put(blob);
    task.on(`state_changed`, snapshot => {
      this.uploadProgress = (snapshot.bytesTransferred / snapshot.totalBytes);
    }, error => {
      this.error = error.message;
      this.submitting = false;
      this.uploadingMedia = false;
      return;
    },
    async () => {
      if (extension == "png" || extension == "jpg") {
        task.snapshot.ref.getDownloadURL().then((url: any) => {
          console.log(url);
          resolve(url);
        });
      }
      else {
        task.snapshot.ref.getDownloadURL().then((url: any) => {
          console.log(url);
          resolve(url);
        });
      }
    });
  })
}
The loop:
for (var i = 0; i < this.props.store.selectedImagesArray.length; i++) {
  const imageUrl = await this.uploadMedia(this.props.store.selectedImagesArray[i]);
  this.firebaseImageUrls = [...this.firebaseImageUrls, imageUrl];
}
this.submitPost();
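If the sequential loop ever becomes too slow for many files, a possible variation (just a sketch on my part, reusing the same uploadMedia as above) is to start all uploads at once and wait for them with Promise.all:

const urls = await Promise.all(
  this.props.store.selectedImagesArray.map((file: string) => this.uploadMedia(file))
);
this.firebaseImageUrls = [...this.firebaseImageUrls, ...urls];
this.submitPost();

Note that this shares the single uploadProgress field across all uploads, so the progress display would need to be reworked for the parallel case.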


How to run a function first before updating the array in react JS?

const handleItinerary = (e, type) => {
  var index = parseInt(e.target.name);
  let arr = [...itinerary];
  if (type === "imageUrl") {
    const date = new Date().getTime();
    const storageRef = ref(storage, `${date}`);
    uploadBytes(storageRef, e.target.files[0]).then((snapshot) => {
      getDownloadURL(storageRef).then((downloadURL) => {
        arr[index]["imageUrl"] = downloadURL;
      });
    });
  }
  setitinerary(arr);
}
In the above code I am trying to upload an image to Firebase Storage using the uploadBytes function. After the upload I get the downloadURL where the image is stored and want to put its value in arr[index]["imageUrl"]. However, arr[index]["imageUrl"] gets updated before the downloadURL has been retrieved, and I get an error that downloadURL is undefined. How can I resolve this issue?
I am using react 18 and firebase version 9.
When using then() to run code in response to an asynchronous operation being completed, any code that needs to run upon completion has to be inside that then() callback.
So
const handleItinerary = (e, type) => {
  var index = parseInt(e.target.name);
  let arr = [...itinerary];
  if (type === "imageUrl") {
    const date = new Date().getTime();
    const storageRef = ref(storage, `${date}`);
    uploadBytes(storageRef, e.target.files[0]).then((snapshot) => {
      getDownloadURL(storageRef).then((downloadURL) => {
        arr[index]["imageUrl"] = downloadURL;
        setitinerary(arr);
      });
    });
  }
}
To make this look a bit more familiar, you can mark `handleItinerary` as async and use await inside it:
const handleItinerary = async (e, type) => {
  var index = parseInt(e.target.name);
  let arr = [...itinerary];
  if (type === "imageUrl") {
    const date = new Date().getTime();
    const storageRef = ref(storage, `${date}`);
    const snapshot = await uploadBytes(storageRef, e.target.files[0]);
    const downloadURL = await getDownloadURL(storageRef);
    arr[index]["imageUrl"] = downloadURL;
    setitinerary(arr);
  }
}
Note that this doesn't change anything about the actual behavior and all asynchronous calls are still executed asynchronously. It is merely a more familiar way to write the code.
If you have a list of images to upload, be sure to either use for...of instead of forEach, or use Promise.all, so that you can detect when all of the asynchronous operations are done.
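For example, a minimal sketch with the v9 modular API; uploadAll and the time-based file names are made up for illustration:

const uploadAll = async (files) => {
  const urls = await Promise.all(
    [...files].map(async (file) => {
      const imageRef = ref(storage, `${Date.now()}-${file.name}`);
      await uploadBytes(imageRef, file);
      return getDownloadURL(imageRef);
    })
  );
  return urls; // every download URL is available once Promise.all resolves
};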
You can move the code that updates the arr[index]["imageUrl"] value inside the then block where you retrieve the downloadURL. This will ensure that the arr[index]["imageUrl"] value is only updated after the downloadURL has been retrieved.
const handleItinerary = (e, type) => {
  var index = parseInt(e.target.name);
  let arr = [...itinerary];
  if (type === "imageUrl") {
    const date = new Date().getTime();
    const storageRef = ref(storage, `${date}`);
    uploadBytes(storageRef, e.target.files[0]).then((snapshot) => {
      getDownloadURL(storageRef).then((downloadURL) => {
        arr[index]["imageUrl"] = downloadURL;
        setitinerary(arr);
      });
    });
  }
}

Download functionality using streams for large files in NodeJS

I am trying to implement a download functionality using streams in NodeJS.
In the code I am trying to simulate an endpoint that sends data in chunks, similar to paginated data, for example in chunks of 5000 entries. To make it clearer: we can send top and skip parameters to the endpoint to get a particular chunk of data, and if no parameters are provided, it sends the first 5000 entries.
There are 2 cases that I am trying to take care of:
When the user cancels the download from the browser, how do I handle the continuous fetching of data from the endpoint?
When the user pauses the download from the browser, how do I pause the data fetching, and then resume it once the user resumes the download?
The first case can be taken care of using the 'close' event of the request: when the connection between the client and the server gets cancelled, I disconnect.
If anyone has a better way of implementing this please suggest.
I am having trouble handling the second case when the user pauses.
If anyone could guide me through this, or even provide a better solution to the overall problem(incl. handling the chunks of data), it would be really helpful.
const { createServer } = require('http');
const { Transform } = require('stream');
const axios = require('axios');

var c = 0;

class ApiStream extends Transform {
  constructor(apiCallback, res, req) {
    super();
    this.apiCallback = apiCallback;
    this.isPipeSetup = false;
    this.res = res;
    this.req = req;
  }

  // Will get data continuously
  async start() {
    let response;
    try {
      response = await this.apiCallback();
    } catch (e) {
      response = null;
    }
    if (!this.isPipeSetup) {
      this.pipe(this.res);
      this.isPipeSetup = true;
    }
    if (response) {
      response = response.data;
      if (Array.isArray(response)) {
        response.forEach((item) => {
          this.push(JSON.stringify(item) + "\n");
        });
      } else if (typeof response === "object") {
        this.push(JSON.stringify(response) + "\n");
      } else if (typeof response === "string") {
        this.push(response + "\n");
      }
      this.start();
    } else {
      this.push(null);
      console.log('Stream ended');
    }
  }
}

const server = createServer(async (req, res, stream) => {
  res.setHeader("Content-disposition", "attachment; filename=download.json");
  res.setHeader("Content-type", "text/plain");

  let disconnected = false;
  const filestream = new ApiStream(async () => {
    let response;
    try {
      if (disconnected) {
        console.log('Client connection closed');
        return null;
      }
      c++;
      response = await axios.get("https://jsonplaceholder.typicode.com/users");
      // Simulate delay in data fetching
      let z = 0;
      if (c >= 200) response = null;
      while (z < 10000) {
        let b = 0;
        while (b < 10000) {
          b += 0.5;
        }
        z += 0.5;
      }
    } catch (error) {
      res.status(500).send(error);
    }
    if (response) {
      return response;
    }
    return null;
  }, res, req);

  await filestream.start();

  req.on('close', (err) => {
    disconnected = true;
  });
});

server.listen(5050, () => console.log('server running on port 5050'));
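As for the pause case, one idea (a sketch of my own, not from the question): a paused download shows up on the server as backpressure, so you can let Node's stream machinery drive the fetching instead of calling start() recursively. With a Readable, _read() is only invoked when the piped response is ready for more data, so the API calls stop automatically while the client is paused:

const { Readable } = require('stream');

class PausableApiStream extends Readable {
  constructor(apiCallback) {
    super();
    this.apiCallback = apiCallback;
    this.fetching = false;
  }

  // Node calls _read() whenever the consumer (the piped response) wants more data,
  // so fetching pauses naturally while the client has paused the download
  async _read() {
    if (this.fetching) return; // a fetch is already in flight
    this.fetching = true;
    const response = await this.apiCallback();
    this.fetching = false;
    if (!response) {
      this.push(null); // no more data: end the stream
      return;
    }
    this.push(JSON.stringify(response.data) + "\n");
  }
}

// usage sketch: new PausableApiStream(apiCallback).pipe(res);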

Progress for a fetch blob javascript

I'm trying to use a JavaScript fetch to grab a video file. I am able to get the file downloaded and get the blob URL, but I can't seem to get the progress while it's downloading.
I tried this:
let response = await fetch('test.mp4');
const reader = response.body.getReader();
const contentLength = response.headers.get('Content-Length');
let receivedLength = 0;
d = document.getElementById('progress_bar');
while (true) {
  const { done, value } = await reader.read();
  if (done) {
    break;
  }
  receivedLength += value.length;
  d.innerHTML = "Bytes loaded:" + receivedLength;
}
const blob = await response.blob();
var vid = URL.createObjectURL(blob);
The problem is that I get "Response.blob: Body has already been consumed". I see that reader.read() is probably what consumes the body. How do I just track the amount of data received and then get a blob URL at the end of it?
Thanks.
Update:
My first attempt collected the chunks as they downloaded and then put them back together, with a large (2-3x the size of the video) memory footprint. Using a ReadableStream has a much lower memory footprint (memory usage hovers around 150MB for a 1.1GB mkv). Code largely adapted from the snippet here, with only minimal modifications from me:
https://github.com/AnthumChris/fetch-progress-indicators/blob/master/fetch-basic/supported-browser.js
<div id="progress_bar"></div>
<video id="video_player"></video>
const elProgress = document.getElementById('progress_bar'),
      player = document.getElementById('video_player');

function getVideo2() {
  let contentType = 'video/mp4';
  fetch('$pathToVideo.mp4')
    .then(response => {
      const contentEncoding = response.headers.get('content-encoding');
      const contentLength = response.headers.get(contentEncoding ? 'x-file-size' : 'content-length');
      contentType = response.headers.get('content-type') || contentType;
      if (contentLength === null) {
        throw Error('Response size header unavailable');
      }
      const total = parseInt(contentLength, 10);
      let loaded = 0;
      return new Response(
        new ReadableStream({
          start(controller) {
            const reader = response.body.getReader();
            read();
            function read() {
              reader.read().then(({ done, value }) => {
                if (done) {
                  controller.close();
                  return;
                }
                loaded += value.byteLength;
                progress({ loaded, total });
                controller.enqueue(value);
                read();
              }).catch(error => {
                console.error(error);
                controller.error(error);
              });
            }
          }
        })
      );
    })
    .then(response => response.blob())
    .then(blob => {
      let vid = URL.createObjectURL(blob);
      player.style.display = 'block';
      player.type = contentType;
      player.src = vid;
      elProgress.innerHTML += "<br /> Press play!";
    })
    .catch(error => {
      console.error(error);
    });
}

function progress({ loaded, total }) {
  elProgress.innerHTML = Math.round(loaded / total * 100) + '%';
}
First Attempt (worse, suitable for smaller files)
My original approach. For a 1.1GB mkv, the memory usage creeps up to 1.3GB while the file is downloading, then spikes to about 3.5GB when the chunks are being combined. Once the video starts playing, the tab's memory usage goes back down to ~200MB, but Chrome's overall usage stays over 1GB.
Instead of calling response.blob() to get the blob, you can construct the blob yourself by accumulating each chunk of the video (value). Adapted from the example here: https://javascript.info/fetch-progress#0d0g7tutne
//...
receivedLength += value.length;
chunks.push(value);
//...

// ==> put the chunks into a Uint8Array that the Blob constructor can use
let Uint8Chunks = new Uint8Array(receivedLength), position = 0;
for (let chunk of chunks) {
  Uint8Chunks.set(chunk, position);
  position += chunk.length;
}

// ==> you may want to get the mimetype from the content-type header
const blob = new Blob([Uint8Chunks], { type: 'video/mp4' });
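As a side note (my addition, not part of the original snippet): the Blob constructor also accepts the array of Uint8Array chunks directly, which skips the intermediate copy:

const blob = new Blob(chunks, { type: 'video/mp4' });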

Initializing a Puppeteer Browser Outside of Scraping Function

I am very new to Puppeteer (I started today). I have some code that is working the way that I want it to, except for an issue that I think makes it extremely inefficient. I have a function that steps through potentially thousands of URLs with incremental IDs to pull the name, position, and stats of each player, and then inserts that data into a neDB database. Here is my code:
const puppeteer = require('puppeteer');
const Datastore = require('nedb');
const database = new Datastore('database.db');
database.loadDatabase();

async function scrapeProduct(url, id) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  let attributes = [];

  const [name] = await page.$x('//*[@id="ctl00_ctl00_ctl00_Main_Main_name"]');
  const txt = await name.getProperty('innerText');
  const playerName = await txt.jsonValue();
  attributes.push(playerName);

  // Make sure that there is a legitimate player profile before trying to pull a bunch of 'undefined' information.
  if (playerName) {
    const [role] = await page.$x('//*[@id="ctl00_ctl00_ctl00_Main_Main_position"]');
    const roleTxt = await role.getProperty('innerText');
    const playerRole = await roleTxt.jsonValue();
    attributes.push(playerRole);

    // Loop through the 12 attributes and pull their values.
    for (let i = 1; i < 13; i++) {
      let vLink = '//*[@id="ctl00_ctl00_ctl00_Main_Main_SectionTabBox"]/div/div/div/div[1]/table/tbody/tr[' + i + ']/td[2]';
      const [e1] = await page.$x(vLink);
      const val = await e1.getProperty('innerText');
      const skillVal = await val.jsonValue();
      attributes.push(skillVal);
    }

    // Create a player profile to be pushed into the database. (I realize this is very wordy and ugly code)
    let player = {
      Name: attributes[0],
      Role: attributes[1],
      Athleticism: attributes[2],
      Speed: attributes[3],
      Durability: attributes[4],
      Work_Ethic: attributes[5],
      Stamina: attributes[6],
      Strength: attributes[7],
      Blocking: attributes[8],
      Tackling: attributes[9],
      Hands: attributes[10],
      Game_Instinct: attributes[11],
      Elusiveness: attributes[12],
      Technique: attributes[13],
      _id: id,
    };

    database.insert(player);
    console.log('player #' + id + " scraped.");
    await browser.close();
  } else {
    console.log("Blank profile");
    await browser.close();
  }
}

// Making sure the first URL is scraped before moving on to the next URL. (I removed the URL because it's unreasonably long and is not important for this part.)
(async () => {
  for (let i = 0; i <= 1000; i++) {
    let link = 'https://url.com/Ratings.aspx?rid=' + i + '&section=Ratings';
    await scrapeProduct(link, i);
  }
})();
What I think makes this so inefficient is that every time scrapeProduct() is called, I create a new browser and a new page. Instead, I believe it would be more efficient to create one browser and one page and just change the page's URL with
await page.goto(url)
I believe that in order to accomplish this, I need to move:
const browser = await puppeteer.launch();
const page = await browser.newPage();
outside of my scrapeProduct() function, but I cannot seem to get this to work. Any time I try, I get an error in my function saying that page is not defined. I am very new to Puppeteer (started today), so I would appreciate any guidance on how to accomplish this. Thank you very much!
TL;DR
How do I create one Browser instance and one Page instance that a function can reuse, changing only the URL passed to await page.goto(url)?
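For reference, a bare-bones sketch of that idea (my own, not from the answers below): launch the browser and page once, pass the page into the scraping function, and only call page.goto() inside it.

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  async function scrapeProduct(page, url, id) {
    await page.goto(url);
    // ... pull the player attributes from `page` exactly as in the question ...
  }

  for (let i = 0; i <= 1000; i++) {
    const link = 'https://url.com/Ratings.aspx?rid=' + i + '&section=Ratings';
    await scrapeProduct(page, link, i);
  }

  await browser.close();
})();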
About a year ago I tried to make a React Native Pokemon Go helper app. Since there wasn't an API for Pokemon nests and Pokestops, I created a server that scraped thesilphroad.com, and I found the need to implement something like @Arkan said.
I wanted the server to be able to take multiple requests, so I decided to initialize the browser when the server boots up. When a request is received, the server checks whether MAX_TABS has been reached. If it has, it waits; if not, a new tab is opened and the scrape is performed.
Here's the scraper.js
const puppeteer = require('puppeteer')
const fs = require('fs')
const Page = require('./Page')
const exec = require('child_process').exec
const execSync = require('util').promisify(exec)

module.exports = class scraper {
  constructor() {
    this.browser = null
    this.getPages = null
    this.getTotalPages = null
    this.isRunning = false
    // browser permissions
    this.permissions = ['geolocation']
    this.MAX_TABS = 5
    // when puppeteer launches
    this.useFirstTab = true
  }

  async init(config = {}) {
    let headless = config.headless != undefined ? config.headless : true
    this.permissions = this.permissions.concat(config.permissions || [])
    // get local chromium location
    let browserPath = await getBrowserPath('firefox') || await getBrowserPath('chrome')
    this.browser = await puppeteer.launch({
      headless: headless,
      executablePath: browserPath,
      defaultViewport: null,
      args: [
        '--start-maximized',
      ]
    })
    // bind pages() so it can be called later without losing its `this`
    this.getPages = this.browser.pages.bind(this.browser)
    this.getTotalPages = () => {
      return this.getPages().then(pages => pages.length).catch(err => 0)
    }
    this.isRunning = true
  }

  async waitForTab() {
    let time = Date.now()
    let cycles = 1
    await new Promise(resolve => {
      let interval = setInterval(async () => {
        let totalPages = await this.getTotalPages()
        if (totalPages < this.MAX_TABS) {
          clearInterval(interval)
          resolve()
        }
        if (Date.now() - time > 100)
          console.log('Waiting...')
        if (Date.now() - time > 20 * 1000) {
          console.log('... ...\n'.repeat(cycles) + 'Still waiting...')
          cycles++
          time = Date.now()
        }
      }, 500)
    })
  }

  // open new tab and go to page
  async openPage(url, waitSelector, lat, long) {
    await this.waitForTab()
    let pg
    // puppeteer launches with a blank tab, use this
    // if(this.useFirstTab){
    //   let pages = await this.browser.pages()
    //   pg = pages.pop()
    //   this.useFirstTab = false
    // }
    // else
    pg = await this.browser.newPage()
    if (lat && long) {
      await this.setPermissions(url)
    }
    let page = await new Page()
    await page.init(pg, url, waitSelector, lat, long)
    return page
  }

  async setPermissions(url) {
    const context = this.browser.defaultBrowserContext();
    await context.overridePermissions(url, this.permissions)
  }
}

// assumes that the browser is in PATH
async function getBrowserPath(browserName) {
  return execSync('command -v chromium').then(({ stdout, stderr }) => {
    if (stdout.includes('not found'))
      return null
    return stdout
  }).catch(err => null)
}
The scraper imports Page.js, which is just a wrapper for a puppeteer Page object, with the functions I used most made available.
const path = require('path')
const fs = require('fs')
const userAgents = require('./staticData/userAgents.json')
const cookiesPath = path.normalize('./cookies.json')

// a wrapper for a puppeteer page with pre-made functions
module.exports = class Page {
  constructor(useCookies = false) {
    this.page = null
    this.useCookies = useCookies
    this.previousSession = this.useCookies && fs.existsSync(cookiesPath)
  }

  async close() {
    await this.page.close()
  }

  async init(page, url, waitSelector, lat, long) {
    this.page = page
    let userAgent = userAgents[Math.floor(Math.random() * userAgents.length)]
    await this.page.setUserAgent(userAgent)
    await this.restoredSession()
    if (lat && long)
      await this.page.setGeolocation({
        latitude: lat || 59.95, longitude: long || 30.31667, accuracy: 40
      })
    await this.page.goto(url)
    await this.wait(waitSelector)
  }

  async screenshotElement(selector = 'body', directory = './screenshots', padding = 0, offset = {}) {
    const rect = await this.page.evaluate(selector => {
      const el = document.querySelector(selector)
      const { x, y, width, height } = el.getBoundingClientRect()
      return {
        left: x,
        top: y,
        width,
        height,
        id: el.id
      }
    }, selector)
    let ext = 'jpeg'
    let filename = path.normalize(directory + '/' + Date.now())
    return await this.page.screenshot({
      type: ext,
      path: filename + ' - ' + selector.substring(5) + '.' + ext,
      clip: {
        x: rect.left - padding + (offset.left || 0),
        y: rect.top - padding + (offset.right || 0),
        width: rect.width + padding * 2 + (offset.width || 0),
        height: rect.height + padding * 2 + (offset.height || 0)
      },
      encoding: 'base64'
    })
  }

  async restoredSession() {
    if (!this.previousSession)
      return false
    let cookies = require(cookiesPath)
    for (let cookie of cookies) {
      await this.page.setCookie(cookie)
    }
    console.log('Loaded previous session')
    return true
  }

  async saveSession() {
    // write cookies to file
    if (!this.useCookies)
      return
    const cookies = await this.page.cookies()
    fs.writeFileSync(cookiesPath, JSON.stringify(cookies, null, 2))
    console.log('Wrote cookies to file')
  }

  // wait for text input element and type text
  async type(selector, text, options = { delay: 150 }) {
    await this.wait(selector)
    await this.page.type(selector, text, options)
  }

  // click and wait
  async click(clickSelector, waitSelector = 500) {
    await this.page.click(clickSelector)
    await this.wait(waitSelector)
  }

  // hover over element and wait
  async hover(selector, waitSelector = 500) {
    await this.page.hover(selector)
    await this.wait(1000)
    await this.wait(waitSelector)
  }

  // wait and suppress timeout errors
  async wait(selector = 500, waitForNav = false) {
    try {
      // waitForNav is puppeteer's waitForNavigation function,
      // which for me does nothing but time out after 30s
      waitForNav && await this.page.waitForNavigation()
      await this.page.waitFor(selector)
    } catch (err) {
      // print everything but timeout errors
      if (err.name != 'Timeout Error') {
        console.log('error name:', err.name)
        console.log(err)
        console.log('- - - '.repeat(4))
      }
      this.close()
    }
  }
}
To achieve this, you'll just need to separate the browser from your requests, like in a class, for example:
class PuppeteerScraper {
  async launch(options = {}) {
    this.browser = await puppeteer.launch(options);
    // you could reuse the page instance if it was defined here
  }
  /**
   * Pass the address and the function that will scrape your data,
   * in order to maintain the page inside this object
   */
  async goto(url, callback) {
    const page = await this.browser.newPage();
    await page.goto(url);
    /** evaluate its content */
    await callback(page);
    await page.close();
  }
  async close() {
    await this.browser.close();
  }
}
and, to implement it:
/**
 * scrape function, takes the page instance as its parameter
 */
async function evaluate_page(page) {
  const titles = await page.$$eval('.col-xs-6 .star-rating ~ h3 a', (itens) => {
    const text_titles = [];
    for (const item of itens) {
      if (item && item.textContent) {
        text_titles.push(item.textContent);
      }
    }
    return text_titles;
  });
  console.log('titles', titles);
}

(async () => {
  const scraper = new PuppeteerScraper();
  await scraper.launch({ headless: false });
  for (let i = 1; i <= 6; i++) {
    let link = `https://books.toscrape.com/catalogue/page-${i}.html`;
    await scraper.goto(link, evaluate_page);
  }
  scraper.close();
})();
Although, if you want something more complex, you could take a look at how it's done in the Apify project.

Nodejs download s3 images (getObject) in loop

I'm trying to download images from an S3 bucket in a loop. My bucket is not public, and using a direct getSignedURL doesn't work (Forbidden error). I need to download (between 10 and 30) images from S3 upon user selection from the user interface (and then later just delete them after creating a GIF).
It downloads the correct number of images (with correct names), but the content of every image gets replaced with the last image on the local machine. I even wrote a Promise to call within the loop (hoping that each getObject call would complete before going on to the next), but that didn't work. Apart from Bluebird, I tried all the solutions from this, but got the same result. My code looks like this:
var urlParams = { Bucket: 'bucket_name', Key: '' };

for (i = 0; i < imageNames.length; i += increment) {
  urlParams.Key = imageNames[i] + '.jpg';
  pathToSave = '/local-files/' + urlParams.Key;
  var tempFile = fs.createWriteStream(pathToSave);

  // I tried a Promise (and setTimeout) here too but gives me the same result
  var stream = s3.getObject(urlParams).createReadStream().pipe(tempFile);

  var had_error = false;
  stream.on('error', function(err) {
    had_error = true;
  });
  stream.on('close', function() {
    if (!had_error) {
      console.log("Image saved");
    }
  });
}
After the above code finishes, as I mentioned, all images with correct names are saved, but because of the non-blocking behaviour here, they all contain the content of the last image in the array (imageNames). The Promise I wrote and tried is below.
function getBucketObject(urlParams) {
  return new Promise((resolve, reject) => {
    var pathToSave = '/local-files/' + urlParams.Key;
    var tempFile = fs.createWriteStream(pathToSave);
    var stream = s3.getObject(urlParams).createReadStream().pipe(tempFile);
    var had_error = false;
    stream.on('error', function(err) {
      had_error = true;
    });
    stream.on('close', function() {
      if (!had_error) {
        resolve(pathToSave);
      }
    });
  })
}
Neither setTimeout nor the Promise is working for my issue. Any help will be appreciated. Thanks
You should use let instead of var. var is function-scoped, so every pending callback in the loop shares the same urlParams and pathToSave variables and ends up seeing the values from the last iteration; let is block-scoped and gives each iteration its own copy.
Modify your code as:
for (var i = 0; i < imageNames.length; i += increment) {
  let urlParams = { Bucket: 'bucket_name', Key: imageNames[i] + '.jpg' };
  let pathToSave = 'img/analysis/' + urlParams.Key;
  getBucketObject(urlParams).then(function(pathToSave) {
    console.log("image saved");
  })
}

function getBucketObject(urlParams) {
  return new Promise((resolve, reject) => {
    let pathToSave = '/local-files/' + urlParams.Key;
    let tempFile = fs.createWriteStream(pathToSave);
    let stream = s3.getObject(urlParams).createReadStream().pipe(tempFile);
    let had_error = false;
    stream.on('error', function(err) {
      had_error = true;
    });
    stream.on('close', function() {
      if (!had_error) {
        resolve(pathToSave);
      }
    });
  })
}
Following what @Anshuman Jaiswal suggested in the comment, I tried the code with let instead of var and it works now. Thanks Anshuman. The code in the loop looks like below:
for (var i = 0; i < imageNames.length; i+=increment) {
let urlParams = {Bucket: 'bucket_name', Key: imageNames[i]+'.jpg'};
let pathToSave = '/local-files/'+urlParams.Key;
getBucketObject(urlParams).then(function(pathToSave){
console.log("image saved");
})
}
And the promise function is below
function getBucketObject(urlParams) {
  return new Promise((resolve, reject) => {
    let pathToSave = '/local-files/' + urlParams.Key;
    let tempFile = fs.createWriteStream(pathToSave);
    let stream = s3.getObject(urlParams).createReadStream().pipe(tempFile);
    let had_error = false;
    stream.on('error', function(err) {
      had_error = true;
    });
    stream.on('close', function() {
      if (!had_error) {
        resolve(pathToSave);
      }
    });
  })
}
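If you also need to know when every download has finished (for example, before building the GIF), one possible sketch is to collect the promises and await them together; getBucketObject here is the same helper as above:

// run inside an async function
const downloads = [];
for (let i = 0; i < imageNames.length; i += increment) {
  downloads.push(getBucketObject({ Bucket: 'bucket_name', Key: imageNames[i] + '.jpg' }));
}
const savedPaths = await Promise.all(downloads);
console.log('all images saved:', savedPaths);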
