How to upload a large file to AdonisJS in chunks? - javascript

I'm building the server side of a site on the AdonisJS framework.
I've been tasked with supporting large file uploads, and to solve this I decided to upload files in chunks.
I found some client-side code and it seems to work.
Here is the client-side code: https://codepen.io/chaly7500/pen/YzQyZNR
The code on the server side:
// routes.ts
apiGroup('v1', 'files', Route.group(async () => {
  Route.post('upload', 'Files/UploadController.index')
}))
// UploadController.ts
'use strict'
import {HttpContextContract} from "@ioc:Adonis/Core/HttpContext";
import MediaRepositories from "App/Repositories/MediaRepositories";

export default class UploadController {
  public async index({request}: HttpContextContract) {
    const file = request.file('file')
    // console.log(file)
    return await MediaRepositories.createMedia(file)
  }
}
// MediaRepositories.ts
'use strict'
import Application from "@ioc:Adonis/Core/Application";

export default class MediaRepositories {
  static async createMedia(file) {
    await file.move(Application.publicPath('media/transientmodels'))
  }

  static async updateMediaById() {
  }

  static async updateMediaByIds() {
  }
}
After uploading to the server, I end up with a blob file, and when I rename the blob file to blob.png the image is broken.
Has anyone implemented uploading large files with AdonisJS?
Or how do I correctly convert the blob file to an image or video?
Main question:
How do I upload big files to AdonisJS without getting a request timeout error?

I was able to solve the upload problem with this library:
https://www.npmjs.com/package/file-chunked
// UploadController.ts
'use strict'
import {HttpContextContract} from "@ioc:Adonis/Core/HttpContext";
import parseJson from "parse-json";
import MediaRepositories from "App/Repositories/MediaRepositories";

export default class UploadController {
  public async index({request}: HttpContextContract) {
    // request.file(), request.input() and parseJson() are synchronous, no await needed
    const file = request.file('file')
    const chunkMetaDataStr = request.input('chunkMetadata')
    const chunkMetaData = parseJson(chunkMetaDataStr)
    return await MediaRepositories.createMedia(file, chunkMetaData)
  }
}
// MediaRepositories.ts
'use strict'
import Application from "@ioc:Adonis/Core/Application";
import FileChunked from "file-chunked";
import * as fs from "fs";
import Media from "App/Models/Media";
import Env from '@ioc:Adonis/Core/Env'

export default class MediaRepositories {
  static async createMedia(file, chunkMetaData) {
    // move the incoming chunk into a temporary folder for this upload
    await file?.move(Application.publicPath('media/transientmodels/' + chunkMetaData.FileGuid + '/tmp_chunks'));
    await FileChunked.upload({
      chunkStorage: Application.publicPath('media/transientmodels/' + chunkMetaData.FileGuid), // where the uploaded chunks are saved
      uploadId: chunkMetaData.FileGuid,
      chunkIndex: chunkMetaData.Index,
      totalChunksCount: chunkMetaData.TotalCount,
      filePath: file?.filePath,
    });
    // once the last chunk has arrived, copy the merged file to its final name
    if (chunkMetaData.Index == (chunkMetaData.TotalCount - 1)) {
      fs.copyFileSync(
        Application.publicPath('media/transientmodels/' + chunkMetaData.FileGuid + '/tmp_chunks/' + file.clientName),
        Application.publicPath('media/transientmodels/' + chunkMetaData.FileGuid + '/tmp_chunks/' + chunkMetaData.FileName)
      );
    }
  }
}
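For context, each request from the client is expected to carry the chunk blob plus a chunkMetadata field matching the names read above (FileGuid, Index, TotalCount, FileName). A minimal client-side sketch of that contract (the endpoint URL, chunk size, and field names are assumptions based on the server code above):

// client.ts — hypothetical sketch of sending one file in chunks
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per chunk

async function uploadInChunks(file: File) {
  const fileGuid = crypto.randomUUID(); // one id shared by all chunks of this file
  const totalCount = Math.ceil(file.size / CHUNK_SIZE);
  for (let index = 0; index < totalCount; index++) {
    const chunk = file.slice(index * CHUNK_SIZE, (index + 1) * CHUNK_SIZE);
    const form = new FormData();
    form.append('file', chunk, file.name);
    form.append('chunkMetadata', JSON.stringify({
      FileGuid: fileGuid,
      Index: index,
      TotalCount: totalCount,
      FileName: file.name,
    }));
    await fetch('/v1/files/upload', { method: 'POST', body: form });
  }
}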

Related

amazon s3.upload is taking time

I am trying to upload files to S3, and before that I am altering the file names. I accept 2 files from the request form-data object, rename each file, and upload it to S3; at the end of the task I need to return the list of renamed files that were uploaded successfully.
I am using the S3.upload() function. The problem is that the variable initialized as an empty array, which should collect the renamed file names, is returned empty. s3.upload() takes time to complete; is there a solution where I can store each file name once its upload succeeds and return those names in the response?
Please help me fix this. The code looks like this:
if (formObject.files.document && formObject.files.document.length > 0) {
  const circleCode = formObject.fields.circleCode[0];
  let collectedKeysFromAwsResponse = [];
  formObject.files.document.forEach(e => {
    const extractFileExtension = ".pdf";
    if (_.has(FILE_EXTENSIONS_INCLUDED, _.lowerCase(extractFileExtension))) {
      console.log(e);
      // change the filename
      const originalFileNameCleaned = "cleaning name logic";
      const _id = mongoose.Types.ObjectId();
      const s3FileName = "s3-filename-convention";
      console.log(e.path, "", s3FileName);
      const awsResponse = new File().uploadFileOnS3(e.path, s3FileName);
      if (awsResponse.hasOwnProperty('ETag')) {
        collectedKeysFromAwsResponse.push(awsResponse.key.split("/")[1])
      }
    }
  });
}
Using await s3.upload(params).promise(); is the solution.
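Applied to the code in the question, that means awaiting each upload inside a loop that supports await (forEach does not). A sketch, assuming the question's uploadFileOnS3 helper returns the promise from s3.upload(params).promise():

// collect each key only after its upload has actually finished
const collectedKeysFromAwsResponse = [];
for (const e of formObject.files.document) {
  const s3FileName = "s3-filename-convention"; // renaming logic as in the question
  const awsResponse = await new File().uploadFileOnS3(e.path, s3FileName);
  if (awsResponse.ETag) {
    collectedKeysFromAwsResponse.push(awsResponse.Key.split("/")[1]);
  }
}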
Use the latest code, which is the AWS SDK for JavaScript V3. Here is the code you should be using:
// Import required AWS SDK clients and commands for Node.js.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
import path from "path";
import fs from "fs";

const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../myFiles/index.js'.
const fileStream = fs.createReadStream(file);

// Set the parameters.
export const uploadParams = {
  Bucket: "BUCKET_NAME",
  // Add the required 'Key' parameter using the 'path' module.
  Key: path.basename(file),
  // Add the required 'Body' parameter.
  Body: fileStream,
};

// Upload file to specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Success", data);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};
run();
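The snippet imports s3Client from ./libs/s3Client.js, which the answer does not show; a minimal sketch of that helper (the region value is a placeholder you must fill in):

// libs/s3Client.js — creates and exports the Amazon S3 service client
import { S3Client } from "@aws-sdk/client-s3";
const REGION = "REGION"; // e.g. "us-east-1"
export const s3Client = new S3Client({ region: REGION });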
More details can be found in the AWS SDK for JavaScript V3 Developer Guide.

How to use a plugin's data in a nuxt.config.js file?

My plugin, env.js:
export default async (_ctx, inject) => {
  const resp = await fetch('/config.json')
  const result = await resp.json()
  inject('env', result)
  // eslint-disable-next-line no-console
  console.log('env injected', result)
  return result
}
Then the idea was to use its data inside nuxt.config.js to inject into publicRuntimeConfig:
import env from './plugins/env.js'

publicRuntimeConfig: {
  test: env,
},
Then in the browser console I check it:
this.$nuxt.$config
It shows me something else instead of the value, though this.$nuxt.$env shows the correct values.
What's wrong?
UPDATE 1
Tried Tony's suggestion:
// nuxt.config.js
import axios from 'axios'

export default async () => {
  const resp = await axios.get('/config.json')
  const config = resp.data
  return {
    publicRuntimeConfig: {
      config
    }
  }
}
It cannot fetch config.json, but if I point it to an external resource such as "https://api.openbrewerydb.org/breweries", it does work.
The intention of this question is to have a config.json in which a user could simply change variable values (in the compiled code) and change endpoints without a re-build process.
In nuxt.config.js, your env variable is a JavaScript module whose default export is the function intended to be run automatically by Nuxt in a plugin's context. Importing the plugin script does not execute that function. Even if you manually ran that function, it wouldn't make sense to use an injected prop as a runtime config, because the data is already available as an injected prop.
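In other words, importing the plugin file only hands you that function itself:

// what `import env from './plugins/env.js'` actually yields
import env from './plugins/env.js'
console.log(typeof env) // 'function' — the plugin body, not the injected data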
If you just want to expose config.json as a runtime config instead of an injected prop, move the code from the plugin into an async configuration:
// nuxt.config.js
export default async () => {
  const resp = await fetch('/config.json')
  const config = await resp.json()
  return {
    publicRuntimeConfig: {
      keycloak: config
    }
  }
}
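Note that nuxt.config.js runs in Node, where a relative URL like '/config.json' has no origin to resolve against; that is why the external URL worked while the local one failed. If the file ships with the project, reading it from disk avoids fetch entirely. A minimal sketch, assuming config.json sits in the project's static/ directory:

// nuxt.config.js — read the config from disk instead of fetching it
import { promises as fs } from 'fs'

export default async () => {
  const raw = await fs.readFile('./static/config.json', 'utf-8')
  const config = JSON.parse(raw)
  return {
    publicRuntimeConfig: {
      keycloak: config
    }
  }
}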

How to import a CSV file in ReactJs?

I am trying to import a CSV file that is in a folder called data on the same level as this function. I've tried to incorporate a solution I found on here, but no luck, and I don't know what I need to modify.
getData.jsx
import React, { useState, useEffect } from 'react';
import Papa from 'papaparse';

export default function GetData(artist) {
  const data = Papa.parse(fetchCsv);
  console.log(data);
  return data;
}

async function fetchCsv() {
  const response = await fetch('data/artist_data.csv');
  const reader = response.body.getReader();
  const result = await reader.read();
  const decoder = new TextDecoder('utf-8');
  const csv = decoder.decode(result.value);
  return csv;
}
A few problems I see here.
When you do fetch('data/mycsv.csv') you are essentially making a request to http://localhost:3000/data/mycsv.csv. Check the network tab and you will see the response returned is your HTML; React first loads your root page and then checks further for routes.
There are some coding errors: you haven't called the fetchCsv function inside GetData, and you need to await it.
Solution:
Move the data folder containing your CSV file into the public folder and correct your code:
import React from 'react';
import Papa from 'papaparse';

async function GetData(artist) {
  const data = Papa.parse(await fetchCsv());
  console.log(data);
  return data;
}

async function fetchCsv() {
  const response = await fetch('data/mycsv.csv');
  const reader = response.body.getReader();
  const result = await reader.read();
  const decoder = new TextDecoder('utf-8');
  const csv = decoder.decode(result.value); // decode() is synchronous, no await needed
  console.log('csv', csv);
  return csv;
}
I have tested the above code locally and it works fine.
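One caveat with the stream-reader approach: reader.read() returns only the first chunk of the response body, so a large CSV can come back truncated. response.text() consumes the whole body and is simpler; a sketch with the same assumed path:

async function fetchCsv() {
  const response = await fetch('data/mycsv.csv');
  return response.text(); // reads the entire body, not just the first chunk
}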

firebase.storage() is not a function in jest test cases

I am using Jest to test my firebase functions. This is all in the browser, so I don't have any conflicts with firebase on the server side. When I use firebase.auth() or firebase.database() everything works fine. When I try to use firebase.storage() my tests fail.
Here is my firebase import and initialization:
import firebase from 'firebase';
import config from '../config';
export const firebaseApp = firebase.initializeApp(config.FIREBASE_CONFIG);
export const firebaseAuth = firebaseApp.auth();
export const firebaseDb = firebaseApp.database();
I have an imageUtils file that has an upload function in it:
import firebase from 'firebase'; // needed for firebase.storage.TaskEvent below
import { firebaseApp } from './firebase';

export const uploadImage = (firebaseStoragePath, imageURL) => {
  return new Promise((resolve, reject) => {
    // reject if there is no imagePath provided
    if (!firebaseStoragePath) reject('No image path was provided. Cannot upload the file.');
    // reject if there is no imageURL provided
    if (!imageURL) reject('No image url was provided. Cannot upload the file');
    // create the reference
    const imageRef = firebaseApp.storage().ref().child(firebaseStoragePath);
    let uploadTask;
    // check if this is a dataURL
    if (isDataURL(imageURL)) {
      // the image is a base64 image string
      // create the upload task
      uploadTask = imageRef.putString(imageURL);
    } else {
      // the image is a file
      // create the upload task
      uploadTask = imageRef.put(imageURL);
    }
    // monitor the upload process for state changes
    const unsub = uploadTask.on(firebase.storage.TaskEvent.STATE_CHANGED,
      (snapshot) => {
        // this is where we can check on progress
      }, (error) => {
        reject(error.serverResponse);
        unsub();
      }, () => {
        // success function
        resolve(uploadTask.snapshot.downloadURL);
        unsub();
      });
  });
};
And I am trying to create a test case for that function, and every time it fails with:
TypeError: _firebase3.firebaseApp.storage is not a function
When I run the app normally everything works fine, and I never get errors about storage() being undefined or not a function. It is only when I run a test case.
I put a console.dir(firebaseApp); line in the firebase import, and it comes back with both auth() and database(), but no storage. How can I get storage to import/initialize/exist properly?
Add the following import:
import "firebase/storage";
It looks like this was fixed in a recent update to the firebase javascript package.
I had the same problem. Another possible solution:
import * as firebase from "firebase";
import "firebase/app";
import "firebase/storage";

How to use nock to record request and responses to files and use it to playback in mocha acceptance test?

I inherited a typescript@2 project that has no tests in place.
It's basically a CLI task runner, and a task requests an external API multiple times in order to create a file. As a first failsafe, I want to set up acceptance tests.
Therefore, I want to mock the calls to the external API and fetch the responses from a local file. How do I achieve that?
I've looked into nock, as it appears to provide this functionality, yet how do I use it?
(I don't provide an example, as I intend to answer my question myself, having just recently been through the entire ordeal.)
I refactored my application so that all the calls to the external API happen when a Task object executes its execute method. Such a task implements the interface ITask:
import {ReadStream} from 'fs';

export interface ITask {
  execute(): Promise<ReadStream>;
}
This allowed me to wrap a Task inside either a recorder or a playback decorator. (I also no longer let execute create a file; it returns the Promise of a Stream. In my normal workflow I would dump that stream to the file system, or upload it wherever I wanted.)
RecordDecorator:
import {writeFile} from 'fs';
import {dirname} from 'path';
import {ITask} from './ITask';
import nock = require('nock');
import mkdirp = require('mkdirp');
import {ReadStream} from 'fs';

export class TaskMockRecorder implements ITask {
  constructor(private task: ITask, private pathToFile: string) {
  }

  public async execute(): Promise<ReadStream> {
    this.setupNock();
    const stream = await this.task.execute();
    this.writeRecordFile();
    return Promise.resolve(stream);
  }

  private writeRecordFile() {
    const nockCallObjects = nock.recorder.play();
    // create the containing directory, then dump the recorded calls as JSON
    mkdirp(dirname(this.pathToFile), async () => {
      writeFile(`${this.pathToFile}`, JSON.stringify(nockCallObjects, null, 4));
    });
  }

  private setupNock() {
    nock.recorder.rec({
      dont_print: true,
      enable_reqheaders_recording: true,
      output_objects: true,
    });
  }
}
PlayBackDecorator:
import {ITask} from './ITask';
import {ReadStream} from 'fs';
import nock = require('nock');

export class TaskMockPlaybackDecorator implements ITask {
  constructor(private task: ITask, private pathToFile: string) {
  }

  public async execute(): Promise<ReadStream> {
    // load the recorded request/response pairs so nock intercepts the calls
    nock.load(this.pathToFile);
    return this.task.execute();
  }
}
Decorating the task:
I furthermore introduced the custom type MockMode:
export type MockMode = 'recording' | 'playback' | 'none';
which I can then inject into my appRun function:
export async function appRun(config: IConfig, mockMode: MockMode): Promise<ReadStream> {
  let task: ITask;
  task = new MyTask(config);
  const pathToFile = `tapes/${config.taskName}/tape.json`;
  switch (mockMode) {
    case 'playback':
      console.warn('playback mode!');
      task = new TaskMockPlaybackDecorator(task, pathToFile);
      break;
    case 'recording':
      console.warn('recording mode!');
      task = new TaskMockRecorder(task, pathToFile);
      break;
    default:
      console.log('normal mode');
  }
  const csvStream = await task.execute();
  return Promise.resolve(csvStream);
}
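For completeness, a sketch of how mockMode might be passed in from the command line (the argument parsing here is an assumption, not part of the original code; config is assumed to be built elsewhere):

// e.g. `node dist/main.js recording`
const arg = process.argv[2];
const mockMode: MockMode = arg === 'recording' || arg === 'playback' ? arg : 'none';
appRun(config, mockMode).then((stream) => {
  // dump the resulting stream to the file system as in the normal workflow
});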
Implementing the acceptance test:
I now had to add reference files and set up the mocha test that compares the stream generated by a playback run with the reference file:
import nock = require('nock');
import {appRun} from '../../src/core/task/taskRunner';
import {createReadStream} from 'fs';
import {brands} from '../../src/config/BrandConfig';
const streamEqual = require('stream-equal');

describe('myTask', () => {
  const myConfig = { /* myConfig */ };
  const referencePath = `references/${myConfig.taskName}.csv`;
  it(`generates csv that matches ${referencePath}`, async () => {
    nock.load(`tapes/${myConfig.taskName}/tape.json`);
    return new Promise(async (resolve, reject) => {
      const actual = await appRun(myConfig, 'playback');
      const expected = createReadStream(referencePath);
      streamEqual(actual, expected, (err: any, isEqual: boolean) => {
        if (err) {
          reject(err);
          return;
        }
        if (isEqual) {
          resolve('equals');
          return;
        }
        reject('not equals');
      });
    });
  });
});
Depending on the size of the taped JSON requests/responses, one might need to increase the test timeout, as the default is 2 seconds and these kinds of tests can run more slowly:
mocha --recursive dist/tests -t 10000
This approach also makes it easy to update the tapes: one can just pass the mockMode parameter as an argument and it will rewrite tape.json.
The downside is that tape.json can get huge depending on the amount of traffic, yet this was intentional: as a first step I wanted to be sure that my application behaves the same on any change to its codebase.
