I'm using multiparty to upload a file; I'm new to Node.js and streaming. My question is: is it right to stream the file from the file.path returned by form.parse(), the way I do in my attempted code? I mean, this is an absolute path, and it obviously works on localhost because it's an absolute path on my current server, but will it still work when a user uploads a file from their own computer?
form.parse(req, function (err, fields, files) {
    // files.file[0].path is the temp file multiparty wrote on the server
    var rs = fs.createReadStream(files.file[0].path);
    var fileData = '';
    rs.on('readable', function () {
        var chunk;
        while (null !== (chunk = rs.read())) {
            fileData += chunk;
        }
    });
    rs.on('end', function () {
        console.log('importedData', fileData);
    });
});
Thanks, please let me know if you need more clarification!
That looks correct. By default, uploaded files are put in a temporary folder; if you're using Linux, this will likely be /tmp. Your users' files will end up in the same place when they upload through your front-end: the browser streams the file's contents to your server, multiparty writes them to that temp file, and the path it returns always refers to the server's own filesystem, never the user's.
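For illustration, here's a minimal sketch that pipes the temp file to a destination instead of buffering it in memory (the destination path is only a placeholder, not from the original code):
form.parse(req, function (err, fields, files) {
    var rs = fs.createReadStream(files.file[0].path);
    // piping keeps memory usage flat even for large uploads
    var ws = fs.createWriteStream('./uploads/copy.csv');
    rs.pipe(ws);
    ws.on('finish', function () {
        console.log('upload stored on the server');
    });
});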
I have an .app file which is structured like a ZIP file.
Now I would like to extract all of the files in the .app file.
I tried converting the .app into a ZIP file in code (just copying and resaving it as a .zip), but then it's an "SFX ZIP archive", which most of the unzippers in Node.js can't read.
For example, AdmZip fails with this error message:
rejected promise not handled within 1 second: Error: Invalid CEN header (bad signature)
var AdmZip = require('adm-zip');
var admZip2 = new AdmZip("C:\\temp\\Test\\Microsoft_System.zip");
admZip2.extractAllTo("C:\\temp\\Test\\System", true)
So now I don't know how to deal with it, because I need to extract the files, with all subfolders and subfiles, to a specific folder on the computer.
How would you do this?
You can download the .app file here:
https://drive.google.com/file/d/1i7v_SsRwJdykhxu_rJzRCAOmam5dAt-9/view?usp=sharing
Thanks for your help :)
EDIT:
I'm already using JSZip to resave the ZIP file as a normal ZIP archive, but this is an extra step which costs some time.
Maybe someone knows how to extract files to a path with JSZip :)
EDIT 2:
Just for your information: it's a VS Code extension project.
EDIT 3:
I found something which works for me.
For my solution I used workers (because they run in parallel):
// main file
var JSZip = require('jszip');

var zip = new JSZip();
zip.loadAsync(data).then(async function (contents) {
    // drop the metadata entries that shouldn't be extracted
    zip.remove('SymbolReference.json');
    zip.remove('[Content_Types].xml');
    zip.remove('MediaIdListing.xml');
    zip.remove('navigation.xml');
    zip.remove('NavxManifest.xml');
    zip.remove('Translations');
    zip.remove('layout');
    zip.remove('ProfileSymbolReferences');
    zip.remove('addin');
    zip.remove('logo');
    //workerData.files = Object.keys(contents.files)
    //so you loop through contents.files; for each file you get the dirname,
    //check whether the dir exists (create it if not),
    //and then create the file with its content.
    //You'll have to adapt some of this to your own code, because this whole
    //snippet is stitched together from 2 files; hope it helps someone :)
});

// worker file
var fs = require('fs');
var path = require('path');
var { workerData, parentPort } = require('worker_threads');
var files = [];

workerData.files.slice(workerData.startIndex, workerData.endIndex).forEach(function (filename, index) {
    workerData.zip.file(filename).async('nodebuffer').then(async function (content) {
        var destPath = path.join(workerData.baseAppFolderApp, filename);
        var dirname = path.dirname(destPath);
        // Create the directory if it doesn't exist (createOnNotExist is a helper defined elsewhere)
        await createOnNotExist(dirname);
        files[index] = false;
        fs.writeFile(destPath, content, async function (err) {
            // This is code for my logic: mark this file done and notify the
            // parent once no entry is still pending
            files[index] = true;
            if (!files.includes(false)) {
                parentPort.postMessage(workerData);
            };
        });
    });
});
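For context, here's a rough sketch of how such workers might be spawned (the worker file name, worker count, and variable names are my assumptions, not part of the original). Note that workerData is structured-cloned when the worker starts, so live objects like a JSZip instance don't transfer cleanly; passing the raw zip buffer and calling loadAsync again inside each worker is the safer route:
const { Worker } = require('worker_threads');

const filenames = Object.keys(contents.files);
const workerCount = 4; // assumption: four parallel workers
const sliceSize = Math.ceil(filenames.length / workerCount);

for (let i = 0; i < filenames.length; i += sliceSize) {
    const worker = new Worker('./extract-worker.js', { // hypothetical file name
        workerData: {
            data,                        // the raw zip buffer read from disk
            files: filenames,
            startIndex: i,
            endIndex: i + sliceSize,
            baseAppFolderApp             // destination root folder
        }
    });
    worker.on('message', function () {
        console.log('worker finished its slice');
    });
}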
JSZip is "a library for creating, reading and editing .zip files with JavaScript, with a lovely and simple API."
Link: https://www.npmjs.com/package/jszip
Example (extraction):
var JSZip = require('jszip');
var fs = require('fs');
var path = require('path');

fs.readFile(filePath, function (err, data) {
  if (!err) {
    var zip = new JSZip();
    zip.loadAsync(data).then(function (contents) {
      Object.keys(contents.files).forEach(function (filename) {
        // skip directory entries; they have no content to write
        if (contents.files[filename].dir) { return; }
        zip.file(filename).async('nodebuffer').then(function (content) {
          // destFolder is the extraction target directory
          var dest = path.join(destFolder, filename);
          fs.writeFileSync(dest, content);
        });
      });
    });
  }
});
The file is a valid ZIP file appended to some sort of executable.
The easiest way is to extract it by calling an unzipper such as unzipada.exe - free, open-source software available here. Pre-built Windows executables are available in the Files section.
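If you'd rather stay in Node, one heuristic that sometimes works for this kind of file (an assumption on my part, not guaranteed for every SFX layout) is to strip everything before the first ZIP local-file-header signature and hand the remainder to JSZip:
var fs = require('fs');
var JSZip = require('jszip');

var raw = fs.readFileSync('C:\\temp\\Test\\Microsoft_System.app');
// "PK\x03\x04" marks the first local file header of the embedded archive
var sig = Buffer.from([0x50, 0x4b, 0x03, 0x04]);
var start = raw.indexOf(sig);
if (start === -1) throw new Error('no ZIP signature found');

new JSZip().loadAsync(raw.slice(start)).then(function (zip) {
    console.log(Object.keys(zip.files)); // list the entries
});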
My app is built with MEAN and I use Docker too. The purpose of my app is to create and download a CSV file. I have already created the file, compressed it, and placed it in a temp folder (the file will be removed after the download). This part is on the Node.js server side and works without problems.
I have already tried several things like res.download, which is supposed to download the file directly in the browser, but nothing happens. I also tried to use a Blob on the AngularJS side, but it doesn't work.
The getData function creates and compresses the file (it exists; I can reach it directly when I look at where the app is saved).
exports.getData = function getData(req, res, next) {
    var listRequest = req.body.params.listURL;
    var stringTags = req.body.params.tagString;
    //The name of the compressed CSV file
    var nameFile = req.body.params.fileName;
    var query = url.parse(req.url, true).query;
    //The function which creates the file
    ApollineData.getData(listRequest, stringTags, nameFile)
        .then(function (response) {
            var filePath = '/opt/mean.js/modules/apolline/client/CSVDownload/' + response;
            // res.download streams the file itself, so no manual read stream is needed
            res.download(filePath, response);
        })
        .catch(function (response) {
            console.log(response);
        });
};
My main problem is downloading this file directly in the browser without holding it in a variable, because it could be huge (several GB). I want to download it and then delete it.
There is nothing wrong with res.download.
Probably the reason res.download doesn't work for you is that you are using AJAX to fetch the resource. Do a regular navigation instead. Or, if the endpoint requires some POST data and another method: create a form and submit it.
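For illustration, a sketch of both suggestions (route and file names are placeholders), plus the cleanup the asker wanted, using the completion callback that Express's res.download accepts:
// client side: plain navigation, so the browser streams the download itself
window.location.href = '/api/getData?fileName=export.zip';

// client side, if the endpoint needs POST data: build and submit a form
var form = document.createElement('form');
form.method = 'POST';
form.action = '/api/getData';
document.body.appendChild(form);
form.submit();

// server side: delete the temp file once the transfer has finished
res.download(filePath, fileName, function (err) {
    if (err) { console.log(err); }
    fs.unlink(filePath, function () {});
});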
I just want to be able to upload and download binary files to Strato Hidrive using node.js with webdav.
I tested uploading a jpg-image with the following code:
const createClient = require("webdav");
const fs = require("fs");
let client = createClient(
"https://myusername.webdav.hidrive.strato.com",
"myusername",
"mypassword"
);
let data = fs.readFileSync("./localfolder/logo.jpg", {encoding: "binary"});
client.putFileContents("/myfolder/logo.jpg", data, { "format": "binary", });
However, when I check the uploaded file by downloading it through their web client, it can't be opened and seems to be corrupted.
Is there a solution to this? Either by changing the code, or by suggesting a free WebDAV space (other than Strato HiDrive) where it might work?
Thanks a lot!
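A likely cause of the corruption (my assumption, not a confirmed fix): fs.readFileSync with encoding: "binary" returns a string, and re-encoding that string as the request body can mangle the bytes. Passing a raw Buffer avoids the round-trip:
const createClient = require("webdav");
const fs = require("fs");

let client = createClient(
    "https://myusername.webdav.hidrive.strato.com",
    "myusername",
    "mypassword"
);

// no encoding option, so readFileSync returns a Buffer, not a string
let data = fs.readFileSync("./localfolder/logo.jpg");
client.putFileContents("/myfolder/logo.jpg", data);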
I just started working with the Microsoft Azure Storage SDK for Node.js (https://github.com/Azure/azure-storage-node) and have already successfully uploaded my first PDF files to the cloud storage.
However, now I've started looking at the documentation in order to download my files to a Node buffer (so I don't have to use fs.createWriteStream), but the documentation gives no example of how this works. The only thing it says is "There are also several ways to download files. For example, getFileToStream downloads the file to a stream:", and then it shows a single example, which uses fs.createWriteStream, which I don't want to use.
I was also not able to find anything on Google that really helped me, so I was wondering if anybody has experience with this and could share a code sample?
The getFileToStream function needs a writable stream as a parameter. If you want all the data written to a Buffer instead of a file, you just need to create a custom writable stream.
const { Writable } = require('stream');

let bufferArray = [];
const myWriteStream = new Writable({
  write(chunk, encoding, callback) {
    // collect each chunk (a Buffer) rather than spreading its bytes,
    // which avoids huge argument lists on large chunks
    bufferArray.push(chunk);
    callback();
  }
});
myWriteStream.on('finish', function () {
  // all the data is stored inside this dataBuffer
  let dataBuffer = Buffer.concat(bufferArray);
})
Then pass myWriteStream to the getFileToStream function:
fileService.getFileToStream('taskshare', 'taskdirectory', 'taskfile', myWriteStream, function(error, result, response) {
if (!error) {
// file retrieved
}
});
In a Node.js project I have to transfer a file from a computer to a server. I can send the file if it is small, i.e. 2 MB, but I am unable to send it if it is larger than that. Here is my code:
var url1 = 'http://beta.xxxxx.com/Xbox/xxxx/index.php/info/xxxxx';
var csvenriched = APPDATApath+'/xxxx/users/'+userId+'/programs/'+programName+'/'+foldername+'/Data_'+tmpstmp+'.csv';
var req = request.post(url1, function (err, resp, body1) {
    if (err) {
        // resp may be undefined on error, so log only the error itself
        console.log('REQUEST ERROR:', err);
        res.send(err); return false;
    } else {
        console.log('REQUEST RESULTS:', resp.statusCode, body1);
        res.send(body1); return false;
    }
});
var form = req.form();
form.append('file', fs.createReadStream(csvenriched));
On the PHP side, where I receive the data, the code is as follows:
public function actionSavetestvideo() {
if (!empty($_FILES)) {
$path = Yii::$app->basePath.'/testfiles/'.$_FILES['file']['name'];
if (move_uploaded_file($_FILES['file']['tmp_name'], $path)) {
return 'uploaded';
} else {
return 'error'.$_FILES["file"]["error"];
}
} else {
return $_FILES;
}
}
I know there are answers on the internet for the case where a file has to be uploaded to a Node.js server, but in my case I have to transfer the file from Node.js to a PHP server using the request module.
It works fine if the file is small, but not if the CSV file is large.
One thing I have noticed is that with a large file, the if (!empty($_FILES)) {} check on the PHP side fails. Still, I don't think the issue is in my PHP code. Please suggest what I should modify.
The problem is on the PHP side. The default upload_max_filesize configuration value for PHP is 2MB. You will need to increase that value to accept larger file uploads.
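For reference, the relevant php.ini settings (the 50M values are only an example; post_max_size must be at least as large as upload_max_filesize, since the whole multipart body counts against it):
; php.ini
upload_max_filesize = 50M
post_max_size = 50M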
I agree with mscdex that the problem is on the PHP side. After increasing upload_max_filesize from 2MB to 50MB it still didn't work, until I restarted the Apache server.
After restarting Apache, it works perfectly.