Read large zip files in an Angular Electron app - javascript

In my use case I need to read large zip files in an Angular Electron app.
So far I have used JSZip, but it has a zip file size limit of 1 GB (32-bit version) or 2 GB (64-bit).
I cannot unzip the file when the archive is larger than that.
Are there other options? Would WebAssembly be a possibility, or would I run into the same limitations there?

You'll need to process the zip file in chunks instead of reading it all into memory.
One library that claims to do this is https://www.npmjs.com/package/node-stream-zip
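For illustration, here is a minimal sketch of streaming extraction with node-stream-zip's async API; the archive path and output directory are placeholders, and in an Electron app this would run in the main process (or another Node-enabled context) rather than in the Angular renderer.

```js
// Sketch: extract a large archive without loading it into memory at once.
// Assumes node-stream-zip is installed; 'large-archive.zip' and './extracted' are placeholders.
const StreamZip = require('node-stream-zip');

async function extractLargeZip(zipPath, outDir) {
  const zip = new StreamZip.async({ file: zipPath });
  try {
    // Passing null extracts every entry into outDir, streaming entry by entry.
    const count = await zip.extract(null, outDir);
    console.log(`Extracted ${count} entries`);
  } finally {
    await zip.close();
  }
}

extractLargeZip('large-archive.zip', './extracted').catch(console.error);
```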

Related

Does the React production build automatically compress video and image files?

I have created a React application which is currently deployed with the production build:
yarn run build
serve -S build
I know it compresses the .js and .scss files and creates a build folder, which is then served.
The question is: does it compress the image and video files as well? If not, how can I do that? The project I am working on has a lot of image and video files, which hurts performance and makes the page load very slowly.
Kindly help with this...
React doesn't do any of that; it is just a framework for development.
It is the web server's responsibility to handle it, i.e. serve in your case.
Even then, it is common practice for web servers not to apply additional compression to assets such as fonts, images, videos, and audio, because they are already compressed in the first place.
Take images as an example: common formats like JPG, PNG, and WEBP are already compressed. Unless you are serving BMP or RAW, which you shouldn't, there is no point in the web server applying any compression to them.
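To make that division of responsibility concrete, here is a hedged sketch of serving the build with Express and the compression middleware, gzip-compressing text assets while explicitly skipping media types. Both packages are assumed to be installed; note that compression's default filter already skips most non-compressible types, so the custom filter is only illustrative.

```js
// Sketch: serve the production build and gzip only text-based assets.
// Assumes express and compression are installed; port and paths are placeholders.
const express = require('express');
const compression = require('compression');
const path = require('path');

const app = express();

app.use(compression({
  filter: (req, res) => {
    const type = String(res.getHeader('Content-Type') || '');
    // Skip media whose formats are already compressed (JPG, WEBP, MP4, etc.).
    if (/^(image|video|audio)\//.test(type)) return false;
    return compression.filter(req, res); // fall back to the default filter
  },
}));

app.use(express.static(path.join(__dirname, 'build')));

app.listen(3000, () => console.log('Serving build/ on http://localhost:3000'));
```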

Is it possible to compress multiple files into a single zip and upload from Angular 5+?

I want to upload multiple files (around 30) from an Angular 5+ frontend. I need them combined so they are available for download as a single zip file.
Is frontend zipping and uploading recommended in Angular? If not, what is the recommended way? (The server backend is Java.)
Thanks in advance.
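For reference, here is a minimal sketch of client-side zipping with JSZip followed by a multipart upload; the endpoint URL and field name are placeholders, and for very large uploads a server-side (Java) zip is usually the more robust choice.

```js
// Sketch: zip File objects from an <input type="file" multiple> in the browser
// and upload the result. Assumes jszip is installed; '/api/upload' is a placeholder.
import JSZip from 'jszip';

async function zipAndUpload(files) {
  const zip = new JSZip();
  for (const file of files) {
    zip.file(file.name, file); // JSZip accepts Blob/File contents directly
  }

  // 'DEFLATE' compresses entries; use 'STORE' if the files are already compressed.
  const archive = await zip.generateAsync({ type: 'blob', compression: 'DEFLATE' });

  const formData = new FormData();
  formData.append('archive', archive, 'upload.zip');

  const response = await fetch('/api/upload', { method: 'POST', body: formData });
  if (!response.ok) throw new Error(`Upload failed with status ${response.status}`);
}
```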

Babel and large files

I have one big JavaScript file and I want to use Babel on it. But the file has around 100k lines (~2.8 MB) and I get a message about a 500 KB limit.
I'm using babel-cli with the 'env' preset on Windows (installed on the machine, not the web version).
Is there a way to compile such a big file with Babel?
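If the message is Babel's code-generator note about files larger than 500 KB (a warning rather than a hard failure), one hedged option is to set the compact option explicitly. A sketch using the Babel 6 programmatic API, with placeholder file names:

```js
// Sketch: compile one large file via babel-core instead of babel-cli.
// Assumes babel-core and babel-preset-env are installed; paths are placeholders.
const babel = require('babel-core');
const fs = require('fs');

babel.transformFile('big-input.js', {
  presets: ['env'],
  compact: false, // set explicitly so the 'auto' compaction note for files over 500 KB is not triggered
}, (err, result) => {
  if (err) throw err;
  fs.writeFileSync('big-output.js', result.code);
});
```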

Reading a file's details in nodejs, regardless of file type

I'm currently working on an NW.js (formerly node-webkit) application for organizing all files in a directory, regardless of type. The end goal is to place all files in alphabetical folders, based on a number of file properties.
An example would be a folder that contains three files, say an MP3, a WAV file, and an .odt file. I need to read some arbitrary metadata off each of these files so I can make a "best guess" as to where the file should be organized.
I found a few npm packages for reading metadata from individual types of files (https://github.com/gomfunkel/node-exif for JPEGs, https://github.com/43081j/id3 for MP3 tags), but nothing prebuilt for this particular usage.
If I were to write my own, is there anything built into Node.js's fs module that would help?
There is a binding to libmagic (what the file command uses) for Node called mmmagic. With it you can scan files and obtain a "best guess" MIME type for each.
From there it's a matter of comparing the detected MIME type against a list of MIME types that are valid for each of your subdirectories.
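As an illustration, here is a short sketch of that approach with mmmagic; the source directory and the MIME-to-folder map are hypothetical.

```js
// Sketch: detect each file's MIME type with mmmagic and pick a destination folder.
// Assumes mmmagic is installed; './unsorted' and the bucket map are placeholders.
const fs = require('fs');
const path = require('path');
const mmm = require('mmmagic');

const magic = new mmm.Magic(mmm.MAGIC_MIME_TYPE);

// Hypothetical mapping from detected MIME type to a destination subdirectory.
const buckets = {
  'audio/mpeg': 'audio',
  'audio/x-wav': 'audio',
  'application/vnd.oasis.opendocument.text': 'documents',
};

fs.readdirSync('./unsorted').forEach((name) => {
  const file = path.join('./unsorted', name);
  magic.detectFile(file, (err, mimeType) => {
    if (err) return console.error(err);
    const folder = buckets[mimeType] || 'misc';
    console.log(`${name} -> ${folder}/ (${mimeType})`);
    // fs.renameSync(file, path.join(folder, name)); // would actually move the file
  });
});
```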

JavaScript build system using in-memory files?

I'm currently working with Ember's build tool, Ember CLI, so there are a lot of JavaScript source files. But this is a more broadly applicable question.
After a source file changes, there's some time spent building and doing file I/O before tests can run or a browser refresh can happen.
Could the file reading be minimized and the file-writing part skipped during development?
So the workflow would be:
Start server
Process each file (e.g. CoffeeScript -> JavaScript -> ES6 transpile -> lint).
Store each processed file output in-memory.
When a file is changed, process that one file and swap it out in memory. (No file writing.)
Concatenate the in-memory files and serve directly to the browser. (Files that didn't change don't need to be read in order to concatenate into a single JS source file.)
Would this be noticeably faster than having all of the file I/O?
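As a rough illustration of the workflow described above, here is a sketch using chokidar to watch sources, a plain Map as the in-memory store, and Express to serve the concatenated result; transform() is a placeholder for the real CoffeeScript/transpile/lint pipeline.

```js
// Sketch: keep processed output in memory and rebuild only files that change.
// Assumes chokidar and express are installed; globs, port, and transform() are placeholders.
const chokidar = require('chokidar');
const express = require('express');
const fs = require('fs');

const cache = new Map(); // filename -> processed source, kept entirely in memory

function transform(source) {
  return source; // placeholder: compile CoffeeScript, transpile ES6, lint, etc.
}

function processFile(file) {
  cache.set(file, transform(fs.readFileSync(file, 'utf8')));
}

chokidar.watch('src/**/*.js')
  .on('add', processFile)      // initial scan fills the cache
  .on('change', processFile)   // only the changed file is reprocessed
  .on('unlink', (file) => cache.delete(file));

const app = express();
app.get('/app.js', (req, res) => {
  // Concatenate straight from memory; unchanged files are never re-read from disk.
  res.type('application/javascript').send([...cache.values()].join('\n'));
});
app.listen(4200, () => console.log('Dev server on http://localhost:4200'));
```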
