Client-side compression with HTML5 and JavaScript

I'm working on a web application where we allow users to upload files to our server. I'm trying to do client-side compression before uploading the files to the server. What would be the best way to achieve this using HTML5 and JavaScript?
Thanks.

The common mechanism for this is to use FileReader together with a JavaScript client-side compression library (e.g. compressjs).
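A minimal sketch of that approach, with pako swapped in for the compression library (compressjs exposes a different API, so treat the gzip call as illustrative); `file` is assumed to be a File taken from an <input type="file">:

const reader = new FileReader();
reader.onload = () => {
  // pako.gzip takes a Uint8Array and returns the gzipped bytes.
  const gzipped = pako.gzip(new Uint8Array(reader.result));
  const blob = new Blob([gzipped], { type: 'application/gzip' });
  // ...upload `blob` instead of the original file.
};
reader.readAsArrayBuffer(file);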

In 2022 it's almost too simple, provided the browser supports CompressionStream, FormData and Response.
In the example below I use FormData to collect all the fields from the form.
Then I take the file's readable stream, pipe it through the compression stream, and use Response to read everything from the compressed stream and return it in a Blob.
async function compress(file, encoding = 'gzip') {
  try {
    return {
      data: await new Response(file.stream().pipeThrough(new CompressionStream(encoding)), {
        headers: {
          'Content-Type': file.type
        },
      }).blob(),
      encoding,
    };
  } catch (error) {
    // If error, return the file uncompressed
    console.error(error.message);
    return {
      data: file,
      encoding: null
    };
  }
}
theForm.addEventListener(
  'submit',
  (event) => event.preventDefault()
)

theForm.addEventListener(
  'input',
  async function (event) {
    // Collect all fields
    const fd = new FormData(theForm);
    // Get the file handle from the input element
    const file = fd.get('theFile');
    if (!file) return
    const encoding = fd.get('theEncoding');
    const compressed = await compress(file, encoding);
    theMessage.value = [
      'Compressed with', compressed.encoding,
      'Source file was', file.size, 'bytes',
      'and the compressed file', compressed.data.size,
      'saving', ((1 - compressed.data.size / file.size) * 100)
        .toFixed(0),
      '%.'
    ].join(' ')
  }
)
form>* {
  display: block;
  width: 100%;
}
<form id="theForm">
  <select name="theEncoding">
    <option>gzip</option>
    <option>deflate</option>
    <option>deflate-raw</option>
  </select>
  <input type="file" name="theFile" id="theFile">
</form>
<output id="theMessage"></output>
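One thing the snippet doesn't show is the actual upload. As a hedged sketch (the /upload endpoint and field names are assumptions, not part of the original answer), the compressed blob can be posted with fetch and FormData:

// Hypothetical upload helper; '/upload' and the field names are assumed.
async function uploadCompressed(file) {
  const { data, encoding } = await compress(file);
  const fd = new FormData();
  fd.append('file', data, file.name);
  // Tell the server how (or whether) the payload was compressed,
  // so it knows if it needs to decompress before storing.
  fd.append('encoding', encoding ?? 'identity');
  const res = await fetch('/upload', { method: 'POST', body: fd });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}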

Related

Laravel 9 and JavaScript: how to download a file returned from Storage::download()

DISCLAIMER: Before creating this question, I've checked here, here and here, and also checked the Laravel docs.
Context
Laravel 9 full-stack
No JS framework on front-end, which means I'm using vanilla JS
The folders on Storage are set up like this:
storage
  app
    public
      folder1
        folder1A
        folder1B
        folder1C
        etc
The files stored in each folder1X are in .pdf format and I don't know their names.
No folder is empty, and none contains invalid/corrupted files.
The problem
I have a FileController.php to download files that are inside a folder1X/ directory. The download method is as follows:
public function downloadFileFromStorage(Request $request): mixed
{
    $dirpath = $request->dirpath; // dirpath = public/folder1/folder1X.
    $files = Storage::allFiles($dirpath);

    return response()->download(storage_path('app\\' . $files[0]));
}
(Note: dirpath is sent in an axios request by the client and was fetched from the database on a previous request.)
My JavaScript client needs to enable the download of this file. The download is triggered by clicking a button, which calls downloadPDF(dirpath):
function downloadPDF(dirpath) {
  axios.post('/download-pdf-file', { dirpath })
    .then(
      success => {
        const url = success.data
        const a = document.createElement('a')
        a.download = 'file.pdf'
        a.href = url
        a.click()
      },
      error => {
        console.log(error.response)
      }
    )
}
But when I run this function, I get an about:blank#blocked error.
Attempts
Changed the <a> HTML DOM approach to a window.open(url) on the client;
Changed response() to Storage::download($files[0], 'file-name.pdf'), and with this I also tried using Blob on the client as follows:
success => {
  const blob = new Blob([success.data], { type: 'application/pdf' })
  const fileURL = URL.createObjectURL(blob)
  window.openURL(fileURL)
},
Also mixed Blob with the <a> HTML DOM approach;
Changed the storage_path argument to /app/public/ before concatenating $files[0].
UPDATE
Following tips from @BenGooding and @cengsemihsahin, I changed the files to the following:
JS
// FileDownload is imported via require() at the beginning of the code
function downloadPDF(dirpath) {
  axios({
    url: '/download-pdf-file',
    method: 'GET',
    responseType: 'blob',
    options: {
      body: { dirpath }
    }
  }).then(
    success => {
      FileDownload(success.data, 'nota-fiscal.pdf')
    }
  )
}
PHP:
public function downloadFileFromStorage(Request $request): mixed
{
    $dirpath = $request->dirpath; // dirpath = public/folder1/folder1X.
    $files = Storage::allFiles($dirpath);

    return Storage::download($files[0], 'filename.pdf');
}
and now it downloads a corrupted PDF that can't be opened.
Finally found the issue, and it was here:
axios({
  url: '/download-pdf-file',
  method: 'GET',
  responseType: 'blob',
  options: { // here
    body: { dirpath } // here
  }
})
Laravel's Request arrow operator (->) can't fetch a GET body sent through options (at least not in the $request->key fashion; see more about it here), which made me download a corrupted file - Laravel wasn't fetching any file because it didn't receive any path at all.
Here is the solution I came up with:
As I want to get a file from a route that doesn't change except for the 1X in folder1X, I process the obtained path and send the 1X as a GET query param:
let folderNumber = dirpath.split('/')
folderNumber = folderNumber[folderNumber.length - 1].replaceAll('/', '')

axios({
  url: `/download-pdf-file?folderNumber=${folderNumber}`,
  method: 'GET',
  responseType: 'blob'
})
This way I don't pass the whole path to back-end and it's possible to get folderNumber by using $request->query():
public function downloadFileFromStorage(Request $request): mixed
{
    $folderNumber = $request->query('folderNumber');
    $folderPath = '/public/folder1/folder' . $folderNumber . '/';
    $files = Storage::allFiles($folderPath);

    return Storage::download($files[0], 'file-name.pdf');
}
In a nutshell:
To download files, use GET requests;
To send arguments within GET requests, use query parameters and fetch them with $request->query('keyname') (or find out another way. Good luck!);
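For reference, axios can also build the query string for you via its params option, which avoids concatenating the URL by hand. A sketch against the same route, reusing the FileDownload helper from the update above:

axios.get('/download-pdf-file', {
  params: { folderNumber },  // serialized to ?folderNumber=...
  responseType: 'blob'
}).then(success => {
  FileDownload(success.data, 'file-name.pdf')
})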

ASP.NET Core FilePond load file from server

I have a project that uses FilePond to upload files, and I need it to load a file from the server.
I already followed the docs but it doesn't work. FilePond gives the error Error during load 400, and it doesn't even send the request to load the file from the server.
This is my JavaScript:
let pond = FilePond.create(value, {
  files: [
    {
      // the server file reference
      source: 'e958818e-92de-4953-960a-d8157467b766',
      // set type to local to indicate an already uploaded file
      options: {
        type: 'local'
      }
    }
  ]
});

FilePond.setOptions({
  labelFileProcessingError: (error) => {
    return error.body;
  },
  server: {
    headers: {
      '#tokenSet.HeaderName': '#tokenSet.RequestToken'
    },
    url: window.location.origin,
    process: (fieldName, file, metadata, load, error, progress, abort) => {
      // We ignore the metadata property and only send the file
      fieldName = "File";
      const formData = new FormData();
      formData.append(fieldName, file, file.name);

      const request = new XMLHttpRequest();
      request.open('POST', '/UploadFileTemp/Process');
      request.setRequestHeader('#tokenSet.HeaderName', '#tokenSet.RequestToken');

      request.upload.onprogress = (e) => {
        progress(e.lengthComputable, e.loaded, e.total);
      };

      request.onload = function () {
        if (request.status >= 200 && request.status < 300) {
          load(request.responseText);
        }
        else {
          let errorMessageFromServer = request.responseText;
          error('oh no');
        }
      };

      request.send(formData);
    },
    revert: "/UploadFileTemp/revert/",
    load: "/UploadFileTemp/load"
  }
})
This is my controller:
public async Task<IActionResult> Load(string p_fileId)
{
    // Code to get the file
    // Return the file
    Response.Headers.Add("Content-Disposition", cd.ToString());
    Response.Headers.Add("X-Content-Type-Options", "nosniff");
    return PhysicalFile(filePath, "text/plain");
}
NB: I already tested my controller via Postman and it works. I also checked the Content-Disposition header.
I'd advise setting all the options first and then setting the files property.
You're setting the files and then telling FilePond where to find them; it's probably already trying to load them but doesn't have an endpoint (yet).
Restructuring the code to look like this should do the trick.
let pond = FilePond.create(value, {
  server: {
    headers: {
      '#tokenSet.HeaderName': '#tokenSet.RequestToken',
    },
    url: window.location.origin,
    process: (fieldName, file, metadata, load, error, progress, abort) => {
      // your processing method
    },
    revert: '/UploadFileTemp/revert',
    load: '/UploadFileTemp/load',
  },
  files: [
    {
      // the server file reference
      source: 'e958818e-92de-4953-960a-d8157467b766',
      // set type to local to indicate an already uploaded file
      options: {
        type: 'local',
      },
    },
  ],
});
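If the string form of load still doesn't reach your endpoint the way the controller expects, FilePond also accepts a function for server.load, which gives you full control over the request. A hedged sketch (the ?id= query parameter name is an assumption; align it with whatever your Load action binds to):

load: (source, load, error, progress, abort, headers) => {
  // 'source' is the server file reference from the files array.
  const request = new XMLHttpRequest();
  // The 'id' parameter name is hypothetical; match it to the controller.
  request.open('GET', '/UploadFileTemp/load?id=' + encodeURIComponent(source));
  request.responseType = 'blob';
  request.onload = () => {
    if (request.status >= 200 && request.status < 300) {
      load(request.response); // hand the file to FilePond as a Blob
    } else {
      error('Could not load file');
    }
  };
  request.onerror = () => error('Network error');
  request.send();
  // Allow FilePond to cancel the request when the item is removed.
  return {
    abort: () => {
      request.abort();
      abort();
    }
  };
}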

Upload file Vue 3 and Django REST

I'm not sure whether I'm working with the request correctly: after upload, every file is 1 KB and I can't open it. How do I create the file correctly? If I save a file as .doc I can see:
------WebKitFormBoundaryt3UjlK5SVq8hgppA
Content-Disposition: form-data; name="file"
[object FileList]
------WebKitFormBoundaryt3UjlK5SVq8hgppA--
So here is my submit function in the JS file:
async submitFiles() {
  let formData = new FormData();
  formData.append('file', this.file);
  console.log(this.file)

  axios.put(`/api/v1/myapp/upload/${this.file[0].name}`,
    formData,
    {
      headers: {
        'Content-Disposition': 'attachment',
        'X-CSRFToken': await this.getCsrfToken(),
      },
    }
  ).then(function () {
    console.log('SUCCESS!!');
  })
  .catch(function () {
    console.log('FAILURE!!');
  });
},
And the handler for the file change in the form:
fileChanged(file) {
  this.file = file.target.files
},
And finally my view.py
class FileUploadView(APIView):
    parser_classes = [FileUploadParser]

    def put(self, request, filename, format=None):
        file_obj = request.data['file']
        handle_uploaded_file(file_obj)
        return Response({'received data': request.data})
Where
def handle_uploaded_file(f):
    with open('path/to/my/folder/' + str(f.name), 'wb+') as destination:
        for chunk in f.chunks():
            destination.write(chunk)
[object FileList]
Oh, you serialized the whole FileList.
Change to: formData.append('file', this.file[0]);
If this doesn't work you may need to read the file's content.
Edit: it should be enough, according to MDN:
The field's value. This can be a USVString or Blob (including subclasses such as File). If none of these are specified the value is converted to a string.
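Applied to the snippet above, only the append line needs to change:

// this.file holds a FileList (from e.target.files); append the File itself,
// otherwise FormData stringifies the list to "[object FileList]".
formData.append('file', this.file[0]);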

ReactJS: How to send multiple files to the server (Laravel)

I'm new to ReactJS, my backend is Laravel, and I have a problem inserting multiple files into the database. If I use only the single upload (inserting one file into the database), it works for me.
PROBLEM: inserting multiple files into the database.
GOAL: to insert multiple files into the database.
Here is what I have:
FORMDATA:
const formData = new FormData();
formData.append('myFile', this.state.image);
RESPONSE:
axios.post('/api/save_gallery_album_image', formData).then(response => {
  console.log(response);
}).catch(error => (error.response));
OnChange:
handleChangeImage(e) {
  this.setState({
    image: e.target.files[0]
  })
  // console.log(e.target.files);
}
JSX:
<label>Image</label>
<div className="custom-file">
  <input type="file"
    name="image"
    multiple
    onChange={this.handleChangeImage}
    className="custom-file-input form-control"
    accept="image/x-png,image/gif,image/jpeg"
    id="file_selected"/>
  <label className="custom-file-label" htmlFor="validatedCustomFile">Choose file...</label>
</div>
Server-side controller:
public function save_gallery_album_image(Request $request)
{
    $multiple_gallery_file_upload = $request->file('myFile');
    $titleValue = $request->get('titleValue');
    $pageValue = $request->get('pageValue');
    $now = new DateTime();

    if ($request->hasFile('myFile'))
    {
        foreach ($multiple_gallery_file_upload as $myfiles)
        {
            $uniqueid = uniqid();
            $original_name = $request->file('myFile')->getClientOriginalName();
            $size = $request->file('myFile')->getSize();
            $extension = $request->file('myFile')->getClientOriginalExtension();
            $name = $uniqueid . '.' . $extension;
            $path = $request->file('myFile')->storeAs('public', $name);

            DB::insert('INSERT INTO album_category (cid,image,created_at) VALUES (?,?,?) ', [
                $titleValue,
                $name,
                $now
            ]);
        }

        return response()->json('Input Saved');
    }
}
I was facing the same problem and fixed it like this; I hope it's helpful for you.
Put this code in the right place and check that uploading multiple images works. Thanks.
<form>
  <input name="product_image" type="file" multiple onChange={e => this.HandleProductImage(e)}/>
  <button type="submit" onClick={e => this.submitForm(e)}></button>
</form>

HandleProductImage = e => {
  this.setState({
    product_image: e.target.files
  });
};

submitForm = e => {
  const product = new FormData();
  if (this.state.product_image) {
    for (const file of this.state.product_image) {
      product.append("image", file);
    }
  }
  // then use your API to send the form data
}
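One hedged addition, assuming the Laravel controller from the question: PHP only collects repeated fields into an array when the field name uses bracket syntax, so to send all files in a single request the client should append under a bracketed key:

const product = new FormData();
for (const file of this.state.product_image) {
  // "myFile[]" makes PHP gather the parts into an array, so
  // $request->file('myFile') returns every file, not just the last one.
  product.append('myFile[]', file);
}
axios.post('/api/save_gallery_album_image', product)
  .then(response => console.log(response))
  .catch(error => error.response);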
I guess you should send a request multiple times, once per file, using a loop.
When you put an array of files into a multipart form this way, the form doesn't include all of your files, so you should send the upload requests separately.
After checking the comments, I added some sample code.
OnChange:
handleChangeImage(e) {
  // Get the file list (FileList has no .map, so convert it to an array)
  let files = Array.from(e.target.files);
  files.forEach((file) => {
    // Make a separate FormData per file and send a request for each
    let formData = new FormData();
    formData.append('myFile', file);
    axios.post('/api/save_gallery_album_image', formData)
      .then(response => {
        console.log(response);
      }).catch(error => (error.response));
  });
}
If you want to save multiple files in one request, I think you should also change your server-side code.
For now, your server probably saves just one file per request.

Is it possible to upload stream on amazon s3 from browser?

I want to capture a webcam video stream and stream it directly to S3 storage.
I've learned that you can upload via stream to s3:
https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
I've learned that you can upload via browser:
http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html#HTTPPOSTExamplesFileUpload
But I'm still lost on how to actually do it.
I need an example of someone uploading a getUserMedia stream to S3 like the above.
Buffers, binary data, multipart uploads, streams... this is all beyond my knowledge - stuff I wish I knew, but I don't even know where to start learning.
Currently, you cannot simply pass the media stream to any S3 method to do the multipart upload automatically.
But there is an event called dataavailable which produces a chunk of video at each given time interval. So we can subscribe to dataavailable and do the S3 multipart upload manually.
This approach brings some complications: say chunks of video are generated every second, but we don't know how long it takes to upload each chunk to S3. E.g. the upload can take 3 times longer due to the connection speed, so we can get stuck trying to make multiple PUT requests at the same time.
The potential solution is to upload the chunks one by one and not start uploading the next chunk until the previous one is uploaded.
Here is a snippet of how this can be handled using Rx.js and the AWS SDK. Please see my comments.
// Configure the AWS SDK. In this case, for simplicity, I'm using an access key and secret.
AWS.config.update({
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY",
    secretAccessKey: "YOUR_SECRET_KEY",
    region: "us-east-1"
  }
});

const s3 = new AWS.S3();
const BUCKET_NAME = "video-uploads-123";
let videoStream;

// We want to see what the camera is recording, so attach the stream to a video element.
navigator.mediaDevices
  .getUserMedia({
    audio: true,
    video: { width: 1280, height: 720 }
  })
  .then(stream => {
    console.log("Successfully received user media.");
    const $mirrorVideo = document.querySelector("video#mirror");
    $mirrorVideo.srcObject = stream;
    // Save the stream to create the MediaRecorder later.
    videoStream = stream;
  })
  .catch(error => console.error("navigator.getUserMedia error: ", error));

let mediaRecorder;

const $startButton = document.querySelector("button#start");
$startButton.onclick = () => {
  // Get the MediaRecorder instance.
  // I took the snippet from here: https://github.com/webrtc/samples/blob/gh-pages/src/content/getusermedia/record/js/main.js
  let options = { mimeType: "video/webm;codecs=vp9" };
  if (!MediaRecorder.isTypeSupported(options.mimeType)) {
    console.log(options.mimeType + " is not Supported");
    options = { mimeType: "video/webm;codecs=vp8" };
    if (!MediaRecorder.isTypeSupported(options.mimeType)) {
      console.log(options.mimeType + " is not Supported");
      options = { mimeType: "video/webm" };
      if (!MediaRecorder.isTypeSupported(options.mimeType)) {
        console.log(options.mimeType + " is not Supported");
        options = { mimeType: "" };
      }
    }
  }

  try {
    mediaRecorder = new MediaRecorder(videoStream, options);
  } catch (e) {
    console.error("Exception while creating MediaRecorder: " + e);
    return;
  }

  // Generate the file name to upload. For simplicity we're going to use the current date.
  const s3Key = `video-file-${new Date().toISOString()}.webm`;
  const params = {
    Bucket: BUCKET_NAME,
    Key: s3Key
  };
  let uploadId;

  // We are going to handle everything as a chain of Observable operators.
  Rx.Observable
    // First create the multipart upload and wait until it's created.
    .fromPromise(s3.createMultipartUpload(params).promise())
    .switchMap(data => {
      // Save the uploadId as we'll need it to complete the multipart upload.
      uploadId = data.UploadId;
      mediaRecorder.start(15000);
      // Then track all 'dataavailable' events. Each event brings a blob (binary data) with a part of the video.
      return Rx.Observable.fromEvent(mediaRecorder, "dataavailable");
    })
    // Track the 'dataavailable' events until the 'stop' event is fired.
    // MediaRecorder emits "stop" when it was stopped AND has emitted all "dataavailable" events,
    // so we are not losing data. See the docs here: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder/stop
    .takeUntil(Rx.Observable.fromEvent(mediaRecorder, "stop"))
    .map((event, index) => {
      // Show how much binary data we have recorded.
      const $bytesRecorded = document.querySelector("span#bytesRecorded");
      $bytesRecorded.textContent =
        parseInt($bytesRecorded.textContent) + event.data.size; // Use frameworks in prod. This is just an example.
      // Take the blob and its number and pass them down.
      return { blob: event.data, partNumber: index + 1 };
    })
    // This operator means the following: when you receive a blob - start uploading it.
    // Don't accept any other uploads until you finish uploading: http://reactivex.io/rxjs/class/es6/Observable.js~Observable.html#instance-method-concatMap
    .concatMap(({ blob, partNumber }) => {
      return (
        s3
          .uploadPart({
            Body: blob,
            Bucket: BUCKET_NAME,
            Key: s3Key,
            PartNumber: partNumber,
            UploadId: uploadId,
            ContentLength: blob.size
          })
          .promise()
          // Save the ETag as we'll need it to complete the multipart upload.
          .then(({ ETag }) => {
            // Show how many bytes we have uploaded.
            const $bytesUploaded = document.querySelector("span#bytesUploaded");
            $bytesUploaded.textContent =
              parseInt($bytesUploaded.textContent) + blob.size;
            return { ETag, PartNumber: partNumber };
          })
      );
    })
    // Wait until all uploads are completed, then convert the results into an array.
    .toArray()
    // Call the complete multipart upload and pass the part numbers and ETags to it.
    .switchMap(parts => {
      return s3
        .completeMultipartUpload({
          Bucket: BUCKET_NAME,
          Key: s3Key,
          UploadId: uploadId,
          MultipartUpload: {
            Parts: parts
          }
        })
        .promise();
    })
    .subscribe(
      ({ Location }) => {
        // completeMultipartUpload returns the location, so show it.
        const $location = document.querySelector("span#location");
        $location.textContent = Location;
        console.log("Uploaded successfully.");
      },
      err => {
        console.error(err);
        if (uploadId) {
          // Abort the multipart upload in case of any failure,
          // so we don't get charged for keeping it pending.
          s3
            .abortMultipartUpload({
              Bucket: BUCKET_NAME,
              UploadId: uploadId,
              Key: s3Key
            })
            .promise()
            .then(() => console.log("Multipart upload aborted"))
            .catch(e => console.error(e));
        }
      }
    );
};

const $stopButton = document.querySelector("button#stop");
$stopButton.onclick = () => {
  // After we call .stop(), MediaRecorder is going to emit all the data it still has via 'dataavailable'
  // and then finish our stream by emitting the 'stop' event.
  mediaRecorder.stop();
};
button {
  margin: 0 3px 10px 0;
  padding-left: 2px;
  padding-right: 2px;
  width: 99px;
}
button:last-of-type {
  margin: 0;
}
p.borderBelow {
  margin: 0 0 20px 0;
  padding: 0 0 20px 0;
}
video {
  height: 232px;
  margin: 0 12px 20px 0;
  vertical-align: top;
  width: calc(20em - 10px);
}
video:last-of-type {
  margin: 0 0 20px 0;
}
<div id="container">
  <video id="mirror" autoplay muted></video>
  <div>
    <button id="start">Start Streaming</button>
    <button id="stop">Stop Streaming</button>
  </div>
  <div>
    <span>Recorded: <span id="bytesRecorded">0</span> bytes</span>;
    <span>Uploaded: <span id="bytesUploaded">0</span> bytes</span>
  </div>
  <div>
    <span id="location"></span>
  </div>
</div>
<!-- include adapter for srcObject shim -->
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/aws-sdk/2.175.0/aws-sdk.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>
Caveats:
All multipart uploads need to be either completed or aborted. You will be charged if you leave one pending forever. See the "Note" here.
Each chunk that you upload (except the last one) must be larger than 5 MB, or an error will be thrown. See the details here. So you need to adjust the timeframe/resolution accordingly (see the buffering sketch after the CORS example below).
When you instantiate the SDK, make sure there is a policy with the s3:PutObject permission.
You need to expose the ETag in your bucket's CORS configuration. Here is an example CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
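On the 5 MB rule above: instead of tuning the timeframe, you can also buffer the recorded blobs until they reach the part minimum. A rough sketch (not from the original answer, just an illustration of the idea):

const MIN_PART_SIZE = 5 * 1024 * 1024; // every part except the last must be >= 5 MB

let buffered = [];
let bufferedBytes = 0;

// Returns a Blob ready for s3.uploadPart(), or null if we should keep buffering.
function nextPart(chunk, isLast) {
  buffered.push(chunk);
  bufferedBytes += chunk.size;
  if (bufferedBytes >= MIN_PART_SIZE || isLast) {
    const part = new Blob(buffered);
    buffered = [];
    bufferedBytes = 0;
    return part;
  }
  return null;
}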
Limitations:
Be careful, as the MediaRecorder API is still not widely adopted. Make sure you check caniuse.com before using it in prod.
