Upload image taken with camera to firebase storage on React Native - javascript

All I want to do is upload a photo taken using react-native-camera to firebase storage with react-native-fetch-blob, but no matter what I do it doesn't happen.
I've gone through all of the documentation I can find and nothing seems to work.
If anyone has a working system for accomplishing this please post it as an answer. I can get the uri of the jpg that react-native-camera returns (it displays in the ImageView and everything), but my upload function seems to stop working when it's time to put the blob.
My current function:
uploadImage = (uri, imageName, mime = 'image/jpg') => {
  return new Promise((resolve, reject) => {
    const uploadUri = Platform.OS === 'ios' ? uri.replace('file://', '') : uri
    let uploadBlob = null
    const imageRef = firebase.storage().ref('selfies').child(imageName)
    console.log("uploadUri", uploadUri)
    fs.readFile(uploadUri, 'base64').then((data) => {
      console.log("MADE DATA")
      var blobEvent = new Blob(data, 'image/jpg;base64')
      var blob = null
      blobEvent.onCreated(genBlob => {
        console.log("CREATED BLOB EVENT")
        blob = genBlob
        firebase.storage().ref('selfies').child(imageName).put(blob).then(function(snapshot) {
          console.log('Uploaded a blob or file!');
          firebase.database().ref("selfies/" + firebase.auth().currentUser.uid).set(0)
          var updates = {};
          updates["/users/" + firebase.auth().currentUser.uid + "/signup/"] = 1;
          firebase.database().ref().update(updates);
        });
      }, (error) => {
        console.log('Upload Error: ' + error)
        alert(error)
      }, () => {
        console.log('Completed upload: ' + uploadTask.snapshot.downloadURL)
      })
    })
  }).catch((error) => {
    alert(error)
  })
}
I want to be as efficient as possible, so if it's faster and takes less memory to not change it to base64, then I prefer that. Right now I just have no clue how to make this work.
This has been a huge source of stress in my life and I hope someone has this figured out.

The fastest approach is to use the native Android/iOS SDKs and avoid clogging the JS thread. There are a few libraries out there that provide a React Native module to do just this: they all have a small JS API that communicates over React Native's bridge to the native side, where all the Firebase logic runs.
react-native-firebase is one such library. It follows the Firebase web SDK's API, so if you know how to use the web SDK you should be able to use the exact same logic with this module, plus additional Firebase APIs that are only available in the native SDKs.
For example, it includes a storage implementation with a handy putFile function: you give it a path to a file on the device and it uploads it for you using the native Firebase SDKs. No file handling is done on the JS thread, so it's extremely fast.
Example:
// other built in paths here: https://github.com/invertase/react-native-firebase/blob/master/lib/modules/storage/index.js#L146
const imagePath = firebase.storage.Native.DOCUMENT_DIRECTORY_PATH + '/myface.png';
const ref = firebase.storage().ref('selfies').child('/myface.png');
const uploadTask = ref.putFile(imagePath);

// the .on observer is completely optional (you can use .then/.catch instead), but it allows you to
// do things like a progress bar, for example
uploadTask.on(firebase.storage.TaskEvent.STATE_CHANGED, (snapshot) => {
  // observe state change events such as progress
  // get task progress, including the number of bytes uploaded and the total number of bytes to be uploaded
  const progress = (snapshot.bytesTransferred / snapshot.totalBytes) * 100;
  console.log(`Upload is ${progress}% done`);
  switch (snapshot.state) {
    case firebase.storage.TaskState.SUCCESS: // or 'success'
      console.log('Upload is complete');
      break;
    case firebase.storage.TaskState.RUNNING: // or 'running'
      console.log('Upload is running');
      break;
    default:
      console.log(snapshot.state);
  }
}, (error) => {
  console.error(error);
}, () => {
  const uploadTaskSnapshot = uploadTask.snapshot;
  // task finished
  // states: https://github.com/invertase/react-native-firebase/blob/master/lib/modules/storage/index.js#L139
  console.log(uploadTaskSnapshot.state === firebase.storage.TaskState.SUCCESS);
  console.log(uploadTaskSnapshot.bytesTransferred === uploadTaskSnapshot.totalBytes);
  console.log(uploadTaskSnapshot.metadata);
  console.log(uploadTaskSnapshot.downloadUrl);
});
Disclaimer: I am the author of react-native-firebase.

Related

knowing whether a fetch is requesting or responding [duplicate]

I'm struggling to find documentation or examples of implementing an upload progress indicator using fetch.
This is the only reference I've found so far, which states:
Progress events are a high level feature that won't arrive in fetch for now. You can create your own by looking at the Content-Length header and using a pass-through stream to monitor the bytes received.
This means you can explicitly handle responses without a Content-Length differently. And of course, even if Content-Length is there it can be a lie. With streams you can handle these lies however you want.
How would I write "a pass-through stream to monitor the bytes" sent? If it makes any sort of difference, I'm trying to do this to power image uploads from the browser to Cloudinary.
NOTE: I am not interested in the Cloudinary JS library, as it depends on jQuery and my app does not. I'm only interested in the stream processing necessary to do this with native javascript and Github's fetch polyfill.
https://fetch.spec.whatwg.org/#fetch-api
Streams are starting to land in the web platform (https://jakearchibald.com/2016/streams-ftw/) but it's still early days.
Soon you'll be able to provide a stream as the body of a request, but the open question is whether the consumption of that stream relates to bytes uploaded.
Particular redirects can result in data being retransmitted to the new location, but streams cannot "restart". We can fix this by turning the body into a callback which can be called multiple times, but we need to be sure that exposing the number of redirects isn't a security leak, since it'd be the first time on the platform JS could detect that.
Some are questioning whether it even makes sense to link stream consumption to bytes uploaded.
Long story short: this isn't possible yet, but in future this will be handled either by streams, or some kind of higher-level callback passed into fetch().
My solution is to use axios, which supports this pretty well:
axios.request({
  method: "post",
  url: "/aaa",
  data: myData,
  onUploadProgress: (p) => {
    console.log(p);
    //this.setState({
    //  fileprogress: p.loaded / p.total
    //})
  }
}).then(data => {
  //this.setState({
  //  fileprogress: 1.0,
  //})
})
I have an example of using this in React on GitHub.
fetch: not possible yet
It sounds like upload progress will eventually be possible with fetch once it supports a ReadableStream as the body. This is currently not implemented, but it's in progress. I think the code will look something like this:
warning: this code does not work yet, still waiting on browsers to support it
async function main() {
  const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
  const progressBar = document.getElementById("progress");
  const totalBytes = blob.size;
  let bytesUploaded = 0;

  const blobReader = blob.stream().getReader();
  const progressTrackingStream = new ReadableStream({
    async pull(controller) {
      const result = await blobReader.read();
      if (result.done) {
        console.log("completed stream");
        controller.close();
        return;
      }
      controller.enqueue(result.value);
      bytesUploaded += result.value.byteLength;
      console.log("upload progress:", bytesUploaded / totalBytes);
      progressBar.value = bytesUploaded / totalBytes;
    },
  });

  const response = await fetch("https://httpbin.org/put", {
    method: "PUT",
    headers: {
      "Content-Type": "application/octet-stream"
    },
    body: progressTrackingStream,
  });
  console.log("success:", response.ok);
}
main().catch(console.error);
upload: <progress id="progress" />
workaround: good ol' XMLHttpRequest
Instead of fetch(), it's possible to use XMLHttpRequest to track upload progress — the xhr.upload object emits a progress event.
async function main() {
  const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
  const uploadProgress = document.getElementById("upload-progress");
  const downloadProgress = document.getElementById("download-progress");

  const xhr = new XMLHttpRequest();
  const success = await new Promise((resolve) => {
    xhr.upload.addEventListener("progress", (event) => {
      if (event.lengthComputable) {
        console.log("upload progress:", event.loaded / event.total);
        uploadProgress.value = event.loaded / event.total;
      }
    });
    xhr.addEventListener("progress", (event) => {
      if (event.lengthComputable) {
        console.log("download progress:", event.loaded / event.total);
        downloadProgress.value = event.loaded / event.total;
      }
    });
    xhr.addEventListener("loadend", () => {
      resolve(xhr.readyState === 4 && xhr.status === 200);
    });
    xhr.open("PUT", "https://httpbin.org/put", true);
    xhr.setRequestHeader("Content-Type", "application/octet-stream");
    xhr.send(blob);
  });
  console.log("success:", success);
}
main().catch(console.error);
upload: <progress id="upload-progress"></progress><br/>
download: <progress id="download-progress"></progress>
Update: as the accepted answer says, it isn't possible yet. The code below (which tracks the response as it is read back) handled our problem for some time, but in the end we had to switch to a library based on XMLHttpRequest.
const response = await fetch(url);
const total = Number(response.headers.get('content-length'));
const reader = response.body.getReader();
let bytesReceived = 0;

while (true) {
  const result = await reader.read();
  if (result.done) {
    console.log('Fetch complete');
    break;
  }
  bytesReceived += result.value.length;
  console.log('Received', bytesReceived, 'bytes of data so far');
}
thanks to this link: https://jakearchibald.com/2016/streams-ftw/
As already explained in the other answers, it is not possible with fetch, but with XHR. Here is my a-little-more-compact XHR solution:
const uploadFiles = (url, files, onProgress) =>
  new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.upload.addEventListener('progress', e => onProgress(e.loaded / e.total));
    xhr.addEventListener('load', () => resolve({ status: xhr.status, body: xhr.responseText }));
    xhr.addEventListener('error', () => reject(new Error('File upload failed')));
    xhr.addEventListener('abort', () => reject(new Error('File upload aborted')));
    xhr.open('POST', url, true);
    const formData = new FormData();
    Array.from(files).forEach((file, index) => formData.append(index.toString(), file));
    xhr.send(formData);
  });
Works with one or multiple files.
If you have a file input element like this:
<input type="file" multiple id="fileUpload" />
Call the function like this:
document.getElementById('fileUpload').addEventListener('change', async e => {
  const onProgress = progress => console.log('Progress:', `${Math.round(progress * 100)}%`);
  const response = await uploadFiles('/api/upload', e.currentTarget.files, onProgress);
  if (response.status >= 400) {
    throw new Error(`File upload failed - Status code: ${response.status}`);
  }
  console.log('Response:', response.body);
});
Also works with the e.dataTransfer.files you get from a drop event when building a file drop zone.
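For completeness, here is a minimal sketch of wiring the same uploadFiles helper to a drop zone; the #dropzone element and /api/upload endpoint are placeholders of my own, not from the answer above:
// Hypothetical drop-zone wiring for the uploadFiles helper above.
// Assumes an element like <div id="dropzone"></div> exists on the page.
const dropzone = document.getElementById('dropzone');
// Required so the browser allows dropping onto the element
dropzone.addEventListener('dragover', e => e.preventDefault());
dropzone.addEventListener('drop', async e => {
  e.preventDefault();
  const onProgress = progress => console.log('Progress:', `${Math.round(progress * 100)}%`);
  // e.dataTransfer.files is a FileList, just like input.files
  const response = await uploadFiles('/api/upload', e.dataTransfer.files, onProgress);
  console.log('Response:', response.body);
});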
I don't think it's possible. The draft states:
it is currently lacking [in comparison to XHR] when it comes to request progression
(old answer):
The first example in the Fetch API chapter gives some insight on how to do this:
If you want to receive the body data progressively:
function consume(reader) {
  var total = 0
  return new Promise((resolve, reject) => {
    function pump() {
      reader.read().then(({done, value}) => {
        if (done) {
          resolve()
          return
        }
        total += value.byteLength
        log(`received ${value.byteLength} bytes (${total} bytes in total)`)
        pump()
      }).catch(reject)
    }
    pump()
  })
}

fetch("/music/pk/altes-kamuffel.flac")
  .then(res => consume(res.body.getReader()))
  .then(() => log("consumed the entire body without keeping the whole thing in memory!"))
  .catch(e => log("something went wrong: " + e))
Apart from their use of the Promise constructor antipattern, you can see that response.body is a Stream from which you can read byte by byte using a Reader, and you can fire an event or do whatever you like (e.g. log the progress) for each of them.
However, the Streams spec doesn't appear to be quite finished, and I have no idea whether this already works in any fetch implementation.
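For reference, the same reading loop can be written without the Promise constructor, e.g. with async/await; this is a sketch of mine, not part of the spec example:
// Same idea as the spec example, but using async/await instead of the
// Promise-constructor/pump pattern (illustrative sketch only).
async function consume(reader) {
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) return;
    total += value.byteLength;
    console.log(`received ${value.byteLength} bytes (${total} bytes in total)`);
  }
}

fetch("/music/pk/altes-kamuffel.flac")
  .then(res => consume(res.body.getReader()))
  .then(() => console.log("consumed the entire body"))
  .catch(e => console.log("something went wrong: " + e));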
with fetch: now possible with Chrome >= 105 🎉
How to:
https://developer.chrome.com/articles/fetch-streaming-requests/
Currently not supported by other browsers (that may have changed by the time you read this; please edit my answer accordingly).
Feature detection (source)
const supportsRequestStreams = (() => {
  let duplexAccessed = false;

  const hasContentType = new Request('', {
    body: new ReadableStream(),
    method: 'POST',
    get duplex() {
      duplexAccessed = true;
      return 'half';
    },
  }).headers.has('Content-Type');

  return duplexAccessed && !hasContentType;
})();
HTTP >= 2 required
The fetch will be rejected if the connection is HTTP/1.x.
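Putting it together, here is a rough sketch of what upload progress could look like with a streaming request body; it assumes supportsRequestStreams is true and that the endpoint (a placeholder URL below) is served over HTTP/2 or later:
// Sketch only: requires Chrome >= 105, request-stream support and an HTTP/2+ endpoint.
async function uploadWithProgress(blob) {
  const totalBytes = blob.size;
  let bytesUploaded = 0;

  // Count bytes as they flow from the Blob into the request body
  const progressTrackingStream = blob.stream().pipeThrough(
    new TransformStream({
      transform(chunk, controller) {
        controller.enqueue(chunk);
        bytesUploaded += chunk.byteLength;
        console.log("upload progress:", bytesUploaded / totalBytes);
      },
    })
  );

  const response = await fetch("https://example.com/upload", { // placeholder URL
    method: "PUT",
    headers: { "Content-Type": "application/octet-stream" },
    body: progressTrackingStream,
    duplex: "half", // required when the body is a stream
  });
  console.log("success:", response.ok);
}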
Since none of the answers solve the problem, here is a workaround for implementation's sake: you can measure the upload speed with a small initial chunk of known size, and then estimate the total upload time as content-length / upload-speed. You can use this estimate as a progress indication.
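A minimal sketch of that estimation idea; the endpoint and probe size here are assumptions for illustration:
// Time a small probe upload, derive bytes per second, then estimate
// the duration of the real upload from its size (rough sketch).
async function estimateUploadSeconds(url, totalBytes, probeBytes = 256 * 1024) {
  const probe = new Blob([new Uint8Array(probeBytes)]);
  const started = performance.now();
  await fetch(url, { method: "POST", body: probe });
  const elapsedSeconds = (performance.now() - started) / 1000;
  const bytesPerSecond = probeBytes / elapsedSeconds;
  return totalBytes / bytesPerSecond; // estimated seconds for the full payload
}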
A possible workaround would be to utilize the new Request() constructor, then check the Request.bodyUsed Boolean attribute
The bodyUsed attribute’s getter must return true if disturbed, and
false otherwise.
to determine whether the stream is disturbed
An object implementing the Body mixin is said to be disturbed if
body is non-null and its stream is disturbed.
Return the fetch() Promise from within a .then() chained to a recursive .read() call on a ReadableStream once Request.bodyUsed is equal to true.
Note, the approach does not read the bytes of the Request.body as the bytes are streamed to the endpoint. Also, the upload could complete well before any response is returned in full to the browser.
const [input, progress, label] = [
  document.querySelector("input")
  , document.querySelector("progress")
  , document.querySelector("label")
];
const url = "/path/to/server/";

input.onmousedown = () => {
  label.innerHTML = "";
  progress.value = "0"
};

input.onchange = (event) => {
  const file = event.target.files[0];
  const filename = file.name;
  progress.max = file.size;

  const request = new Request(url, {
    method: "POST",
    body: file,
    cache: "no-store"
  });

  const upload = settings => fetch(settings);

  const uploadProgress = new ReadableStream({
    start(controller) {
      console.log("starting upload, request.bodyUsed:", request.bodyUsed);
      controller.enqueue(request.bodyUsed);
    },
    pull(controller) {
      if (request.bodyUsed) {
        controller.close();
      }
      controller.enqueue(request.bodyUsed);
      console.log("pull, request.bodyUsed:", request.bodyUsed);
    },
    cancel(reason) {
      console.log(reason);
    }
  });

  const [fileUpload, reader] = [
    upload(request)
      .catch(e => {
        reader.cancel();
        throw e
      })
    , uploadProgress.getReader()
  ];

  const processUploadRequest = ({value, done}) => {
    if (value || done) {
      console.log("upload complete, request.bodyUsed:", request.bodyUsed);
      // set `progress.value` to `progress.max` here
      // if not awaiting server response
      // progress.value = progress.max;
      return reader.closed.then(() => fileUpload);
    }
    console.log("upload progress:", value);
    progress.value = +progress.value + 1;
    return reader.read().then(result => processUploadRequest(result));
  };

  reader.read().then(({value, done}) => processUploadRequest({value, done}))
    .then(response => response.text())
    .then(text => {
      console.log("response:", text);
      progress.value = progress.max;
      input.value = "";
    })
    .catch(err => console.log("upload error:", err));
}
I fished around on this for some time, so for everyone who comes across this issue too, here is my solution:
const form = document.querySelector('form');
const status = document.querySelector('#status');

// When the form gets submitted.
form.addEventListener('submit', async function (event) {
  // Cancel the default behavior (form submit)
  event.preventDefault();
  // Inform the user that the upload has begun
  status.innerText = 'Uploading..';
  // Create FormData from the form
  const formData = new FormData(form);
  // Open a request to the origin
  const request = await fetch('https://httpbin.org/post', { method: 'POST', body: formData });
  // Get the amount of bytes we're about to transmit
  const bytesToUpload = request.headers.get('content-length');
  // Create a reader from the request body
  const reader = request.body.getReader();
  // Cache how much data we have already sent
  let bytesUploaded = 0;
  // Get the first chunk of the request reader
  let chunk = await reader.read();
  // While we have more chunks to go
  while (!chunk.done) {
    // Increase the amount of bytes transmitted.
    bytesUploaded += chunk.value.length;
    // Inform the user how far we are
    status.innerText = 'Uploading (' + (bytesUploaded / bytesToUpload * 100).toFixed(2) + ')...';
    // Read the next chunk
    chunk = await reader.read();
  }
});
const req = await fetch('./foo.json');
const total = Number(req.headers.get('content-length'));
let loaded = 0;
// ReadableStream (req.body) is async-iterable in recent browsers; each chunk is a Uint8Array
for await (const { length } of req.body) {
  loaded += length;
  const progress = ((loaded / total) * 100).toFixed(2); // toFixed(2) means two digits after the decimal point
  console.log(`${progress}%`); // or yourDiv.textContent = `${progress}%`;
}
The key part is the ReadableStream (obj_response.body).
Sample:
const parse = result => {
  console.log(result)
  //...
  return result.value ? true : false // continue?
}

fetch('').then(response => {
  const reader = response.body.getReader()
  const pump = () => reader.read().then(parse).then(cont => cont ? pump() : undefined)
  return pump()
})
You can test it by running it on a huge page, e.g. https://html.spec.whatwg.org/ or https://html.spec.whatwg.org/print.pdf . Open the console with Ctrl+Shift+J and paste the code in.
(Tested on Chrome.)

Why does Busboy yield inconsistent results when parsing FLAC files?

I have an Express server that receives FormData with an attached FLAC audio file. The code works as expected for several files of varying size (10 - 70MB), but some of them get stuck in the 'file' event and I cannot figure out why this happens. It is even more strange when a file that previously did not fire the file.on('close', () => {}) event, as can be seen in the documentation for Busboy, suddenly does so, with the file being successfully uploaded.
To me, this seems completely random, as I have tried this with a dozen files of varying size and content type (audio/flac & audio/x-flac), and the results have been inconsistent. Some files will, however, not work at all, even if I attempt to parse them many times over. Whereas, certain files can be parsed and uploaded, given enough attempts?
Is there some error that I fail to deal with in the 'file' event? I did try to listen to the file.on('error', () => {}) event, but there were no errors to be found. Other answers suggest that the file stream must be consumed for the 'close' event to be emitted, but I think that file.pipe(fs.createWriteStream(fileObject.filePath)); does that, correct?
Let me know if I forgot to include some important information in my question. This has been bothering me for about a week now, so I am happy to provide anything of relevance to help my chances of overcoming this hurdle.
app.post('/upload', (request, response) => {
  response.set('Access-Control-Allow-Origin', '*');
  const bb = busboy({ headers: request.headers });
  const fields = {};
  const fileObject = {};

  bb.on('file', (_name, file, info) => {
    const { filename, mimeType } = info;
    fileObject['mimeType'] = mimeType;
    fileObject['filePath'] = path.join(os.tmpdir(), filename);
    file.pipe(fs.createWriteStream(fileObject.filePath));
    file.on('close', () => {
      console.log('Finished parsing of file');
    });
  });

  bb.on('field', (name, value) => {
    fields[name] = value;
  });

  bb.on('close', () => {
    bucket.upload(
      fileObject.filePath,
      {
        uploadType: 'resumable',
        metadata: {
          metadata: {
            contentType: fileObject.mimeType,
            firebaseStorageDownloadToken: fields.id
          }
        }
      },
      (error, uploadedFile) => {
        if (error) {
          console.log(error);
        } else {
          db.collection('tracks')
            .doc(fields.id)
            .set({
              identifier: fields.id,
              artist: fields.artist,
              title: fields.title,
              imageUrl: fields.imageUrl,
              fileUrl: `https://firebasestorage.googleapis.com/v0/b/${bucket.name}/o/${uploadedFile.name}?alt=media&token=${fields.id}`
            });
          response.send(`File uploaded: ${fields.id}`);
        }
      }
    );
  });

  request.pipe(bb);
});
UPDATE: 1
I decided to measure the number of bytes that were transferred upon each upload with file.on('data', (data) => {}), just to see if the issue was always the same, and it turns out that this too is completely random.
let bytes = 0;
file.on('data', (data) => {
  bytes += data.length;
  console.log(`Loaded ${(bytes / 1000000).toFixed(2)}MB`);
});
First Test Case: Fenomenon - Sleepy Meadows Of Buxton
Source: https://fenomenon.bandcamp.com/track/sleepy-meadows-of-buxton
Size: 30.3MB
Codec: FLAC
MIME: audio/flac
Results from three attempts:
Loaded 18.74MB, then became stuck
Loaded 5.05MB, then became stuck
Loaded 21.23MB, then became stuck
Second Test Case: Almunia - New Moon
Source: https://almunia.bandcamp.com/track/new-moon
Size: 38.7MB
Codec: FLAC
MIME: audio/flac
Results from three attempts:
Loaded 12.78MB, then became stuck
Loaded 38.65MB, was successfully uploaded!
Loaded 38.65MB, was successfully uploaded!
As you can see, the behavior is unpredictable to say the least. Also, those two successful uploads did playback seamlessly from Firebase Storage, so it really worked as intended. What I cannot understand is why it would not always work, or at least most of the time, excluding any network-related failures.
UPDATE: 2
I am hopelessly stuck trying to make sense of the issue, so I have now created a scenario that closely resembles my actual project, and uploaded the code to GitHub. It is pretty minimal, but I did add some additional libraries to make the front-end pleasant to work with.
There is not much to it, other than an Express server for the back-end and a simple Vue application for the front-end. Within the files folder, there are two FLAC files; One of them is only 4.42MB to prove that the code does sometimes work. The other file is much larger at 38.1MB to reliably illustrate the problem. Feel free to try any other files.
Note that the front-end must be modified to allow files other than FLAC files. I made the choice to only accept FLAC files, as this is what I am working with in my actual project.
You'll need to write the file directly when BusBoy emits the file event.
It seems there is a race condition if you rely on BusBoy that prevents the file load from being completed. If you load it in the file event handler then it works fine.
app.post('/upload', (request, response) => {
  response.set('Access-Control-Allow-Origin', '*');
  const bb = busboy({
    headers: request.headers
  });
  const fileObject = {};
  let bytes = 0;

  bb.on('file', (name, file, info) => {
    const { filename, mimeType } = info;
    fileObject['mimeType'] = mimeType;
    fileObject['filePath'] = path.join(os.tmpdir(), filename);
    const saveTo = path.join(os.tmpdir(), filename);
    const writeStream = fs.createWriteStream(saveTo);
    file.on('data', (data) => {
      writeStream.write(data);
      console.log(`Received: ${((bytes += data.length) / 1000000).toFixed(2)}MB`);
    });
    file.on('end', () => {
      console.log('closing writeStream');
      writeStream.close()
    });
  });

  bb.on('close', () => {
    console.log(`Actual size is ${(fs.statSync(fileObject.filePath).size / 1000000).toFixed(2)}MB`);
    console.log('This is where the file would be uploaded to some cloud storage server...');
    response.send('File was uploaded');
  });

  bb.on('error', (error) => {
    console.log(error);
  });

  request.pipe(bb);
});

mediainfo.js query specific data

I got an example working great.
Now I'm trying to modify that working example and come up with a way to extract specific data, for example the frame rate.
I'm thinking the syntax should be something like result.frameRate.
See below where I tried console.log("Frame: " + result.frameRate); I also tried the Buzz suggestion of result.media.track[0].FrameRate. Neither suggestion works.
<button class="btn btn-default" id="getframe" onclick="onClickMediaButton()">Get Frame Rate</button>
<script type="text/javascript" src="https://unpkg.com/mediainfo.js/dist/mediainfo.min.js"></script>

const onClickMediaButton = (filename) => {
  //const file = fileinput.files[0]
  const file = "D:\VLCrecords\Poldark-Episode6.mp4"
  if (file) {
    output222.value = 'Working…'
    const getSize = () => file.size
    const readChunk = (chunkSize, offset) =>
      new Promise((resolve, reject) => {
        const reader = new FileReader()
        reader.onload = (event) => {
          if (event.target.error) {
            reject(event.target.error)
          }
          resolve(new Uint8Array(event.target.result))
        }
        reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize))
      })
    mediainfo
      .analyzeData(getSize, readChunk)
      .then((result) => {
        consoleLog("Frame: " + result.media.track[0].FrameRate);
        output222.value = result;
      })
      .catch((error) => {
        output222.value = `An error occured:\n${error.stack}`
      })
  }
}
But I can't figure out the exact syntax. Can you point me in the right direction?
Short answer
Use result.media.track[0].FrameRate.
Long answer
The type of result depends on how you instantiated the library. Your example does not provide enough information on how you are using the library.
From the docs:
MediaInfo(opts, successCallback, errorCallback)
Where opts.format can be object, JSON, XML, HTML or text. So, assuming you used format: 'object' (the default), result will be a JavaScript object.
The structure of the result object depends on the data you provide to MediaInfoLib (or in this case mediainfo.js which is an Emscripten port of MediaInfoLib). The information about the framerate will only be available if you feed a file with at least one video track to the library.
Assuming this is the case, you can access the list of tracks using result.media.track. Given that the video track you are interested in has the index 0, the access to the desired property would be result.media.track[0].FrameRate. This is true for a large amount of video files that usually have at least one video track and have this track as the first available track. Note that this won't necessarily work for all video files and you must make sure your code is fault-tolerant in case these properties don't exist on the result object.
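For example, a defensive lookup might look like the sketch below; the '@type' field is assumed from MediaInfo's object output, so verify it against your own files:
// Defensive access (sketch): don't assume track[0] is a video track or
// that FrameRate exists for every file.
const tracks = result?.media?.track ?? [];
const videoTrack = tracks.find((t) => t['@type'] === 'Video'); // '@type' assumed from the object output
const frameRate = videoTrack?.FrameRate ?? tracks[0]?.FrameRate;
if (frameRate === undefined) {
  console.warn('No framerate found for this file');
} else {
  console.log(`Framerate: ${frameRate}`);
}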
Unfortunately, it seems there is no detailed list of available fields in the MediaInfoLib documentation. You could look at the source code. This might get tedious and lengthy and requires you to understand a fair amount of C++. The most convenient way for me though is to just feed MediaInfoLib a file and look at the result.
PS: The question was already answered in this GitHub issue.
Disclaimer: I'm the author of the aforementioned Emscripten port mediainfo.js.
EDIT:
I don't think you are correct when saying my answer is "partial-incomplete" and "does not work."
As proof, here is a working snippet. Obviously you need to use a media file with a video track.
const fileinput = document.getElementById('fileinput');

const onChangeFile = (mediainfo) => {
  const file = fileinput.files[0];
  if (file) {
    const getSize = () => file.size;
    const readChunk = (chunkSize, offset) =>
      new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = (event) => {
          resolve(new Uint8Array(event.target.result));
        }
        reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
      });
    console.log(`Processing ${file.name}`);
    mediainfo
      .analyzeData(getSize, readChunk)
      .then((result) => {
        const frameRate = result.media.track[0].FrameRate; // <- Here we read the framerate
        console.log(`Framerate: ${frameRate}`);
      })
  }
}

MediaInfo(null, (mediainfo) => {
  fileinput.addEventListener('change', () => onChangeFile(mediainfo));
  fileinput.disabled = false;
})
<script src="https://unpkg.com/mediainfo.js@0.1.4/dist/mediainfo.min.js"></script>
<input disabled id="fileinput" type="file" />

How to make upload faster in angular 7?

I am trying to speed up the upload, so I tried different solutions on both the back-end and the front-end:
1) I uploaded a tar file (already compressed).
2) I tried chunked upload (sequentially): if the response is a success, the next API call is triggered, and on the back-end the content is appended to the same file.
3) I tried chunked upload in parallel: I make 50 requests at a time to upload the chunk content (I know a browser only handles 6 requests at a time). On the back-end we store all the chunk files separately and, after receiving the final request, append all those chunks into a single file.
But what I observed is that I am not seeing much difference with any of these cases.
Following is my service file
export class largeGeneUpload {
  chromosomeFile: any;
  options: any;
  chunkSize = 1200000;
  activeConnections = 0;
  threadsQuantity = 50;
  totalChunkCount = 0;
  chunksPosition = 0;
  failedChunks = [];

  sendNext() {
    if (this.activeConnections >= this.threadsQuantity) {
      return;
    }
    if (this.chunksPosition === this.totalChunkCount) {
      console.log('all chunks are done');
      return;
    }
    const i = this.chunksPosition;
    const url = 'gene/human';
    const chunkIndex = i;
    const start = chunkIndex * this.chunkSize;
    const end = Math.min(start + this.chunkSize, this.chromosomeFile.size);
    const currentchunkSize = this.chunkSize * i;
    const chunkData = this.chromosomeFile.webkitSlice ?
      this.chromosomeFile.webkitSlice(start, end) :
      this.chromosomeFile.slice(start, end);

    const fd = new FormData();
    const binar = new File([chunkData], this.chromosomeFile.upload.filename);
    console.log(binar);
    fd.append('file', binar);
    fd.append('dzuuid', this.chromosomeFile.upload.uuid);
    fd.append('dzchunkindex', chunkIndex.toString());
    fd.append('dztotalfilesize', this.chromosomeFile.upload.total);
    fd.append('dzchunksize', this.chunkSize.toString());
    fd.append('dztotalchunkcount', this.chromosomeFile.upload.totalChunkCount);
    fd.append('isCancel', 'false');
    fd.append('dzchunkbyteoffset', currentchunkSize.toString());

    this.chunksPosition += 1;
    this.activeConnections += 1;

    this.apiDataService.uploadChunk(url, fd)
      .then(() => {
        this.activeConnections -= 1;
        this.sendNext();
      })
      .catch((error) => {
        this.activeConnections -= 1;
        console.log('error here');
        // chunksQueue.push(chunkId);
      });

    this.sendNext();
  }

  uploadChunk(resrc: string, item) {
    return new Promise((resolve, reject) => {
      this._http.post(this.baseApiUrl + resrc, item, {
        headers: this.headers,
        withCredentials: true
      }).subscribe(r => {
        console.log(r);
        resolve();
      }, err => {
        console.log('err', err);
        reject();
      });
    });
  }
}
But the thing is, if I upload the same file to Google Drive it does not take much time.
For example, uploading a 700 MB file to Google Drive took 3 minutes, but uploading the same 700 MB file with my Angular code to our back-end server took 7 minutes.
How do I improve the performance of the file upload?
Forgive me, it may seem a silly answer, but this depends on your hosting infrastructure.
A lot of variables can cause this, but by your story it has nothing to do with your front-end code. Making it into chunks is not going to help, because browsers have their own optimized algorithm to upload files. The most likely culprit is your backend server or the connection from your client to the server.
You say that Google Drive is fast, but you should also know that Google has a very widespread global infrastructure with top-of-the-line cloud servers. If you are using, for example, a 2-euro-per-month fixed-price hosting provider, you cannot expect the same processing and network power as Google.

Merging PDFs in Node

Hi, I'm trying to merge a total of n PDFs, but I cannot get it to work.
I'm using the Buffer module to concat the PDFs, but the final PDF only contains the last one.
Is this even possible to do in Node?
var pdf1 = fs.readFileSync('./test1.pdf');
var pdf2 = fs.readFileSync('./test2.pdf');

fs.writeFile("./final_pdf.pdf", Buffer.concat([pdf1, pdf2]), function(err) {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});
There are currently some libs out there, but they all depend on either other software or other programming languages.
What do you expect to get when you do Buffer.concat([pdf1, pdf2])? Just by concatenating two PDF files you won't get one containing all pages. PDF is a complex format (basically one for vector graphics). If you just added two JPEG files you wouldn't expect to get a big image containing both pictures, would you?
You’ll need to use an external library. https://github.com/wubzz/pdf-merge might work for instance.
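For illustration, here is a rough sketch of merging with the pdf-lib npm package; pdf-lib is an assumption of mine (just one possible page-aware library, not the one linked above):
// Sketch using the pdf-lib npm package; any page-aware PDF library works,
// raw Buffer.concat does not.
const fs = require('fs');
const { PDFDocument } = require('pdf-lib');

async function mergePdfs(paths, outPath) {
  const merged = await PDFDocument.create();
  for (const p of paths) {
    const doc = await PDFDocument.load(fs.readFileSync(p));
    const pages = await merged.copyPages(doc, doc.getPageIndices());
    pages.forEach((page) => merged.addPage(page));
  }
  fs.writeFileSync(outPath, await merged.save());
}

mergePdfs(['./test1.pdf', './test2.pdf'], './final_pdf.pdf')
  .then(() => console.log('The file was saved!'))
  .catch(console.error);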
HummusJS is another PDF manipulation library, but without a dependency on PDFtk. See this answer for an example of combining PDFs in Buffers.
Aspose.PDF Cloud SDK for Node.js can merge/combine PDF documents without depending on any third-party API/tool. However, currently it works with cloud storage (Aspose internal storage, Amazon S3, DropBox, Google Drive Storage, Google Cloud Storage, Windows Azure Storage, FTP Storage). In the near future, we will provide support to merge files from the request body (stream).
const { PdfApi } = require("asposepdfcloud");
const { MergeDocuments } = require("asposepdfcloud/src/models/mergeDocuments");
var fs = require('fs');

pdfApi = new PdfApi("xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxx");

const file1 = "dummy.pdf";
const file2 = "02_pages.pdf";
const localTestDataFolder = "C:\\Temp";
const names = [file1, file2];
const resultName = "MergingResult.pdf";

const mergeDocuments = new MergeDocuments();
mergeDocuments.list = [];
names.forEach((file) => {
  mergeDocuments.list.push(file);
});

// Upload file
pdfApi.uploadFile(file1, fs.readFileSync(localTestDataFolder + "\\" + file1)).then((result) => {
  console.log("Uploaded File");
}).catch(function(err) {
  // Deal with an error
  console.log(err);
});

// Upload file
pdfApi.uploadFile(file2, fs.readFileSync(localTestDataFolder + "\\" + file2)).then((result) => {
  console.log("Uploaded File");
}).catch(function(err) {
  // Deal with an error
  console.log(err);
});

// Merge PDF documents
pdfApi.putMergeDocuments(resultName, mergeDocuments, null, null).then((result) => {
  console.log(result.body.code);
}).catch(function(err) {
  // Deal with an error
  console.log(err);
});

// Download file
const outputPath = "C:/Temp/" + resultName;
pdfApi.downloadFile(outputPath).then((result) => {
  fs.writeFileSync(outputPath, result.body);
  console.log("File Downloaded");
}).catch(function(err) {
  // Deal with an error
  console.log(err);
});
