I am trying to upload a video to AWS S3 using AWS Amplify in React Native. I was able to upload an image by following this link:
aws-amplify Upload image using PUT in Storage
I applied the same code to upload a video, and it worked. However, the app crashes if the video is large.
According to the example, the image/video file is read and converted to base64 (which consumes a lot of memory and can even crash the app) before everything is uploaded to the server.
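For reference, the base64 pattern looks roughly like this (a sketch, assuming react-native-fs and the buffer polyfill; the linked example may use different libraries). The full in-memory read is what causes the crash:

import { Storage } from 'aws-amplify';
import RNFS from 'react-native-fs';
import { Buffer } from 'buffer'; // React Native needs the 'buffer' polyfill package

// The whole video is read into memory as a base64 string, then copied again
// into a Buffer -- two full in-memory copies of the file before upload
const uploadViaBase64 = async (localPath, key) => {
  const base64 = await RNFS.readFile(localPath, 'base64');
  const buffer = Buffer.from(base64, 'base64'); // second in-memory copy
  return Storage.put(key, buffer, { contentType: 'video/mp4' });
};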
UPDATE
I found an answer using a blob, from this link:
React native - Upload image to AWS S3 using blob
uploadImageVersion2 = async () => {
  // Fetch the remote image and convert the response straight to a Blob
  const response = await fetch("https://static.asiachan.com/Berry.Good.600.42297.jpg");
  const blob = await response.blob();
  const fileName = 'profileImage.jpg';
  await Storage.put(fileName, blob, {
    contentType: 'image/jpg',
    level: 'private'
  })
    .then(data => console.log(data))
    .catch(err => console.log(err));
}
However, there is an error:
My questions are:
1/ Is there any way to avoid converting to base64 before uploading the image/video using AWS Amplify?
2/ Are there any other ways to upload an image/video from React Native to AWS S3 without using Storage from AWS Amplify?
If you know the answer to these two questions, please let me know and show me how.
Thank you in advance!
I'm calling a request for a Google Workspace file with gapi, using this code:
gapi.client.drive.files.export({
  fileId: fileId,
  mimeType: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
})
From that code I get back a base64 string to convert into a blob, but when I open the file in Google Docs or MS Word, the file is corrupt. If I change the mimeType to application/pdf, it works.
The same thing also happens if I request the file with application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.
Am I doing something wrong here, or is there another way to get Workspace files?
Note that md5Checksum is not available for native Google Workspace files, only for files with binary content stored in Drive.
As mentioned in the official documentation for files.get:
Response
By default, this responds with a Files resource in the response body. If you provide the URL parameter alt=media, then the response includes the file contents in the response body. Downloading content with alt=media only works if the file is stored in Drive. To download Google Docs, Sheets, and Slides use files.export instead. For further information on downloading files, refer to Download files.
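For contrast, a file with binary content stored in Drive could be fetched directly with alt=media, roughly like this (a sketch using gapi):

// Works only for files with binary content stored in Drive;
// native Docs/Sheets/Slides must go through files.export instead
gapi.client.drive.files.get({
  fileId: fileId,
  alt: 'media'
});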
Finally I found a solution for this issue. According to this official documentation, there are two ways to download from Google Drive: files.get with alt=media for binary files, and files.export for native Workspace documents.
Here is my code for requesting the file:
const res = await drive
  .files
  .export({
    fileId: fileId,
    mimeType: yourMimeTarget
  });

// res.body comes back as a binary string; btoa() encodes it as base64
// so it can be wrapped in a data URL and converted into a Blob
const blob_file = await fetch(`data:${yourMimeTarget};base64,${btoa(res.body)}`)
  .then(res => res.blob());
So after you export the Google Workspace file you get back a string, which you can base64-encode and then convert into a blob.
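For completeness, one way the resulting blob could then be handed to the user in the browser (the file name is just an illustration):

// Turn the blob into an object URL and trigger a download
const url = URL.createObjectURL(blob_file);
const a = document.createElement('a');
a.href = url;
a.download = 'exported-file.docx'; // illustrative name
a.click();
URL.revokeObjectURL(url);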
I am trying to read a text file from S3 storage in ReactJS. I have the S3 bucket link of the text file; however, I am not able to read the text file from the link. I have googled a lot about this, but everywhere people are reading the file locally, not from a link.
The link of the file looks like this:
https://bucketnmame.s3.amazonaws.com/folder1/folder2/file.txt
This link is stored in a database, with NodeJS as the backend. It would be great if someone could help me...
Try this
fetch('https://bucketnmame.s3.amazonaws.com/folder1/folder2/file.txt')
  .then((response) => response.text())
  .then((data) => {
    console.log(data);
  });
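Note that this only works if the object is publicly readable and the bucket allows cross-origin GETs from your page. A minimal bucket CORS configuration might look like this (a sketch; tighten AllowedOrigins to your own domain in production):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]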
I have been trying to upload a PDF file from phone storage to Firebase. It works fine when I use react-native-document-picker to select a PDF file, which returns a path like content://com.android.externalstorage.documents/document/primary%3Adocs%2F<filename>.pdf
which is then passed to fetch
const response = await fetch(path);
const blob = await response.blob();
and then the blob is uploaded to Firebase.
But the problem is that I don't want to pick a document every time. If I don't use react-native-document-picker and instead hard-code the exact path the module returned, fetch throws a network error, so I am unable to create a blob.
I also tried react-native-fs, but couldn't figure out how to create a blob of a PDF residing in emulated storage.
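One likely reason fetch fails on a hand-built content:// URI is that such URIs are only readable while the document picker's permission grant is active. If switching libraries is an option, here is a sketch using @react-native-firebase/storage, which uploads straight from a local path and skips the blob step entirely (the path and storage ref are illustrative):

import storage from '@react-native-firebase/storage';
import RNFS from 'react-native-fs';

// A file inside the app's own document directory is always readable,
// unlike content:// URIs owned by other apps
const localPath = `${RNFS.DocumentDirectoryPath}/myfile.pdf`;

const uploadPdf = async () => {
  const ref = storage().ref('pdfs/myfile.pdf');
  // putFile streams from disk, so no blob or base64 conversion is needed
  await ref.putFile(localPath, { contentType: 'application/pdf' });
  console.log('Uploaded to', await ref.getDownloadURL());
};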
I'm using Cloud Functions to convert audio/mp4 from getUserMedia(), placed in a Storage bucket, to audio/x-flac using ffmpeg, so that it can be transcribed with Google STT:
bucket
  .file(file.name)
  .download({ destination })
  .then(() =>
    ffmpeg(destination)
      .setFfmpegPath(ffmpeg_static.path)
      .audioChannels(1)
      .audioFrequency(16000)
      .format('flac')
      .on('error', console.log)
      .on('end', () =>
        bucket
          .upload(targetTempFilePath, { destination: targetStorageFilePath })
          .then(() => {
            // Clean up both temp files once the FLAC has been uploaded
            fs.unlinkSync(destination);
            fs.unlinkSync(targetTempFilePath);
          })
      )
      .save(targetTempFilePath)
  );
Workflow: client-side MP4 => Storage bucket trigger => STT => Firestore
It works great and I get clean FLAC files and STT works flawlessly in this combination!
But only if the input files are no larger than 1-2 MB each (usually I have a series of 5-10 files coming in at once).
I'm aware of the 10 MB limit, so now I want to let Cloud Functions handle image processing only and move all the audio work to some dedicated GAE or GCE instance.
What's better to use in this case: GAE or GCE, dockerized or native, Python or Node, etc.?
How exactly could the workflow be triggered on that instance after placing files in Storage?
Any thoughts or ideas would be greatly welcomed!
I would recommend using a Cloud Function as a Cloud Storage trigger.
This way, you will be able to get the name of the file uploaded to your specific bucket.
You can check this documentation about Google Cloud Storage Triggers, in order to see some examples.
If you use Python, you can see the file name by using:
print('File: {}'.format(data['name']))
Once you have the name of the file, you can make the request to GAE to convert the audio.
I also found this post that explains how to call an URL hosted in Google App Engine, and I think it might be useful for you.
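A minimal sketch of that hand-off in Node.js (the App Engine URL and route are hypothetical placeholders):

const fetch = require('node-fetch');

// Background function fired by google.storage.object.finalize
exports.onAudioUpload = async (data, context) => {
  // Forward the object reference to an App Engine service that does
  // the heavy ffmpeg work, keeping the function itself lightweight
  await fetch('https://your-project.appspot.com/convert', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket: data.bucket, name: data.name }),
  });
};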
Hope this helps!
I have created EC2 instances, with a load balancer and auto scaling as described in the documentation for AWS from the following link:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-application-server.html
I would like to store all of the user images and files on a S3 bucket, but I'm not sure of the best way to connect it to my web-app.
The web-app has an API coded in PHP, which would need to upload files to the bucket. The front end is coded in JavaScript which would need to be able to access the files. These files will be on every page, and range from user images to documents.
Currently all the media is loaded locally, and nothing is stored on the S3 bucket. The front end and the API are both stored on EC2 instances and would need to access the S3 bucket contents.
What is the most efficient way to load the data stored on the S3 bucket? I have looked into the AWS SDK for JavaScript, but it doesn't seem to be the best way of getting the files. Here is the code that seems relevant:
var s3 = new AWS.S3();
var params = { Bucket: 'myBucket', Key: 'myKey' };
s3.getSignedUrl('getObject', params, function (err, url) {
  console.log("The URL is", url);
});
If this is the way to go, would I have to do this for each image?
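If pre-signed URLs are the way to go, signing is cheap and could simply be done once per key the page needs, along these lines (bucket and keys are illustrative):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Generate a time-limited URL for every object the page needs;
// getSignedUrl returns the URL synchronously when called without a callback
const keys = ['users/1/avatar.jpg', 'docs/report.pdf'];
const urls = keys.map(key =>
  s3.getSignedUrl('getObject', { Bucket: 'myBucket', Key: key, Expires: 3600 })
);
console.log(urls);

Alternatively, if the files are not sensitive, making them public-read (or serving them through CloudFront) lets the frontend use plain URLs with no signing step at all.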
Thank you