How to upload image on hyperledger composer playground? - javascript

I am trying to build a blockchain application for distributed image sharing and copyright protection. I am using an image as an asset.
So now I want to upload an image on Hyperledger Composer Playground. How can I do that?

You can store your file data in IPFS. IPFS is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system.
For more on IPFS, I recommend following the link.
In your application, add the IPFS connectivity code in the JS file where you need to store the image. When you run the application, just make sure the IPFS daemon is started.
IPFS returns a hash after a file is uploaded successfully. You can store that hash in an asset or participant of Hyperledger Composer (see the sketch after the example below).
For example:
function toIPFS(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onloadend = function () {
      const ipfs = window.IpfsApi('ipfs', 5001, { protocol: 'https' }); // Connect to IPFS
      const buf = buffer.Buffer(reader.result);                         // Convert the data into a buffer
      ipfs.files.add(buf, (err, result) => {                            // Upload the buffer to IPFS
        if (err) {
          reject(err);
          return;
        }
        const url = `https://ipfs.io/ipfs/${result[0].hash}`;
        resolve(url);
      });
    };
    reader.readAsArrayBuffer(file); // Read the provided file
  });
}
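As a rough sketch, here is one way to store the returned hash on an asset with the composer-client API (the org.example.Image asset type, its ipfsHash field, and the business network card name are hypothetical placeholders for your own model):
const { BusinessNetworkConnection } = require('composer-client');

async function saveImageHash(hash) {
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@image-sharing-network'); // hypothetical business network card

  const factory = connection.getBusinessNetwork().getFactory();
  const image = factory.newResource('org.example', 'Image', 'IMG_0001'); // hypothetical asset type/ID
  image.ipfsHash = hash; // store the IPFS hash on the asset

  const registry = await connection.getAssetRegistry('org.example.Image');
  await registry.add(image);
  await connection.disconnect();
}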
I hope it will help you. :)

Related

Uploading a file from front-end to firebase storage via cloud functions

I am trying to achieve the following:
User selects file on website
User calls Firebase Cloud function and passes file into the function
Cloud function uploads that file to storage.
So far I am able to do all of the above; however, when I try to access the above file in storage, a file with no extension is downloaded. The original file was a PDF, but I am still unable to open it with PDF viewers. It appears I am storing something in storage, although I am not exactly sure what.
Here is an example of how my front-end code works:
const getBase64 = file => new Promise((resolve, reject) => {
  const reader = new FileReader();
  reader.readAsDataURL(file);
  reader.onload = () => resolve(reader.result);
  reader.onerror = error => reject(error);
});
var document_send = document.getElementById('myFile')
var send_button = document.getElementById('send_button')

send_button.addEventListener('click', async () => {
  var sendDocument = firebase.functions().httpsCallable('sendDocument')
  try {
    await sendDocument({
      docu: await getBase64(document_send.files[0])
    })
  } catch (error) {
    console.log(error.message);
  }
})
Here is an example of how my cloud function works:
const functions = require("firebase-functions");
const admin = require("firebase-admin");

exports.sendDocument = functions.https
  .onCall((data, context) => {
    return admin.storage().bucket()
        .file("randomLocationName")
        // .file("randomLocationName" + ".pdf") - tried this also
        .save(data.docu)
        .catch((error) => {
          console.log(error.message);
          return error;
        });
  });
I do not receive an error message as the function runs without error.
The save() function seems to take either a string or a Buffer as its first parameter.
> save(data: string | Buffer, options?: SaveOptions)
The issue arises when you pass the base64 string directly instead of a Buffer. Try refactoring the code as shown below:
return admin.storage().bucket()
    .file("randomLocationName" + ".pdf") // <-- file extension required
    .save(Buffer.from(data.docu, "base64"));
Cloud Functions also have a 10 MB maximum request size, so you won't be able to upload large images that way. You can use the Firebase client SDKs to upload files directly, restrict access using security rules, and use Cloud Storage triggers for Cloud Functions in case you want to process the file and update the database. Alternatively, use signed URLs for uploading the files if you are using GCS without Firebase.
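For instance, a minimal sketch of a direct upload with the Firebase client SDK (reusing the myFile input from above; the uploads/ path is a hypothetical location, not something your project requires):
var fileInput = document.getElementById('myFile');

fileInput.addEventListener('change', async () => {
  var file = fileInput.files[0];
  // Upload straight from the browser; no Cloud Function or base64 conversion involved.
  var ref = firebase.storage().ref('uploads/' + file.name);
  var snapshot = await ref.put(file); // put() accepts a File/Blob and keeps its content type
  console.log('Uploaded', snapshot.metadata.fullPath);
});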

Download file to Firebase Storage via Firebase Functions

I have a program that allows users to upload video files to Firebase Storage. If the file is not an mp4, I then send it to a third-party video converting site to convert it to mp4. They hit a webhook (Firebase function) with the URL and other information about the converted file.
Right now I'm trying to download the file to the tmp dir in Firebase Functions and then send it to Firebase Storage. I have the following questions.
Can I bypass downloading a file to the function's tmp dir and just save it directly to storage? If so, how?
I'm currently having trouble downloading to the function's tmp dir; below is my code. The function is returning Function execution took 6726 ms, finished with status: 'crash'.
import { join } from 'path'
import { tmpdir } from 'os'

export async function donwloadExternalFile(url: string, fileName: string) {
  try {
    const axios = await import('axios')
    const fs = await import('fs')
    const workingDir = join(tmpdir(), 'downloadedFiles')
    const tmpFilePath = join(workingDir, fileName)
    const writer = fs.createWriteStream(tmpFilePath);
    const response = await axios.default.get(url, { responseType: 'stream' })
    response.data.pipe(writer); // Stream the HTTP response into the temp file
    await new Promise((resolve, reject) => {
      writer.on('error', err => {
        writer.close();
        reject(err);
      });
      writer.on('close', () => {
        resolve();
      });
    });
    return
  } catch (error) {
    throw error
  }
}
As mentioned above in the comments section, you can use the Cloud Storage Node.js SDK to upload your file to Cloud Storage.
Please take a look at the SDK Client reference documentation where you can find numerous samples and more information about this Cloud Storage client library.
Also, I'd like to remind you that you can bypass writing to /tmp by using a pipeline. According to the documentation for Cloud Functions, "you can process a file on Cloud Storage by creating a read stream, passing it through a stream-based process, and writing the output stream directly to Cloud Storage."
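As a rough sketch (assuming the converted file is fetched with axios; the converted/video.mp4 destination path is a hypothetical placeholder), the HTTP response stream can be piped straight into a Cloud Storage write stream:
const admin = require('firebase-admin');
const axios = require('axios');

async function streamToStorage(url, destinationPath) {
  const bucket = admin.storage().bucket();
  const response = await axios.get(url, { responseType: 'stream' });

  // Pipe the download directly into Cloud Storage; nothing is written to /tmp.
  await new Promise((resolve, reject) => {
    response.data
      .pipe(bucket.file(destinationPath).createWriteStream({ resumable: false }))
      .on('error', reject)
      .on('finish', resolve);
  });
}

// e.g. streamToStorage(webhookUrl, 'converted/video.mp4');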
Last but not least, always delete temporary files from the Cloud Function's local system. Failing to do so can eventually result in out-of-memory issues and subsequent cold starts.
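If you do keep the /tmp approach from the question, a minimal cleanup step (reusing the question's workingDir variable; fs.rmSync requires Node.js 14.14+) could be:
const fs = require('fs');

// Remove everything the function wrote to /tmp before returning.
fs.rmSync(workingDir, { recursive: true, force: true });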

NodeJs Microsoft Azure Storage SDK Download File to Stream

I just started working with the Microsoft Azure Storage SDK for NodeJS (https://github.com/Azure/azure-storage-node) and already successfully uploaded my first pdf files to the cloud storage.
However, now I have started looking at the documentation in order to download my files as a Node buffer (so I don't have to use fs.createWriteStream), but the documentation does not give any examples of how this works. The only thing it says is "There are also several ways to download files. For example, getFileToStream downloads the file to a stream:", and then it shows only one example, which uses fs.createWriteStream, which I don't want to use.
I was also not able to find anything on Google that really helped me, so I was wondering if anybody has experience with doing this and could share a code sample with me?
The getFileToStream function needs a writable stream as a parameter. If you want all the data written to a Buffer instead of a file, you just need to create a custom writable stream.
const { Writable } = require('stream');

let bufferArray = [];
const myWriteStream = new Writable({
  write(chunk, encoding, callback) {
    bufferArray.push(...chunk) // append the chunk's bytes to the array
    callback();
  }
});

myWriteStream.on('finish', function () {
  // all the data is stored inside this dataBuffer
  let dataBuffer = Buffer.from(bufferArray);
})
Then pass myWriteStream to the getFileToStream function:
fileService.getFileToStream('taskshare', 'taskdirectory', 'taskfile', myWriteStream, function (error, result, response) {
  if (!error) {
    // file retrieved
  }
});
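If it helps, here is a rough sketch (reusing the share, directory, and file names from the answer; the downloadToBuffer helper itself is an assumption, not part of the SDK) that wraps the same idea in a Promise so the caller receives the Buffer directly:
const { Writable } = require('stream');

function downloadToBuffer(share, directory, file) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const writable = new Writable({
      write(chunk, encoding, callback) {
        chunks.push(chunk); // keep each chunk as a Buffer
        callback();
      }
    });
    writable.on('finish', () => resolve(Buffer.concat(chunks)));

    fileService.getFileToStream(share, directory, file, writable, (error) => {
      if (error) reject(error);
    });
  });
}

// downloadToBuffer('taskshare', 'taskdirectory', 'taskfile').then(buf => console.log(buf.length));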

Download large data stream (> 1Gb) using javascript

I was wondering if it was possible to stream data from javascript to the browser's downloads manager.
Using WebRTC, I stream data (from files > 1 GB) from one browser to another. On the receiver side, I store all of this data in memory (as ArrayBuffers, so the data is essentially still in chunks), and I would like the user to be able to download it.
Problem: Blob objects have a maximum size of about 600 MB (depending on the browser), so I can't re-create the file from the chunks. Is there a way to stream these chunks so that the browser downloads them directly?
If you want to fetch a large file blob from an API or URL, you can use StreamSaver.
npm install streamsaver
Then you can do something like this:
import { createWriteStream } from 'streamsaver';

export const downloadFile = (url, fileName) => {
  return fetch(url).then(res => {
    const fileStream = createWriteStream(fileName);
    const writer = fileStream.getWriter();

    if (res.body.pipeTo) {
      writer.releaseLock();
      return res.body.pipeTo(fileStream);
    }

    const reader = res.body.getReader();
    const pump = () =>
      reader
        .read()
        .then(({ value, done }) => (done ? writer.close() : writer.write(value).then(pump)));

    return pump();
  });
};
and you can use it like this:
const url = "http://urltobigfile";
const fileName = "bigfile.zip";
downloadFile(url, fileName).then(() => { alert('done'); });
Following #guest271314's advice, I added StreamSaver.js to my project, and I successfully received files bigger than 1GB on Chrome. According to the documentation, it should work for files up to 15GB but my browser crashed before that (maximum file size was about 4GB for me).
Note I: to avoid the Blob max size limitation, I also tried to manually append data to the href field of an <a></a>, but it failed with files of about 600 MB ...
Note II: as amazing as it might seem, the basic technique using createObjectURL works perfectly fine on Firefox for files up to 4GB !!
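For reference, a minimal sketch of that basic createObjectURL technique (assuming the received chunks are collected in an array named chunks and the file name is a placeholder):
// Rebuild a single Blob from the received chunks and trigger a download.
const blob = new Blob(chunks, { type: 'application/octet-stream' });
const link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = 'bigfile.zip'; // hypothetical file name
document.body.appendChild(link);
link.click();
URL.revokeObjectURL(link.href);
link.remove();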

How to hash the contents of a file uploaded in Meteor.js

I'm just starting out with Meteor (and coding in general). I have done the tutorial projects and examples, and am looking to start my own project. I want users to be able to select a file on their computer with an input field: the user selects a file, the contents of the file are read, and the webpage provides a hash of the contents. Is it possible to do this client-side, without the file being uploaded to a server?
I'm a bit lost as to where I should be looking: the HTML5 FileReader API, CryptoJS, or something else? How would I go about providing that functionality in a webpage?
Yes, this can be done using the HTML5 FileReader API.
Template.fileUpload.events({
  'change #file': function (e) {
    var files = e.target.files;
    var file = files[0];
    var reader = new FileReader();
    reader.onload = function () {
      console.log(this.result);
    };
    reader.readAsText(file);
  }
});
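To get a hash of the contents rather than the raw text, one option is the browser's built-in SubtleCrypto API (a sketch, assuming SHA-256 is an acceptable algorithm; CryptoJS would work similarly):
Template.fileUpload.events({
  'change #file': function (e) {
    var file = e.target.files[0];
    var reader = new FileReader();
    reader.onload = function () {
      // Hash the file contents entirely client-side; nothing is uploaded.
      crypto.subtle.digest('SHA-256', reader.result).then(function (hashBuffer) {
        var hashHex = Array.from(new Uint8Array(hashBuffer))
          .map(function (b) { return b.toString(16).padStart(2, '0'); })
          .join('');
        console.log(hashHex);
      });
    };
    reader.readAsArrayBuffer(file); // digest() expects an ArrayBuffer
  }
});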
