How to enable and use file versioning in Firebase Storage? - javascript

Firebase Storage is built on Google Cloud Storage, which supports object versioning.
In the Firebase console there are no options regarding the underlying GCS bucket, and in the GCP console there doesn't seem to be a way of enabling versioning on the bucket belonging to the Firebase project.
Also, the Firebase SDK documentation does not mention how to access previous versions of files even if versioning were enabled.
Is versioning possible with Firebase Storage?

Firebase Storage is built on GCS, so many GCS features can be accessed via Firebase Storage. Firebase Storage also shares a GCS bucket named <project-id>.appspot.com (or similar), which can be accessed via both the Firebase console and the Cloud console.
You can enable object versioning on your bucket by using the gsutil tool (probably the easiest way) like so:
gsutil versioning set on gs://<project-id>.appspot.com
That said, there's no way of using the Firebase Storage clients to retrieve anything other than the most recent version. This was intentional, since Firebase Storage provides a simpler, mobile-focused subset of the GCS APIs, and we didn't have a super compelling use case for providing an intuitive object versioning story for mobile. Per-user data backups (initiated by the user without dev intervention) and document diffs are the two I can think of, but if you've got another, we'd love to hear it :)
We anticipate that a majority of devs will turn this on in order to prevent deletions from being permanent (and indeed, we mention doing this in our delete docs), and will thus use tools like gsutil or their own custom backends to retrieve and restore the appropriate files.
EDIT 10/1: Since these use cases have become more common, we've updated our docs to include more things you can do with Google Cloud Platform in our GCP Integration guide.
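If you do run your own backend, here's a minimal sketch of what retrieving older versions could look like with the Node.js @google-cloud/storage client; the bucket name, object name and function names are illustrative, and this is the GCS API rather than part of the Firebase SDK:
// Sketch only: list object versions and download a specific generation.
// Assumes versioning is already enabled on the bucket (see the gsutil command above)
// and that this runs on a trusted server with application default credentials.
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('<project-id>.appspot.com'); // placeholder bucket name

async function listVersions(objectName) {
  // versions: true includes noncurrent (archived) generations in the listing
  const [files] = await bucket.getFiles({ prefix: objectName, versions: true });
  files.forEach((f) => console.log(f.name, 'generation:', f.metadata.generation));
}

async function downloadGeneration(objectName, generation) {
  // Pinning a generation reads that specific version instead of the latest one
  const [contents] = await bucket.file(objectName, { generation }).download();
  return contents; // Buffer containing that version's bytes
}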

After Object Versioning is enabled, it is possible to list noncurrent object versions.
Get a previous object version (using the Cloud Storage Java client):
Long previousGeneration = ...; // generation number of the noncurrent version you want
byte[] previousContent = bucket.getStorage()
    .get(BlobId.of(bucket.getName(), objectName, previousGeneration))
    .getContent();

Related

Store and edit data using ReactJS

I am building an application using ReactJS. I am trying to find out how to store data and to edit it. I tried to store it on my computer with 'fs' and 'browserify-fs', but it didn't work.
Should I use Express, or are there any other alternatives?
If you are using React you are operating in the browser, so your built-in option for storage is localStorage.
Examples of code are:
// setter
localStorage.setItem('myData', data);
// getter
localStorage.getItem('myData');
// remove
localStorage.removeItem('myData');
// remove all
localStorage.clear();
Note this is stored in the browser and can be easily cleared. You are going to realize that you need a back-end solution. That is a server you can send requests to, which exposes an API (a place you send requests to) that executes some operation (normally CRUD - Create, Read, Update, Delete - via a REST endpoint or GraphQL) to serve back the data you request from a database (MySQL, Postgres, MongoDB). This is a whole different discussion.
To store an array in local storage you will need to make it a string via JSON.stringify. An example would be:
localStorage.setItem("array", JSON.stringify(array));
In Chrome developer tools you can go to Application -> Storage -> Local Storage and see what is saved.
If you want to share the data across multiple clients you should use a server-side solution; if you just want to save the data for a single client you can use the client-side solution provided by @diesel.
Create your own web-server
You need to create a web server and a database to store your data. For the database you could use MySQL, PostgreSQL, SQLite3, MongoDB, ... You also need to create a web service that makes the database calls securely.
To create the web server you could use Express.js; a minimal sketch is shown below.
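A rough illustration only, assuming Express is installed and an in-memory array stands in for whichever database you pick (the /items route is hypothetical):
// Minimal Express sketch: a tiny API a React app could call with fetch()
const express = require('express');
const app = express();
app.use(express.json());

let items = []; // replace with a real database (MySQL, MongoDB, ...) in practice

app.get('/items', (req, res) => res.json(items));  // Read
app.post('/items', (req, res) => {                 // Create
  items.push(req.body);
  res.status(201).json(req.body);
});

app.listen(3000, () => console.log('API listening on http://localhost:3000'));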
Headless Content Management Systems (CMS)
If you don't want to spend time creating your own web server you could install a headless CMS and read/write your data through the API endpoints it provides. Here's a list of headless CMSs: headlesscms.org. I tried Strapi, which has lots of features you might need.
Here are some Strapi features:
Open-source
Model builder
Extensible (plugin support)
Content editor (eg: to edit articles)
and many more
Firebase
If you don't want to spend your time installing CMS software on your server and maintaining it regularly, you could use the database service provided by Google Firebase. It is feature-rich too. Here are some features supported by Firebase (a small storage example follows the list):
NoSQL Database (to store your data)
Authentication (to authenticate users)
Storage (to store files)
Functions (to write serverless functions)
Machine Learning
and many more
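For example, here's a rough sketch of storing and reading data with the Firebase Web SDK (v9 modular style); the config values and the "notes" collection are placeholders:
// Sketch: write and read documents in Cloud Firestore
import { initializeApp } from 'firebase/app';
import { getFirestore, collection, addDoc, getDocs } from 'firebase/firestore';

const app = initializeApp({ apiKey: '...', projectId: '...' }); // copy real values from the Firebase console
const db = getFirestore(app);

async function demo() {
  // Create a document in a hypothetical "notes" collection
  await addDoc(collection(db, 'notes'), { text: 'hello', createdAt: Date.now() });

  // Read every document back
  const snapshot = await getDocs(collection(db, 'notes'));
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}

demo();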

Safest way to store access tokens (Tizen)

My Samsung Gear (Tizen 2.4, Web App) application makes use of several paid APIs which are protected with secret access tokens.
At the moment I simply have those tokens inside a JS file; this does not feel like a safe way to store sensitive information.
What is the recommended way to store this kind of information?
The documentation mentions a key manager:
https://developer.tizen.org/ko/development/api-references/web-application?redirect=/dev-guide/3.0.0/org.tizen.web.apireference/html/device_api/wearable/tizen/keymanager.html&langredirect=1
But I think the watch user has access to that, which is exactly what I'm trying to avoid.
Inside the config file, I can set some preferences, which I can then fetch with the preferences API. Is this secure? Or is this information extractable as well?
I was wondering what the safest way is to store sensitive app information (such as usernames, passwords, tokens, keys, ...) inside a Gear app, so that the watch user has no way to access it. Or is the code assured to be protected in the compiled WGT file?
Or is the code assured to be protected in the compiled WGT file?
There is an encryption feature available in Tizen applications. It protects HTML, JS and CSS files after installation on the device. Maybe you can use it to protect some sensitive data, but please note that the encryption happens during installation on the device, not during WGT file creation.

Storing AWS credentials in the frontend

I am trying to get hold of my image objects from S3, from my JavaScript frontend application.
According to the documentation, these are the steps required:
import * as AWS from "aws-sdk";
AWS.config.update({ accessKeyId, secretAccessKey, region });
let s3 = new AWS.S3();
And then, you can get the objects like so:
function listObjects(bucketName, folderName) {
  return new Promise((resolve) => {
    s3.listObjects({ Bucket: bucketName, Prefix: folderName }).promise()
      .then((data) => {
        resolve(data.Contents);
      });
  });
}
All seems to work correctly, but what worries me is that I also need to keep the accessKeyId and the secretAccessKey in my frontend application, in order to access the bucket.
How does one secure the bucket, or access the objects without providing these confidential data?
You're right to worry. Anyone will be able to take the credentials out of your app. There are a few approaches to this:
if the objects aren't actually sensitive, then there's nothing lost if the credential can only take the actions you wish to allow everyone. For that matter, you should be able to get rid of the need for credentials altogether if you set the permissions on your bucket properly ... I think that includes list permissions if necessary.
if the objects are sensitive, then you already have some sort of authentication system for your users. If you're using OAuth accounts to authenticate (Google, Amazon, Facebook, etc.) then you can use AWS Cognito to generate short-lived AWS credentials that are associated with that user, which would allow you to differentiate permissions between users ... it's pretty slick and a great fit if you're already using OAuth. If you're not using OAuth, consider whether you should be. It's far more secure than having to handle your own auth credentials layer for your users. https://aws.amazon.com/cognito/
if you don't want to or can't use Cognito, you can still assume an AWS role from the backend and generate temporary credentials that automatically expire in anywhere from 15 minutes to 1 hour or more, and then pass those credentials to the front end. I'd call it "poor man's Cognito", but I think it's probably actually more expensive to run the infra to provide the service than Cognito costs.
Or, as @Tomasz Swinder suggests, you can simply proxy the requests through your application, resolving the asset the user requests to an S3 resource, pulling it in your backend and then serving it to your user. This is an inferior solution in most cases because your servers are farther away from the end user than S3's endpoints are likely to be. And you have to run infrastructure to proxy. But, that having been said, it has its place.
Finally, pre-signed S3 URLs may be a good fit for your application. Typically a backend would sign the S3 URLs directly before providing them to the user. The signature is enough to authorize the operation (which can be PUT or GET) but doesn't itself contain the private key used to sign - in other words, presigned URLs provide an authorized URL but not the credentials used to authorize them, so they're a great way to provide ad hoc authorization to S3.
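For instance, a minimal sketch of a backend signing a GET URL with the AWS SDK for JavaScript v2 (the bucket name and expiry are placeholders):
// Runs on the backend, where real credentials are available (e.g. from an IAM role)
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'eu-west-1' });

// Returns a URL the frontend can fetch directly; it expires after 15 minutes
function getDownloadUrl(key) {
  return s3.getSignedUrl('getObject', {
    Bucket: 'my-private-bucket', // placeholder bucket name
    Key: key,
    Expires: 900, // seconds
  });
}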
Overall it's really awesome to have a backend-free application, and for that you're going to need a 3rd-party auth and something like Cognito. But once you start using it, you can then use all sorts of AWS services to provide what would otherwise be done by a backend. Just be careful with permissions, because AWS is all pay-as-you-go and there's usually no capability to rate-limit calls to a service, so a malicious internet user could drive up your AWS bill by making tons of calls with the temporary creds you've provided them. One notable exception to that is API Gateway, which does allow per-user rate limits and is therefore a great fit for a Cognito-authorized serverless backend.
Also bear in mind that LISTing S3 objects is both much slower and much more expensive (still cheap per op, but ~10x) than GETting S3 objects, so it's usually best to avoid calling LIST whenever possible. I'm just throwing that out there; I suspect you're just doing that to test out the S3 connection.
Could you request that through your server? Or is it a static site?
If it is a static site, you can create an IAM user for S3 that can only read the content that you are going to use and show in the frontend anyway.
One method we use is to store the credentials in a .env file and use dotenv (https://github.com/motdotla/dotenv) to read in the variables. These can then be accessed through process.env. For example, the .env file would contain:
AWSKEY=1234567abcdefg
AWSSECRET=hijklmn7654321
REGION=eu-west
Then in your code you call require('dotenv').config() to read the environment variables. You then access them as:
AWS.config.update({ accessKeyId: process.env.AWSKEY, secretAccessKey: process.env.AWSSECRET, region: process.env.REGION });
Make sure that the .env file is not committed into your repo. If you want, you could have an env.example and instructions on how to create a .env when setting up either a dev or production install.
As for securing the bucket, you can do so by restricting the read/write access to an IAM user that owns the AWS key/secret pair.

Firebase Javascript API on trigger.io - load script from local file

I'm using the Firebase JS API in my trigger.io app.
My app must be able to start up and operate in Airplane Mode. Would it be acceptable for me to reference a local copy of the Firebase JS file, or must this always be loaded from the CDN URL?
Alternatively, is there a way the file could be cached locally and requested on a scheduled basis to get the latest version, or is there another mechanism I should use that I'm missing out on?
If you referenced a local copy of the firebase.js lib, it would work as well as the remote copy, at least initially. Since Firebase is in beta, changes can be pushed to that lib at any time, making your local copy obsolete.
Utilizing a local copy wouldn't, by itself, solve the issue you are hoping to address. While Firebase will survive temporary outages and spotty coverage, there is no locally stored copy of the data, so you'll need to either connect to Firebase initially and obtain that data, or use set() to create some sort of local default if offline.
More robust offline support is on the Firebase road map.
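As a rough sketch of the set() fallback mentioned above, using the legacy Firebase JS API of that era (the URL and data are placeholders):
// Seed a local default so the app has something to show when it starts offline
var ref = new Firebase('https://<your-app>.firebaseio.com/settings'); // placeholder URL
ref.set({ theme: 'dark', lastSync: null }); // queued and synced once a connection exists

// Read whatever value is currently available
ref.on('value', function (snapshot) {
  console.log(snapshot.val());
});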
Some additional and very informative reading can be found here:
using firebase on offline networks
Does Firebase allow an app to start in offline mode?
How to sync offline database with Firebase when device is online?

Three.JS, Amazon S3, and access control origin errors for hosted JS files

Background
We use the Javascript library Three.JS for visualizing models stored up on Amazon S3.
I use the JSONLoader for all of my models. Other formats lack the toolchain support our team needs, and common formats like COLLADA or OBJ seem to be second-class citizens as far as the included loader libraries go (they are found, for instance, in the source tree under "examples"... the JSONLoader is in the core loaders folder).
I have large model files, and so store them and their associated assets up on Amazon S3 storage, where bandwidth and space are relatively cheap. The intent is that the web app using Three.JS loads models from our storage on Amazon, and everything is okay.
Problem
Unfortunately, the models are JavaScript files ("modelBlah.js", for example), and when they are loaded by the JSONLoader any sane browser immediately pouts about the fact that we're violating the same-origin policy for scripting: we're loading and attempting to evaluate scripts from a different domain than the calling script (which is the main harness for the app).
So, it would seem that we've flown in the face of many years of web security best practices.
Solutions looked at so far
Host the models ourselves? We're using Heroku for now, and ideally we'd like to use a service specifically billed as "Big Buckets of Bits and Bandwidth" instead of doing it ourselves.
Use DNAME records to spoof where the resources come from? Unfortunately, this doesn't seem sufficient to fool browsers, as the subdomain used for the media hosting would still enrage the browser security.
Use CORS, specifically Access-Control-Allow-Origin headers? A brief skim of the Amazon S3 docs doesn't seem to show a way to allow this, though I am hopefully mistaken. Even so, would that be sufficient?
Any ideas?
You can now finally use CORS on Amazon: http://docs.amazonwebservices.com/AmazonS3/latest/dev/cors.html
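For reference, a minimal sketch of setting a CORS rule on the bucket with the AWS SDK for JavaScript (the bucket name and allowed origin are placeholders; the same rules can also be set from the S3 console as described in the linked docs):
// Sketch: allow GET requests to the bucket from a specific origin
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketCors({
  Bucket: 'my-model-bucket', // placeholder bucket name
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://example.com'], // your app's origin
      AllowedMethods: ['GET'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000,
    }],
  },
}, (err) => {
  if (err) console.error('Failed to set CORS:', err);
});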
S3 does not yet allow you to set CORS. To work around this exact problem, I ended up running an EC2 instance to act as a proxy server for downloading the models. The proxy (currently just running Node) grabs the file from S3, sets the CORS header and passes it down to the application. There are a lot of options for the Node setup, including knox or bufferjs.
You definitely need CORS, and I think S3 allows it. Otherwise, this weekend I had to set up a bucket on Google's Cloud Storage with CORS enabled and it was fairly easy (with gsutil).
