How to get google drive folders path - javascript

I have authenticated with Google Drive using Passport.js, so I now have an access token.
I am looking to get all folder paths with their respective folder IDs, but I am not sure which parameters to set so that only folder paths and IDs are returned.
I am using the google-drive library to fetch the list.
Here is my code
var googleDrive = require('google-drive')
var param = {}
googleDrive(accessToken).files().list(param, (err, response, body) => {
  // fetch folder paths here
})
Thanks.

Try changing
var param={}
to
var param={q: "mimeType='application/vnd.google-apps.folder' and trashed=false"}
With all due respect to the authors of the library you are using, be very careful. Either a library deals with all of Drive's subtleties (e.g. following nextPageTokens) or it will cause you problems. The Drive REST API is a very well formed API, and you could just as easily access it using Fetch, which, being Promise based, makes for much cleaner code and allows you to use async/await.
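To illustrate that point, here is a hedged sketch of calling the Drive v3 REST API directly with fetch, following nextPageToken so no pages are dropped. The function name `listAllFolders` and the `fields` string are illustrative, not part of the google-drive library above:

```javascript
// List every folder in Drive via the REST API, paging until nextPageToken
// runs out. Assumes a valid OAuth access token; fetchImpl is injectable so
// the pagination logic can be exercised without network access.
async function listAllFolders(accessToken, fetchImpl = fetch) {
  const folders = [];
  let pageToken;
  do {
    const params = new URLSearchParams({
      q: "mimeType='application/vnd.google-apps.folder' and trashed=false",
      fields: 'nextPageToken, files(id, name, parents)',
    });
    if (pageToken) params.set('pageToken', pageToken);
    const res = await fetchImpl(
      'https://www.googleapis.com/drive/v3/files?' + params,
      { headers: { Authorization: 'Bearer ' + accessToken } }
    );
    const body = await res.json();
    folders.push(...(body.files || []));
    pageToken = body.nextPageToken;
  } while (pageToken);
  return folders;
}
```

The `parents` field gives each folder's parent IDs, which is what you need if you want to reconstruct full paths client-side.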

How to upload a massive number of books automatically to a database

I have a client who has 130k books (~4 TB), and he wants a site where he can upload them to build an online library. How can I make it possible for him to upload them automatically, or at least multiple books at a time? I'll be using Node.js + MySQL.
I might suggest using object storage on top of MySQL to speed up book indexing and retrieval, but that is entirely up to you.
HTTP is a streaming protocol, and node, interestingly, has streams.
This means that, when you send an HTTP request to a node server, node handles it internally as a stream. In theory, you can upload massive books to your web server while only holding a fraction of each one in memory at a time.
The first thing to note is that books can be very large. To process them efficiently, we must handle the metadata (name, author, etc.) and the content separately.
One example, using an Express-like framework (pseudo-code):
app.post('/begin/:bookid', (req, res) => {
  // Parse the JSON body, then store the book's metadata.
  parseJSON(req)
  MySQL.insertRow(req.params.bookid, req.body.name, req.body.author)
})
app.put('/upload/:bookid', (req, res) => {
  // If we use MySQL to store the book content:
  MySQL.appendContent(req.params.bookid, req.body)
  // Or, with object storage, stream the request straight through:
  let uploader = new StorageUploader(req.params.bookid)
  req.pipe(uploader)
})
If you need inspiration, look at how WeTransfer has created their API. They deal with lots of data daily- their solution might be helpful to you.
Remember- your client likely won't want to use Postman to upload their books. Build a simple website for them in Svelte or React.

How to add an additional endpoint to the AWS Amplify Express Server?

I've generated a REST Express server with Amplify.
I tried adding two more endpoints to it:
// using serverless express
app.post('/myendpoint', function(req, res) {
console.log('body: ', req.body)
res.json(req.body)
});
// using serverless express
app.get('/myendpoint', function(req, res) {
res.json({success: 'get call to my endpoint succeed!', url: req.url});
});
After running amplify push I don't see these endpoints in the console, and I can't make requests to them via amplify.
The endpoints that were generated as part of the initial configuration work.
What is the correct way to add more REST endpoints? I have a feeling that I'm missing some additional configuration step.
After deploying the API and function, use "amplify update api" to add an additional path.
As Kevin Le stated, running "amplify update api" will allow you to add another "root" path such as "/items" or "/customers", but I have been running into problems adding nested paths such as "/customer/addAddress" after the initial API creation.
What I have tried:
I can add them during the initial creation of the API, but not during an update
I can change the CloudFormation template to include them in the API Gateway, but any calls to the nested path are picked up by the root path's proxy.
I can add them in the Express Lambda function (this defeats the purpose of API Gateway).
They need to add a few enhancements to amplify rest API so we can take advantage of the API Gateway. It's not truly serverless until we can separate functions for GET, POST, PUT, etc.
UPDATE: I wanted to follow up after further investigation. A solution I found is to remove the "root" path's {proxy} endpoint in the CloudFormation file for the API Gateway. The file should be located at: "project root"/amplify/backend/api/"api name"/"api name"-cloudformation-template.json. Then remove the path located at Resources->"api name"->Properties->Body->paths->"api name"/{proxy+}.
FURTHER INVESTIGATION: I have not tried this yet, but the order of the paths in the CloudFormation file might have an effect on how the requests are processed, so if you move the "root" proxy to the last path, you might not have to remove it. Also thanks to Piotr for fixing my poor grammar!
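For orientation, a hedged sketch of where that catch-all lives in the CloudFormation body; the resource and path names here are placeholders for whatever your API actually generated:

```json
{
  "Resources": {
    "myapiname": {
      "Properties": {
        "Body": {
          "paths": {
            "/customer": { "note": "explicit root path stays" },
            "/customer/addAddress": { "note": "nested path added by hand" },
            "/{proxy+}": { "note": "the catch-all to remove (or move last)" }
          }
        }
      }
    }
  }
}
```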
You can run "amplify update api", then choose to add a new endpoint, and assign a new (or an existing) Lambda function to that endpoint.

Is there a way in firebase functions to delete storage files that match a regular expression?

I can't find a way to delete all the files inside firebase storage that match a regular expression. I would like to use something like:
const bucket = storage.bucket(functions.config().firebase.storageBucket);
const filePath = 'images/*.png';
const file = bucket.file(filePath);
file.delete();
Or similar to be able to delete all files inside "images" with png extension.
I tried searching in Stackoverflow and in the samples repository without luck https://github.com/firebase/functions-samples
Any help would be much appreciated. Thanks!
No. In order to delete a file from Cloud Storage, you need to be able to construct a full path to that file. There are no wildcards or regular expressions.
It's common to store the paths to files in a database in order to easily discover the names of files to delete by using some query.
If you want to do this in Cloud Functions, you'll be using the Firebase Admin SDK. And the Cloud Storage functionality in there (admin.storage()) is a thin wrapper around the Cloud Storage SDK for Node.js. So if you search for cloud storage node.js path regular expression you'll get some relevant results.
One such result is "How to get files from Google Cloud Storage bucket based on a regular expression?", which shows that prefix matching is possible with bucket.getFiles({ prefix: 'path/to/file' }).
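Putting those pieces together, here is a hedged sketch of the approach for a Cloud Function: list files under a prefix, filter client-side with a regular expression, then delete the matches. `deleteMatching`, the `images/` prefix, and the `.png` pattern are all illustrative, not an official API:

```javascript
// In a real Cloud Function you would get the bucket from the Admin SDK:
// const admin = require('firebase-admin');
// const bucket = admin.storage().bucket();

// Pure helper, kept separate so the regex match is easy to reason about.
function matchesPattern(fileName, pattern) {
  return pattern.test(fileName);
}

// Lists files under `prefix`, deletes those whose names match `pattern`,
// and returns the deleted names.
async function deleteMatching(bucket, prefix, pattern) {
  const [files] = await bucket.getFiles({ prefix });
  const doomed = files.filter((f) => matchesPattern(f.name, pattern));
  await Promise.all(doomed.map((f) => f.delete()));
  return doomed.map((f) => f.name);
}

// Usage inside a function (illustrative):
// await deleteMatching(bucket, 'images/', /\.png$/);
```

Note this is two round trips (list, then delete each match), so for very large prefixes you may want to page through results rather than listing everything at once.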
What about this solution (note: this uses the Android Java SDK rather than Node.js):
FirebaseStorage storage = FirebaseStorage.getInstance();
StorageReference mStorageRef = storage.getReference().child("fotos_usuarios/" + id);
mStorageRef.listAll().addOnSuccessListener(listResult -> {
    for (StorageReference item : listResult.getItems()) {
        if (item.getName().contains("SOME_WORD")) {
            item.delete();
        }
    }
});

Hiding contentful Space id and access token, client side javascript file

I am new to Contentful and I am trying to display content from Contentful on a web page using contentful.js. How can I hide this information (the space ID and access token values) from public users when I use it in a JS file? Below is the JavaScript code I use in my main JS file to display the content in the HTML file.
var client = contentful.createClient({
accessToken: 'b4c0n73n7fu1',
space: 'cfexampleapi'
});
client.entries()
  .then(function (entries) {
    // log the file url of any linked assets on image field name
    entries.forEach(function (entry) {
      if (entry.fields.SampleContent) {
        document.getElementById('sample_content_block').innerHTML = entry.fields.SampleContent;
      }
    })
  })
Thanks in advance!
There are a few different ways that you could do this. The easiest would be to move your space ID and access token into environment variables. Take a look at this Medium post on how to do that with JavaScript: https://medium.com/ibm-watson-data-lab/environment-variables-or-keeping-your-secrets-secret-in-a-node-js-app-99019dfff716.
Then on your hosting provider, you'd be able to set the environment variable as those variables and your code would be able to utilize them.
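As a hedged sketch of that setup, the credentials can be read from the environment instead of being hard-coded; the variable names `CONTENTFUL_SPACE_ID` and `CONTENTFUL_ACCESS_TOKEN` are illustrative, not required by the SDK:

```javascript
// const contentful = require('contentful'); // as in the question

// Builds the client config from environment variables, failing loudly if
// either credential is missing instead of making unauthenticated calls.
function configFromEnv(env) {
  if (!env.CONTENTFUL_SPACE_ID || !env.CONTENTFUL_ACCESS_TOKEN) {
    throw new Error('Missing CONTENTFUL_SPACE_ID or CONTENTFUL_ACCESS_TOKEN');
  }
  return {
    space: env.CONTENTFUL_SPACE_ID,
    accessToken: env.CONTENTFUL_ACCESS_TOKEN,
  };
}

// var client = contentful.createClient(configFromEnv(process.env));
```

Keep in mind this only truly hides the token for code running on a server; anything bundled into browser JavaScript ships to the user regardless of where the value came from at build time.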
Something interesting to note is that all the content you put on Contentful is assumed to be read-only. Exposing your Content Delivery API key (the access token in your example) and your space ID isn't the end of the world in the way that sharing an API key for some other services can be. Using the CDA key you posted wouldn't enable someone to edit any of the data you have stored on Contentful, only to read it.
However, you want to make sure you don't expose your CMA Key (Content Management API Key) as that would allow people to edit and change your content in Contentful.
You could use your own server to proxy requests, or something like Aerobatic's express request proxy feature: https://www.aerobatic.com/docs/http-proxy

how to insert an attachment in a timeline using google-api-nodejs-client?

I am trying the Google Glass Mirror APIs now. My test app is a simple node.js/express server with googleapis (https://github.com/google/google-api-nodejs-client).
So far I could do almost all the basic operations of timelines successfully, such as list/get/update/delete, without attachments. Here is how I insert a timeline card:
var googleapis = require('googleapis');
app.all('/timeline_insert', function(req, res) {
  var timeline = {'text': req.query.text};
  googleapis.discover('mirror', 'v1')
    .execute(function(err, client) {
      client.mirror.timeline.insert({resource: timeline})
        .withAuthClient(oauth2client)
        .execute(function(err, result) {
          // ...
        });
    });
});
Now I want to go one step further and test the attachment features. However, I have no idea how to do this via googleapis and node.js. Is there any sample code for attachment operations such as insert/get? I know I can always issue raw HTTP requests, but since googleapis already provides the APIs, I'd like to use them directly. Thanks.
The Node.js client library, which is based on the JavaScript client library, has no built-in support for media upload: you will need to build the request "manually".
This answer should help you get started on building this request.
More information on Google's media upload protocol can be found in our documentation.
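As a starting point for building that request "manually", here is a hedged sketch of assembling the multipart/related body that Google's media upload protocol (uploadType=multipart) expects. The `buildMultipartBody` helper and the boundary handling are illustrative, not part of googleapis:

```javascript
// Builds a multipart/related body: first part is the JSON timeline item
// metadata, second part is the raw media bytes, per Google's upload protocol.
function buildMultipartBody(metadata, contentType, media, boundary) {
  return [
    '--' + boundary,
    'Content-Type: application/json; charset=UTF-8',
    '',
    JSON.stringify(metadata),
    '--' + boundary,
    'Content-Type: ' + contentType,
    '',
    media,
    '--' + boundary + '--',
    '',
  ].join('\r\n');
}

// The request itself would then look like (illustrative):
// POST https://www.googleapis.com/upload/mirror/v1/timeline?uploadType=multipart
// Content-Type: multipart/related; boundary=<boundary>
// Authorization: Bearer <access token>
```

Send that body with your OAuth access token in the Authorization header, and the Mirror API should create the timeline item and its attachment in a single call.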
