I'm having a problem with the Google Drive API.
I'm trying to upload an Excel file with this API, but it's not working. Even copying the sample from the Google API documentation doesn't work.
Here is a sample of my code:
@Get('teste')
async teste() {
  const keys = require(path.resolve('src', 'files', 'api', 'keys'))
  const client = new google.auth.JWT(
    keys.client_email,
    null,
    keys.private_key,
    ['https://www.googleapis.com/auth/drive.metadata.readonly']
  )
  client.authorize((err, tokens) => {
    if (err) {
      console.log(err)
      return;
    } else {
      this.gdrun(client)
    }
  })
}
gdrun(client) {
  const drive = google.drive({version: 'v3', auth: client});
  var fileMetadata = {
    name: 'My Report',
    mimeType: 'application/vnd.google-apps.spreadsheet'
  };
  var media = {
    mimeType: 'application/vnd.ms-excel',
    body: require(path.resolve('src', 'files', 'excel', 'solargroup.xlsx'))
  };
  drive.files.create({
    resource: fileMetadata,
    media: media,
    fields: 'id'
  }, function (err, file: any) {
    if (err) {
      // Handle error
      console.error(err);
    } else {
      console.log('File Id:', file.id);
    }
  });
}
I received this error:
I believe your goal is as follows.
You want to upload a file (an XLSX file) to the Google Drive of the service account.
You want to achieve this using the service account with googleapis for Node.js.
From your script, I understood that you might want to upload the XLSX file and convert it to a Google Spreadsheet.
Modification points:
When you want to upload a file to Google Drive, in this case, please use the scope https://www.googleapis.com/auth/drive instead of https://www.googleapis.com/auth/drive.metadata.readonly.
When you want to upload an XLSX file as an XLSX file, the mimeType is application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.
When you want to upload a file, body: require(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')) cannot be used. In this case, please use body: fs.createReadStream(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')). I thought that your error message might be due to this.
When you want to retrieve the file ID of the uploaded file, please modify file.id to file.data.id.
When the above points are reflected in your script, it becomes as follows.
Modified script:
From:
const client = new google.auth.JWT(
  keys.client_email,
  null,
  keys.private_key,
  ['https://www.googleapis.com/auth/drive.metadata.readonly']
)
To:
const client = new google.auth.JWT(
  keys.client_email,
  null,
  keys.private_key,
  ['https://www.googleapis.com/auth/drive'] // Modified
)
And also, please modify your gdrun() as follows.
gdrun(client) {
  const drive = google.drive({ version: "v3", auth: client });
  var fileMetadata = {
    name: "My Report",
    mimeType: "application/vnd.google-apps.spreadsheet",
  };
  var media = {
    mimeType: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", // Modified
    body: fs.createReadStream(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')), // Modified
  };
  drive.files.create(
    {
      resource: fileMetadata,
      media: media,
      fields: "id",
    },
    function (err, file) {
      if (err) {
        console.error(err);
      } else {
        console.log("File Id:", file.data.id); // Modified
      }
    }
  );
}
In this case, please also add const fs = require("fs") to your script.
Result:
When the above script is run, the following result is obtained.
File Id: ###fileId###
Note:
Your script uploads an XLSX file to the Google Drive of the service account as a Google Spreadsheet. In this case, you cannot directly see the uploaded file in your own Google Drive, because the Google Drive of the service account is different from your Google Drive. When you want to see the uploaded file in your Google Drive, please create a folder on your Google Drive and share the created folder with the email of the service account. Then, please upload the file to the shared folder. By this, you can see the uploaded file in your Google Drive with your browser. For this, please modify fileMetadata as follows.
var fileMetadata = {
  name: "My Report",
  mimeType: "application/vnd.google-apps.spreadsheet",
  parents: ["### folderId ###"], // Please set the folder ID of the folder shared with the service account.
};
In the above script, the maximum file size is 5 MB. Please be careful about this. When you want to upload a file larger than 5 MB, please use a resumable upload. Ref
References:
Upload file data
Files: create
I am creating an application using ASP.NET MVC and JavaScript in which I want to create folders inside my existing Google Drive folder.
Below is my code, which I got from Stack Overflow:
function createFolder() {
  var body = {
    'title': document.getElementById('txtFolderName').value,
    'mimeType': "application/vnd.google-apps.folder"
  };
  var request = gapi.client.drive.files.insert({
    'resource': body
  });
  request.execute(function (resp) {
    console.log('Folder ID: ' + resp.id);
  });
}
I am getting the below error
index.html:61 Uncaught TypeError: Cannot read properties of undefined (reading 'files')
on the following line
var request = gapi.client.drive.files.insert({
Here gapi.client.drive appears to be undefined.
Below is my code to authenticate and load the Google API client:
function authenticate(callback) {
  return gapi.auth2.getAuthInstance()
    .signIn({ scope: "https://www.googleapis.com/auth/documents https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/drive.file" })
    .then(function () {
      console.log("Sign-in successful");
      callback == undefined ? '' : callback();
    },
    function (err) {
      console.error("Error signing in", err);
    });
}
function loadClient() {
  gapi.client.setApiKey("APIKEY");
  return gapi.client.load("https://docs.googleapis.com/$discovery/rest?version=v1")
    .then(function () {
      console.log("GAPI client loaded for API");
    },
    function (err) {
      console.error("Error loading GAPI client for API", err);
    });
}
What is the problem here?
And what do I need to do if I want to create a folder inside another folder?
Thanks in advance.
In your script, https://docs.googleapis.com/$discovery/rest?version=v1 of gapi.client.load("https://docs.googleapis.com/$discovery/rest?version=v1") is the discovery document for Google Docs API v1, so the Drive API client is never loaded. I think that this is the reason for your error message of Uncaught TypeError: Cannot read properties of undefined (reading 'files').
It seems that your goal is to create a folder inside a specific folder. In this case, please use the Drive API. But when I saw your current script for creating the folder, Drive API v2 is used. So please modify as follows.
From:
return gapi.client.load("https://docs.googleapis.com/$discovery/rest?version=v1")
To:
return gapi.client.load("https://www.googleapis.com/discovery/v1/apis/drive/v2/rest")
By this modification, I think that your createFolder() will work. But in your current createFolder(), the folder is created in the root folder. When you want to create the folder inside a specific folder, please modify the request body as follows.
var body = {
  'title': document.getElementById('txtFolderName').value,
  'mimeType': "application/vnd.google-apps.folder",
  'parents': [{'id': '###folderId###'}]
};
Note:
As additional information, if you want to use Drive API v3, please modify it as follows.
From
return gapi.client.load("https://docs.googleapis.com/$discovery/rest?version=v1")
To
return gapi.client.load("https://www.googleapis.com/discovery/v1/apis/drive/v3/rest")
And, please modify createFolder() as follows.
var body = {
  'name': document.getElementById('txtFolderName').value,
  'mimeType': "application/vnd.google-apps.folder",
  'parents': ['###folderId###']
};
var request = gapi.client.drive.files.create({ 'resource': body });
References:
Files: insert of Drive API v2
Files: create of Drive API v3
I'm using googleapis to upload different files to Google Drive. The scenario is:
The user provides a document and its information through my REST API (I'm using Node.js).
The REST API creates the directory that will contain the document, if it doesn't already exist.
The REST API uploads the document to that directory.
The structure of the drive is:
/root/documents/$type/$new_document
where $type is one of the user-provided fields and $new_document is the document that was provided by the user.
The way I connect:
oauth2Client.setCredentials({ refresh_token: REFRESH_TOKEN });
drive_instance = google.drive({
  version: 'v3',
  auth: oauth2Client,
});
I figured out how to upload the document to the root folder of the Google Drive:
try {
  const response = await drive.files.create({
    requestBody: {
      name: file.name,
      mimeType: file.mimetype,
    },
    media: {
      mimeType: file.mimetype,
      body: file.data,
    },
  });
  console.log(response.data);
} catch (error) {
  console.log(error.message);
}
What I'm struggling with is:
How to create the directory /root/documents/$type if it doesn't already exist?
How to upload the $new_document to /root/documents/$type?
For the second question, I know that the docs provide an option parents[] that will contain all the parent folder IDs. But then, how can I get the folder ID of /root/documents/$type? Is there some way to combine the steps (like maybe a mkdir -p for the directories, or having the directory creation return the ID of the directory)?
1. Try to find the folder you need via the drive.files.list() method
You need to set a filter. Example:
add 'q': "..." in the request body to search for what you need
use "name = 'Some Folder Name'" to search by name
use "mimeType = 'application/vnd.google-apps.folder'" to search only folders
Thus, combine them via and:
'q': "name = 'Some Folder Name' and mimeType = 'application/vnd.google-apps.folder'" to find all folders with the name you chose
const folderName = 'Some Folder Name'
gapi.client.drive.files
  .list({
    'q': "name = 'Some Folder Name' and mimeType = 'application/vnd.google-apps.folder'",
    'pageSize': 1000,
    'fields': "files(id, name, parents)"
  })
  .then((response) => {
    const files = response.result.files;
    if (files.length > 0) {
      handleSearchResult(files)
    } else {
      createFolder()
    }
  });
About handleSearchResult() or createFolder():
There may be more than one matching file, so you can find the necessary root by inspecting files[i].parents. That's why I added parents in 'fields': "files(id, name, parents)". https://developers.google.com/drive/api/v3/reference/files
You can also add a search rule on parents, e.g. "'###folderId###' in parents".
https://developers.google.com/drive/api/v3/search-files
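For instance, handleSearchResult() could pick the folder whose parents include a folder ID you already know (a minimal sketch; the function body and the placeholder ID are mine, not from the question):
function handleSearchResult(files) {
  // Hypothetical: the ID of the folder the target should live under.
  const knownParentId = '###parentFolderId###';
  // Keep only folders whose parents include the known parent.
  const match = files.find((f) => (f.parents || []).includes(knownParentId));
  if (match) {
    console.log('Found folder:', match.id);
  } else {
    createFolder();
  }
}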
If the search brings back 0 files, just create the folder yourself. You can create the path/directory step by step: create the first folder in the Drive root and remember its ID, then create the second folder with the first folder's ID in its parents, and so on. By the way, you can use almost the same logic for the search; a combined sketch follows.
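Putting both halves together, a mkdir -p-style helper for the Node.js googleapis client from the question might look like this (a sketch under assumptions: drive is an authorized v3 client, and the helper name and queries are illustrative):
// Walks e.g. ['documents', type], reusing each folder if it exists and
// creating it otherwise; returns the ID of the deepest folder.
async function ensureFolderPath(drive, segments) {
  let parentId = 'root';
  for (const name of segments) {
    // Names containing quotes would need escaping before being interpolated.
    const res = await drive.files.list({
      q: `name = '${name}' and mimeType = 'application/vnd.google-apps.folder' ` +
         `and '${parentId}' in parents and trashed = false`,
      fields: 'files(id, name)',
    });
    if (res.data.files.length > 0) {
      parentId = res.data.files[0].id; // reuse the existing folder
    } else {
      const created = await drive.files.create({
        requestBody: {
          name,
          mimeType: 'application/vnd.google-apps.folder',
          parents: [parentId],
        },
        fields: 'id',
      });
      parentId = created.data.id;
    }
  }
  return parentId;
}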
2. Create the folder if necessary
Example:
// name = 'Folder Name',
// parents = ['some-parent1-id', 'some-parent2-id', ...]
function createFolder(name, parents) {
  const fileMetadata = {
    'name': name,
    'mimeType': 'application/vnd.google-apps.folder',
    'parents': parents
  };
  gapi.client.drive.files
    .create({
      resource: fileMetadata,
    }).then((response) => {
      switch (response.status) {
        case 200:
          const file = response.result;
          console.log('Created Folder Id: ' + file.id);
          break;
        default:
          console.log('Error creating the folder, ' + response);
          break;
      }
    });
}
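Called, for example, like this (the parent folder ID is a placeholder):
createFolder('documents', ['###rootFolderId###']);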
3. Upload the file with parents set
You should add parents: ['id-of-folder'] in the requestBody, as in the sketch below.
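In the Node.js client from the question, that could look like this (a sketch; the folder ID would come from the list/create steps above):
const response = await drive.files.create({
  requestBody: {
    name: file.name,
    mimeType: file.mimetype,
    parents: ['id-of-folder'], // ID found or created in steps 1-2
  },
  media: {
    mimeType: file.mimetype,
    body: file.data,
  },
  fields: 'id',
});
console.log('Uploaded file ID:', response.data.id);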
Read more in Google Drive API - Files: create
I hope it will help at least a bit:) Keep it up!
You can create a folder inside a folder using the following method.
You should have the ID of the folder you want to store the new folder in (it can be extracted using the Node.js API, or by opening the folder and looking at the characters after the last / in the URL).
Use the special mimeType reserved for folders in Google Drive (application/vnd.google-apps.folder).
Considering your example:
drive.files.create({
  requestBody: {
    name: "SomeFolder",
    mimeType: "application/vnd.google-apps.folder"
  }
}, (error, folder) => {
  console.log(folder.data.id);
  drive.files.create({
    requestBody: {
      name: "SomeFolderInsideAFolder",
      mimeType: "application/vnd.google-apps.folder",
      parents: [folder.data.id] // nest the new folder under the first one
    }
  });
});
You can even create a recursive folder-upload function by combining file upload and folder upload, which can upload a whole folder; a sketch follows.
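A minimal sketch of that recursion, assuming an authorized Node.js drive client and a local directory path (the helper name is mine, not from the answer):
const fs = require('fs');
const path = require('path');

// Recursively mirrors a local directory into Drive under parentId.
async function uploadFolder(drive, localPath, parentId) {
  const folder = await drive.files.create({
    requestBody: {
      name: path.basename(localPath),
      mimeType: 'application/vnd.google-apps.folder',
      parents: [parentId],
    },
    fields: 'id',
  });
  for (const entry of fs.readdirSync(localPath, { withFileTypes: true })) {
    const entryPath = path.join(localPath, entry.name);
    if (entry.isDirectory()) {
      await uploadFolder(drive, entryPath, folder.data.id); // recurse into subfolders
    } else {
      await drive.files.create({
        requestBody: { name: entry.name, parents: [folder.data.id] },
        media: { body: fs.createReadStream(entryPath) },
      });
    }
  }
}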
Not sure if this is the best place to post this question; please redirect me if it isn't, and I will remove the post and post it in the correct location.
I know that Amazon S3 has recently changed the URL scheme for accessing files.
It used to be something like http://s3.amazonaws.com/<bucket> or http://s3.<region>.amazonaws.com/<bucket>,
but this has changed to http://<bucket>.s3-<aws-region>.amazonaws.com or http://<bucket>.s3.amazonaws.com, per this documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
http://<bucket>.s3.amazonaws.com is supposed to be unreachable after March 20, 2019, BUT when I use aws-sdk in JavaScript to do a file upload with skipper-better-s3, the URL I get back from AWS is http://<bucket>.s3.amazonaws.com/<Key>.
If that URL is not supposed to be reachable, why would AWS return such a URL? (I can still access the file using the URL.)
If that URL is not supposed to be reachable in the near future, am I supposed to add in the region myself or modify the URL myself instead of using the URL returned by AWS?
Or might it be my code's problem?
Below is my code for the upload:
const awsOptions = { // these fields are different because this uses skipper
  adapter: require('skipper-better-s3'),
  key: aws_access_key,
  secret: aws_secret_key,
  saveAs: PATH,
  bucket: BUCKET,
  s3params: {
    ACL: 'public-read'
  },
}
const fieldName = req._fileparser.upstreams[0].fieldName;
req.file(fieldName).upload(awsOptions, (err, filesUploaded) => {
  if (err) reject(err);
  const filesUploadedF = filesUploaded[0]; // F = first file
  const url = filesUploadedF.extra.Location; // image url -> https://<bucket>.s3.amazonaws.com/<Key>
  console.log(url, 'urlurlurl');
});
filesUploadedF would return:
UploadedFileMetadata {
  fd: '<Key>',
  size: 4337,
  type: 'image/png',
  filename: 'filename.png',
  status: 'bufferingOrWriting',
  field: 'image',
  extra:
   { ETag: '111111111111111111111',
     Location: 'https://<bucket>.s3.amazonaws.com/<Key>',
     key: '<key>',
     Key: '<Key>',
     Bucket: '<Bucket>',
     md5: '32890jf32890jf0892j3f',
     fd: '<Key>',
     ContentType: 'image/png' }
}
The documentation you linked to for http://<bucket>.s3.amazonaws.com style naming says this:
Note
Buckets created in Regions launched after March 20, 2019 are not reachable via the https://bucket.s3.amazonaws.com naming scheme.
The wording there is important. They're only talking about new regions brought online after March 20, 2019.
To date, that's only buckets created in Middle East (Bahrain) and Asia Pacific (Hong Kong) regions.
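If you ever do want the region-qualified form, you can assemble it yourself from the bucket, region, and key (a one-line sketch, not an SDK feature):
const url = `https://${bucket}.s3.${region}.amazonaws.com/${encodeURIComponent(key)}`;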
I intend to download a dynamically generated PDF file using a remote method. The file exists at a particular path, and I am using the return type "file". My implementation is:
customer.downloadFile = function downloadFile(userId, res, cb) {
  var reader = fs.createReadStream(__dirname + '/../document.pdf');
  cb(null, reader, 'application/pdf');
};
customer.remoteMethod(
  'downloadFile',
  {
    isStatic: true,
    accepts: [
      { arg: 'id', type: 'string', required: true },
      { arg: 'res', type: 'object', 'http': { source: 'res' } }
    ],
    returns: [
      { arg: 'body', type: 'file', root: true },
      { arg: 'Content-Type', type: 'string', http: { target: 'header' } }
    ],
    http: { path: '/:id/downloadFile', verb: 'get' }
  }
);
The issue with the above code is that the browser displays the PDF viewer, but instead of the file the following error is shown:
Please point out what is wrong with the code and how to correct it.
Got a lead from this URL: https://github.com/strongloop/loopback-swagger/issues/34
Got that working with the following:
fs.readFile(fileName, function (err, fileData) {
  res.contentType("application/pdf");
  res.status(200).send(fileData);
  if (!err) {
    fs.unlink(fileName, function () {}); // fs.unlink requires a callback in current Node
  } else {
    cb(err);
  }
});
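For context, here is how that snippet might sit inside the remote method from the question (a sketch; fileName stands for the path of the generated PDF):
customer.downloadFile = function downloadFile(userId, res, cb) {
  var fileName = __dirname + '/../document.pdf'; // the generated file
  fs.readFile(fileName, function (err, fileData) {
    if (err) return cb(err);
    res.contentType('application/pdf');
    res.status(200).send(fileData);
    fs.unlink(fileName, function () {}); // clean up the temp file after sending
  });
};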
You should use loopback-component-storage to manage downloadable files.
Files are grouped in so-called containers (basically, a folder with a single level of hierarchy, not more).
How it is done:
Create a model called container, for instance.
Create a storage datasource that uses loopback-component-storage as its connector.
Give the container model the storage datasource.
That's it. You can upload and download files to/from your container.
With a container, you can store files to a local filesystem, or move later to Cloud solutions.
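A minimal sketch of that wiring, following LoopBack's usual file layout (the filesystem provider and paths here are illustrative assumptions):
In server/datasources.json:
{
  "storage": {
    "name": "storage",
    "connector": "loopback-component-storage",
    "provider": "filesystem",
    "root": "./server/storage"
  }
}
In common/models/container.json:
{
  "name": "container",
  "base": "Model",
  "public": true
}
And in server/model-config.json, map the model to the datasource:
{
  "container": {
    "dataSource": "storage",
    "public": true
  }
}
With that in place, LoopBack exposes REST endpoints such as POST /api/containers/:container/upload and GET /api/containers/:container/download/:file.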
I'm trying to upload files to my Amazon S3 bucket. S3 and Amazon are set up.
This is the error message from Amazon:
Conflicting query string parameters: acl, policy
The policy and signature are encoded with crypto for Node.js:
var crypto = Npm.require("crypto");
I'm trying to build the POST request with Meteor's HTTP.post method. This could be wrong as well.
var BucketName = "mybucket";
var AWSAccessKeyId = "MY_ACCES_KEY";
var AWSSecretKey = "MY_SECRET_KEY";
//create policy
var POLICY_JSON = {
  "expiration": "2009-01-01T00:00:00Z",
  "conditions": [
    {"bucket": BucketName},
    ["starts-with", "$key", "uploads/"],
    {"acl": 'public-read'},
    ["starts-with", "$Content-Type", ""],
    ["content-length-range", 0, 1048576],
  ]
}
var policyBase64 = encodePolicy(POLICY_JSON);
//create signature
var SIGNATURE = encodeSignature(policyBase64, AWSSecretKey);
console.log('signature: ', SIGNATURE);
This is the POST request I'm using with Meteor:
//Send data----------
var options = {
  "params": {
    "key": file.name,
    'AWSAccessKeyId': AWSAccessKeyId,
    'acl': 'public-read',
    'policy': policyBase64,
    'signature': SIGNATURE,
    'Content-Type': file.type,
    'file': file,
    "enctype": "multipart/form-data",
  }
}
HTTP.call('POST', 'https://' + BucketName + '.s3.amazonaws.com/', options, function (error, result) {
  if (error) {
    console.log("and HTTP ERROR:", error);
  } else {
    console.log("result:", result);
  }
});
and here I'm encoding the policy and the signature:
encodePolicy = function (jsonPolicy) {
  // stringify the policy, store it in a NodeJS Buffer object
  var buffer = new Buffer(JSON.stringify(jsonPolicy));
  // convert it to base64
  var policy = buffer.toString("base64");
  // replace "/" and "+" so that it is URL-safe.
  return policy.replace(/\//g, "_").replace(/\+/g, "-");
}
encodeSignature = function (policy, secret) {
  var hmac = crypto.createHmac("sha256", secret);
  hmac.update(policy);
  return hmac.digest("hex");
}
I can't figure out what's going on. There might already be a problem in the POST method, or in the encoding, because I don't know these methods too well. If someone could point me in the right direction on how to encode or send the POST request to Amazon S3 properly, it would help a lot.
(I don't want to use filepicker.io, because I don't want to force the client to sign up there as well.)
Thanks in advance!!!
For direct uploads to S3 you can use the slingshot package:
meteor add edgee:slingshot
On the server side declare your directive:
Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
  bucket: "mybucket",
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  acl: "public-read",
  authorize: function () {
    //You can add user restrictions here
    return true;
  },
  key: function (file) {
    return file.name;
  }
});
This directive will generate the policy and signature automatically.
And then just upload it like this:
var uploader = new Slingshot.Upload("myFileUploads");
uploader.send(document.getElementById('input').files[0], function (error, url) {
  Meteor.users.update(Meteor.userId(), {$push: {"profile.files": url}});
});
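One assumption worth noting: if I remember the package correctly, Slingshot.S3Storage reads the AWS credentials from Meteor.settings, so you would also start Meteor with a settings file along these lines (placeholder values):
{
  "AWSAccessKeyId": "YOUR_ACCESS_KEY_ID",
  "AWSSecretAccessKey": "YOUR_SECRET_ACCESS_KEY"
}
and run meteor --settings settings.json.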
Why don't you use the aws-sdk package? It packs all the needed methods for you. For example, here's a simple call for adding a file to a bucket:
s3.putObject({
  Bucket: ...,
  ACL: ...,
  Key: ...,
  Metadata: ...,
  ContentType: ...,
  Body: ...,
}, function (err, data) {
  ...
});
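Filled in, a call might look like this (a sketch with assumed values; fileBuffer stands for file contents you already have in memory):
var AWS = Npm.require('aws-sdk'); // or require('aws-sdk') outside Meteor
var s3 = new AWS.S3({ accessKeyId: AWSAccessKeyId, secretAccessKey: AWSSecretKey });

s3.putObject({
  Bucket: 'mybucket',
  ACL: 'public-read',
  Key: 'uploads/' + file.name,
  ContentType: file.type,
  Body: fileBuffer, // a Buffer with the file contents
}, function (err, data) {
  if (err) console.error(err);
  else console.log('Uploaded, ETag:', data.ETag);
});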
Check out the S3 meteor package. The readme has a very comprehensive walkthrough of how to get started.
The first thing is to add the package for S3 file upload.
For installation, add the AWS SDK smart package:
$ meteor add peerlibrary:aws-sdk
1. Create a directive upload.js and paste in this code.
angular.module('techno')
  .directive("fileupload", [function () {
    return {
      scope: {
        fileupload: "="
      },
      link: function (scope, element, attributes) {
        $('.button-collapse').sideNav();
        element.bind("change", function (event) {
          scope.$apply(function () {
            scope.fileupload = event.target.files[0];
          });
        })
      }
    };
  }]);
2. Get your access keys and paste them into your fileUpload.js file.
AWS.config.update({
  accessKeyId: 'YOUR_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_SECRET_ACCESS_KEY'
});
AWS.config.region = 'us-east-1';
let bucket = new AWS.S3();
3. Now put this upload code in your directive fileUpload.js:
vm.upload = (Obj) => {
  vm.loadingButton = true;
  let name = Obj.name;
  let params = {
    Bucket: 'technodheeraj',
    Key: name,
    ContentType: 'application/pdf',
    Body: Obj,
    ServerSideEncryption: 'AES256'
  };
  bucket.putObject(params, (err, data) => {
    if (err) {
      console.log('---err------->', err);
    } else {
      vm.fileObject = {
        userId: Meteor.userId(),
        eventId: id, // id is assumed to be in scope here
        fileName: name,
        fileSize: Obj.size, // size of the uploaded file
      };
      Meteor.call("saveFile", vm.fileObject, (error, result) => { // assuming a standard Meteor method
        if (!error) {
          console.log('File saved successfully');
        }
      });
    }
  });
};
4. Now in the "saveFile" method, paste this code (wrapped here in Meteor.methods, where such a method would normally be defined):
Meteor.methods({
  saveFile: function (file) {
    if (file) {
      return Files.insert(file);
    }
  }
});
5. In your HTML, paste this code:
<input type="file" name="file" fileupload="file">
<button type="button" class="btn btn-info" ng-click="vm.upload(file)">Upload File</button>