Google Drive API Export GAS file specific version - javascript

I am currently trying to bring some order to our Google Apps Script files and develop an HtmlService app that finds and parses GAS files in Google Drive and produces API documentation based on JSDoc-style comments.
I have the web app functional and can pull all the data I need and parse the comments, but by default it exports the current GAS file contents, regardless of whether they have been published or not.
What I would like to do is pull the contents of the latest saved version rather than the current dev content. Is there a way I can specify a version to export?
I am using UrlFetchApp.fetch() to get the content, as per below:
var params = {
  headers: {
    'Accept': 'application/vnd.google-apps.script+json',
    'Authorization': 'Bearer ' + ScriptApp.getOAuthToken()
  },
  method: 'get'
};
var fileDrive = Drive.Files.get(fileId);
var link = JSON.parse(fileDrive)['exportLinks']['application/vnd.google-apps.script+json'];
var fetched = UrlFetchApp.fetch(link, params);
return { meta: fileDrive, source: JSON.parse(fetched.getContentText()) };
Any help would be appreciated, thank you!

Based on this documentation, each change to a file is referred to as a "Revision", and access to revisions is provided through the Revisions resource. You can programmatically save new revisions of a file or query the version history, as detailed in the Revisions reference.
Listing revisions
The following example demonstrates how to list the revisions for a given file. Note that some properties of revisions are only available for certain file types. For example, G Suite application files do not consume space in Google Drive and thus return a file size of 0.
function listRevisions(fileId) {
  var revisions = Drive.Revisions.list(fileId);
  if (revisions.items && revisions.items.length > 0) {
    for (var i = 0; i < revisions.items.length; i++) {
      var revision = revisions.items[i];
      var date = new Date(revision.modifiedDate);
      Logger.log('Date: %s, File size (bytes): %s', date.toLocaleString(),
                 revision.fileSize);
    }
  } else {
    Logger.log('No revisions found.');
  }
}
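If you need the contents of one particular saved version rather than the head, the same UrlFetchApp pattern from the question can in principle be pointed at a single revision. Below is a minimal, untested sketch; whether a revision of a script file actually exposes an exportLinks entry for the Apps Script JSON MIME type is an assumption you would need to verify against your own files:
function getRevisionSource(fileId, revisionId) {
  var params = {
    headers: {
      'Accept': 'application/vnd.google-apps.script+json',
      'Authorization': 'Bearer ' + ScriptApp.getOAuthToken()
    },
    method: 'get'
  };
  var revision = Drive.Revisions.get(fileId, revisionId);
  // exportLinks may not be populated for every file type or revision;
  // treat this as an assumption to check, not a guarantee
  var link = revision.exportLinks['application/vnd.google-apps.script+json'];
  return JSON.parse(UrlFetchApp.fetch(link, params).getContentText());
}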
Hope this helps!

Retrieving the output of a Google Apps script to local computer

I'm running a Google Apps Script (based on Gmail data) and I want to save some of the data computed in JavaScript to my local computer. How is it possible to save/download that data to a local file?
All I have found is:
var addresses = [];
var threads = GmailApp.search("label:mylabel", 0, 10);
for (var i = 0; i < threads.length; i++) {
  var messages = threads[i].getMessages();
  for (var j = 0; j < messages.length; j++) {
    addresses.push(messages[j].getFrom());
  }
}
Logger.log(addresses);
but Logger.log(...) is not very useful for saving this data to a local computer.
I propose this as another answer.
If you want to get data computed by Google Apps Script onto your local computer, a Web App can be used for this situation. For example, if the script returns its data when it is called with the query parameter run=ok, the following sample can be used.
Script:
function doGet(e) {
  if (e.parameter.run == "ok") {
    // do something
    var result = "sample data";
    return ContentService.createTextOutput(result).setMimeType(ContentService.MimeType.TEXT);
  }
  // return an empty response when the expected parameter is missing
  return ContentService.createTextOutput("");
}
You can retrieve the result using curl on your local PC as follows. The URL is the Web App URL.
curl -L https://script.google.com/macros/s/#####/exec?run=ok
Result:
sample data
A Google Apps Script can't save anything directly to your computer. But it can save data to a Google Spreadsheet, which you can then download.
You may want to either use an existing spreadsheet, or create a new one. So, the logic begins either with
var ss = SpreadsheetApp.create("Output"); // new spreadsheet
or
var ss = SpreadsheetApp.openByUrl(url); // existing spreadsheet
Either way, suppose you want to store the list of addresses in Column A of the first sheet in this spreadsheet. That would be
var sheet = ss.getSheets()[0];
var range = sheet.getRange(1, 1, addresses.length, 1);
var data = addresses.map(function(x) {return [x];});
range.setValues(data);
The intermediate step with data is to turn the one-dimensional array addresses into a two-dimensional one, [[addr1], [addr2], ...] - this is the way that values are represented in Google Sheets.
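Putting the pieces together, a minimal end-to-end sketch (assuming a brand-new spreadsheet called "Output" and the addresses logic from the question) might look like this:
function exportAddressesToSheet() {
  // collect the sender addresses, as in the question
  var addresses = [];
  var threads = GmailApp.search("label:mylabel", 0, 10);
  for (var i = 0; i < threads.length; i++) {
    var messages = threads[i].getMessages();
    for (var j = 0; j < messages.length; j++) {
      addresses.push(messages[j].getFrom());
    }
  }
  // write them to column A of a new spreadsheet
  var ss = SpreadsheetApp.create("Output");
  var sheet = ss.getSheets()[0];
  var data = addresses.map(function(x) { return [x]; });
  if (data.length === 0) return 'no addresses found';
  sheet.getRange(1, 1, data.length, 1).setValues(data);
  return ss.getUrl(); // open this URL in the browser and download the sheet locally
}
From the spreadsheet you can then use File > Download to get a CSV or xlsx copy on your machine.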
Last night I was having some trouble debugging a program and I wanted to be able to view the JSON in the new UltraEdit on my computer, so this is how I got the data from my Google script to my computer.
function getText(obj) {
  var presentationId = getPresentationId();
  var data = Slides.Presentations.Pages.get(obj.presId, obj.pageId);
  myUtilities.saveFile(data);
  for (var i = 0; i < data.pageElements.length; i++) {
    if (data.pageElements[i].shape && data.pageElements[i].shape.text &&
        data.pageElements[i].shape.text.textElements.length) {
      for (var j = 0; j < data.pageElements[i].shape.text.textElements.length; j++) {
        if (data.pageElements[i].shape.text.textElements[j].textRun) {
          var text = data.pageElements[i].shape.text.textElements[j].textRun.content;
        }
      }
    }
  }
  return text;
}
myUtilities.saveFile(data) is part of a utilities library I wrote that makes it easy to store all kinds of data in ASCII files. It just takes a few minutes; the file is auto-synced down to my computer's Google Drive, and I opened the file with UltraEdit and began analyzing it. Nothing remarkable, no fancy footwork, just taking advantage of what's already there.
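For anyone without that library, a rough stand-in (the function and file names here are just examples, not part of an existing library) could be as simple as:
function saveFile(data, name) {
  // serialize non-string data as pretty-printed JSON and drop it in Drive,
  // from where it syncs down to the local Google Drive folder
  var content = (typeof data === 'string') ? data : JSON.stringify(data, null, 2);
  return DriveApp.createFile(name || 'debug-output.json', content, MimeType.PLAIN_TEXT);
}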

Download whole Folder (google drive api)

Hey guys, I have a Google Drive app where I can download, upload and do many more things.
The problem is that I need to be able to download the "folder" as well.
So the scenario is:
Folder1
-FolderA
--fileA1
--fileA2
--FolderAA
---fileAA1
---FileAA2
---FolderAAA
----FileAAA1
-FolderB
-FolderC
--FileC1
If I click on download Folder1, I want it to download everything you see.
If I click on download FolderC, it only downloads FolderC (or a zip) with FileC1 in it.
The files are easy to download because they have webContentLink
I already read:
Download folder with Google Drive API
You will need to do a Files.list which will return a list of each of the files.
{
  "kind": "drive#fileList",
  "nextPageToken": string,
  "incompleteSearch": boolean,
  "files": [
    files Resource
  ]
}
Loop through each of the files and download it. If you are after a way of doing it in a single request, there isn't one. You will need to download each file one by one.
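If the app in question happens to be an Apps Script one, the same loop-and-recurse idea can be sketched with DriveApp and Utilities.zip (a sketch only; the function names are mine, and very large folders will run into Apps Script blob-size and execution-time limits):
function collectFolder(folder, path, out) {
  // add every file in this folder, then recurse into sub-folders
  var files = folder.getFiles();
  while (files.hasNext()) {
    var file = files.next();
    out.push(file.getBlob().setName(path + '/' + file.getName()));
  }
  var subFolders = folder.getFolders();
  while (subFolders.hasNext()) {
    var sub = subFolders.next();
    collectFolder(sub, path + '/' + sub.getName(), out);
  }
  return out;
}

function downloadFolderAsZip(folderId) {
  var folder = DriveApp.getFolderById(folderId);
  var blobs = collectFolder(folder, folder.getName(), []);
  // blob names keep the relative paths, so the zip preserves the hierarchy
  return Utilities.zip(blobs, folder.getName() + '.zip');
}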
Although the question is already answered, I had a similar situation and wanted to share some code. The code recursively digs through folders and saves the files in the exact same hierarchy.
The code is C#, but it is pretty much self-explanatory. Hoping it might be of some help.
private void downloadFile(DriveService MyService, File FileResource, string path)
{
    if (FileResource.MimeType != "application/vnd.google-apps.folder")
    {
        var stream = new System.IO.MemoryStream();
        MyService.Files.Get(FileResource.Id).Download(stream);
        System.IO.FileStream file = new System.IO.FileStream(path + @"/" + FileResource.Title, System.IO.FileMode.Create, System.IO.FileAccess.Write);
        stream.WriteTo(file);
        file.Close();
    }
    else
    {
        string NewPath = path + @"/" + FileResource.Title;
        System.IO.Directory.CreateDirectory(NewPath);
        var SubFolderItems = RessInFolder(MyService, FileResource.Id);
        foreach (var Item in SubFolderItems)
            downloadFile(MyService, Item, NewPath);
    }
}
public List<File> RessInFolder(DriveService service, string folderId)
{
    List<File> TList = new List<File>();
    var request = service.Children.List(folderId);
    do
    {
        var children = request.Execute();
        foreach (ChildReference child in children.Items)
            TList.Add(service.Files.Get(child.Id).Execute());
        request.PageToken = children.NextPageToken;
    } while (!String.IsNullOrEmpty(request.PageToken));
    return TList;
}
Note that this code was written for API v2, and Children.List does not exist in API v3. Use files.list with ?q='parent_id'+in+parents if you use v3.
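For reference, in v3 the folder listing boils down to a single files.list call with a q filter; for example, from the command line (the access token and folder ID are placeholders):
curl -H "Authorization: Bearer ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/files?q='FOLDER_ID'+in+parents"
The response contains the files array shown earlier, which you then walk and download one file at a time.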

Solution to map different excel files to db

I have to map a lot of different files with different structures to a database. There are a lot of different tables in those xlsx files, so I thought about a schemaless noSQL approach, but I'm quite a newbie in this field.
It should be a microservice with a client interface for choosing tables/cells for parsing the xlsx files. I am not tied to a specific technology; it could be Java, Groovy, Python or even a JavaScript engine.
Do you know any working solution for doing it?
Here is example xlsx (but I've got also other files, also in xls format): http://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultaktualnosci/5502/11/13/1/wyniki_finansowe_podmiotow_gospodarczych_1-6m_2015.xlsx
The work you have to do is called ETL (Extract Transform Load). You need to either find a good ETL software (here is a discussion about open source ETL) or to script your own solution in a language you are used with.
The advantage of ready-made GUI software is that you just have to drag and drop data, but if you have some custom logic or semi-structured data like in your xlsx example, you have limited support.
The advantage of writing your own script is you have all the freedom you need.
I have done some ETL work and successfully used Groovy to write my own solution with custom logic and so on; in terms of GUI, I used Altova MapForce when I had to import some exotic file types.
If you decide to write your own solution you have to:
Convert all data to an easy to load format. In your case you have to convert each xls or xlsx tab to CSV with a naming convention.
Load your files in your chosen language for transforming
Do your logic to put data in a desirable format
Save it in a database (SQL or noSQL)
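For illustration, here is a minimal Node.js sketch of steps 2-4 (the file name, the semicolon delimiter, the database and collection names, and the naive line splitting are all assumptions; a real loader would use a proper CSV parser):
const fs = require('fs');
const { MongoClient } = require('mongodb');

async function loadCsvIntoMongo(path) {
  // load the CSV and turn each row into a plain object keyed by the header row
  const [header, ...rows] = fs.readFileSync(path, 'utf8').trim().split('\n');
  const keys = header.split(';');
  const docs = rows.map(line => {
    const values = line.split(';');
    return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
  });
  // save the documents into a database (MongoDB in this sketch)
  const client = await MongoClient.connect('mongodb://localhost:27017');
  await client.db('reports').collection('sheets').insertMany(docs);
  await client.close();
}

loadCsvIntoMongo('wyniki_finansowe_1-6m_2015.csv').catch(console.error);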
Maybe you should try Google Sheets to display the Excel files and Google Apps Script (https://developers.google.com/apps-script/overview) to write a custom add-on for parsing the data to JSON.
The Spreadsheet Service (https://developers.google.com/apps-script/reference/spreadsheet/) has plenty of methods for accessing data in sheets.
Next you can send this JSON over an API (https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) or put it directly into a database (https://developers.google.com/apps-script/guides/jdbc).
Maybe it isn't clean, but it's a fast solution.
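A rough sketch of that idea (the sheet name and the assumption that the first row holds column headers are mine):
function sheetToJson(spreadsheetId, sheetName) {
  var sheet = SpreadsheetApp.openById(spreadsheetId).getSheetByName(sheetName);
  var values = sheet.getDataRange().getValues();
  var headers = values.shift(); // assume the first row contains column names
  var rows = values.map(function(row) {
    var obj = {};
    headers.forEach(function(h, i) { obj[h] = row[i]; });
    return obj;
  });
  return JSON.stringify(rows); // send this over UrlFetchApp or write it via JDBC
}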
I had a project that did almost the same work as your problem, but it seemed easier because I had a fixed structure of xlsx files.
For xlsx parsing, I experimented with Python and openpyxl and had no struggles while working with them; they are simple, fast and easy to use.
For the database, I recommend using MongoDB; you can deal with documents and collections in MongoDB just as simply as working with JSON objects or sets of JSON objects. PyMongo is, I think, the best and recommended way to work with MongoDB from Python.
The problem is that you have different files with different structures. I cannot recommend anything deeper without viewing your data, but you should find the general structure of them, or figure out a way to classify them into common sets, each of which will be parsed using an appropriate algorithm.
A JavaScript (Windows Script Host) solution that works like xlsx2csv (you can redirect the export anywhere):
var def = "1.xlsx";
if (WScript.Arguments.length > 0) def = WScript.Arguments(0);
var col = [];
var objShell = new ActiveXObject("Shell.Application");
var fs = new ActiveXObject("Scripting.FileSystemObject");

function flush() {
  WScript.Echo(col.join(';'));
}

function import_xlsx(file) {
  var strZipFile = file;   // name of the xlsx file, treated as a zip
  var outFolder = ".";     // destination folder of unzipped files (must exist)
  var pwd = WScript.ScriptFullName.replace(WScript.ScriptName, "");
  var i, j, k;
  var strXlsFile = strZipFile;
  var strZipFile = strXlsFile.replace(".xlsx", ".zip").replace(".XLSX", ".zip");
  fs.CopyFile(strXlsFile, strZipFile, true);
  var objSource = objShell.NameSpace(pwd + strZipFile).Items();
  var objTarget = objShell.NameSpace(pwd + outFolder);
  for (i = 0; i < objSource.Count; i++)
    if (objSource.item(i).Name == "xl") {
      if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
      objTarget.CopyHere(objSource.item(i), 256);
    }
  var xml = new ActiveXObject("Msxml2.DOMDocument.6.0");
  xml.load("xl\\sharedStrings.xml");
  var sel = xml.selectNodes("/*/*/*");
  var vol = [];
  for (i = 0; i < sel.length; i++) vol.push(sel[i].text);
  xml.load("xl\\worksheets\\sheet1.xml");
  ret = "";
  var line = xml.selectNodes("/*/*/*");
  var li, line2 = 0, line3 = 0, row;
  for (li = 0; li < line.length; li++) {
    if (line[li].nodeName == "row")
      for (row = 0; row < line[li].childNodes.length; row++) {
        r = line[li].childNodes[row].selectSingleNode("#r").text;
        line2 = eval(r.replace(r.substring(0, 1), ""));
        if (line2 != line3) {
          line3 = line2;
          if (line3 != 0) {
            // flush -------------------------- line3
            flush();
            for (i = 0; i < col.length; i++) col[i] = "";
          }
        }
        try {
          t = line[li].childNodes[row].selectSingleNode("#t").text;
          // i = instr("ABCDEFGHIJKLMNOPQRSTUVWXYZ", left(r,1))
          i = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ").indexOf(r.charAt(0));
          while (i > col.length) col.push("");
          if (t == "s") {
            t = eval(line[li].childNodes[row].firstChild.text);
            col[i] = vol[t];
          } else col[i] = line[li].childNodes[row].firstChild.text;
        } catch (e) {}
      }
    flush();
  }
  if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
  if (fs.FileExists(strZipFile)) fs.DeleteFile(strZipFile);
}
import_xlsx(def);
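Run it with the Windows Script Host console runner and redirect standard output to get the CSV, for example (the script file name is just an example):
cscript //nologo xlsx2csv.js wyniki_finansowe.xlsx > sheet1.csv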

how to access a file on the client's machine

I have a few clients that will be using my website, and I want each client to have their own "config" file (e.g. location=1 for one computer, location=2 for another). I want to do this using a file I place on the client's machine; when they access the website, the client looks at their own machine and figures out what to load based on what's in that file. This file can be a CSV file, plain text file, or any other kind of file it needs to be for this to work.
Looking online, all I've seen is stuff with file uploaders. I don't want them to have to select the file; I just want the file contents to load and call a JavaScript function when they visit.
Example of file
Location=1
AnswerToQuestion=42
and another file
Location=2
AnswerToQuestion=15
and my JS function
var setAnswerToQuestion = function(answer) {
  locationConfig.setAnswer(answer);
};
Take a look at localStorage. It's a persistent key/value store that the browser implements to keep data for your website/webapp.
The Basic Principle:
To set a variable:
localStorage.setItem('answer_1', '42');
To get a variable:
localStorage.getItem("answer_1");
I guess if you have lots of answers you would end up with an array/object something like this:
var answers = [42, 15];
Towards a Solution:
You could store and retrieve that by using JSON.stringify and JSON.parse:
localStorage.setItem('answers', JSON.stringify(answers));
var answers = JSON.parse(localStorage.getItem('answers'));
Be Educated
Smashing Magazine has a tutorial here
Dive into HTML5 has a tutorial here
You can't access files on local machines without using "file upload". You could store your config data in browser localStorage instead:
var getConfigData = function() {
  return JSON.parse(localStorage.getItem('config'));
}

var saveConfigData = function(config) {
  localStorage.setItem('config', JSON.stringify(config));
}

var addDataToConfig = function(key, value) {
  var config = getConfigData();
  config[key] = value;
  saveConfigData(config);
}

var config = {
  Location: 1,
  AnswerToQuestion: 42
};

// save new config
saveConfigData(config);

// add new data to config
addDataToConfig('name', 'John Doe');

createBlockBlob and commitBlobBlocks create empty files in BlobStorage

I'm developing a web app that can upload large file into the Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services (the web app will generate contents for mobile devices) in nodeJS.
My client can successfully send chunks of data to the backend, everything looks fine but, at the end, the uploaded file is empty. The data upload has been prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single requests. The process fails when the file needs to be broken in chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here the code:
Back-end: createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName+"."+blobExt;
console.log("BlobFull: "+blobFull+"; id: "+blockId+"; len: "+streamLen+"; "+stream);
var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
  function(error, response) {
    if (error) {
      request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
      request.respond(statusCodes.OK, {message: "block created"});
    }
  });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName+"."+blobExt;
var blockIdList = request.body.blockList;
console.log("blobFull: "+blobFull+"; blockIdList: "+JSON.stringify(blockIdList));
var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function(error, result) {
  if (error) {
    request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
  } else {
    request.respond(statusCodes.OK, result);
    blobService.listBlobBlocks(container, blobFull);
  }
});
[...]
The second method returns the correct list of blockIds, so I think the second part of the process works fine. I think it is the first method that fails to write the data inside the block, as if it creates some empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
// data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
  binary += String.fromCharCode(requestData[i]);
}
[...]
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not too sure if you're still suffering from this problem, but I was experiencing the exact same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I pushed the block but with how I committed it. My little proxy server had no knowledge of prior commits; it just pushed the data it was sent and committed it. The trouble was I wasn't providing the commit message with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
  UncommittedBlocks: [IdOfJustCommitedBlock],
  CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
}
blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r){});
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, as:
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block with the name being base64 encoded.
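In other words, before re-committing you may need to decode those names back into the original block ids; a small sketch (the exact shape of the result object and the Name property are what I saw with my library version, so treat them as assumptions):
blobService.listBlobBlocks('containerName', 'blobName', 'committed', function(e, list) {
  var committedIds = list.CommittedBlocks.map(function(block) {
    // block names come back base64 encoded
    return new Buffer(block.Name, 'base64').toString();
  });
  // committedIds then becomes the CommittedBlocks array in opts above
});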
Just for completeness, here's how I push my blocks; req is from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
Hope this helps and is detailed enough.
