Retrieving the output of a Google Apps Script to a local computer - javascript

I'm running a Google Apps Script (based on Gmail data) and I want to save some data from JavaScript to my local computer. How is it possible to save/download data computed in JavaScript, as in the code below, to a local file?
All I have found is:
var addresses = [];
var threads = GmailApp.search("label:mylabel", 0, 10);
for (var i = 0; i < threads.length; i++) {
  var messages = threads[i].getMessages();
  for (var j = 0; j < messages.length; j++) {
    addresses.push(messages[j].getFrom());
  }
}
Logger.log(addresses);
but Logger.log(...) is not very useful for saving this data to my local computer.

I propose this as another answer.
If you want to retrieve data from Google Apps Script or Google Drive, a Web App can be used for this situation. For example, to return data from Google when the request parameter run=ok is sent, the following sample script can be used.
Script:
function doGet(e) {
  if (e.parameter.run == "ok") {
    // do something
    var result = "sample data";
    return ContentService.createTextOutput(result).setMimeType(ContentService.MimeType.TEXT);
  }
  return;
}
You can retrieve the data in result using curl on your local PC as follows. The URL is the Web App URL.
curl -L https://script.google.com/macros/s/#####/exec?run=ok
Result:
sample data
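For the Gmail example in the question, a minimal sketch of such a doGet (assuming the Web App is deployed so the account running curl is allowed to access it) could look like this:
function doGet(e) {
  if (e.parameter.run == "ok") {
    var addresses = [];
    var threads = GmailApp.search("label:mylabel", 0, 10);
    for (var i = 0; i < threads.length; i++) {
      var messages = threads[i].getMessages();
      for (var j = 0; j < messages.length; j++) {
        addresses.push(messages[j].getFrom());
      }
    }
    // Return one address per line so the curl output can be redirected to a file.
    return ContentService.createTextOutput(addresses.join("\n"))
        .setMimeType(ContentService.MimeType.TEXT);
  }
  return ContentService.createTextOutput("");
}
On the local side, curl -L https://script.google.com/macros/s/#####/exec?run=ok > addresses.txt would save the output to a file.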

A Google Apps Script can't save anything directly to your computer. But it can save data to a Google Spreadsheet, which you can then download.
You may want to either use an existing spreadsheet, or create a new one. So, the logic begins either with
var ss = SpreadsheetApp.create("Output"); // new spreadsheet
or
var ss = SpreadsheetApp.openByUrl(url); // existing spreadsheet
Either way, suppose you want to store the list of addresses in Column A of the first sheet in this spreadsheet. That would be
var sheet = ss.getSheets()[0];
var range = sheet.getRange(1, 1, addresses.length, 1);
var data = addresses.map(function(x) {return [x];});
range.setValues(data);
The intermediate step with data turns the one-dimensional array addresses into a two-dimensional one, [[addr1], [addr2], ...] - this is how values are represented in Google Sheets.
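Putting the pieces together, a minimal sketch (assuming addresses is the array built in the question, and that downloading via the spreadsheet UI is acceptable):
function exportAddresses(addresses) {
  var ss = SpreadsheetApp.create("Output"); // new spreadsheet
  var sheet = ss.getSheets()[0];
  var data = addresses.map(function(x) { return [x]; }); // 1D -> 2D
  sheet.getRange(1, 1, data.length, 1).setValues(data);
  Logger.log(ss.getUrl()); // open this URL, then File > Download (e.g. as CSV) to save locally
}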

Last night I was having some trouble debugging a program and wanted to be able to view the JSON in the new UltraEdit on my computer, so this is how I got the data from my Google script to my computer.
function getText(obj) {
  var presentationId = getPresentationId();
  var data = Slides.Presentations.Pages.get(obj.presId, obj.pageId);
  myUtilities.saveFile(data);
  for (var i = 0; i < data.pageElements.length; i++) {
    if (data.pageElements[i].shape && data.pageElements[i].shape.text && data.pageElements[i].shape.text.textElements.length) {
      for (var j = 0; j < data.pageElements[i].shape.text.textElements.length; j++) {
        if (data.pageElements[i].shape.text.textElements[j].textRun) {
          var text = data.pageElements[i].shape.text.textElements[j].textRun.content;
        }
      }
    }
  }
  return text;
}
myUtilities.saveFile(data) is part of a utilities library I wrote that makes it easy to store all kinds of data in ASCII files. It just takes a few minutes; the file is auto-synced down to my computer's Google Drive, and I opened it with UltraEdit and began analyzing it. Nothing remarkable. No fancy footwork, just taking advantage of what's already there.
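The library itself isn't shown here; a hypothetical stand-in for such a saveFile helper, using plain DriveApp, might look like this:
// Hypothetical sketch, not the author's actual library: serialize any data to a
// text file in Drive so the Drive sync client pulls it down to the local machine.
function saveFile(data, name) {
  var fileName = name || 'debug-' + new Date().getTime() + '.json';
  return DriveApp.createFile(fileName, JSON.stringify(data, null, 2), MimeType.PLAIN_TEXT);
}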

Related

Running into a 1.7mb limit with CSOM based upload functionality

Running into the following error when I try to upload files larger than 1.7 MB:
"Request failed with error message - The request message is too big. The server does not allow messages larger than 2097152 bytes. . Stack Trace - undefined"
function uploadFile(arrayBuffer, fileName) {
  // Get Client Context, Web and List object.
  var clientContext = new SP.ClientContext();
  var oWeb = clientContext.get_web();
  var oList = oWeb.get_lists().getByTitle('CoReTranslationDocuments');
  var bytes = new Uint8Array(arrayBuffer);
  var i, length, out = '';
  for (i = 0, length = bytes.length; i < length; i += 1) {
    out += String.fromCharCode(bytes[i]);
  }
  var base64 = btoa(out);
  var createInfo = new SP.FileCreationInformation();
  createInfo.set_content(base64);
  createInfo.set_url(fileName);
  var uploadedDocument = oList.get_rootFolder().get_files().add(createInfo);
  clientContext.load(uploadedDocument);
  clientContext.executeQueryAsync(QuerySuccess, QueryFailure);
}
We just switched from SP2013 to SharePoint Online. This code worked well with even larger files previously. Does the 2 MB limit refer to the file being uploaded or to the size of the REST request?
I also did read about a possible solution using filestream - is that something I can use in this scenario?
Any suggestions/ modifications to the code will be much appreciated.
SharePoint has its own limits for CSOM. Unfortunately, these limits cannot be configured in Central Administration and also cannot be set using CSOM for obvious reasons.
When googling the issue, the solution most often given is to set the ClientRequestServiceSettings.MaxReceivedMessageSize property to the desired size.
Run the following PowerShell script from the SharePoint Management Shell:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.Update()
This will set the limit to 200 MB.
However, in SharePoint 2013 Microsoft apparently added another configuration setting to also limit the amount of data which the server shall process from a CSOM request (Why anyone would configure this one differently is beyond me...). After reading a very, very long SharePoint Log file and crawling through some disassembled SharePoint server code, I found that this parameter can be set via the property ClientRequestServiceSettings.MaxParseMessageSize.
We are now using the following script with SharePoint 2013 and it works great:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.ClientRequestServiceSettings.MaxParseMessageSize = 209715200
$ws.Update()
Hope that saves some people a headache!

How to open and write data into file from API call using node.js

I wrote an API and the response from that API is an array of data. Whenever a response comes from that API, I want to store it in a file in .txt format. I tried, but it shows an error like "No such directory or no such path". How do I create a file and write data into it from an API using Node.js? This is the code I wrote:
exports.Entry = functions.https.onRequest((req, res) => {
  var fs = require('fs');
  var a = ['6', '7', '8'];
  var b = ['22', '27', '20'];
  var eachrecord = [];
  for (var i = 0; i < a.length; i++) {
    eachrecord += a + b;
  }
  console.log("eachrecord is", eachrecord);
  //Writing each record value into file
  fileWriteSync('./filewriting1.txt');
  function fileWriteSync(filePath) {
    var fd = fs.openSync(filePath, 'w');
    var length = eachrecord.length;
    for (i = 0; i < length; i++) {
      var eachrecordwrite = fs.writeSync(fd, eachrecord[i] + '\n', null, null);
      console.log("hii", eachrecord[i]);
    }
    fs.closeSync(fd);
  }
});
How can I write data into a file from an API using Node.js?
You can only write files to os.tmpdir(), which is /tmp on Cloud Functions. /tmp is a memory based filesystem. Everything else is read-only. If you don't intend to do anything with that written file, it will consume memory indefinitely. You should always delete files written to /tmp before the function terminates. Writing a file to memory like this is almost certainly not the best solution to a problem, unless there is a consumer for that content that can only read it off the local filesystem.
Since you haven't really said what problem you're trying to solve, it's not possible to say what you could be doing instead (that's something for a different question). But anyway, you can only write to /tmp.
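A minimal sketch of writing to /tmp in a Cloud Function (assuming the same firebase-functions setup as in the question; the record data is illustrative only):
const functions = require('firebase-functions');
const fs = require('fs');
const os = require('os');
const path = require('path');

exports.Entry = functions.https.onRequest((req, res) => {
  const records = ['6 22', '7 27', '8 20']; // illustrative data only
  const filePath = path.join(os.tmpdir(), 'filewriting1.txt'); // resolves to /tmp on Cloud Functions
  fs.writeFileSync(filePath, records.join('\n'));
  // ... hand the file to whatever consumer needs it on the local filesystem ...
  fs.unlinkSync(filePath); // delete it so the memory-backed /tmp is freed
  res.send('done');
});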

Google Drive API Export GAS file specific version

I am currently trying to bring some order to our Google Apps Script files and develop an HtmlService app that finds and parses GAS files in Google Drive and produces API documentation based on jsdoc-style comments.
I have the Web App functional and can pull all the data I need and parse the comments, but by default it exports the current GAS file contents, regardless of whether they have been published or not.
What I would like to do is pull the contents of the latest saved version rather than the current dev content. Is there a way I can specify a version to export?
I am using UrlFetchApp.fetch() to get the content, as below:
var params = {
  headers: {
    'Accept': 'application/vnd.google-apps.script+json',
    'Authorization': 'Bearer ' + ScriptApp.getOAuthToken()
  },
  method: 'get'
};
var fileDrive = Drive.Files.get(fileId);
var link = JSON.parse(fileDrive)['exportLinks']['application/vnd.google-apps.script+json'];
var fetched = UrlFetchApp.fetch(link, params);
return { meta: fileDrive, source: JSON.parse(fetched.getContentText()) };
Any help would be appreciated, thank you!
Based on this documentation, each change is referred to as a "Revision", and access to revisions is provided through the Revisions resource. You can programmatically save new revisions of a file or query the version history as detailed in the Revisions reference.
Listing revisions
The following example demonstrates how to list the revisions for a given file. Note that some properties of revisions are only available for certain file types. For example, G Suite application files do not consume space in Google Drive and thus return a file size of 0.
function listRevisions(fileId) {
  var revisions = Drive.Revisions.list(fileId);
  if (revisions.items && revisions.items.length > 0) {
    for (var i = 0; i < revisions.items.length; i++) {
      var revision = revisions.items[i];
      var date = new Date(revision.modifiedDate);
      Logger.log('Date: %s, File size (bytes): %s', date.toLocaleString(),
          revision.fileSize);
    }
  } else {
    Logger.log('No revisions found.');
  }
}
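To get at the content of one saved revision instead of the current HEAD, a hedged sketch along the same lines as the question's export code could be (whether a script file's revision actually exposes an exportLinks entry for the Apps Script MIME type is an assumption to verify):
function getRevisionSource(fileId, revisionId) {
  // Assumes the Revision resource mirrors the Files resource used in the question
  // and exposes an export link for the Apps Script JSON format.
  var revision = Drive.Revisions.get(fileId, revisionId);
  var link = revision.exportLinks['application/vnd.google-apps.script+json'];
  var params = {
    headers: { 'Authorization': 'Bearer ' + ScriptApp.getOAuthToken() },
    method: 'get'
  };
  return JSON.parse(UrlFetchApp.fetch(link, params).getContentText());
}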
Hope this helps!

Solution to map different excel files to db

I have to map a lot of different files with different structures to a database. There are a lot of different tables in those xlsx files, so I thought about a schemaless NoSQL approach, but I'm quite a newbie in this field.
It should be a microservice with a client interface for choosing tables/cells when parsing the xlsx files. I am not tied to a specific technology; it could be Java, Groovy, Python, or even a JavaScript engine.
Do you know any working solution for doing it?
Here is example xlsx (but I've got also other files, also in xls format): http://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultaktualnosci/5502/11/13/1/wyniki_finansowe_podmiotow_gospodarczych_1-6m_2015.xlsx
The work you have to do is called ETL (Extract Transform Load). You need to either find a good ETL software (here is a discussion about open source ETL) or to script your own solution in a language you are used with.
The advantage of ready-made GUI software is that you just have to drag and drop data, but if you have some custom logic or semi-structured data, as in your xlsx example, you have limited support.
The advantage of writing your own script is you have all the freedom you need.
I have done some ETL work and successfully used Groovy for writing my own solution with custom logic and so on; for a GUI I used Altova MapForce when I had to import some exotic file types.
If you decide to write your own solution you have to:
Convert all data to an easy-to-load format. In your case, you have to convert each xls or xlsx tab to CSV with a naming convention.
Load your files in your chosen language for transforming
Do your logic to put data in a desirable format
Save it in a database (SQL or noSQL)
Maybe you should try Google Sheets to display the Excel files and Google Apps Script (https://developers.google.com/apps-script/overview) to write a custom add-on for parsing the data to JSON.
The Spreadsheet Service (https://developers.google.com/apps-script/reference/spreadsheet/) has plenty of methods for accessing data in sheets.
Next you can send this JSON over an API (https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) or put it directly into a database (https://developers.google.com/apps-script/guides/jdbc).
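A minimal sketch of that idea (the sheet layout, endpoint URL and field mapping are all assumptions):
function sheetToJson() {
  // Read the imported Excel data from the first sheet and post it as JSON.
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheets()[0];
  var values = sheet.getDataRange().getValues();
  var headers = values.shift();
  var rows = values.map(function(row) {
    var obj = {};
    headers.forEach(function(h, i) { obj[h] = row[i]; });
    return obj;
  });
  // Hypothetical endpoint; replace with your own service, or use JDBC instead.
  UrlFetchApp.fetch('https://example.com/api/import', {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(rows)
  });
}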
Maybe it isn't clean, but it's a fast solution.
I had a project that did almost the same work as your problem, but it seemed easier because I had a fixed structure for the xlsx files.
For xlsx parsing, I experimented with Python and openpyxl and had no trouble working with them; they are simple, fast and easy to use.
For the database, I recommend MongoDB; you can deal with documents and collections in MongoDB just as simply as working with JSON objects or sets of JSON objects. PyMongo is, I think, the best and recommended way to work with MongoDB from Python.
The problem is that you have different files with different structures. I cannot recommend anything deeper without viewing your data, but you should find the general structure of the files, or figure out a way to classify them into common sets; each set can then be parsed with an appropriate algorithm.
A JavaScript (Windows Script Host) solution, like xlsx2csv (you can send the export anywhere):
var def = "1.xlsx";
if (WScript.Arguments.length > 0) def = WScript.Arguments(0);
var col = [];
var objShell = new ActiveXObject("Shell.Application");
var fs = new ActiveXObject("Scripting.FileSystemObject");

function flush() {
  WScript.Echo(col.join(';'));
}

function import_xlsx(file) {
  var strZipFile = file;  // name of zip file, e.g. "1.xlsx"
  var outFolder = ".";    // destination folder of unzipped files (must exist)
  var pwd = WScript.ScriptFullName.replace(WScript.ScriptName, "");
  var i, j, k;
  var strXlsFile = strZipFile;
  var strZipFile = strXlsFile.replace(".xlsx", ".zip").replace(".XLSX", ".zip");
  fs.CopyFile(strXlsFile, strZipFile, true);
  // xlsx is a zip archive: extract its "xl" folder via the shell.
  var objSource = objShell.NameSpace(pwd + strZipFile).Items();
  var objTarget = objShell.NameSpace(pwd + outFolder);
  for (i = 0; i < objSource.Count; i++)
    if (objSource.item(i).Name == "xl") {
      if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
      objTarget.CopyHere(objSource.item(i), 256);
    }
  var xml = new ActiveXObject("Msxml2.DOMDocument.6.0");
  // Shared strings are referenced by index from the worksheet cells.
  xml.load("xl\\sharedStrings.xml");
  var sel = xml.selectNodes("/*/*/*");
  var vol = [];
  for (i = 0; i < sel.length; i++) vol.push(sel[i].text);
  xml.load("xl\\worksheets\\sheet1.xml");
  ret = "";
  var line = xml.selectNodes("/*/*/*");
  var li, line2 = 0, line3 = 0, row;
  for (li = 0; li < line.length; li++) {
    if (line[li].nodeName == "row")
      for (row = 0; row < line[li].childNodes.length; row++) {
        r = line[li].childNodes[row].selectSingleNode("@r").text;   // cell reference, e.g. "B3"
        line2 = eval(r.replace(r.substring(0, 1), ""));             // row number
        if (line2 != line3) {
          line3 = line2;
          if (line3 != 0) {
            // flush -------------------------- line3
            flush();
            for (i = 0; i < col.length; i++) col[i] = "";
          }
        }
        try {
          t = line[li].childNodes[row].selectSingleNode("@t").text; // cell type ("s" = shared string)
          //i = instr("ABCDEFGHIJKLMNOPQRSTUVWXYZ", left(r,1))
          i = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ").indexOf(r.charAt(0));
          while (i > col.length) col.push("");
          if (t == "s") {
            t = eval(line[li].childNodes[row].firstChild.text);
            col[i] = vol[t];
          } else col[i] = line[li].childNodes[row].firstChild.text;
        } catch (e) {}
      }
    flush();
  }
  if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
  if (fs.FileExists(strZipFile)) fs.DeleteFile(strZipFile);
}
import_xlsx(def);

get remote image into string

The question says it all: how do I get a remotely hosted image into a string? I will later use an XMLHTTP POST to upload the content. This is a JavaScript question, for those who don't read the tag line.
@Madmartigan: the script itself is executed in a rather odd manner: the user uses javascript: to append the script from the remote host (this gives access to the user's cookie session, which we need in order to proceed). This generates a form, giving the user the ability to set up some text (this is the easy bit). When the user clicks upload, the script must get an image hosted on a remote host. I am trying to get the image from the remote host as a string and then use something like the function below to convert it to binary. So, how do I do that?
function toBin(str) {
  var st, i, j, d;
  var arr = [];
  var len = str.length;
  for (i = 1; i <= len; i++) {
    // reverse so it's like a stack
    d = str.charCodeAt(len - i);
    for (j = 0; j < 8; j++) {
      st = d % 2 == '0' ? "class='zero'" : "";
      arr.push(d % 2);
      d = Math.floor(d / 2);
    }
  }
  // reverse all bits again.
  return arr.reverse().join("");
}
I should mention that I managed to find things like:
var reader = new FileReader();
reader.onload = function() {
  previewImage.src = reader.result;
};
reader.readAsDataURL(myFile);
However, they are very browser dependent and therefore not very useful.
I am trying to avoid using base64 because of the redundant size increase.
EDIT: take a look here, it should help you: http://www.nihilogic.dk/labs/exif/ or maybe here: http://jsfromhell.com/classes/binary-parser. The only way to store binary data in a string in a JavaScript context is to use base64/base128 encoding, but I have never tried it myself in the case of an image. There are many JavaScript base encoders/decoders out there. Hope this helps you.
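For completeness, a sketch of the old XMLHttpRequest trick for pulling raw bytes into a JavaScript string without base64 (assuming the remote host is reachable from the page, i.e. no cross-origin restriction applies):
function fetchImageAsString(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, false); // synchronous for brevity
  // Force the bytes through untranslated; each charCodeAt(i) & 0xFF is one byte.
  xhr.overrideMimeType('text/plain; charset=x-user-defined');
  xhr.send(null);
  return xhr.responseText;
}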
