Alloy Titanium & Google Cloud Endpoint - javascript

I would like to know the right way to work with Google Cloud Endpoints in an Alloy Titanium application, and I would like to use the library that Google provides for the API endpoints.
I am new to Alloy and CommonJS, so I am trying to figure out the right way to do this.
From my understanding, Alloy prefers (or only allows) including JavaScript via CommonJS modules (exports...).
var google = require('google.js');
google.api.endpoint.execute();
This is the way CommonJS expects things to work. However, Google's JavaScript library just creates a global variable called "gapi".
Is there a way I can include this file?
Is there a way I can create global variables?
Should I stay away from creating them in the first place?
Thanks !

The client.js library that Google provides for the API endpoints can only be run in a browser (a Titanium.UI.WebView in this case); it cannot be run directly from Titanium code, since it contains objects that are not available in Titanium Appcelerator.
Also, using a Google Cloud Endpoint in an Alloy Titanium application requires having the JS code available in the project at compile time, as it is used by Titanium to generate the native code for the target platforms.
To answer your questions:
Is there a way I can include this file?
No, not if you plan to run the code as Titanium code, for the reasons mentioned above. Instead, you can use the following code snippet to connect to a Google Cloud Endpoint:
var url = "https://1-dot-projectid.appspot.com/_ah/api/rpc";
var methodName = "testendpoint.listGreetings";
var apiVersion = "v1";
callMethod(url, methodName, apiVersion, {
    success : function(responseText) {
        // work with the response
    },
    error : function(e) {
        // on error, do something
    }
});
function callMethod(url, methodName, apiVersion, callbacks) {
    var xhr = Titanium.Network.createHTTPClient();
    xhr.onload = function(e) {
        Ti.API.info("received text: " + this.responseText);
        if (typeof callbacks.success === 'function') {
            callbacks.success(this.responseText);
        }
    };
    xhr.onerror = function(e) {
        Ti.API.info(JSON.stringify(e));
        //Ti.API.info(e.responseText);
        if (typeof callbacks.error === 'function') {
            callbacks.error(e);
        }
    };
    xhr.timeout = 5000; /* in milliseconds */
    xhr.open("POST", url, true);
    xhr.setRequestHeader('Content-Type', 'application/json-rpc');
    //xhr.setRequestHeader('Authorization', 'Bearer ' + token);
    var d = [{
        jsonrpc : '2.0',
        method : methodName,
        id : 1,
        apiVersion : apiVersion
    }];
    Ti.API.info(JSON.stringify(d));
    // Send the request.
    xhr.send(JSON.stringify(d));
}
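The responseText handed to the success callback is a JSON-RPC 2.0 batch response (an array, matching the array we sent). A minimal sketch of picking the result out of it; the sample payload shape here is an assumption that mirrors the request with id 1:

```javascript
// Sketch: extracting the result for request id 1 from a JSON-RPC 2.0
// batch response. The sample payload shape is an assumption.
function parseJsonRpcResponse(responseText) {
    var responses = JSON.parse(responseText);
    var entry = responses.filter(function (r) { return r.id === 1; })[0];
    if (!entry) {
        throw new Error("No response with id 1");
    }
    if (entry.error) {
        throw new Error(entry.error.message);
    }
    return entry.result;
}
```

You would call this from the success callback above, e.g. `var result = parseJsonRpcResponse(responseText);`.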
Yes, if you use the device's embedded browser like this (as can be found in the GAE web client samples):
webview = Titanium.UI.createWebView({
    width : '100%',
    height : '100%',
    url : url // put your link to the HTML page here
});
to call your server's HTML page, which should contain:
<script src="https://apis.google.com/js/client.js?onload=init"></script>
Is there a way I can create global variables?
Yes. Define your global variables in the app/alloy.js file; see the default comments in that file:
// This is a great place to do any initialization for your app
// or create any global variables/functions that you'd like to
// make available throughout your app. You can easily make things
// accessible globally by attaching them to the Alloy.Globals
// object. For example:
//
Alloy.Globals.someGlobalFunction = function(){};
Alloy.Globals.someGlobalVariable = "80dp";
Should I stay away from creating them in the first place?
I suppose you are referring to global variables containing the module code for connecting to GAE endpoint methods. It's your call; here is how you can use them.
a) Create a file named jsonrpc.js in the app/lib folder of your Titanium project, put the following code into it, and move the callMethod function body from above into it:
var JSONRPCClient = function() {
};
JSONRPCClient.prototype = {
    callMethod : function(url, methodName, apiVersion, callbacks) {
        // insert the function body here
    }
};
exports.JSONRPCClient = JSONRPCClient;
b) In the app/alloy.js file, define your global variable:
Alloy.Globals.JSONRPCClient = require('jsonrpc').JSONRPCClient;
c) Use it (e.g. from your controller js files):
var client = new Alloy.Globals.JSONRPCClient();
var url = "https://1-dot-projectid.appspot.com/_ah/api/rpc";
var methodName = "testendpoint.listGreetings";
var apiVersion = "v1";
client.callMethod(url, methodName, apiVersion, {
    success : function(result) {
        // result handling
        Ti.API.info('response result=' + JSON.stringify(result));
        //alert(JSON.stringify(result));
    },
    error : function(err) {
        // error handling
        Ti.API.info('response out err=' + JSON.stringify(err));
    }
});

Related

How to execute / access local file from Thunderbird WebExtension?

I would like to write a Thunderbird add-on that encrypts stuff. For this, I have already extracted all the data from the compose window. Now I have to save this data into files and run a local executable for encryption. But I have found no way to save the files and execute an executable on the local machine. How can I do that?
I found the File and Directory Entries API documentation, but it does not seem to work. I always get undefined when trying to get the object with this code:
var filesystem = FileSystemEntry.filesystem;
console.log(filesystem); // --> undefined
At least, is there a working add-on that I can examine to find out how this works, and maybe what permissions I have to request in the manifest.json?
NOTE: Must work cross-platform (Windows and Linux).
The answer is that WebExtensions are currently not able to execute local files. Saving to a local folder on the disk is not possible either.
Instead, you need to add a WebExtension Experiment to your project and use the legacy APIs there. You can use the IOUtils and FileUtils modules to reach your goal:
Execute a file:
In your background JS file:
var ret = await browser.experiment.execute("/usr/bin/executable", [ "-v" ]);
In the experiment you can execute like this:
var { ExtensionCommon } = ChromeUtils.import("resource://gre/modules/ExtensionCommon.jsm");
var { FileUtils } = ChromeUtils.import("resource://gre/modules/FileUtils.jsm");
var { XPCOMUtils } = ChromeUtils.import("resource://gre/modules/XPCOMUtils.jsm");
XPCOMUtils.defineLazyGlobalGetters(this, ["IOUtils"]);
async execute(executable, arrParams) {
    var fileExists = await IOUtils.exists(executable);
    if (!fileExists) {
        Services.wm.getMostRecentWindow("mail:3pane")
            .alert("Executable [" + executable + "] not found!");
        return false;
    }
    var progPath = new FileUtils.File(executable);
    let process = Cc["@mozilla.org/process/util;1"].createInstance(Ci.nsIProcess);
    process.init(progPath);
    process.startHidden = false;
    process.noShell = true;
    process.run(true, arrParams, arrParams.length);
    return true;
},
Save an attachment to disk:
In your background JS file you can do it like this:
var f = await messenger.compose.getAttachmentFile(attachment.id);
var blob = await f.arrayBuffer();
var t = await browser.experiment.writeFileBinary(tempFile, blob);
In the experiment you can then write the file like this:
async writeFileBinary(filename, data) {
    // first we need to convert the ArrayBuffer to a Uint8Array
    var uint8 = new Uint8Array(data);
    // then we can save it
    var ret = await IOUtils.write(filename, uint8);
    return ret;
},
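The ArrayBuffer-to-Uint8Array step above is plain JavaScript and can be checked outside Thunderbird; the byte values here are made up for illustration:

```javascript
// Sketch: the ArrayBuffer -> Uint8Array conversion used by
// writeFileBinary, checked against known bytes.
function toBytes(arrayBuffer) {
    return new Uint8Array(arrayBuffer);
}

var buf = new ArrayBuffer(3);
var view = new DataView(buf);
view.setUint8(0, 72);  // 'H'
view.setUint8(1, 105); // 'i'
view.setUint8(2, 33);  // '!'
var bytes = toBytes(buf);
```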
IOUtils documentation:
https://searchfox.org/mozilla-central/source/dom/chrome-webidl/IOUtils.webidl
FileUtils documentation:
https://searchfox.org/mozilla-central/source/toolkit/modules/FileUtils.jsm

Using Swagger with javascript on client-site without NodeJs

How can I use Swagger-generated API client source on the client side (a normal browser application without NodeJS)?
In a first test, I generated a JavaScript client for Swagger's petstore API (https://petstore.swagger.io/v2) using editor.swagger.io.
The generated code contains an index.js which provides access to constructors for the public API classes, which I try to embed and use in my web application.
The documentation describes the usage of the API like so:
var SwaggerPetstore = require('swagger_petstore');
var defaultClient = SwaggerPetstore.ApiClient.instance;
// Configure API key authorization: api_key
var api_key = defaultClient.authentications['api_key'];
api_key.apiKey = 'YOUR API KEY';
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//api_key.apiKeyPrefix = 'Token';
var apiInstance = new SwaggerPetstore.PetApi();
var petId = 789; // Number | ID of pet to return
var callback = function(error, data, response) {
    if (error) {
        console.error(error);
    } else {
        console.log('API called successfully. Returned data: ' + data);
    }
};
apiInstance.getPetById(petId, callback);
This works fine for NodeJS applications. But how can I use the API for conventional client-side web apps inside the browser? For such applications, the NodeJS function require does not work.
From https://github.com/swagger-api/swagger-js
This example has a cross-origin problem, but it should work in your own project:
<html>
<head>
<script src='//unpkg.com/swagger-client' type='text/javascript'></script>
<script>
var specUrl = '//petstore.swagger.io/v2/swagger.json'; // data urls are OK too: 'data:application/json;base64,abc...'
SwaggerClient.http.withCredentials = true; // this activates CORS, if necessary
var swaggerClient = new SwaggerClient(specUrl)
    .then(function (swaggerClient) {
        return swaggerClient.apis.pet.addPet({id: 1, name: "bobby"}); // chaining promises
    }, function (reason) {
        console.error("failed to load the spec: " + reason);
    })
    .then(function (addPetResult) {
        console.log(addPetResult.obj);
        // you may return more promises, if necessary
    }, function (reason) {
        console.error("failed on API call: " + reason);
    });
</script>
</head>
<body>
check console in browser's dev. tools
</body>
</html>
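If you only need a handful of endpoints, you can also skip the generated client entirely and call the REST API from the browser with fetch. A minimal sketch against the petstore URL from the question; buildPetUrl is a hypothetical helper for illustration, not part of any generated client:

```javascript
// Sketch: calling the petstore REST API directly, without a generated
// client. buildPetUrl is a hypothetical helper for illustration.
function buildPetUrl(baseUrl, petId) {
    return baseUrl.replace(/\/$/, '') + '/pet/' + encodeURIComponent(petId);
}

function getPetById(petId) {
    return fetch(buildPetUrl('https://petstore.swagger.io/v2', petId), {
        headers : { 'Accept' : 'application/json' }
    }).then(function (response) {
        if (!response.ok) {
            throw new Error('HTTP ' + response.status);
        }
        return response.json();
    });
}
```

This trades the generated client's typed models for a much smaller footprint, which is often a reasonable deal for a browser app.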

Service Worker Respond To Fetch after getting data from another worker

I am using service workers to intercept requests for me and provide the responses to the fetch requests by communicating with a Web worker (also created from the same parent page).
I have used message channels for direct communication between the worker and service worker. Here is a simple POC I have written:
var otherPort, parentPort;
var dummyObj;
var DummyHandler = function() {
    this.onmessage = null;
    var selfRef = this;
    this.callHandler = function(arg) {
        if (typeof selfRef.onmessage === "function") {
            selfRef.onmessage(arg);
        } else {
            console.error("Message Handler not set");
        }
    };
};
function msgFromW(evt) {
    console.log(evt.data);
    dummyObj.callHandler(evt);
}
self.addEventListener("message", function(evt) {
    var data = evt.data;
    if (data.msg === "connect") {
        otherPort = evt.ports[1];
        otherPort.onmessage = msgFromW;
        parentPort = evt.ports[0];
        parentPort.postMessage({"msg": "connect"});
    }
});
self.addEventListener("fetch", function(event) {
    var url = event.request.url;
    var urlObj = new URL(url);
    if (!isToBeIntercepted(url)) {
        return fetch(event.request);
    }
    url = decodeURI(url);
    var key = processURL(url).toLowerCase();
    console.log("Fetch For: " + key);
    event.respondWith(new Promise(function(resolve, reject) {
        dummyObj = new DummyHandler();
        dummyObj.onmessage = function(e) {
            if (e.data.error) {
                reject(e.data.error);
            } else {
                var content = e.data.data;
                var blob = new Blob([content]);
                resolve(new Response(blob));
            }
        };
        otherPort.postMessage({"msg": "content", param: key});
    }));
});
Roles of the ports:
otherPort: Communication with worker
parentPort: Communication with parent page
In the worker, I have a database say this:
var dataBase = {
"file1.txt": "This is File1",
"file2.txt": "This is File2"
};
The worker just serves the correct data according to the key sent by the service worker. In reality these will be very large files.
The problem I am facing with this is the following:
Since I am using a global dummyObj, the older dummyObj (and hence the older onmessage) is lost, and only the latest resource is responded to with the received data.
In fact, file2 gets "This is File1", because the latest dummyObj is for file2.txt but the worker sends the data for file1.txt first.
I tried creating an iframe directly, and all the requests inside it are intercepted:
<html>
<head></head>
<body>
<iframe src="tointercept/file1.txt"></iframe>
<iframe src="tointercept/file2.txt"></iframe>
</body>
</html>
Here is what I get as output:
One approach could be to write all the files that could be fetched into IndexedDB in the worker before creating the iframe, and then fetch them from IndexedDB in the service worker. But I don't want to save all the resources in IDB, so this approach is not what I want.
Does anybody know a way to accomplish what I am trying to do in some other way? Or is there a fix to what I am doing.
Please Help!
UPDATE
I have got this to work by keeping the dummyObjs in a global queue instead of having a single global object. On receiving a response from the worker in msgFromW, I pop an element from the queue and call its callHandler function.
But I am not sure if this is a reliable solution, as it assumes that everything will occur in order. Is this assumption correct?
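One way to drop that ordering assumption is to tag each message with an id and keep the pending handlers in a map instead of a queue, so replies can arrive in any order. A sketch; the id field echoed back by the worker is an addition to the message protocol in the question:

```javascript
// Sketch: matching worker replies to pending fetches by id rather than
// by arrival order. The worker is assumed to echo back the id it received.
var nextId = 0;
var pending = {};

function requestContent(port, key) {
    return new Promise(function (resolve, reject) {
        var id = nextId++;
        pending[id] = { resolve : resolve, reject : reject };
        port.postMessage({ msg : "content", id : id, param : key });
    });
}

function msgFromWorker(evt) {
    var entry = pending[evt.data.id];
    if (!entry) return;
    delete pending[evt.data.id];
    if (evt.data.error) {
        entry.reject(evt.data.error);
    } else {
        entry.resolve(evt.data.data);
    }
}
```

In the fetch handler, `event.respondWith(requestContent(otherPort, key).then(...))` would then replace the global dummyObj.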
I'd recommend wrapping your message passing between the service worker and the web worker in promises, and then pass a promise that resolves with the data from the web worker to fetchEvent.respondWith().
The promise-worker library can automate this promise-wrapping for you, or you could do it by hand, using this example as a guide.
If you were using promise-worker, your code would look something like:
var promiseWorker = new PromiseWorker(/* your web worker */);
self.addEventListener('fetch', function(fetchEvent) {
    if (/* some optional check to see if you want to handle this event */) {
        fetchEvent.respondWith(promiseWorker.postMessage(/* file name */));
    }
});
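On the worker side, promise-worker expects a matching registration. A sketch, with the lookup kept as a plain function (so it can be tested on its own) and dataBase as in the question:

```javascript
// Sketch: the worker-side lookup you would register with promise-worker.
// dataBase is the in-memory store from the question.
var dataBase = {
    "file1.txt": "This is File1",
    "file2.txt": "This is File2"
};

function lookup(fileName) {
    if (!(fileName in dataBase)) {
        throw new Error("Unknown file: " + fileName);
    }
    return dataBase[fileName]; // may also return a Promise
}

// In the real worker file:
// require('promise-worker/register')(lookup);
```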

requirejs: load script/module from a web service

I have a web service that queries a SQL database. In a SQL table I store some JavaScript, and I want to use it in a webpage using RequireJS.
I tried this:
var url = "http://localhost:64952/breeze/app/Objectss?$filter=Id%20eq%201&$select=Script";
require([url], (test) => {
    debugger;
    arguments[0];
});
The server responds correctly:
http://i.stack.imgur.com/05lE7.png
But I'm not sure RequireJS is able to load a script like this. So I tried something else:
var req = breeze.EntityQuery.from("Commands")
    .where("Id", "eq", "1")
    .select("Script");
dataservice.manager.executeQuery(req)
    .then((res : breeze.QueryResult) => {
        if (res.results[0]) {
            require([(<any>res.results[0]).Script], (hekki) => {
                debugger;
            });
        }
    });
That doesn't work either.
Do you have any ideas to help me, please?
Create a RequireJS plugin that is responsible for loading dependencies via the breeze API you've put together.
breezeloader.js:
define({
    load: function (name, req, onload, config) {
        // load the script using the breeze api you've put together...
        var query = breeze.EntityQuery
            .from("Commands")
            .where("Id", "eq", name)
            .select("Script");
        dataservice.manager.executeQuery(query)
            .then((queryResult: breeze.QueryResult) => {
                var text = queryResult.results[0].Script;
                // Have RequireJS execute the JavaScript within
                // the correct environment/context, and trigger the
                // load call for this resource.
                onload.fromText(text);
            });
    }
});
Express the dependencies that should be loaded with the breeze loader using the RequireJS plugin syntax:
require(['breezeloader!1', 'jquery', 'foo'], function (hekki, jquery, foo) {
    ...
});
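The load contract is easy to exercise in isolation: RequireJS calls load(name, req, onload, config), and the plugin hands the fetched source to onload.fromText(). A sketch with the breeze query replaced by a hypothetical fetchScript stub:

```javascript
// Sketch: the RequireJS loader-plugin contract, with the breeze query
// replaced by a hypothetical fetchScript stub for illustration.
function fetchScript(name) {
    // Stand-in for the dataservice.manager.executeQuery call above.
    return Promise.resolve("define({ id: '" + name + "' });");
}

var breezeloader = {
    load: function (name, req, onload, config) {
        fetchScript(name).then(function (text) {
            onload.fromText(text); // RequireJS evaluates this source
        }, onload.error);
    }
};
```

Passing failures to onload.error lets RequireJS surface load errors to the caller instead of hanging the dependency.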

How to execute an external script in jsdom

I have a method in a products.js file like so:
var handler = function(errors, window) {...}
and would like to execute it within a jsdom env callback:
jsdom.env({
    html : "http://dev.mysite.com:3000/products.html",
    scripts : [ "http://code.jquery.com/jquery.js", "page-scrapers/products.js" ],
    done : function(errors, window) {
        handler(errors, window);
    }
});
When executed, it tells me 'handler is not defined'. Am I getting close?
The context of the problem is scraping data from an existing web site. We want to associate a JavaScript scraper with each page and access the scraped data via URLs served up by a node.js server.
As suggested by Juan, the key is using node.js modules. The bulk of the handler method is exported from products.js:
exports.handler = function(errors, window, resp) {...
and then imported in the node.js-based server instance:
//note: subdir paths must start with './' :
var products = require('./page-scrapers/products.js');
This creates a reference to the method under the name products.handler, which can then be called in the request handler (note that funkyFunc must be defined before the router object literal references it):
var funkyFunc = function() {
    var resp = this.res;
    jsdom.env({
        html : "http://dev.mySite.com:3000/products.html",
        scripts : [ "http://code.jquery.com/jquery.js" ],
        done : function(errors, window) {
            products.handler(errors, window, resp);
        }
    });
};
var router = new director.http.Router({
    '/floop' : {
        get : funkyFunc
    }
});
And that works.
If you want a variable to be accessible to another file, you have to export it. http://nodejs.org/api/modules.html
// products.js
exports.handler = function(window, error) {...}
// another.file.js
var products = require('./products.js');
jsdom.env({
    html : "http://dev.mysite.com:3000/products.html",
    scripts : [ "http://code.jquery.com/jquery.js", "page-scrapers/products.js" ],
    // This can be simplified as follows
    done : products.handler
});
This sounds like a bad idea, though: why would a handler be made into a global? I think you should restructure your code.
