I'm trying to send a Python ReactiveX stream (using the RxPy library) to JavaScript in a web UI component, but I can't find a way to do so. I may also need to turn the data stream arriving in the JavaScript into an RxJS Observable of sorts for further processing.
Could you please help me understand how to achieve this?
I'm still getting a grip on ReactiveX, so maybe there are some fundamental concepts I'm missing, but I'm struggling to find anything similar to this around the net.
This issue has come up while working on a desktop app that takes data from a CSV file or a ZeroMQ endpoint and streams it to a UI, where the data will be plotted dynamically (the plot updates as new data comes in). I'm building the app with Electron, with Python as my backend code. Python is a must, as I will be extending the app with some TensorFlow models.
Following fyears' really well-made example as an initial structure, I have written some sample code to play with, but I can't seem to get it to work.
I manage to get from the UI button all the way to the Python scripts, but I get stuck on the return of the PricesApi.get_stream(...) method.
index.html
The front end is straightforward.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Electron Application</title>
</head>
<body>
<button id="super-button">Trigger Python Code</button>
<div id="py-output">
</div>
</body>
<script src="renderer.js" ></script>
</html>
api.py:
The ZeroRPC server file is like the one in the above-mentioned link.
import gevent
import json
import signal
import zerorpc

from core_operator import stream


class PricesApi(object):
    def get_stream(self, filename):
        return stream(filename)

    def stop(self):
        print('Stopping strategy.')

    def echo(self, text):
        """echo any text"""
        return text


def load_settings():
    with open('settings.json') as json_settings:
        settings_dictionary = json.load(json_settings)
    return settings_dictionary


def main():
    settings = load_settings()
    s = zerorpc.Server(PricesApi())
    s.bind(settings['address'])
    print(f"Initialising server on {settings['address']}")
    s.run()


if __name__ == '__main__':
    main()
core_operator.py
This is the file where the major logic will sit for getting prices from the ZeroMQ subscription; currently it just creates an Observable from a CSV.
import sys
import rx
from csv import DictReader


def prepare_csv_timeseries_stream(filename):
    return rx.from_(DictReader(open(filename, 'r')))


def stream(filename):
    price_observable = prepare_csv_timeseries_stream(filename)
    return price_observable
renderer.js
Finally, the JavaScript that should be receiving the stream:
const zerorpc = require('zerorpc');
const fs = require('fs');

const settings_block = JSON.parse(fs.readFileSync('./settings.json').toString());
let client = new zerorpc.Client();
client.connect(settings_block['address']);

let button = document.querySelector('#super-button');
let pyOutput = document.querySelector('#py-output');
let filename = '%path-to-file%';

button.addEventListener('click', () => {
    console.log('button click received.');
    client.invoke('get_stream', filename, (error, result) => {
        if (error) {
            console.error(error);
            return;
        }
        let message = document.createElement('li');
        let content = document.createTextNode(result);
        message.appendChild(content);
        pyOutput.appendChild(message);
    });
});
I have been looking into using WebSockets but failed to understand how to implement them. I did find some examples using a Tornado server; however, I am trying to keep things as pure as possible, and it also feels odd that, already having a client/server structure from Electron, I can't use that directly.
Also, I'm trying to keep the entire system a PUSH structure, as the data requirements don't allow for a PULL-type pattern with regular polling, etc.
Thank you very much in advance for any time you can dedicate to this, and please let me know if you require any further details or explanations.
I found a solution by using an amazing library called Eel (described as "A little Python library for making simple Electron-like HTML/JS GUI apps"). Its absolute simplicity and intuitiveness allowed me to achieve what I wanted in a few simple lines.
Follow the intro to understand the layout.
Then, in your main Python file (which I conveniently named main.py), you expose the stream function to Eel so it can be called from the JS file, and pipe the stream into the JavaScript "receive_price" function, which is exposed from the JS file!
import sys
from csv import DictReader

import eel
import rx
from rx import operators as ops


def prepare_csv_timeseries_stream(filename):
    return rx.from_(DictReader(open(filename, 'r')))


def process_logic():
    return rx.pipe(
        ops.do_action(lambda p: print(p)),        # side effect only: just to view what's flowing through
        ops.map(lambda p: eel.receive_price(p)),  # KEY FUNCTION in the JS file, exposed via eel, called for each price
    )


@eel.expose  # decorator so this function can get triggered from JavaScript
def stream(filename):
    price_observable = prepare_csv_timeseries_stream(filename)
    price_observable.pipe(process_logic()).subscribe()  # apply the pipe and subscribe to trigger the stream


eel.init('web')
eel.start('main.html')  # look at how beautiful and elegant this is!
Now we create the price_processing.js file (placed in the 'web' folder as per Eel instructions) to incorporate the exposed functions
let button = document.querySelector('#super-button');
let pyOutput = document.querySelector('#py-output');
let filename = '%path-to-file%';

console.log("ready to receive data!");

eel.expose(receive_price);  // Exposing the function to Python, to process each price
function receive_price(result) {
    let message = document.createElement('li');
    let content = document.createTextNode(result);
    message.appendChild(content);
    pyOutput.appendChild(message);
    // in here you can add more functions to process the data, e.g. logging, charting and so on..
}

button.addEventListener('click', () => {
    console.log('Button clicked magnificently! Bloody good job');
    eel.stream(filename);  // calling the Python function exposed through Eel to start the stream.
});
The HTML stays almost the same, apart from changing the script refs to /eel.js (as per the Eel documentation) and to our price_processing.js file.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Let's try Eel</title>
</head>
<body>
<h1>Eel-saved-my-life: the App!</h1>
<button id="super-button">Trigger Python Code</button>
<div id="py-output">
</div>
</body>
<script type="text/javascript" src="/eel.js"></script>
<script type="text/javascript" src="price_processing.js"></script>
</html>
I hope this can help anyone struggling with the same problem.
Related
I built a basic HTML & JavaScript app to translate a few words with the Google Translate API and then text them to a number via Twilio. Here is my HTML:
<!DOCTYPE html>
<html>
<head>
<script type = "text/javascript" src="script.js"></script>
</head>
<body>
<p>Click the button to receive 3 Hebrew texts</p>
<input id="clickMe" type="button" value="clickme" onclick="myFunction();" />
</body>
</html>
And here is script.js:
function myFunction() {
    // Imports the Google Cloud client library
    const {Translate} = require('@google-cloud/translate').v2;

    // Creates a client in Google API
    const projectId = 'Xx';
    const keyFilename = '/Users/x/Downloads/vocal-lead-306923-b3d8f6749397.json';
    const translate = new Translate({projectId, keyFilename});
    const lang = "he";

    // Creates a client in Twilio API
    const accountSid = 'Xx';
    const authToken = 'Xx';
    const client = require('twilio')(accountSid, authToken);

    /** Set variables for input in Google API */
    const text = ['One day'];
    const target = lang;

    async function translateText() {
        // Translates the text into the target language. "text" can be a string for
        // translating a single piece of text, or an array of strings for translating
        // multiple texts.
        let [translations] = await translate.translate(text, target);
        translations = Array.isArray(translations) ? translations : [translations];
        //console.log('Translations:');
        translations.forEach((translation, i) => {
            setTimeout(() => {
                // Sends messages via Twilio
                client.messages.create({
                    to: '+phone',
                    from: '+phone',
                    body: `${translation}`
                });
            }, i * 10000);
        });
    }

    translateText();
}

myFunction();
myFunction();
By itself the script works, but it doesn't work when I run it from my local browser. When I hit inspect I get this error:
Uncaught ReferenceError: require is not defined
at myFunction (script.js:5)
at HTMLInputElement.onclick (index.html:8)
I took out auth keys/any personal data but I think that is all correct. Any advice would be helpful!
If you're trying to run the script in the browser, it won't work. require() is a Node.js feature, so anything that depends on libraries loaded via require needs to run in a Node.js backend (you can communicate between the HTML frontend and the Node.js backend over HTTP, for example; see https://expressjs.com/ and https://nodejs.org/en/ ; the latter has a built-in http module, but Express is recommended for routing).
You mention that you've removed private information for this SO post, but keep in mind that when you publish this site, script.js will be visible to the user (i.e. via inspect element), so the keys will be freely accessible. It is not good practice to put secrets in frontend code. Consider this: a bad actor could use your API key to send spam SMS on your behalf... not good.
Also see: How to use google translation api with react
I am learning data science but I am still new to Flask, HTML and JS.
I have developed an ML model for home price prediction and would love to deploy it to Heroku.
The problem is that the drop-down menu in my frontend is not updated with the locations I pass from my Python Flask backend.
Here are the important parts of my code.
server.py:
from flask import Flask, request, jsonify, render_template

app = Flask(__name__)


@app.route('/locations')
def locations():
    response = jsonify({
        'locations': get_location_names()
    })
    response.headers.add('Access-Control-Allow-Origin', '*')
    return response
app.js
function onPageLoad() {
    console.log("document loaded");
    $.get("{{ url_for('locations') }}", function(data, status) {
        console.log("got response for locations request");
        if (data) {
            var locations = data.locations;
            var uiLocations = document.getElementById("uiLocations");
            $('#uiLocations').empty();
            for (var i in locations) {
                var opt = new Option(locations[i]);
                $('#uiLocations').append(opt);
            }
        }
    });
}
index.html:
<!DOCTYPE html>
<html>
<head>
<title>Banglore Home Price Prediction</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"
type="text/javascript"></script>
<link rel="stylesheet" type= "text/css" href="{{url_for('static', filename = 'app.css')}}">
<script type="text/javascript" src ="{{url_for('static', filename = 'app.js')}}"></script>
</head>
The browser console prints "document loaded" (which I placed in app.js), but it doesn't get the data from server.py.
I believe the issue is with the url_for statement, but I don't know how to go about fixing it.
You can't use jinja2 expressions in a js file which is loaded as a static asset. - v25
You can add your JavaScript in a <script> tag in the index.html file, or you can hard-code it.
I usually do not use either approach. Instead, I render all the files with a custom Python script before running the main app, driven by a .bat file that lists all the commands needed. You sometimes use Sass or other things that require rendering, so it helps to stay organized and write such a script. Use this approach if your JavaScript data doesn't change dynamically.
But if your script is dynamic, you can add a route that renders your file every time it is requested.
@app.route('/my_script.js')
def script():
    return render_template('my_script.js', name='mark')
And reference it in your HTML:
<script src="{{url_for('script')}}"></script>
Jinja2 can parse any file regardless of its type.
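To make the whole pattern concrete, here is a minimal sketch of that approach; the file names and placeholder location list are my own assumptions, not part of the answer above. app.js lives in the templates folder, so Jinja2 resolves expressions like {{ url_for('locations') }} before the browser ever sees the script.
from flask import Flask, jsonify, render_template

app = Flask(__name__)


@app.route('/locations')
def locations():
    return jsonify({'locations': ['location_1', 'location_2']})  # placeholder data


@app.route('/app.js')
def app_js():
    # rendered on every request, so the Jinja2 expressions always resolve
    body = render_template('app.js')
    return body, 200, {'Content-Type': 'text/javascript'}


@app.route('/')
def index():
    # index.html loads the script with <script src="{{ url_for('app_js') }}"></script>
    return render_template('index.html')


if __name__ == '__main__':
    app.run()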
I'm new to Web Development (including JavaScript and HTML) and have a few issues within my personal project that seem to have no clear fixes.
Overview
My project is taking input from a user on the website, and feeding it to my back-end to output a list of word completion suggestions.
For example, input => "bass", then the program would suggest "bassist", "bassa", "bassalia", "bassalian", "bassalan", etc. as possible completions for the pattern "bass" (these are words extracted from an English dictionary text file).
The backend - running on Node JS libraries
trie.js file:
/* code for the trie not fully shown */
var Deque = require("collections/deque"); // to be used somewhere

function add_word_to_trie(word) { ... }
function get_words_matching_pattern(pattern, number_to_get = DEFAULT_FETCH) { ... }

// read in words from English dictionary
var file = require('fs');
const DICTIONARY = 'somefile.txt';

function preprocess() {
    file.readFileSync(DICTIONARY, 'utf-8')
        .split('\n')
        .forEach( (item) => {
            add_word_to_trie(item.replace(/\r?\n|\r/g, ""));
        });
}

preprocess();

module.exports = get_words_matching_trie;
The frontend
An HTML file that renders the visuals for the website, gets input from the user, and passes it on to the backend script to fetch possible suggestions. It looks something like this:
index.html script:
<!DOCTYPE HTML>
<html>
<!-- code for formatting website and headers not shown -->
<body>
    <script src="./trie.js">
        function get_predicted_text() {
            const autofill_options = get_words_matching_pattern(input.value);
            /* add the first suggestion we get from the autofill options to the user's input.
               arbitrary, because I couldn't get this to actually work. Actual version of
               autofill would be more sophisticated. */
            document.querySelector("input").value += autofill_options[0];
        }
    </script>
    <input placeholder="Enter text..." oninput="get_predicted_text()">
    <!-- I get a runtime error here saying that get_predicted_text is not defined -->
</body>
</html>
Errors I get
Firstly, I get the obvious error of require() being undefined on the client side. This I fix using browserify.
Secondly, there is the issue of fs not existing on the client side, since it is a Node.js module. I have tried running the trie.js file with node and adding some server-side code:
function respond_to_user_input() {
    fs.readFile('./index.html', null, (err, html) => {
        if (err) throw err;
        http.createServer( (request, response) => {
            response.write(html);
            response.end();
        }).listen(PORT);
    });
}

respond_to_user_input();
With this, I'm not exactly sure how to edit document elements, such as changing input.value in index.html, or triggering the oninput event listener on the input field. Also, my CSS stylesheet is not applied when I serve the HTML file via the node trie.js command in the terminal.
This leaves me with the question: is it even possible to run index.html directly (through Google Chrome) and have it use Node.js modules when it calls the trie.js script? And if I use the server-side code I described above with the HTTP module, how can I fix the issues of loading my external CSS stylesheet (which my HTML file references via an href) and of accessing document.querySelector("input") to edit my input field?
I am working on a web-scraping project. One of the websites I am working with has the data coming from Javascript.
There was a suggestion on one of my earlier questions that I can directly call the Javascript from Python, but I'm not sure how to accomplish this.
For example: if a JavaScript function is defined as add_2(var1, var2)
How would I call that JavaScript function from Python?
Find a JavaScript interpreter that has Python bindings (try Rhino? V8? SpiderMonkey?). When you have found one, it should come with examples of how to use it from Python.
Python itself, however, does not include a JavaScript interpreter.
To interact with JavaScript from Python I use webkit, which is the browser renderer behind Chrome and Safari. There are Python bindings to webkit through Qt. In particular there is a function for executing JavaScript called evaluateJavaScript().
Here is a full example to execute JavaScript and extract the final HTML.
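Since the full example isn't reproduced here, the sketch below shows roughly what that approach looks like with the PyQt4/QtWebKit bindings; it is my own reconstruction from memory of that API, so treat the exact names as assumptions rather than a verified implementation.
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtCore import QUrl
from PyQt4.QtWebKit import QWebPage


class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)   # QApplication must exist before any WebKit work
        QWebPage.__init__(self)
        self.loadFinished.connect(self._finished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()                    # block until loadFinished fires

    def _finished(self, result):
        self.app.quit()


page = Render('http://example.com')
# run arbitrary JavaScript in the context of the loaded page
# (depending on the sip API version, the result may come back as a QVariant)
title = page.mainFrame().evaluateJavaScript('document.title')
html = page.mainFrame().toHtml()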
An interesting alternative I discovered recently is the Python bond module, which can be used to communicate with a NodeJs process (v8 engine).
Usage would be very similar to the pyv8 bindings, but you can directly use any NodeJs library without modification, which is a major selling point for me.
Your python code would look like this:
val = js.call('add2', var1, var2)
or even:
add2 = js.callable('add2')
val = add2(var1, var2)
Calling functions though is definitely slower than pyv8, so it greatly depends on your needs. If you need to use an npm package that does a lot of heavy-lifting, bond is great. You can even have more nodejs processes running in parallel.
But if you just need to call a bunch of JS functions (for instance, to have the same validation functions between the browser/backend), pyv8 will definitely be a lot faster.
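For completeness, the setup that the snippets above assume would look roughly like this; the names make_bond and eval_block are recalled from the python-bond documentation and should be treated as assumptions to verify against the current docs.
from bond import make_bond

js = make_bond('JavaScript')                            # spawns a NodeJs interpreter in the background
js.eval_block('function add2(a, b) { return a + b; }')  # define the remote function

val = js.call('add2', 1, 2)                             # -> 3
add2 = js.callable('add2')                              # local proxy for the remote function
val = add2(3, 4)                                        # -> 7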
You could get the JavaScript from the page and execute it through some interpreter (such as V8 or Rhino). However, you can get a good result in a much easier way by using functional testing tools such as Selenium or Splinter. These solutions launch a browser and actually load the page; this can be slow, but it ensures that the content the browser would display will be available.
For example, consider the HTML document below:
<html>
    <head>
        <title>Test</title>
        <script type="text/javascript">
            function addContent(divId) {
                var div = document.getElementById(divId);
                div.innerHTML = '<em>My content!</em>';
            }
        </script>
    </head>
    <body>
        <p>The element below will receive content</p>
        <div id="mydiv" />
        <script type="text/javascript">addContent('mydiv')</script>
    </body>
</html>
The script below uses Splinter. It launches Firefox and, once the page has completely loaded, retrieves the content added to the div by JavaScript:
from splinter.browser import Browser
import os.path
browser = Browser()
browser.visit('file://' + os.path.realpath('test.html'))
elements = browser.find_by_css("#mydiv")
div = elements[0]
print(div.value)
browser.quit()
The result will be the content printed in the stdout.
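For comparison, a similar sketch with Selenium (my own addition, not part of the answer above; it assumes the selenium package and a Firefox driver are installed, using the Selenium 4 API):
from selenium import webdriver
from selenium.webdriver.common.by import By
import os.path

driver = webdriver.Firefox()
driver.get('file://' + os.path.realpath('test.html'))
div = driver.find_element(By.CSS_SELECTOR, '#mydiv')
print(div.text)  # the content injected by addContent('mydiv')
driver.quit()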
You might call node through Popen.
Here is my example of how to do it (a sketch of the execute helper itself follows the snippet):
print(execute('''function (args) {
    var result = 0;
    args.map(function (i) {
        result += i;
    });
    return result;
}''', args=[[1, 2, 3, 4, 5]]))
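The execute helper isn't shown in the answer; a minimal sketch of what it could look like (my own assumption, requiring node on the PATH) is:
import json
from subprocess import PIPE, Popen


def execute(js_function, args=None):
    # wrap the anonymous function, apply it to the args, and print the result as JSON
    script = (
        "var f = " + js_function + ";\n"
        "var args = " + json.dumps(args or []) + ";\n"
        "console.log(JSON.stringify(f.apply(null, args)));"
    )
    proc = Popen(['node', '-e', script], stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode())
    return json.loads(out.decode())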
Hi, so one possible solution would be to use Ajax with Flask to communicate between JavaScript and Python. You run a server with Flask and then open the website in a browser. This way you can call Python functions from JavaScript when the website is created, or with a button, as is done in this example.
HTML code:
<html>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script>
    function pycall() {
        $.getJSON('/pycall', {content: "content from js"}, function(data) {
            alert(data.result);
        });
    }
</script>
<button type="button" onclick="pycall()">click me</button>
</html>
Python Code:
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)


def load_file(file_name):
    data = None
    with open(file_name, 'r') as file:
        data = file.read()
    return data


@app.route('/pycall')
def pycall():
    content = request.args.get('content', 0, type=str)
    print("call_received", content)
    return jsonify(result="data from python")


@app.route('/')
def index():
    return load_file("basic.html")


import webbrowser

print("opening localhost")
url = "http://127.0.0.1:5000/"
webbrowser.open(url)

app.run()
output in python:
call_received content from js
alert in browser:
data from python
This worked for me for a simple JS file; source:
https://www.geeksforgeeks.org/how-to-run-javascript-from-python/
pip install js2py
pip install temp
file.py
import js2py
eval_res, tempfile = js2py.run_file("scripts/dev/test.js")
tempfile.wish("GeeksforGeeks")
scripts/dev/test.js
function wish(name) {
console.log("Hello, " + name + "!")
}
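For one-off snippets you can also skip the file and evaluate the JavaScript directly with js2py's eval_js (a small addition of mine, not from the linked article):
import js2py

add_2 = js2py.eval_js("function add_2(a, b) { return a + b; }")
print(add_2(1, 2))  # -> 3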
I did a whole run-down of the different methods recently:
PyQt4
node.js/zombie.js
phantomjs
PhantomJS was the winner hands down: very straightforward, with lots of examples.
I have external programs such as ffmpeg and gstreamer running in the background and writing to a log file. I want to display the contents of this log with my Flask application, so that the user can watch the log update, like tail -f job.log would do in the terminal.
I tried to use <object data="/out.log" type="text/plain"> to point at the log file, but that failed to show the data, or the browser told me I needed a plugin.
How can I embed and update the log file in an HTML page?
Use a Flask view to continuously read from the file and stream the response. Use JavaScript to read from the stream and update the page. This example sends the entire file; you may want to truncate that at some point to save bandwidth and memory (a tail-style variation is sketched after this answer). It sleeps between reads to reduce CPU load from the endless loop and give other threads more active time.
from time import sleep
from flask import Flask, render_template

app = Flask(__name__)


@app.route('/')
def index():
    return render_template('index.html')


@app.route('/stream')
def stream():
    def generate():
        with open('job.log') as f:
            while True:
                yield f.read()
                sleep(1)

    return app.response_class(generate(), mimetype='text/plain')


app.run()
<pre id="output"></pre>
<script>
    var output = document.getElementById('output');

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '{{ url_for('stream') }}');
    xhr.send();

    setInterval(function() {
        output.textContent = xhr.responseText;
    }, 1000);
</script>
This is almost the same as this answer, which describes how to stream and parse messages, although reading from an external file forever was novel enough to be its own answer. The code here is simpler because we don't care about parsing messages or ending the stream, just tailing the file forever.
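As a variation on the note about truncation above (my own sketch, not part of the original answer), you can seek to the end of the file first so only new output is streamed, which is closer to tail -f; drop this generate() in place of the one above:
import os
from time import sleep


def generate():
    with open('job.log') as f:
        f.seek(0, os.SEEK_END)      # skip whatever is already in the file
        while True:
            chunk = f.read()        # returns '' until new data is appended
            if chunk:
                yield chunk
            else:
                sleep(1)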
I am using the frontail package from npm:
npm i frontail -g
frontail /var/log/syslog
visit http://127.0.0.1:9001 to view logs
Source: https://github.com/mthenw/frontail
This may not be the exact answer to the question (how to embed the log in an HTML page), but it solves the problem for the many users who are looking specifically for how to:
Display the contents of a log file as it is updated
For me, @davidism's solution (the accepted answer) worked only on Firefox. It didn't work in Chrome, Brave, or Vivaldi. Maybe there was some kind of de-sync between the backend and frontend loops? I don't know.
Anyway, I used a far simpler solution, without a loop on the backend or a JavaScript loop on the frontend. Maybe it's "uglier" and may cause trouble for some very long logs, but at least it works in every browser I use.
@app.route('/stream')
def stream():
    with open("job.log", "r") as f:
        content = f.read()
    # as you can see, the file is loaded only once; there is no loop here (the loop is on the frontend side)
    return app.response_class(content, mimetype='text/plain')
<!DOCTYPE html>
<html>
<head>
<!-- page auto-refresh every 10 seconds -->
<meta http-equiv="refresh" content="10">
<title>Some title</title>
</head>
<body>
<h1>Log file ...</h1>
<script>
// function for adjusting iframe height to log size
function resizeIframe(obj) {
obj.style.height = obj.contentWindow.document.documentElement.scrollHeight + 'px';
}
</script>
<!-- iframe pulls whole file -->
<iframe src="{{ url_for('stream') }}" frameborder="0" style="overflow:hidden;width:100%" width="100%" scrolling="no" onload="resizeIframe(this)"></iframe>
</body>
</html>
As you can see, the only JavaScript code is used to adjust the iframe height to the current text size.