Run local external python script in terminal from Vue application - javascript

Hi, I am a beginner with Vue.js and Firebase. I am building a simple GUI for a network intrusion detection system using Vue.js. I have written a Python script that pushes the output from the terminal to Firebase, and I am currently running the app on localhost:8080.
In my Vue app, I want to run the local Python script in the terminal when I press the Start button and kill the process when I press Stop.
I read online that doing this requires a server. Does anyone have a good recommendation on how I can do it?
I hear that it may also be easier to just use 'child_process' from Node.js, but I can't use it in Vue.js.
What is the best / simplest way to implement this?
<template>
  <div id="new-packet">
    <h3>Start/Stop Analysis</h3>
    <button type="submit" class="btn">Start</button>
    <router-link to="/home" class="btn green">Stop</router-link>
  </div>
</template>

<script>
export default {
  name: 'new-packet',
  data () {
    return {}
  }
}
</script>

You can use Flask/Django to set up a RESTful API for your Python function, as suggested by @mustafa, but if you don't want to set up those frameworks then you can write a simple HTTP server handler like this:
import time
from http.server import HTTPServer, SimpleHTTPRequestHandler

HOST_NAME = 'localhost'  # adjust host and port to your setup
PORT_NUMBER = 8000

# This class contains methods to handle our requests to different URIs in the app
class MyHandler(SimpleHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    # Check the URI of the request to serve the proper content.
    def do_GET(self):
        if "URLToTriggerGetRequestHandler" in self.path:
            # If the URI contains URLToTriggerGetRequestHandler, execute the python script
            # that corresponds to it and get that data.
            # Whatever we send to "respond" as an argument will be sent back to the client.
            content = pythonScriptMethod()  # your own function that runs the script
            self.respond(content)  # we retrieve the response within this scope and then pass it to self.respond
        else:
            super(MyHandler, self).do_GET()  # serves the static src file by default

    def handle_http(self, data):
        self.send_response(200)
        # Set the data type for the response header. In this case it will be json.
        # Setting these headers is important for the browser to know what to do with
        # the response. Browsers can be very picky this way.
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        return bytes(data, 'UTF-8')

    # Store the response for delivery back to the client. This is good to do so
    # the user has a way of knowing what the server's response was.
    def respond(self, data):
        response = self.handle_http(data)
        self.wfile.write(response)

# This is the main block that will fire off the server.
if __name__ == '__main__':
    server_class = HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    print(time.asctime(), 'Server Starts - %s:%s' % (HOST_NAME, PORT_NUMBER))
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    print(time.asctime(), 'Server Stops - %s:%s' % (HOST_NAME, PORT_NUMBER))
and then you can make a call to the server from Vue using axios, like this:
new Vue({
  el: '#app',
  data () {
    return {
      info: null
    }
  },
  mounted () {
    axios
      .get('/URLToTriggerGetRequestHandler')
      .then(response => (this.info = response))
  }
})
I don't think there is any way you can directly call a Python script from JS or Vue.

You need to install Django or Flask on your local machine. Build a RESTful API like here: https://flask-restful.readthedocs.io/en/latest/quickstart.html#a-minimal-api
Write your Python code inside it.
Call the Python code from Vue.js by using the RESTful URL.
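For example, here is a minimal sketch of that idea using plain Flask (rather than flask-restful): a small server that starts and stops the local detection script with subprocess. The script path detect.py, the port, and the /start and /stop routes are illustrative assumptions, not part of the original answer.

import subprocess
from flask import Flask, jsonify

app = Flask(__name__)
process = None  # handle to the running detection script, if any

@app.route('/start', methods=['POST'])
def start():
    global process
    if process is None or process.poll() is not None:
        # "detect.py" is a placeholder for your own script
        process = subprocess.Popen(['python', 'detect.py'])
        return jsonify({'status': 'started', 'pid': process.pid})
    return jsonify({'status': 'already running', 'pid': process.pid})

@app.route('/stop', methods=['POST'])
def stop():
    global process
    if process is not None and process.poll() is None:
        process.terminate()  # sends SIGTERM; use process.kill() if it will not exit
        process.wait()
        return jsonify({'status': 'stopped'})
    return jsonify({'status': 'not running'})

if __name__ == '__main__':
    # If the Vue dev server runs on another port (e.g. localhost:8080),
    # you may also need CORS headers, e.g. via the flask-cors extension.
    app.run(host='localhost', port=5000)

The Start button handler in the Vue component would then send a request to http://localhost:5000/start, and the Stop button to /stop, for example with axios.post.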

Related

Python dictionary from flask returning null

I have created a home prediction model and am currently trying to deploy it to Heroku. I am using Flask, JavaScript and HTML.
Everything works fine when I run it locally, but when it is deployed on Heroku the dropdown menu of locations is empty.
On checking the console in the web browser I noticed that the JS actually got a response from the Flask server, but the response is null instead of a list of locations.
Here is the relevant code.
import json

with open('columns.json', 'r') as f:
    __data_columns = json.load(f)['data_columns']
    __locations = __data_columns[4:]

def get_location_names():
    return __locations

@app.route('/locations', methods=['GET'])
def locations():
    response = jsonify({'locations': get_location_names()})
    return response
The result I get when printing the response is locations: null. How do I resolve this, given that columns.json contains something like this:
{"data_columns": ["New york", "Washigton"]}
The issue may be with the path to the file columns.json. Try building it in this way:
import os

with open(os.path.join(os.path.dirname(__file__), 'columns.json'), 'r') as f:
    __data_columns = json.load(f)['data_columns']
    __locations = __data_columns[4:]

Calling javascript function using python function in django

I have a website built using the Django framework that takes in an input CSV file to do some data processing. I would like to use an HTML text box as a console log to let users know that the data processing is underway. The data processing is done by a Python function. Is it possible for me to change/add text in the text box at certain intervals from my Python function?
Sorry if I am not specific enough with my question, I am still learning how to use these tools!
Edit - Thanks for all the help, but I am still quite new at this and there are lots of things that I do not really understand. Here is an example of my python function, not sure if it helps:
def query_result(request, job_id):
    info_dict = request.session['info_dict']
    machines = lt.trace_machine(inputFile.LOT.tolist())
    return render(request, 'tools/result.html', {'dict': json.dumps(info_dict),
                                                 'job_id': job_id})
Actually, my main objective is to let the user know that the data processing has started and that the site is working. I was thinking maybe I could display an output log in an HTML textbox to achieve this purpose.
No, you cannot do that directly, because you are already on the server side and therefore cannot touch anything in the HTML page.
You have two ways to do it:
You can set up an interval function that calls the server, asks for the progress, and updates the page in its callback function.
You can open a socket connection between your server and the browser to push updates instantly.
While it is impossible for the server (Django) to directly update the client (browser), you can use JavaScript to make the request, and Django can return a StreamingHttpResponse. As each part of the response is received, you can update the textbox using JavaScript.
Here is a sample with pseudocode:

def process_csv_request(request):
    csv_file = get_csv_file(request)
    return StreamingHttpResponse(process_file(csv_file))

def process_file(csv_file):
    for row in csv_file:
        yield progress
        actual_processing(row)
    yield "Done"
Alternatively, you could write the progress to the DB or some cache, and call an API from the frontend that repeatedly returns the progress.
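For instance, a minimal sketch of that cache-polling alternative, assuming Django's cache framework is configured; the cache key, the job_id parameter and the view name are illustrative, and actual_processing stands for the per-row work from the pseudocode above:

from django.core.cache import cache
from django.http import JsonResponse

def process_csv(csv_file, job_id):
    rows = list(csv_file)
    for i, row in enumerate(rows, start=1):
        actual_processing(row)  # your existing per-row work
        # record how far we are; the frontend polls this value
        cache.set('csv-progress-%s' % job_id, '%d/%d rows' % (i, len(rows)), timeout=3600)

def csv_progress(request, job_id):
    # the frontend calls this view repeatedly (e.g. from a setInterval timer)
    return JsonResponse({'progress': cache.get('csv-progress-%s' % job_id, 'not started')})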
You can achieve this with websockets using Django Channels.
Here's a sample consumer:
import json

from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer

class Consumer(WebsocketConsumer):
    def connect(self):
        self.group_name = self.scope['user']
        print(self.group_name)  # use this for debugging; not sure what the scope returns
        # Join group
        async_to_sync(self.channel_layer.group_add)(
            self.group_name,
            self.channel_name
        )
        self.accept()

    def disconnect(self, close_code):
        # Leave group
        async_to_sync(self.channel_layer.group_discard)(
            self.group_name,
            self.channel_name
        )

    def update_html(self, event):
        status = event['status']
        # Send message to WebSocket
        self.send(text_data=json.dumps({
            'status': status
        }))
Running through the Channels 2.0 tutorial you will learn that by putting some javascript on your page, each time it loads it will connect you to a websocket consumer. On connect() the consumer adds the user to a group. This group name is used by your csv processing function to send a message to the browser of any users connected to that group (in this case just one user) and update the html on your page.
import csv

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

def send_update(channel_layer, group_name, message):
    async_to_sync(channel_layer.group_send)(
        group_name,
        {
            'type': 'update_html',
            'status': message
        }
    )

def process_csv(file):
    channel_layer = get_channel_layer()
    group_name = get_user_name()  # function to get the same group name as in connect()
    with open(file) as f:
        reader = csv.reader(f)
        send_update(channel_layer, group_name, 'Opened file')
        for row in reader:
            send_update(channel_layer, group_name, 'Processing Row#: %s' % row)
You would include the JavaScript on your page as outlined in the Channels documentation, then add an extra onmessage function for updating the HTML:
var socket = new ReconnectingWebSocket(...);
socket.onmessage = function(e) {
    var data = JSON.parse(e.data);
    $('#htmlToReplace').html(data['status']);
};

Display arduino serial data on a web browser via python

I'm looking to get serial data from an Arduino displayed in a web browser.
First, I serve the data on localhost using Bottle in Python, in JSON format:
from bottle import route, run
import serial

dic = {}
ser = serial.Serial(port='COM6', baudrate=9600, timeout=None)

@route('/test')
def test():
    c = ser.readline()
    c = (str(c)[2:-5])  # just to get rid of the extra characters
    dic["val"] = c
    return(dic)

run(host='localhost', port=8080, debug=True)
Then I proceed to read it using JavaScript:

function getArduinoVals(){
    $.getJSON("http://localhost:8080/test", function(data){
        $('#vals').html(data.val);
    });
    t = setTimeout(getArduinoVals, 50);
}
getArduinoVals();
However, it doesn't seem to load from localhost (I tested other URLs). How should I fix this? Thanks!
You could use p5.serialport from p5.js for getting the serial data into the web browser, but you have to run its node server in the background.
https://github.com/vanevery/p5.serialport
You should check the GitHub repo for getting started with p5.serialport.
p5.js is very similar to the Arduino code style, so it is easier and more convenient than Python.

How to connect Javascript to Python sharing data with JSON format in both ways?

I'm trying to find out how to create a local connection between a Python server and a JavaScript client, using the JSON format for the data to be retrieved. In particular, I need to make some queries on the HTML client side, send these queries to the server in JSON format, and run them on the Python server side to search for data in a SQLite database. After getting the results from the database, I need to send those results back to the client in JSON format too.
So far, I can only run the query in Python and encode the result as JSON, like this:
import sqlite3 as dbapi
import json

connection = dbapi.connect("C:/folder/database.db")
mycursor = connection.cursor()
mycursor.execute("select * from people")
results = []
for information in mycursor.fetchall():
    results += information

onFormat = json.dumps(results)
print(onFormat)
I know this code does something similar (in fact it runs), because it calls a service on a server which returns data in JSON format (but the server in this example is NOT Python):
<html>
<head>
  <style>img{ height: 100px; float: left; }</style>
  <script src="http://code.jquery.com/jquery-latest.js"></script>
</head>
<body>
  <div id="images"></div>
  <script>
    $.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
      {
        tags: "mount rainier",
        tagmode: "any",
        format: "json"
      },
      function(data) {
        $.each(data.items, function(i, item){
          $("<img/>").attr("src", item.media.m).appendTo("#images");
          if ( i == 3 ) return false;
        });
      });
  </script>
</body>
</html>
What I need to know is how I should run the Python program (locally) so that it is available as a running web service, and what the JavaScript should look like to retrieve the data from the Python server.
I've been looking everywhere on the internet, but I haven't found an answer, because the only answers given are about how to encode JSON inside Python or inside JavaScript, not about connecting the two. I hope somebody can help me with this!
Here's a "hello world" example of a Flask web application that can serve static HTML and JavaScript files, search a database using a parameter from a JavaScript request, and return the results to JavaScript as JSON:
import sqlite3
from flask import Flask, jsonify, g, redirect, request, url_for

app = Flask(__name__)

@app.before_request
def before_request():
    g.db = sqlite3.connect('database.db')

@app.teardown_request
def teardown_request(exception):
    if hasattr(g, 'db'):
        g.db.close()

@app.route('/')
def index():
    return redirect(url_for('static', filename='page.html'))

@app.route('/json-data/')
def json_data():
    # get number of items from the javascript request
    nitems = request.args.get('nitems', 2)
    # query database
    cursor = g.db.execute('select * from items limit ?', (nitems,))
    # return json
    return jsonify(dict(('item%d' % i, item)
                        for i, item in enumerate(cursor.fetchall(), start=1)))

if __name__ == '__main__':
    app.run(debug=True, host='localhost', port=5001)  # http://localhost:5001/
else:
    application = app  # for a WSGI server e.g.,
    # twistd -n web --wsgi=hello_world.application --port tcp:5001:interface=localhost
The database setup code is from Using SQLite 3 with Flask.
static/page.html and static/json-jquery.js files are from Ajax/jQuery.getJSON Simple Example, where the javascript code is modified slightly to pass a different url and nitems parameter:
$(document).ready(function(){
  $('#getdata-button').live('click', function(){
    $.getJSON('/json-data', {'nitems': 3}, function(data) {
      $('#showdata').html("<p>item1="+data.item1+" item2="+data.item2+" item3="+data.item3+"</p>");
    });
  });
});
Your question amounts to "how do I make this python into a webservice".
Probably the most lightweight ways to do that are web.py and flask. Check them out.
If this is getting bigger, consider django with tastypie - that's a simple way to make a json-based api.
Update: Apparently, there is also a python-javascript RPC framework called Pico, to which Felix Kling is a contributor. The intro says:
Literally add one line of code (import pico) to your Python module to turn it into a web service that is accessible through the Javascript (and Python) Pico client libraries.
I finally found an easier way than Flask. It's a Python framework called Bottle. You only need to download the library from the official website and put all its files in your working directory in order to import it. You can also install it using the included setup Python program, to avoid carrying the source code everywhere. Then, to build your web service server you can code it like this:
from bottle import hook, response, route, run, static_file, request
import json
import socket
import sqlite3

# These lines are needed for avoiding the "Access-Control-Allow-Origin" errors
@hook('after_request')
def enable_cors():
    response.headers['Access-Control-Allow-Origin'] = '*'

# Note that the text in the route decorator is the name of the resource,
# and the function which answers the request could have any name
@route('/examplePage')
def exPage():
    return "<h1>This is an example of web page</h1><hr/><h2>Hope you enjoy it!</h2>"

# If you want to return a JSON you can use a common Python dict;
# the conversion to JSON is done automatically by the framework
@route('/sampleJSON', method='GET')
def mySample():
    return { "first": "This is the first", "second": "the second one here", "third": "and finally the third one!" }

# If you have to send parameters, the right syntax is to call the resource
# with a kind of path, with the parameters separated by slashes ( / ), and they
# MUST be written inside less-than/greater-than signs ( <parameter_name> )
@route('/dataQuery/<name>/<age>')
def myQuery(name, age):
    connection = sqlite3.connect("C:/folder/data.db")
    mycursor = connection.cursor()
    mycursor.execute("select * from client where name = ? and age = ?", (name, age))
    results = mycursor.fetchall()
    theQuery = []
    for tuple in results:
        theQuery.append({"name": tuple[0], "age": tuple[1]})
    return json.dumps(theQuery)

# If you want to send images in jpg format you can use this below
@route('/images/<filename:re:.*\.jpg>')
def send_image(filename):
    return static_file(filename, root="C:/folder/images", mimetype="image/jpg")

# To send a favicon to a webpage use this below
@route('/favicon.ico')
def favicon():
    return static_file('windowIcon.ico', root="C:/folder/images", mimetype="image/ico")

# And the MOST important line to set this program as a web service provider is this
run(host=socket.gethostname(), port=8000)
Finally, you can call the REST web service of your Bottle app from a JavaScript client in this way:
var addr = "192.168.1.100";
var port = "8000";

function makeQuery(name, age){
    jQuery.get("http://"+addr+":"+port+"/dataQuery/"+name+"/"+age, function(result){
        myRes = jQuery.parseJSON(result);
        toStore = "<table border='2' bordercolor='#397056'><tr><td><strong>name</strong></td><td><strong>age</strong></td></tr>";
        $.each(myRes, function(i, element){
            toStore = toStore + "<tr><td>"+element.name+"</td><td>"+element.age+"</td></tr>";
        })
        toStore = toStore + "</table>";
        $('#theDataDiv').text('');
        $('<br/>').appendTo('#theDataDiv');
        $(toStore).appendTo('#theDataDiv');
        $('<br/>').appendTo('#theDataDiv');
    })
}
I hope this is useful for somebody else.

How to combine scrapy and htmlunit to crawl urls with javascript

I'm working with Scrapy to crawl pages; however, I can't handle pages with JavaScript.
People suggested I use HtmlUnit, so I installed it, but I don't know how to use it at all. Can anyone give me an example (Scrapy + HtmlUnit)? Thanks very much.
To handle pages with JavaScript you can use WebKit or Selenium.
Here are some snippets from snippets.scrapy.org:
Rendered/interactive javascript with gtk/webkit/jswebkit
Rendered Javascript Crawler With Scrapy and Selenium RC
Here is a working example using Selenium and the PhantomJS headless webdriver in a downloader middleware:
from scrapy.http import HtmlResponse
from selenium import webdriver

class JsDownload(object):

    @check_spider_middleware
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')
        driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8', body=driver.page_source.encode('utf-8'))
I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:
import functools
from scrapy import log

def check_spider_middleware(method):
    @functools.wraps(method)
    def wrapper(self, request, spider):
        msg = '%%s %s middleware step' % (self.__class__.__name__,)
        if self.__class__ in spider.middleware:
            spider.log(msg % 'executing', level=log.DEBUG)
            return method(self, request, spider)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return None
    return wrapper
settings.py:
DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}
For the wrapper to work, all spiders must have at minimum:
middleware = set([])
To include a middleware:
middleware = set([MyProj.middleware.ModuleName.ClassName])
The main advantage of implementing it this way rather than in the spider is that you only end up making one request. In the solution at reclosedev's second link, for example, the download handler processes the request and then hands off the response to the spider. The spider then makes a brand new request in its parse_page function -- that's two requests for the same content.
Another example: https://github.com/scrapinghub/scrapyjs
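For reference, here is a minimal sketch of how that project (now maintained as scrapy-splash) is typically wired in, assuming a Splash instance running on localhost:8050; the settings values and the spider below are illustrative, not taken from the linked repository:

# settings.py (sketch): point Scrapy at the running Splash instance
SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
}

# spider (sketch): ask Splash to render the javascript before parsing
import scrapy
from scrapy_splash import SplashRequest

class JsSpider(scrapy.Spider):
    name = 'js_example'

    def start_requests(self):
        yield SplashRequest('http://example.com', self.parse, args={'wait': 1.0})

    def parse(self, response):
        # response.body now holds the javascript-rendered HTML
        yield {'title': response.css('title::text').get()}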
Cheers!
