I am currently trying to run a Python file from a Deno backend using the following code...
const cmd = Deno.run({
  cmd: ["python", "python.py"],
  stdout: "piped",
  stderr: "piped"
});
const output = await cmd.output() // "piped" must be set
const outStr = new TextDecoder().decode(output);
const error = await cmd.stderrOutput();
const errorStr = new TextDecoder().decode(error);
cmd.close();
console.log(outStr, errorStr);
const resultsAlgorithm = outStr
console.log('This is a test, python result is...',outStr)
console.log('Finished')
The code works for basic scripts like 'print("Hello")' but fails on the imports in more complex scripts such as...
import pandas as pd  # Changes call up name to pd
from yahoofinancials import YahooFinancials
from datetime import date, datetime, timedelta, time

Yahoo_Forex = pd.DataFrame()
Currency_Pair_Prices = pd.DataFrame()
print('Running')

def DataExtract(FileName, DataFrameName, DataFrameName_2, time_range, Interval, ColumnName):
    print('Function started')
    start = date.today() - timedelta(days=2)
    end = date.today() - timedelta(days=time_range)
    DataFrameName = pd.read_excel(FileName, header=None)
    DataFrameName.columns = ColumnName
    n = 0
    for ticker in DataFrameName[ColumnName[0]]:
        Currency_Pair = DataFrameName.iloc[n, 1]
        Currency_Pair_Ticker = YahooFinancials(ticker)
        data = Currency_Pair_Ticker.get_historical_price_data(
            str(end), str(start), Interval)
        Extracted_Data = pd.DataFrame(data[ticker]["prices"])
        Currency_Close_Price = (Extracted_Data["close"])
        DataFrameName_2[str(Currency_Pair)] = Currency_Close_Price
        n = n + 1  # 13
    print(DataFrameName_2)
    print("DataExtract Completed")

DataExtract("yahoo_Forex.xlsx", Yahoo_Forex, Currency_Pair_Prices, int(10), "daily", ["Ticker", "Pair", "Exchange"])
The Python code runs successfully on its own, so it must be something with Deno, but I'm not sure what I would need to change, so any help would be appreciated!
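One hedged check (an assumption on my part, not a confirmed fix): the python that Deno resolves from its PATH may not be the interpreter where pandas and yahoofinancials are installed. The script can report which interpreter is actually running it:
# Minimal diagnostic sketch: add to the top of python.py to see which
# interpreter Deno spawns (uses only the standard library).
import sys
print(sys.executable)  # path of the interpreter running this script
print(sys.path)        # where imports like pandas will be looked up
If that path differs from the interpreter you test with manually, passing the absolute path of the correct interpreter in the cmd array should isolate the problem.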
So, I have the following viewset.
class ProfileViewSet(viewsets.ModelViewSet):
    queryset = Profile.objects.all()
    permission_classes = (IsAuthenticated, )

    @action(detail=True, methods=['PUT'])
    def set_description(self, request, pk=None):
        profile = self.get_object()
        serializer = DescriptionSerializer(data=request.data)
        if serializer.is_valid():
            profile.description = request.data['description']
            profile.save()
        else:
            return Response(serializer.errors,
                            status=status.HTTP_400_BAD_REQUEST)

    @action(
        detail=True,
        methods=['post'],
        serializer_class=ImageSerializer
    )
    def add_image(self, request, pk=None):
        instance = self.get_object()
        serializer = self.get_serializer(instance, data=self.request.data)
        serializer.is_valid(raise_exception=True)
        image = serializer.save(profile=request.user.profile)
        image_id = image.id
        return Response(serializer.data, {'image_id': image_id})

    @action(
        detail=True,
        methods=['delete'],
        serializer_class=ImageSerializer
    )
    def delete_image(self, request, pk):
        instance = self.get_object()
        instance.delete()
        return JsonResponse(status=status.HTTP_200_OK)
My urls.py:
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from . import views
from . import viewsets
app_name = 'profile_'
router = DefaultRouter()
router.register('', viewsets.ProfileViewSet)
urlpatterns = [
    path('', views.ProfileView.as_view(), name='profile'),
]
urlpatterns += router.urls
And finally, here are a couple of the available URLs (shown by python3 manage.py show_urls):
/profile/<pk>/add_image/ profile_.viewsets.ProfileViewSet profile_:profile-add-image
/profile/<pk>/add_image\.<format>/ profile_.viewsets.ProfileViewSet profile_:profile-add-image
/profile/<pk>/delete_image/ profile_.viewsets.ProfileViewSet profile_:profile-delete-image
/profile/<pk>/delete_image\.<format>/ profile_.viewsets.ProfileViewSet profile_:profile-delete-image
/profile/<pk>/set_description/ profile_.viewsets.ProfileViewSet profile_:profile-set-description
/profile/<pk>/set_description\.<format>/ profile_.viewsets.ProfileViewSet profile_:profile-set-description
I haven't copied all of them, but you can see that all the paths I need are provided by the default router.
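As a sanity check, these route names can be reversed from a Django shell (a sketch, assuming the names shown by show_urls above):
# Hypothetical check, e.g. in `python3 manage.py shell`:
from django.urls import reverse
print(reverse('profile_:profile-set-description', kwargs={'pk': 5}))
# expected output: /profile/5/set_description/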
So, I'm trying to change a profile's description from JS:
this.update(`http://127.0.0.1:8000/profile/${this.user_id}/set_description/`, data, 'PUT')
update is a function which takes a URL, some kind of data, and a request method (and it works fine, because I've used it many times).
But server responded with:
profile.js:58 PUT http://127.0.0.1:8000/profile/5/set_description/ 404 (Not Found)
If I try http://127.0.0.1:8000/profile/${this.user_id}/add_image/, it causes the same issue.
All the paths exist, as you can see in the output above.
Where does the problem come from? I will be so grateful if you can help me!
I'm uploading small files (sub-20k) using Fetch and Flask; however, it can take up to 10 seconds to post, process, and return. Conversely, if I run the same load-and-process functions in pure Python alone, the time taken is less than a second.
When running a file upload with no processing, the upload is almost instant.
Am I missing something that slows things down when I'm processing files with Flask?
I've tried uploading with no processing (fast), I've tried processing without Flask (fast), and I've tried uploading and processing from the browser with Flask (slow).
from flask import render_template, url_for, flash, redirect, request, jsonify, Response, Flask, session, make_response, Markup
import io, random, sys, os, pandas

app = Flask(__name__)

@app.route("/")
############################################################
@app.route("/routee", methods = ['GET', 'POST'])
def routee():
    return render_template('Upload Test.html')

############################################################
@app.route("/routee/appendroute", methods = ['GET', 'POST'])
def appendroute():
    PrintFlask(request.files.getlist('route'))
    return make_response(jsonify('Voyage = VoyageJson, Intersects = IntersectJson'), 200)

############################################################
if __name__ == "__main__":
    app.run(debug=True)
<script type="text/javascript">
  function prepformdata(route) {
    formdata = new FormData();
    for (var i = 0; i < route.files.length; i++) {
      formdata.append('route', route.files[i]);
    }
    return formdata
  }
  //////////////////////////////////////////////////////////////
  function appendroutes(formdata, route) {
    uploadfiles = document.getElementById("files")
    formdata = prepformdata(uploadfiles)
    InitConst = {method: "POST", body: formdata}
    url = window.origin + '/routee/appendroute'
    fetch(url, InitConst)
      .then(res => res.json())
      .then(data => {addon = data['Voyage']; addonIntersects = data['Intersects']})
      .then(() => console.log('Route(s) Added'))
  }
</script>
Whilst the code above is very nippy, I'm expecting to do some equally nippy processing server side, but something is slowing it down. Any ideas why processing might slow down when Flask is used?
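One way to narrow this down (a minimal instrumentation sketch, not a fix; it assumes the route from the question, and the processing step is a placeholder): time each phase inside the handler to see whether the seconds go to receiving the upload or to the processing itself.
# Hedged sketch: time each phase of the request handler.
import time

@app.route("/routee/appendroute", methods=['GET', 'POST'])
def appendroute():
    t0 = time.perf_counter()
    files = request.files.getlist('route')   # receive the upload
    payloads = [f.read() for f in files]     # force the file reads
    t1 = time.perf_counter()
    # ... the actual processing would go here ...
    t2 = time.perf_counter()
    print('receive: %.3fs, process: %.3fs' % (t1 - t0, t2 - t1))
    return make_response(jsonify('ok'), 200)
If the receive phase dominates, the transfer or the dev server is the suspect; if the process phase dominates, the processing code itself is worth profiling.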
I am trying to scrape search results from a website that uses a __doPostBack function. The webpage displays 10 results per search query. To see more results, one has to click a button that triggers a __doPostBack JavaScript call. After some research, I realized that the POST request behaves just like a form, and that one could simply use scrapy's FormRequest to fill that form. I used the following thread:
Troubles using scrapy with javascript __doPostBack method
to write the following script.
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import FormRequest
from scrapy.http import Request
from scrapy.selector import Selector
from ahram.items import AhramItem
import re

class MySpider(CrawlSpider):
    name = u"el_ahram2"

    def start_requests(self):
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        requests = []
        for i in range(1, 4):  # crawl first 3 pages as a test
            argument = u"'Page$" + str(i+1) + u"'"
            data = {'__EVENTTARGET': u"'GridView1'", '__EVENTARGUMENT': argument}
            currentPage = FormRequest(baseUrl, formdata=data, callback=self.fetch_articles)
            requests.append(currentPage)
        return requests

    def fetch_articles(self, response):
        sel = Selector(response)
        for ref in sel.xpath("//a[contains(@href,'checkpart.aspx?Serial=')]/@href").extract():
            yield Request('http://digital.ahram.org.eg/' + ref, callback=self.parse_items)

    def parse_items(self, response):
        sel = Selector(response)
        the_title = ' '.join(sel.xpath("//title/text()").extract()).replace('\n', '').replace('\r', '').replace('\t', '')  # * means 'anything'
        the_authors = '---'.join(sel.xpath("//*[contains(@id,'editorsdatalst_HyperLink')]//text()").extract())
        the_text = ' '.join(sel.xpath("//span[@id='TextBox2']/text()").extract())
        the_month_year = ' '.join(sel.xpath("string(//span[@id = 'Label1'])").extract())
        the_day = ' '.join(sel.xpath("string(//span[@id = 'Label2'])").extract())
        item = AhramItem()
        item["Authors"] = the_authors
        item["Title"] = the_title
        item["MonthYear"] = the_month_year
        item["Day"] = the_day
        item['Text'] = the_text
        return item
My problem now is that 'fetch_articles' is never called:
2014-05-27 12:19:12+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] INFO: Closing spider (finished)
After searching for several days I feel completely stuck. I am a beginner in Python, so perhaps the error is trivial. However, if it is not, this thread could be of use to a number of people. Thank you in advance for your help.
Your code is fine. fetch_articles is running. You can test it by adding a print statement.
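For example (a minimal probe, matching the Python 2 style of the code below):
def fetch_articles(self, response):
    print 'fetch_articles called, status:', response.status  # simple probe
    ...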
However, the website requires you to validate POST requests. In order to validate them, you must include __EVENTVALIDATION and __VIEWSTATE in your request body to prove you are responding to their form. To get these, you need to first make a GET request and extract those fields from the form. If you don't provide them, you get an error page instead, which does not contain any links with "checkpart.aspx?Serial=", so your for loop was not being executed.
Here is how I've set up start_requests; fetch_search then does what start_requests used to do.
class MySpider(CrawlSpider):
    name = u"el_ahram2"

    def start_requests(self):
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        SearchPage = Request(baseUrl, callback=self.fetch_search)
        return [SearchPage]

    def fetch_search(self, response):
        sel = Selector(response)
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        viewstate = sel.xpath("//input[@id='__VIEWSTATE']/@value").extract().pop()
        eventvalidation = sel.xpath("//input[@id='__EVENTVALIDATION']/@value").extract().pop()
        for i in range(1, 4):  # crawl first 3 pages as a test
            argument = u"'Page$" + str(i+1) + u"'"
            data = {'__EVENTTARGET': u"'GridView1'", '__EVENTARGUMENT': argument, '__VIEWSTATE': viewstate, '__EVENTVALIDATION': eventvalidation}
            currentPage = FormRequest(baseUrl, formdata=data, callback=self.fetch_articles)
            yield currentPage

    ...

    def fetch_articles(self, response):
        sel = Selector(response)
        print response.body  # you can write this to a file and grep it
        for ref in sel.xpath("//a[contains(@href,'checkpart.aspx?Serial=')]/@href").extract():
            yield Request('http://digital.ahram.org.eg/' + ref, callback=self.parse_items)
I could not find the "checkpart.aspx?Serial=" links which you are searching for.
This might not solve your issue, but I'm using an answer instead of a comment for the sake of the code formatting.
I'm looking at the following API:
http://wiki.github.com/soundcloud/api/oembed-api
The example they give is:
Call:
http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json
Response:
{
  "html": "<object height=\"81\" ... ",
  "user": "Forss",
  "permalink": "http:\/\/soundcloud.com\/forss\/flickermood",
  "title": "Flickermood",
  "type": "rich",
  "provider_url": "http:\/\/soundcloud.com",
  "description": "From the Soulhack album...",
  "version": 1.0,
  "user_permalink_url": "http:\/\/soundcloud.com\/forss",
  "height": 81,
  "provider_name": "Soundcloud",
  "width": 0
}
What do I have to do to get this JSON object from just a URL?
It seems they offer a js option for the format parameter, which will return JSONP. You can retrieve JSONP like so:
function getJSONP(url, success) {
  var ud = '_' + +new Date,
      script = document.createElement('script'),
      head = document.getElementsByTagName('head')[0]
             || document.documentElement;

  window[ud] = function(data) {
    head.removeChild(script);
    success && success(data);
  };

  script.src = url.replace('callback=?', 'callback=' + ud);
  head.appendChild(script);
}

getJSONP('http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=?', function(data){
  console.log(data);
});
A standard HTTP GET request should do it. Then you can use JSON.parse() to turn the response text into a JSON object.
function Get(yourUrl){
  var Httpreq = new XMLHttpRequest(); // a new request
  Httpreq.open("GET", yourUrl, false);
  Httpreq.send(null);
  return Httpreq.responseText;
}
then
var json_obj = JSON.parse(Get(yourUrl));
console.log("this is the author name: "+json_obj.author_name);
That's basically it. (Note that the synchronous XMLHttpRequest used here is deprecated in modern browsers.)
In modern-day JS, you can get your JSON data by calling ES6's fetch() on your URL and then using ES2017's async/await to "unpack" the Response object from the fetch to get the JSON data like so:
const getJSON = async url => {
  const response = await fetch(url);
  if (!response.ok) // check if response worked (no 404 errors etc...)
    throw new Error(response.statusText);

  const data = response.json(); // get JSON from the response
  return data; // returns a promise, which resolves to this data value
}

console.log("Fetching data...");

getJSON("https://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json").then(data => {
  console.log(data);
}).catch(error => {
  console.error(error);
});
The above method can be simplified down to a few lines if you ignore the exception/error handling (usually not recommended as this can lead to unwanted errors):
const getJSON = async url => {
  const response = await fetch(url);
  return response.json(); // get JSON from the response
}

console.log("Fetching data...");

getJSON("https://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json")
  .then(data => console.log(data));
Because the URL isn't on the same domain as your website, you need to use JSONP.
For example: (In jQuery):
$.getJSON(
  'http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=?',
  function(data) { ... }
);
This works by creating a <script> tag like this one:
<script src="http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=someFunction" type="text/javascript"></script>
Their server then emits JavaScript that calls someFunction with the data you wanted to retrieve.
someFunction is an internal callback generated by jQuery that then calls your callback.
DickFeynman's answer is a workable solution for any circumstance in which jQuery is not a good fit or isn't otherwise necessary. As ComFreek notes, this requires setting the CORS headers on the server side. If it's your service, and you have a handle on the bigger question of security, then that's entirely feasible.
Here's a listing of a Flask service, setting the CORS headers, grabbing data from a database, responding with JSON, and working happily with DickFeynman's approach on the client-side:
#!/usr/bin/env python
from __future__ import unicode_literals
from flask import Flask, Response, jsonify, redirect, request, url_for
from your_model import *
import os

try:
    import simplejson as json
except ImportError:
    import json

try:
    from flask.ext.cors import *
except:
    from flask_cors import *

app = Flask(__name__)

@app.before_request
def before_request():
    try:
        # Provided by an object in your_model
        app.session = SessionManager.connect()
    except:
        print "Database connection failed."

@app.teardown_request
def shutdown_session(exception=None):
    app.session.close()

# A route with a CORS header, to enable your javascript client to access
# JSON created from a database query.
@app.route('/whatever-data/', methods=['GET', 'OPTIONS'])
@cross_origin(headers=['Content-Type'])
def json_data():
    whatever_list = []
    results_json = None
    try:
        # Use SQL Alchemy to select all Whatevers, WHERE size > 0.
        whatevers = app.session.query(Whatever).filter(Whatever.size > 0).all()
        if whatevers and len(whatevers) > 0:
            for whatever in whatevers:
                # Each whatever is able to return a serialized version of itself.
                # Refer to your_model.
                whatever_list.append(whatever.serialize())
            # Convert a list to JSON.
            results_json = json.dumps(whatever_list)
    except SQLAlchemyError as e:
        print 'Error {0}'.format(e)
        exit(0)

    if len(whatevers) < 1 or not results_json:
        exit(0)
    else:
        # Because we used json.dumps(), rather than jsonify(),
        # we need to create a Flask Response object, here.
        return Response(response=str(results_json), mimetype='application/json')

if __name__ == '__main__':
    ##NOTE Not suitable for production. As configured,
    # your Flask service is in debug mode and publicly accessible.
    app.run(debug=True, host='0.0.0.0', port=5001)  # http://localhost:5001/
your_model contains the serialization method for your whatevers, as well as the database connection manager (which could stand a little refactoring, but suffices to centralize the creation of database sessions in bigger systems or Model/View/Controller architectures). This happens to use PostgreSQL, but could just as easily use any server-side data store:
#!/usr/bin/env python
# Filename: your_model.py
import time
import psycopg2
import psycopg2.pool
import psycopg2.extras
from psycopg2.extensions import adapt, register_adapter, AsIs
from sqlalchemy import update
from sqlalchemy.orm import *
from sqlalchemy.exc import *
from sqlalchemy.dialects import postgresql
from sqlalchemy import Table, Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

class SessionManager(object):

    @staticmethod
    def connect():
        engine = create_engine('postgresql://id:passwd@localhost/mydatabase',
                               echo=True)
        Session = sessionmaker(bind=engine,
                               autoflush=True,
                               expire_on_commit=False,
                               autocommit=False)
        session = Session()
        return session

    @staticmethod
    def declareBase():
        engine = create_engine('postgresql://id:passwd@localhost/mydatabase', echo=True)
        whatever_metadata = MetaData(engine, schema='public')
        Base = declarative_base(metadata=whatever_metadata)
        return Base

Base = SessionManager.declareBase()

class Whatever(Base):
    """Create, supply information about, and manage the state of one or more whatever.
    """
    __tablename__ = 'whatever'
    id = Column(Integer, primary_key=True)
    whatever_digest = Column(VARCHAR, unique=True)
    best_name = Column(VARCHAR, nullable=True)
    whatever_timestamp = Column(BigInteger, default=time.time())
    whatever_raw = Column(Numeric(precision=1000, scale=0), default=0.0)
    whatever_label = Column(postgresql.VARCHAR, nullable=True)
    size = Column(BigInteger, default=0)

    def __init__(self,
                 whatever_digest='',
                 best_name='',
                 whatever_timestamp=0,
                 whatever_raw=0,
                 whatever_label='',
                 size=0):
        self.whatever_digest = whatever_digest
        self.best_name = best_name
        self.whatever_timestamp = whatever_timestamp
        self.whatever_raw = whatever_raw
        self.whatever_label = whatever_label

    # Serialize one way or another, just handle appropriately in the client.
    def serialize(self):
        return {
            'best_name': self.best_name,
            'whatever_label': self.whatever_label,
            'size': self.size,
        }
In retrospect, I might have serialized the whatever objects as lists, rather than as a Python dict, which might have simplified their processing in the Flask service, and I might have separated concerns better in the Flask implementation (the database call probably shouldn't be built into the route handler), but you can improve on this once you have a working solution in your own development environment.
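For illustration, a minimal sketch of that alternative (hypothetical, same fields as above, positional rather than keyed):
    # Hypothetical list-based serializer for Whatever.
    def serialize(self):
        return [self.best_name, self.whatever_label, self.size]
The client would then index each record positionally (record[0] for best_name, and so on) rather than by key.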
Also, I'm not suggesting people avoid JQuery. But, if JQuery's not in the picture, for one reason or another, this approach seems like a reasonable alternative.
It works, in any case.
Here's my implementation of DickFeynman's approach, in the client:
<script type="text/javascript">
  var addr = "dev.yourserver.yourorg.tld"
  var port = "5001"

  function Get(whateverUrl){
    var Httpreq = new XMLHttpRequest(); // a new request
    Httpreq.open("GET", whateverUrl, false);
    Httpreq.send(null);
    return Httpreq.responseText;
  }

  var whatever_list_obj = JSON.parse(Get("http://" + addr + ":" + port + "/whatever-data/"));
  whatever_qty = whatever_list_obj.length;

  for (var i = 0; i < whatever_qty; i++) {
    console.log(whatever_list_obj[i].best_name);
  }
</script>
I'm not going to list my console output, but I'm looking at a long list of whatever.best_name strings.
More to the point: the whatever_list_obj is available for use in my JavaScript namespace, for whatever I care to do with it, which might include generating graphics with D3.js, mapping with OpenLayers or CesiumJS, or calculating some intermediate values which have no particular need to live in my DOM.
You make a bog standard HTTP GET Request. You get a bog standard HTTP Response with an application/json content type and a JSON document as the body. You then parse this.
Since you have tagged this 'JavaScript' (I assume you mean "from a web page in a browser"), and I assume this is a third party service, you're stuck. You can't fetch data from remote URI in JavaScript unless explicit workarounds (such as JSONP) are put in place.
Oh wait, reading the documentation you linked to - JSONP is available, but you must say 'js' not 'json' and specify a callback: format=js&callback=foo
Then you can just define the callback function:
function foo(myData) {
  // do stuff with myData
}
And then load the data:
var script = document.createElement('script');
script.type = 'text/javascript';
script.src = theUrlForTheApi;
document.body.appendChild(script);