I have a setup that sends the value of an HTML slider to Flask via an AJAX POST request. The request hits the /about endpoint, which performs some data manipulation and redirects with the resulting API URL as a parameter. Another endpoint (/index) then takes that API URL and retrieves a response from the data source. However, when I render the template under the /index endpoint, the template never updates dynamically with the new value, which I need for downstream data visualization.
Here's a very basic example illustrating the pipeline I've laid out:
./templates/index.html:
<body>
  <div id="map" style="height: 700px;"></div>
  <form method="POST">
    <div class="rangeslider">
      <input style="width: 50%;" type="range" min="0" max="64" value="64" class="myslider" id="sliderRange">
      <p>
        <span id="demo"></span>
      </p>
    </div>
  </form>
  <h1><b>Testing context value: {{ asdf }}</b></h1>
  <script>
    const Http = new XMLHttpRequest();
    var rangeslider = document.getElementById("sliderRange");
    var output = document.getElementById("demo");
    var current;
    rangeslider.oninput = function() {
      // Step 1: receives value from HTML slider object
      current = this.value;
      Http.open('POST', '/about');
      Http.send(current);
    }
  </script>
</body>
./app.py:
from flask import Flask, render_template, request, redirect, url_for, jsonify

app = Flask(__name__)
app.config['TEMPLATES_AUTO_RELOAD'] = True

@app.route('/')
def main():
    return redirect('/index')

@app.route('/index', methods=['GET'])
def index():
    asdf = request.args.get('asdf')
    # Step 3: asdf undergoes request to pull in appropriate data from data source and would ideally generate custom HTML prepped to inject a child into an existing leaflet.js app
    # Issue occurs in asdf = asdf statement. Ideally asdf will represent custom HTML dependent on slider value.
    return render_template('index.html', asdf=asdf)

@app.route('/about', methods=['POST'])
def about():
    received_data = request.data
    # Step 2: received_data undergoes further preprocessing to create API URL
    return redirect(url_for("index", asdf=str(received_data)))

if __name__ == '__main__':
    app.run(port=8000)
I know that AJAX is typically the best solution to deal with dynamic contexts, but I am unsure how to make the appropriate fixes to suit my overall needs in this case.
Any assistance would be greatly appreciated. Thanks!
This is how you can pass parameters to a route:
from flask import Flask, render_template, request, redirect, url_for, jsonify

app = Flask(__name__)
app.config['TEMPLATES_AUTO_RELOAD'] = True

@app.route('/')
def main():
    return redirect('/index')

@app.route('/index/<asdf>', methods=['GET'])
def index(asdf):
    # Step 3: asdf undergoes request to pull in appropriate data from data source and would ideally generate custom HTML prepped to inject a child into an existing leaflet.js app
    # Issue occurs in asdf = asdf statement. Ideally asdf will represent custom HTML dependent on slider value.
    return render_template('index.html', asdf=asdf)

@app.route('/about', methods=['POST'])
def about():
    received_data = request.data
    # Step 2: received_data undergoes further preprocessing to create API URL
    return redirect(url_for("index", asdf=str(received_data)))

if __name__ == '__main__':
    app.run(port=8000)
To optimise your code further, you don't need to pass asdf to /index and reload the HTML. You can return asdf as a response from /about to your JavaScript and insert it into the page directly. This might help you.
In your app.py
from flask import jsonify, make_response  # make sure make_response is imported too

@app.route('/about', methods=['POST'])
def about():
    received_data = request.data
    data = {'status': 'success', 'xyz': received_data}
    resp = make_response(jsonify(data), 200)
    return resp
You can then access this response in the XMLHttpRequest handler as:
<script>
  const Http = new XMLHttpRequest();
  var rangeslider = document.getElementById("sliderRange");
  var output = document.getElementById("demo");
  var current;
  rangeslider.oninput = function() {
    // Step 1: receives value from HTML slider object
    current = this.value;
    Http.open('POST', '/about');
    Http.send(current);
  }
  Http.onload = function() {
    var response = JSON.parse(Http.responseText);
    if (response['status'] == "success") {
      $('#tag').html(response['xyz']);
    }
  }
</script>
You can adapt how you use the response to fit your context.
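If you do need the custom HTML for the Leaflet layer that the question mentions, the same pattern still applies: build the fragment server-side from the slider value and ship it back in the JSON payload. A minimal sketch, assuming a hypothetical build_fragment() helper (not part of the code above) that turns the slider value into markup:

from flask import Flask, request, jsonify, make_response, render_template_string

app = Flask(__name__)

def build_fragment(slider_value):
    # Hypothetical helper: turn the slider value into whatever markup your
    # Leaflet layer needs; here it is just a placeholder snippet.
    return render_template_string('<b>Slider is at {{ v }}</b>', v=slider_value)

@app.route('/about', methods=['POST'])
def about():
    slider_value = request.data.decode('utf-8')  # raw body sent by Http.send(current)
    data = {'status': 'success', 'xyz': build_fragment(slider_value)}
    return make_response(jsonify(data), 200)

The Http.onload handler above can then inject response['xyz'] straight into the page, with no redirect or full re-render.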
First time poster here. I am trying to put together a chatroom app using Flask-SocketIO and am having a hard time getting one tiny, but crucial, piece to work. When I call the emit method in the server and set the broadcast=True flag, the expected behavior is that it triggers the respective socket.on events on the client side for all users, including the sender. However, my code apparently only does so for all clients other than the sender. This is contrary to the documentation at https://flask-socketio.readthedocs.io/en/latest/, where it states:
When a message is sent with the broadcast option enabled, all clients connected to the namespace receive it, including the sender. When namespaces are not used, the clients connected to the global namespace receive the message.
I need for this to send the message and trigger the socket.on event for the sender as well as all other clients. Can anyone explain what I'm doing wrong here? Because it seems like such a basic feature that I must be missing something really obvious.
Server code:
import os
import requests

from flask import Flask, jsonify, render_template, request
from flask_socketio import SocketIO, emit, join_room, leave_room

app = Flask(__name__)
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY')
socketio = SocketIO(app)

channels = ['a', 'b', 'c', 'testing', 'abc', '123']
ch_msgs = {'a': ['testing', '1234', 'high five'], 'b': ['hello']}
msg_limit = 100

@app.route('/')
def index():
    return render_template('index.html')

@socketio.on('add channel')
def add(data):
    new_channel = data
    channels.append(new_channel)
    ch_msgs[new_channel] = list()
    emit('announce channel', new_channel, broadcast=True)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Javascript:
document.addEventListener('DOMContentLoaded', () => {
  var socket = io.connect(location.protocol + '//' + document.domain + ':' + location.port);
  socket.on('connect', () => {
    document.querySelector('#add_channel').onclick = () => {
      const new_channel = document.querySelector('#new_channel').value;
      socket.emit('add channel', new_channel);
    };
  });
  socket.on('announce channel', data => {
    const li = document.createElement('li');
    li.innerHTML = '#' + `${data}`;
    $('#channel_list').append(li);
  });
});
HTML/CSS snippet:
<h3>Chat Channels</h3>
<br>
<ul id='channel_list' style='list-style-type:none;'>
</ul>
<form class='channel'>
<input type='text' id='new_channel' placeholder='add new channel'><input type='submit' id='add_channel' value='Add'>
</form>
All clients are in fact receiving the message, including the sender; the issue is that the sender's page is refreshed as soon as the form is submitted (the default behavior of forms), which wipes out all local state.
To prevent this, you need to call the event.preventDefault() method. In your case, you can change the event handler for the connect event like this:
socket.on('connect', () => {
  document.querySelector('#add_channel').onclick = event => {
    event.preventDefault();
    const textinput = document.querySelector('#new_channel');
    const new_channel = textinput.value;
    textinput.value = ""; // Clear input after value is saved
    socket.emit('add channel', new_channel);
  };
});
I'm uploading small files (under 20 KB) using Fetch and Flask, but it can take up to 10 seconds to post, process and return. Conversely, if I run the same load and process functions in pure Python, they take less than a second.
When I upload a file with no processing, the upload is almost instant.
Am I missing something that slows things down when I'm processing files with Flask?
I've tried uploading with no processing (fast), processing with no Flask (fast), and uploading plus processing from the browser with Flask (slow).
from flask import render_template, url_for, flash, redirect, request, jsonify, Response, Flask, session, make_response, Markup
import io, random, sys, os, pandas

app = Flask(__name__)

@app.route("/")
############################################################
@app.route("/routee", methods=['GET', 'POST'])
def routee():
    return render_template('Upload Test.html')

############################################################
@app.route("/routee/appendroute", methods=['GET', 'POST'])
def appendroute():
    PrintFlask(request.files.getlist('route'))
    return make_response(jsonify('Voyage = VoyageJson, Intersects = IntersectJson'), 200)

############################################################
if __name__ == "__main__":
    app.run(debug=True)
<script type="text/javascript">
  function prepformdata(route){
    formdata = new FormData();
    for (var i = 0; i < route.files.length; i++) {
      formdata.append('route', route.files[i]);
    }
    return formdata;
  }
  //////////////////////////////////////////////////////////////
  function appendroutes(formdata, route) {
    uploadfiles = document.getElementById("files");
    formdata = prepformdata(uploadfiles);
    InitConst = {method: "POST", body: formdata};
    url = window.origin + '/routee/appendroute';
    fetch(url, InitConst)
      .then(res => res.json())
      .then(data => {addon = data['Voyage']; addonIntersects = data['Intersects']})
      .then(() => console.log('Route(s) Added'));
  }
</script>
The client code above is very nippy, and I'm expecting to do some equally nippy processing server side, but something is slowing it down. Any ideas why processing might slow down when Flask is used?
I am trying to scrape search results from a website that uses a __doPostBack function. The webpage displays 10 results per search query. To see more results, one has to click a button that triggers a __doPostBack JavaScript call. After some research, I realized that the POST request behaves just like a form, and that one could simply use Scrapy's FormRequest to fill that form. I used the following thread:
Troubles using scrapy with javascript __doPostBack method
to write the following script.
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import FormRequest
from scrapy.http import Request
from scrapy.selector import Selector
from ahram.items import AhramItem
import re

class MySpider(CrawlSpider):
    name = u"el_ahram2"

    def start_requests(self):
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        requests = []
        for i in range(1, 4):  # crawl first 3 pages as a test
            argument = u"'Page$" + str(i+1) + u"'"
            data = {'__EVENTTARGET': u"'GridView1'", '__EVENTARGUMENT': argument}
            currentPage = FormRequest(baseUrl, formdata=data, callback=self.fetch_articles)
            requests.append(currentPage)
        return requests

    def fetch_articles(self, response):
        sel = Selector(response)
        for ref in sel.xpath("//a[contains(@href,'checkpart.aspx?Serial=')]/@href").extract():
            yield Request('http://digital.ahram.org.eg/' + ref, callback=self.parse_items)

    def parse_items(self, response):
        sel = Selector(response)
        the_title = ' '.join(sel.xpath("//title/text()").extract()).replace('\n', '').replace('\r', '').replace('\t', '')
        the_authors = '---'.join(sel.xpath("//*[contains(@id,'editorsdatalst_HyperLink')]//text()").extract())
        the_text = ' '.join(sel.xpath("//span[@id='TextBox2']/text()").extract())
        the_month_year = ' '.join(sel.xpath("string(//span[@id = 'Label1'])").extract())
        the_day = ' '.join(sel.xpath("string(//span[@id = 'Label2'])").extract())
        item = AhramItem()
        item["Authors"] = the_authors
        item["Title"] = the_title
        item["MonthYear"] = the_month_year
        item["Day"] = the_day
        item['Text'] = the_text
        return item
My problem now is that 'fetch_articles' is never called:
2014-05-27 12:19:12+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] DEBUG: Crawled (200) <POST http://digital.ahram.org.eg/sresult.aspx?srch=%D8%A7%D9%82%D8%AA%D8%B5%D8%A7%D8%AF&archid=1> (referer: None)
2014-05-27 12:19:13+0200 [el_ahram2] INFO: Closing spider (finished)
After searching for several days I feel completely stuck. I am a beginner in Python, so perhaps the error is trivial. However, if it is not, this thread could be of use to a number of people. Thank you in advance for your help.
Your code is fine. fetch_articles is running. You can test it by adding a print statement.
However, the website requires you to validate POST requests. In order to validate them, you must include __EVENTVALIDATION and __VIEWSTATE in your request body to prove you are responding to their form. To get these, you first need to make a GET request and extract those fields from the form. If you don't provide them, you get an error page instead, which does not contain any links with "checkpart.aspx?Serial=", so your for loop is never executed.
Here is how I've set up start_requests; fetch_search now does what start_requests used to do.
class MySpider(CrawlSpider):
    name = u"el_ahram2"

    def start_requests(self):
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        SearchPage = Request(baseUrl, callback=self.fetch_search)
        return [SearchPage]

    def fetch_search(self, response):
        sel = Selector(response)
        search_term = u'اقتصاد'
        baseUrl = u'http://digital.ahram.org.eg/sresult.aspx?srch=' + search_term + u'&archid=1'
        viewstate = sel.xpath("//input[@id='__VIEWSTATE']/@value").extract().pop()
        eventvalidation = sel.xpath("//input[@id='__EVENTVALIDATION']/@value").extract().pop()
        for i in range(1, 4):  # crawl first 3 pages as a test
            argument = u"'Page$" + str(i+1) + u"'"
            data = {'__EVENTTARGET': u"'GridView1'", '__EVENTARGUMENT': argument, '__VIEWSTATE': viewstate, '__EVENTVALIDATION': eventvalidation}
            currentPage = FormRequest(baseUrl, formdata=data, callback=self.fetch_articles)
            yield currentPage

    ...

    def fetch_articles(self, response):
        sel = Selector(response)
        print response._get_body()  # you can write to file and do a grep
        for ref in sel.xpath("//a[contains(@href,'checkpart.aspx?Serial=')]/@href").extract():
            yield Request('http://digital.ahram.org.eg/' + ref, callback=self.parse_items)
I could not find the "checkpart.aspx?Serial=" which you are searching for.
This might not solve your issue, but I'm posting it as an answer rather than a comment for the sake of code formatting.
I am trying to write a real-time web app that can respond to message updates in real time, using the Flask framework. I am using code from http://flask.pocoo.org/snippets/116/, but the JavaScript EventSource SSE handler never fires in the browser. From the log I can see that I've published the data successfully, but the webpage (http:xxxx:5000/) does not get updated at all.
import gevent
from gevent.wsgi import WSGIServer
from gevent.queue import Queue
from flask import Flask, Response
import time

# SSE "protocol" is described here: http://mzl.la/UPFyxY
class ServerSentEvent(object):

    def __init__(self, data):
        self.data = data
        self.event = None
        self.id = None
        self.desc_map = {
            self.data: "data",
            self.event: "event",
            self.id: "id"
        }

    def encode(self):
        if not self.data:
            return ""
        lines = ["%s: %s" % (v, k)
                 for k, v in self.desc_map.iteritems() if k]
        return "%s\n\n" % "\n".join(lines)

app = Flask(__name__)
subscriptions = []

# Client code consumes like this.
@app.route("/")
def index():
    debug_template = """
        <!DOCTYPE html>
        <html>
          <head>
          </head>
          <body>
            <h1>Server sent events</h1>
            <div id="event"></div>
            <script type="text/javascript">
              var eventOutputContainer = document.getElementById("event");
              var evtSrc = new EventSource("/subscribe");
              evtSrc.onmessage = function(e) {
                console.log(e.data);
                eventOutputContainer.innerHTML = e.data;
              };
            </script>
          </body>
        </html>
    """
    return(debug_template)

@app.route("/debug")
def debug():
    return "Currently %d subscriptions" % len(subscriptions)

@app.route("/publish")
def publish():
    # Dummy data - pick up from request for real data
    def notify():
        msg = str(time.time())
        for sub in subscriptions[:]:
            sub.put(msg)
    print 'data is ' + str(time.time())
    gevent.spawn(notify)
    return "OK"

@app.route("/subscribe")
def subscribe():
    def gen():
        q = Queue()
        subscriptions.append(q)
        try:
            while True:
                result = q.get()
                ev = ServerSentEvent(str(result))
                print 'str(result) is: ' + str(result)
                yield ev.encode()
        except GeneratorExit:  # Or maybe use flask signals
            subscriptions.remove(q)
    return Response(gen(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.debug = True
    server = WSGIServer(("", 5001), app)
    server.serve_forever()
# Then visit http://localhost:5000 to subscribe
# and send messages by visiting http://localhost:5000/publish
Can you please shed some light? I am testing with Chrome/34.0.1847.116. Thanks.
I'm looking at the following API:
http://wiki.github.com/soundcloud/api/oembed-api
The example they give is
Call:
http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json
Response:
{
"html":"<object height=\"81\" ... ",
"user":"Forss",
"permalink":"http:\/\/soundcloud.com\/forss\/flickermood",
"title":"Flickermood",
"type":"rich",
"provider_url":"http:\/\/soundcloud.com",
"description":"From the Soulhack album...",
"version":1.0,
"user_permalink_url":"http:\/\/soundcloud.com\/forss",
"height":81,
"provider_name":"Soundcloud",
"width":0
}
What do I have to do to get this JSON object from just a URL?
It seems they offer a js option for the format parameter, which will return JSONP. You can retrieve JSONP like so:
function getJSONP(url, success) {
  var ud = '_' + +new Date,
      script = document.createElement('script'),
      head = document.getElementsByTagName('head')[0]
             || document.documentElement;
  window[ud] = function(data) {
    head.removeChild(script);
    success && success(data);
  };
  script.src = url.replace('callback=?', 'callback=' + ud);
  head.appendChild(script);
}

getJSONP('http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=?', function(data){
  console.log(data);
});
A standard HTTP GET request should do it. Then you can use JSON.parse() to turn the response into a JSON object.
function Get(yourUrl){
  var Httpreq = new XMLHttpRequest(); // a new request
  Httpreq.open("GET", yourUrl, false);
  Httpreq.send(null);
  return Httpreq.responseText;
}
then
var json_obj = JSON.parse(Get(yourUrl));
console.log("this is the author name: "+json_obj.author_name);
that's basically it
In modern-day JS, you can get your JSON data by calling ES6's fetch() on your URL and then using ES7's async/await to "unpack" the Response object from the fetch to get the JSON data like so:
const getJSON = async url => {
  const response = await fetch(url);
  if(!response.ok) // check if response worked (no 404 errors etc...)
    throw new Error(response.statusText);

  const data = response.json(); // get JSON from the response
  return data; // returns a promise, which resolves to this data value
}

console.log("Fetching data...");

getJSON("https://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json").then(data => {
  console.log(data);
}).catch(error => {
  console.error(error);
});
The above method can be simplified down to a few lines if you ignore the exception/error handling (usually not recommended as this can lead to unwanted errors):
const getJSON = async url => {
  const response = await fetch(url);
  return response.json(); // get JSON from the response
}

console.log("Fetching data...");

getJSON("https://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json")
  .then(data => console.log(data));
Because the URL isn't on the same domain as your website, you need to use JSONP.
For example: (In jQuery):
$.getJSON(
  'http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=?',
  function(data) { ... }
);
This works by creating a <script> tag like this one:
<script src="http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=js&callback=someFunction" type="text/javascript"></script>
Their server then emits Javascript that calls someFunction with the data to retrieve.
someFunction is an internal callback generated by jQuery that then calls your callback.
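For anyone curious what the server side of that handshake looks like, here is a rough Flask-style sketch of a JSONP endpoint that wraps its JSON in the requested callback. This is purely illustrative and not SoundCloud's actual implementation; the route name and payload fields are made up.

import json
from flask import Flask, Response, request

app = Flask(__name__)

@app.route('/oembed')
def oembed():
    # Illustrative payload only; the real oEmbed response has more fields.
    payload = {'title': 'Flickermood', 'provider_name': 'Soundcloud'}
    callback = request.args.get('callback')
    if callback:
        # JSONP: wrap the JSON in a call to the client-supplied callback and
        # serve it as JavaScript, so a <script> tag can execute it cross-domain.
        return Response('%s(%s);' % (callback, json.dumps(payload)),
                        mimetype='application/javascript')
    return Response(json.dumps(payload), mimetype='application/json')

The <script> tag injected by jQuery then executes that generated JavaScript, which calls someFunction with the payload.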
DickFeynman's answer is a workable solution for any circumstance in which JQuery is not a good fit, or isn't otherwise necessary. As ComFreek notes, this requires setting the CORS headers on the server-side. If it's your service, and you have a handle on the bigger question of security, then that's entirely feasible.
Here's a listing of a Flask service, setting the CORS headers, grabbing data from a database, responding with JSON, and working happily with DickFeynman's approach on the client-side:
#!/usr/bin/env python
from __future__ import unicode_literals
from flask import Flask, Response, jsonify, redirect, request, url_for
from your_model import *
import os

try:
    import simplejson as json
except ImportError:
    import json

try:
    from flask.ext.cors import *
except:
    from flask_cors import *

app = Flask(__name__)

@app.before_request
def before_request():
    try:
        # Provided by an object in your_model
        app.session = SessionManager.connect()
    except:
        print "Database connection failed."

@app.teardown_request
def shutdown_session(exception=None):
    app.session.close()

# A route with a CORS header, to enable your javascript client to access
# JSON created from a database query.
@app.route('/whatever-data/', methods=['GET', 'OPTIONS'])
@cross_origin(headers=['Content-Type'])
def json_data():
    whatever_list = []
    results_json = None
    try:
        # Use SQL Alchemy to select all Whatevers, WHERE size > 0.
        whatevers = app.session.query(Whatever).filter(Whatever.size > 0).all()
        if whatevers and len(whatevers) > 0:
            for whatever in whatevers:
                # Each whatever is able to return a serialized version of itself.
                # Refer to your_model.
                whatever_list.append(whatever.serialize())
        # Convert a list to JSON.
        results_json = json.dumps(whatever_list)
    except SQLAlchemyError as e:
        print 'Error {0}'.format(e)
        exit(0)
    if len(whatevers) < 1 or not results_json:
        exit(0)
    else:
        # Because we used json.dumps(), rather than jsonify(),
        # we need to create a Flask Response object, here.
        return Response(response=str(results_json), mimetype='application/json')

if __name__ == '__main__':
    ##NOTE Not suitable for production. As configured,
    # your Flask service is in debug mode and publicly accessible.
    app.run(debug=True, host='0.0.0.0', port=5001)  # http://localhost:5001/
your_model contains the serialization method for your whatever, as well as the database connection manager (which could stand a little refactoring, but suffices to centralize the creation of database sessions, in bigger systems or Model/View/Control architectures). This happens to use postgreSQL, but could just as easily use any server side data store:
#!/usr/bin/env python
# Filename: your_model.py
import time
import psycopg2
import psycopg2.pool
import psycopg2.extras
from psycopg2.extensions import adapt, register_adapter, AsIs
from sqlalchemy import update
from sqlalchemy.orm import *
from sqlalchemy.exc import *
from sqlalchemy.dialects import postgresql
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, BigInteger, Numeric, VARCHAR, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

class SessionManager(object):

    @staticmethod
    def connect():
        engine = create_engine('postgresql://id:passwd@localhost/mydatabase',
                               echo=True)
        Session = sessionmaker(bind=engine,
                               autoflush=True,
                               expire_on_commit=False,
                               autocommit=False)
        session = Session()
        return session

    @staticmethod
    def declareBase():
        engine = create_engine('postgresql://id:passwd@localhost/mydatabase', echo=True)
        whatever_metadata = MetaData(engine, schema='public')
        Base = declarative_base(metadata=whatever_metadata)
        return Base

Base = SessionManager.declareBase()

class Whatever(Base):
    """Create, supply information about, and manage the state of one or more whatever.
    """
    __tablename__ = 'whatever'
    id = Column(Integer, primary_key=True)
    whatever_digest = Column(VARCHAR, unique=True)
    best_name = Column(VARCHAR, nullable=True)
    whatever_timestamp = Column(BigInteger, default=time.time())
    whatever_raw = Column(Numeric(precision=1000, scale=0), default=0.0)
    whatever_label = Column(postgresql.VARCHAR, nullable=True)
    size = Column(BigInteger, default=0)

    def __init__(self,
                 whatever_digest='',
                 best_name='',
                 whatever_timestamp=0,
                 whatever_raw=0,
                 whatever_label='',
                 size=0):
        self.whatever_digest = whatever_digest
        self.best_name = best_name
        self.whatever_timestamp = whatever_timestamp
        self.whatever_raw = whatever_raw
        self.whatever_label = whatever_label
        self.size = size

    # Serialize one way or another, just handle appropriately in the client.
    def serialize(self):
        return {
            'best_name': self.best_name,
            'whatever_label': self.whatever_label,
            'size': self.size,
        }
In retrospect, I might have serialized the whatever objects as lists rather than Python dicts, which might have simplified their processing in the Flask service, and I might have separated concerns better in the Flask implementation (the database call probably shouldn't be built into the route handler), but you can improve on this once you have a working solution in your own development environment.
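To illustrate that first point, a minimal sketch of the list-based alternative (a hypothetical serialize_as_list() helper, not part of the listing above) could look like this:

def serialize_as_list(whatever):
    # Hypothetical alternative to Whatever.serialize(): positional fields
    # the client indexes into, instead of a keyed dict.
    return [whatever.best_name, whatever.whatever_label, whatever.size]

The client would then read whatever_list_obj[i][0] rather than whatever_list_obj[i].best_name.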
Also, I'm not suggesting people avoid JQuery. But, if JQuery's not in the picture, for one reason or another, this approach seems like a reasonable alternative.
It works, in any case.
Here's my implementation of DickFeynman's approach, in the client:
<script type="text/javascript">
  var addr = "dev.yourserver.yourorg.tld";
  var port = "5001";

  function Get(whateverUrl){
    var Httpreq = new XMLHttpRequest(); // a new request
    Httpreq.open("GET", whateverUrl, false);
    Httpreq.send(null);
    return Httpreq.responseText;
  }

  var whatever_list_obj = JSON.parse(Get("http://" + addr + ":" + port + "/whatever-data/"));
  whatever_qty = whatever_list_obj.length;

  for (var i = 0; i < whatever_qty; i++) {
    console.log(whatever_list_obj[i].best_name);
  }
</script>
I'm not going to list my console output, but I'm looking at a long list of whatever.best_name strings.
More to the point: The whatever_list_obj is available for use in my javascript namespace, for whatever I care to do with it, ...which might include generating graphics with D3.js, mapping with OpenLayers or CesiumJS, or calculating some intermediate values which have no particular need to live in my DOM.
You make a bog standard HTTP GET Request. You get a bog standard HTTP Response with an application/json content type and a JSON document as the body. You then parse this.
Since you have tagged this 'JavaScript' (I assume you mean "from a web page in a browser"), and I assume this is a third party service, you're stuck. You can't fetch data from remote URI in JavaScript unless explicit workarounds (such as JSONP) are put in place.
Oh wait, reading the documentation you linked to - JSONP is available, but you must say 'js' not 'json' and specify a callback: format=js&callback=foo
Then you can just define the callback function:
function foo(myData) {
  // do stuff with myData
}
And then load the data:
var script = document.createElement('script');
script.type = 'text/javascript';
script.src = theUrlForTheApi;
document.body.appendChild(script);