Transfer array from Python to JavaScript

I have a test.txt file that has two columns of data (x's and y's), eg:
#test.txt
1 23
2 234
4 52
43 5
3 35
And a python program that reads in these values and stores them in x and y as so:
#test.py
# Read the file.
f = open('test.txt', 'r')
# read the whole file into a single variable, which is a list of every row of the file.
lines = f.readlines()
f.close()
# initialize some variable to be lists:
x = []
y = []
# scan the rows of the file stored in lines, and put the values into some variables:
for line in lines:
    p = line.split()
    x.append(float(p[0]))
    y.append(float(p[1]))
I want to take these values stored in x and y and transfer them to two similar arrays in a javascript program to display them as a graph. How can I transfer between python and javascript?

Your question is slightly vague. There are two possible ways your question can be interpreted, here are the corresponding answers:
Your Python code needs to transfer the data to some JavaScript running in a browser or Node.js application. To do this, the Python half of your application would need to store the data in a database and expose it via some sort of API that the JavaScript half could consume. To go a little fancier, you could set up a connection between the two halves (e.g. with socket.io) and have the Python side send the arrays over the connection as they're created.
This is what I believe you're trying to do based on the code you've posted: Preprocess some data in Python, and then pass it over to some other javascript piece of the puzzle where there's no real-time aspect to it.
In this case, you could simply write the arrays to JSON, and parse that in Javascript. To write to a file, you could do:
import json

data = {'x': x, 'y': y}

# To write to a file:
with open("output.json", "w") as f:
    json.dump(data, f)

# To get the JSON string (which you could then hardcode into the JS):
json_string = json.dumps(data)
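For completeness, here is a sketch of the full round trip; the sample x/y values below stand in for the lists parsed from test.txt:

```python
import json

# Sample data standing in for the x/y lists parsed from test.txt.
x = [1.0, 2.0, 4.0, 43.0, 3.0]
y = [23.0, 234.0, 52.0, 5.0, 35.0]

# Serialize as above.
json_string = json.dumps({'x': x, 'y': y})

# JSON.parse in JavaScript recovers the same structure;
# json.loads is the Python equivalent, used here as a sanity check.
recovered = json.loads(json_string)
print(recovered['x'][0], recovered['y'][0])  # → 1.0 23.0
```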

It's unclear why you want to transfer this, but you can write it out into a js file simply by formatting the list into var someList = [[x1,y1], [x2,y2]].
It would be easier to build the string up within the for line in lines loop, but in case you do have an actual need to maintain separate x and y lists:
zippedList = list(list(zipped) for zipped in zip(xList,yList))
print('var someList = {};'.format(zippedList))
which gives you
var someList = [[1, 23], [2, 234], [4, 52], [43, 5], [3, 35]];
Write this out to a js file as needed, or append it to an existing html file as:
with open('index.html', 'a') as f:
    f.write('<script> var transferredList={}; </script>'.format(zippedList))
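A runnable sketch of this answer's approach, using the sample values from the question:

```python
# Sample x/y values from the question.
xList = [1, 2, 4, 43, 3]
yList = [23, 234, 52, 5, 35]

# Pair up the x and y values into a nested list.
zippedList = [list(pair) for pair in zip(xList, yList)]

# Format it as a JavaScript variable declaration.
js_line = 'var someList = {};'.format(zippedList)
print(js_line)  # → var someList = [[1, 23], [2, 234], [4, 52], [43, 5], [3, 35]];
```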

Because you mention "display" together with JavaScript, I somewhat assume that your goal is to have JavaScript in a web browser displaying the data that your python script reads from the file in the backend. Transferring from backend python to frontend JavaScript is a piece of cake, because both languages "speak" JSON.
In the backend, you will do something like
import json
response = json.dumps(response_object)
and in the frontend, you can right away use
var responseObject = JSON.parse(responseString);
It is also quite clear how the frontend will call the backend, namely using ajax, usually wrapped in a handy library like jQuery.
The field opens up when it comes to making your Python script run in a web server. A good starting point for this would be the Python How-To for using Python in the web. You will have to consult your web hosting service, too, because most hosters have clear guidance on whether they admit CGI or, for example, FastCGI. Once this is clear, maybe you want to take a look at the Flask micro-framework or something similar, which offers many services you will need right out of the box.
I have mentioned all the buzzwords here to enable your own investigation ;)

I didn't put in the arrays stored in the variables x and y, but you can do it if you know at least a bit of JavaScript.
var request = new XMLHttpRequest(); // request object
request.open("GET", "test.txt", true);
/*
 * Open the request;
 * open() takes two required arguments, the third (async) is optional.
 * For async requests, handle the response in onreadystatechange (or onload).
 */
request.onreadystatechange = function() {
    if (this.readyState === 4 && this.status === 200) { // request finished and status is OK
        var lines = this.responseText.split("\n");
        alert(lines[0]); // will alert the first line
    }
    /*
     * this.status >= 500 indicates server issues or network problems;
     * this.status === 404 means file not found.
     */
};
request.send();

Related

Receiving high-volume data in the client browser using jQuery / JavaScript

I am in the process of making a WordPress based application where a student can take the examination on his web-browser. The questions will be randomly selected and served from the question bank stored in a WordPress CMS.
In this regard following is important to share:
-each examination can have as many as 100 multiple choice questions.
-Each question can have images, each choice can have associated images.
-since the examination is time-bound, I cannot send a request to the server every time the student completes a question.
My query is :
How do I send the questions from the server:
-should I send the whole question set in one go and then have the JavaScript parse all the questions and choices at the client side
or
-should the client repeatedly request the questions from the server in the background, in chunks of, say, 5 questions each. If this is the better approach, I am not sure how to implement it. Any pointers, please?
Or is there a third approach which I am not aware of.
Please advise with any comments or solutions for the problem.
Thanks in advance.
Depending on the user's selection, send appropriate JSON data to the client and render it dynamically. But if you want to use XML, then let's talk about it:
I should mention that this comparison is really from the perspective of using them in a browser with JavaScript. It's not the way either data format has to be used, and there are plenty of good parsers which will change the details to make what I'm saying not quite valid.
JSON is both more compact and (in my view) more readable - in transmission it can be "faster" simply because less data is transferred.
In parsing, it depends on your parser. A parser turning the code (be it JSON or XML) into a data structure (like a map) may benefit from the strict nature of XML (XML Schemas disambiguate the data structure nicely) - however in JSON the type of an item (String/Number/Nested JSON Object) can be inferred syntactically, e.g:
myJSON = {"age" : 12,
"name" : "Danielle"}
The parser doesn't need any extra information to realise that 12 represents a number (and that Danielle is a string like any other). So in JavaScript we can do:
anObject = JSON.parse(myJSON);
anObject.age === 12 // True
anObject.name == "Danielle" // True
anObject.age === "12" // False
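The same type inference can be sanity-checked in Python, whose json module plays the role of JSON.parse here:

```python
import json

myJSON = '{"age": 12, "name": "Danielle"}'
anObject = json.loads(myJSON)

# The types come straight from the JSON syntax.
print(anObject["age"] == 12)           # → True (a number)
print(anObject["name"] == "Danielle")  # → True (a string)
print(anObject["age"] == "12")         # → False (not a string)
```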
In XML we'd have to do something like the following:
<person>
  <age>12</age>
  <name>Danielle</name>
</person>
(as an aside, this illustrates the point that XML is rather more verbose; a concern for data transmission). To use this data, we'd run it through a parser, then we'd have to call something like:
myObject = parseThatXMLPlease();
thePeople = myObject.getChildren("person");
thePerson = thePeople[0];
thePerson.getChildren("name")[0].value() == "Danielle" // True
thePerson.getChildren("age")[0].value() == "12" // True
Actually, a good parser might well type the age for you (on the other hand, you might well not want it to). What's going on when we access this data is - instead of doing an attribute lookup like in the JSON example above - we're doing a map lookup on the key name. It might be more intuitive to form the XML like this:
<person name="Danielle" age="12" />
But we'd still have to do map lookups to access our data:
myObject = parseThatXMLPlease();
age = myObject.getChildren("person")[0].getAttr("age");
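As a concrete illustration of those map lookups, here is the attribute-style XML parsed with Python's standard xml.etree module; note that, unlike JSON, every value comes back as a string:

```python
import xml.etree.ElementTree as ET

xml_text = '<person name="Danielle" age="12" />'
person = ET.fromstring(xml_text)

# Attribute lookups return strings; any typing is up to the caller.
age = person.get("age")
print(age == "12")     # → True
print(int(age) == 12)  # → True, but only after an explicit conversion
```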

pyodbc result set has data type embedded

I am querying Teradata using pyodbc and flask with the intention of charting the data using d3.
The result set has two columns, one is a decimal and the other an integer. When I pass the results to my html page and log the output I am getting something along these lines:
[(Decimal('-16.200000000'), 5), (Decimal('-23.100000000'), 12), (Decimal('500.300000000'), 5)].
The embedded data type information is making it difficult to do anything with the result set. How can I get output to look like this instead?
[[-16.200000000, 5], [-23.100000000, 12], [500.300000000, 5]]
In other words I just want an array of arrays.
Steps I am following:
create a connection
create a cursor
execute the SQL
use fetchall() to store the rows in a variable
pass the variable to my html page using render_template
in javascript set variable equal to data passed in
var data={{dataset}};
console.log(data);
I have seen many flask examples where they take the result set and iterate through it to print the lines in the html but I want to use the resulting dataset as an input to my d3 code. Why is it displaying the data type? And why would it not also display the data type of the integer column?
I am guessing the issue is related to the row construct but I have tried creating a list instead and can't get rid of the data type information. I then get all sorts of errors due to the ampersands.
I think the better approach is the one provided in this answer, as it is best to dump the query data into a JSON object so you can easily retrieve it from the client side using JS, without needing to parse it with a regex or some other mechanism.
STEPS:
1 - Subclass the json.JSONEncoder:
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o)  # Type conversion happens here
        return super(DecimalEncoder, self).default(o)
2 - Pass the params to the template as a json object:
data = json.dumps({'pi': decimal.Decimal('3.14')}, cls=DecimalEncoder)
return render_template('mytemplate.html', dataset=data)
3 - Read back the result of the query in your JS:
var data = {{dataset}};
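Putting those steps together in one runnable sketch (the sample rows below stand in for the real pyodbc result set):

```python
import decimal
import json

class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o)  # convert Decimal to a plain JSON number
        return super(DecimalEncoder, self).default(o)

# Rows shaped like the pyodbc output in the question.
rows = [(decimal.Decimal('-16.2'), 5), (decimal.Decimal('-23.1'), 12)]

# An array of arrays, ready for d3 on the client side.
data = json.dumps([list(row) for row in rows], cls=DecimalEncoder)
print(data)  # → [[-16.2, 5], [-23.1, 12]]
```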
import re

processed = []
for piece in results:
    # repr() gives e.g. "Decimal('-16.2')"; the regex keeps only the number.
    processed.append([float(re.sub(r".*?(-?\d+\.\d+).*", r"\1", repr(piece[0]))), piece[1]])
Something along those lines. It's displaying that data because it's of type Decimal (the Teradata kind). I'm guessing repr() will give you the same results you're seeing in the log... then just regex it to what you want.
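A quick check of that regex idea with stand-in Decimal values (adding a float() call so the result is numeric, as the question asks):

```python
import decimal
import re

# Stand-ins for the pyodbc rows from the question.
results = [(decimal.Decimal('-16.200000000'), 5), (decimal.Decimal('500.300000000'), 5)]

processed = []
for piece in results:
    # repr() gives e.g. "Decimal('-16.200000000')"; the regex keeps only the number.
    number = re.sub(r".*?(-?\d+\.\d+).*", r"\1", repr(piece[0]))
    processed.append([float(number), piece[1]])

print(processed)  # → [[-16.2, 5], [500.3, 5]]
```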

Improving Twitter's typeahead.js performance with remote data using Django

I have a database with roughly 1.2M names. I'm using Twitter's typeahead.js to remotely fetch the autocomplete suggestions when you type someone's name. In my local environment this takes roughly 1-2 seconds for the results to appear after you stop typing (the autocomplete doesn't appear while you are typing), and 2-5+ seconds on the deployed app on Heroku (using only 1 dyno).
I'm wondering if the reason why it only shows the suggestions after you stop typing (and a few seconds delay) is because my code isn't as optimized?
The script on the page:
<script type="text/javascript">
$(document).ready(function() {
    $("#navPersonSearch").typeahead({
        name: 'people',
        remote: 'name_autocomplete/?q=%QUERY'
    })
    .keydown(function(e) {
        if (e.keyCode === 13) {
            $("form").trigger('submit');
        }
    });
});
</script>
The keydown snippet is because without it my form doesn't submit for some reason when pushing enter.
My Django view:
def name_autocomplete(request):
    query = request.GET.get('q', '')
    if len(query) > 0:
        results = Person.objects.filter(short__istartswith=query)
        result_list = []
        for item in results:
            result_list.append(item.short)
    else:
        result_list = []
    response_text = json.dumps(result_list, separators=(',', ':'))
    return HttpResponse(response_text, content_type="application/json")
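As an aside, the separators argument above only strips whitespace from the default output; a quick look at the difference (with hypothetical autocomplete results):

```python
import json

result_list = ["Alice", "Bob"]  # hypothetical autocomplete results

default = json.dumps(result_list)
compact = json.dumps(result_list, separators=(',', ':'))
print(default)  # → ["Alice", "Bob"]
print(compact)  # → ["Alice","Bob"]
```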
The short field in my Person model is also indexed. Is there a way to improve the performance of my typeahead?
I don't think this is directly related Django, but I may be wrong. I can offer some generic advice for this kind of situations:
(My money is on #4 or #5 below).
1) What is an average "ping" from your machine to Heroku? If it's far, that's a little extra overhead. Not much, though; certainly not much when compared to the multi-second delays you are referring to. The penalty will be larger with https, mind you.
2) Check the values of rateLimitFn and rateLimitWait in your remote dataset. Are they the defaults?
3) In all likelihood, the problem is database/dataset related. The first thing to check is how long it takes you to establish a connection to the database (do you use a connection pool?).
4) Second thing: how long does it take to run the query? My bet is on this point or the next. Add debug prints, or use New Relic (even the free plan is OK). Have a look at the generated query, have your DB "explain" the execution plan for it, and make sure it uses the index.
5) Third thing: are the results large? If, for example, you specify "J" as the query, I imagine there will be lots of answers. Just getting them and streaming them to the client will take time. In such cases:
5.1) Specify a minLength for your dataset. Make it at least 3, if not 4.
5.2) Limit the result set that your DB query returns. Make it return no more than 10, say.
6) I am no Django expert, but make sure the way you use your model in Django doesn't make it load the entire table into memory first. Just sayin'.
HTH.
results = Person.objects.filter(short__istartswith=query)
result_list = []
for item in results:
result_list.append(item.short)
Probably not the only cause of your slowness, but this is horrible from a performance point of view: never loop over a Django queryset just to collect a single field. To assemble a list from a Django queryset you should always use values_list. In this specific case:
results = Person.objects.filter(short__istartswith=query)
result_list = results.values_list('short', flat=True)
This way you get the single field you need straight from the db, instead of fetching the whole table row, creating a Person instance from it, and finally reading the single attribute from it.
Nitzan covered a lot of the main points that would improve performance, but unlike him I think this might be directly related to Django (or at least, the server side).
A quick way to test this would be to update your name_autocomplete method to simply return 10 random generated strings in the format that Typeahead expects. (The reason we want them random is so that Typeahead's caching doesn't skew any results).
What I suspect you will see is that Typeahead is now running pretty quick and you should start seeing results appear as soon as your minLength of string has been typed.
If that is the case then we will need to look into what could be slowing the query down; my Python skills are non-existent, so I can't help you there, sorry!
If that isn't the case then I would maybe consider doing some logging of when $('#navPersonSearch') calls typeahead:initialized and typeahead:opened to see if they bring up anything odd.
You can use django haystack, and your server side code would be roughly like:
def autocomplete(request):
    sqs = SearchQuerySet().filter(content_auto=request.GET.get('q', ''))[:5]  # or however many names you need
    suggestions = [result.first_name for result in sqs]
    # you have to configure typeahead to process the returned data; this is a simple example
    data = json.dumps({'q': suggestions})
    return HttpResponse(data, content_type='application/json')

Inserting data via mongos and javascript

I want to execute a JavaScript file via mongos to insert data into my sharded setup. In addition, I want to insert a dynamic variable and the NULL value.
I would log in (manually) to the shell with:
mongo hostip:port/admin my_script.js
My js looks like:
var amount = 1000000;
var x = 1;
var doc = '';
for (i = 0; i < amount; i++) {
    doc = { a: '1', b: '2', c: 'text', d: 'x', e: 'NULL' };
    db.mycol.insert(doc);
    x = x + 1;
}
(Rather than "x" I could just use "i".)
Does "d" write the value of "x" or just the letter "x"?
Does "e" write the text "NULL" or the... let's say, database NULL?
Is the way I do that procedure correctly? (Concerning how I connect to mongos / the sharding set)
best regards
EDIT:
And very important - how can I figure out the time the MongoDB sharding set needs to store all the data? And to balance it?
Edit 2nd:
Hi Ross,
I have a sharding set that consists of two shards (two replicasets). At the moment I'm testing and therefore I use the loop-counter as the shard key.
Is there a way to check the time within the javascript?
Update:
So measuring the time that is needed for storing the data is equivalent to the time the javascript is executed? (Or the time the mongo shell isn't accessible because of executing)
Is that assumption acceptable for measuring the query response time?
(where do I have to store the java script file?)
You don't need to keep multiple counters, as you are already incrementing i on each iteration of the for loop. Since you want the values and not strings, use i for the value of d and null instead of the string "NULL". Here's the cleaned-up loop:
var amount = 1000000;
for (i = 1; i < amount + 1; i++) {
    doc = { a: '1', b: '2', c: 'text', d: i, e: null };
    db.mycol.insert(doc);
}
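The distinction this answer draws (the value of i versus the letter "x", and a real null versus the string "NULL") can be checked with plain JSON; in Python, null maps to None:

```python
import json

i = 5
# Quoted, as in the original script: the literal letter / word gets stored.
as_strings = {"d": "x", "e": "NULL"}
# Unquoted, as in the cleaned-up loop: the variable's value and a real null.
as_values = {"d": i, "e": None}

print(json.dumps(as_strings))  # → {"d": "x", "e": "NULL"}
print(json.dumps(as_values))   # → {"d": 5, "e": null}
```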
Regarding how long it takes to store / balance your data - that depends on a few factors.
Firstly, what is your shard key? Is it a random value or is it an increasing value (like a timestamp). A random pattern for shard keys help ensure an even distribution of writes and if you know the ranges of the shard key, you could pre-split the shard to try and ensure that it stays balanced when loading data. If the shard key is increasing like a timestamp then most likely one shard will become hot and it will always be at the top end of the range and will have to split chunks and migrate the data to the other shards.
At MongoDB UK there were a couple of good presentations about sharding: Overview of sharding and Sharding best practices.
Update:
Regarding how long it will take for the shards to become balanced - this depends on the load on your machines. Balancing is a lightweight process, so it should be considered a background operation. It's important to note that even with a sharded system, as soon as the data is written via mongos it's accessible for querying. So if a shard becomes imbalanced during a data load, the data is still accessible - it may take time to rebalance the shard, depending on the load of the shard and the additions of new data, which mean chunks need to be split before migrating.
Update2
The inserts to mongos are synchronous, so the time it takes to run the script is the time it took to apply the inserts. There are other options for write durability using getLastError - essentially, how long you block while the write is applied. The shell calls getLastError() transparently, but the default for your language of choice is to be asynchronous and not wait for a server response.
Where to store the JavaScript file? Well, that's up to you - it's your application code. Most users will write an application in their preferred language and use the driver to call MongoDB.

javascript: array of object for simple localization

I need to implement a simple way to handle localization about weekdays' names, and I came up with the following structure:
var weekdaysLegend=new Array(
{'it-it':'Lunedì', 'en-us':'Monday'},
{'it-it':'Martedì', 'en-us':'Tuesday'},
{'it-it':'Mercoledì', 'en-us':'Wednesday'},
{'it-it':'Giovedì', 'en-us':'Thursday'},
{'it-it':'Venerdì', 'en-us':'Friday'},
{'it-it':'Sabato', 'en-us':'Saturday'},
{'it-it':'Domenica', 'en-us':'Sunday'}
);
I know I could implement something like an associative array (given the fact that I know that javascript does not provide associative arrays but objects with similar structure), but i need to iterate through the array using numeric indexes instead of labels.
So, I would like to handle this in a for cycle with particular values (like j-1 or indexes like that).
Is my structure correct? Given a variable "lang" with a value of either "it-it" or "en-us", I tried to print weekdaysLegend[j-1][lang] (or weekdaysLegend[j-1].lang - I think I tried everything!) but the result is [object Object]. Obviously I'm missing something.
Any idea?
The structure looks fine. You should be able to access values by:
weekdaysLegend[0]["en-us"]; // returns Monday
Of course this will also work for values in variables such as:
weekdaysLegend[i][lang];
for (var i = 0; i < weekdaysLegend.length; i++) {
    alert(weekdaysLegend[i]["en-us"]);
}
This will alert the days of the week.
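The structure translates directly to a list of dicts in Python, which makes the two-step indexing easy to sanity-check (trimmed to two entries here):

```python
weekdaysLegend = [
    {'it-it': 'Lunedì', 'en-us': 'Monday'},
    {'it-it': 'Martedì', 'en-us': 'Tuesday'},
]

lang = 'en-us'
j = 1
# First index into the list, then look up the language key.
print(weekdaysLegend[j - 1][lang])  # → Monday
```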
Sounds like you're doing everything correctly and the structure works for me as well.
Just a small note (I see the answer is already marked), as I am currently designing a large application where I want to put locales into a JavaScript array.
Assumption: 1000 words x 4 languages, each entry carrying its 'xx-xx' key plus the word itself...
That's 1000 rows per language, plus the same 7 characters repeated per entry just for the language key = wasted bandwidth...
And the client/browser will have to PARSE THEM ALL before it can do any lookup in the arrays at all.
here is my approach:
Why not generate the JavaScript for one language at a time? If the user selects another language, just respond with (send) the right JavaScript file for the browser to include.
Either store a separate JavaScript file with a large array for each language, OR use the language as a parameter to the server-side script.
If the language file changes a lot, or you need to minimize it per user/module, that's quite achievable with this approach, as you can just add an extra parameter for which part/module to generate, or a timestamp, so the browser's cache of the JavaScript file will work until changes occur.
If the dynamic approach is too performance-heavy for the web server, then publish/generate the files every time a change is made or a new locale is added - all you'll need is the "language linker" check at the top of the page to decide which language file to serve to the browser.
Conclusion
This approach will remove the overhead of a LOT of repeated language IDs if the locales list grows large.
You have to access an index from the array, and then a value by specifying a key from the object.
This works just fine for me: http://jsfiddle.net/98Sda/.
var day = 2;
var lang = 'en-us';
var weekdaysLegend = [
{'it-it':'Lunedì', 'en-us':'Monday'},
{'it-it':'Martedì', 'en-us':'Tuesday'},
{'it-it':'Mercoledì', 'en-us':'Wednesday'},
{'it-it':'Giovedì', 'en-us':'Thursday'},
{'it-it':'Venerdì', 'en-us':'Friday'},
{'it-it':'Sabato', 'en-us':'Saturday'},
{'it-it':'Domenica', 'en-us':'Sunday'}
];
alert(weekdaysLegend[day][lang]);
