How to plot an array of numbers as an image using JavaScript?

I would like to plot an array of numbers (int/float/binary/...) as an image using JavaScript, but I don't know exactly how to do it.
The system is composed of a CORE part written in C++ and a GUI part written in jQuery, and I have to show results calculated by the CORE on the GUI side. I can pass them in any format, such as binary files or XML files, but I don't know how the GUI could plot an array of numbers and show it as an image to the user. It would also be useful to have a colour scale.
Any suggestions on how to do that? Any available library for that purpose? Every idea is welcome!
Thanks in advance!
Cheers

Have a look at d3.js. This library provides plenty of possible visualizations and is quite easy to customize.

Here is a list of plotting libraries in JS:
http://javascript.open-libraries.com/utilities/chart/20-best-javascript-charting-and-plotting-libraries/
You need to format your data as JSON to get it to work. If the data set is very big, it's better to plot it on the server side and send an image file to the client.
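If you go the client-side route, a plain canvas element is enough to draw the array as an image with a simple colour scale; no library is strictly needed. A minimal sketch, assuming the CORE delivers the numbers as a flat JSON array and the page has a canvas with id "plot" (both assumptions):

// Draw a width*height array of numbers on a canvas as a blue-to-red heatmap.
function plotArray(values, width, height) {
  var canvas = document.getElementById('plot');
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext('2d');
  var img = ctx.createImageData(width, height);

  // Find the range so the values can be normalised to [0, 1] for the colour scale.
  var min = Infinity, max = -Infinity;
  for (var i = 0; i < values.length; i++) {
    if (values[i] < min) min = values[i];
    if (values[i] > max) max = values[i];
  }

  for (var i = 0; i < values.length; i++) {
    var t = (values[i] - min) / (max - min || 1);
    img.data[4 * i]     = Math.round(255 * t);       // red rises with the value
    img.data[4 * i + 1] = 0;                         // no green channel
    img.data[4 * i + 2] = Math.round(255 * (1 - t)); // blue falls with the value
    img.data[4 * i + 3] = 255;                       // fully opaque
  }
  ctx.putImageData(img, 0, 0);
}

Calling plotArray(values, 200, 100) then paints a 200x100 image; a fancier colour scale only changes the three channel lines.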

Related

Best practice for database data rearrangement/transformation?

I have a MySQL database and retrieve data using PHP on a website where I would like to visualize the data in different ways. For this I also need to transform the data (e.g. creating sums, filtering, etc.).
My question is at which step of the data flow this kind of transformation makes the most sense, especially regarding performance and flexibility. I am not a very experienced programmer, but I think these are the options I have:
1.) Prepare views in the database which already provide the desired transformed data.
2.) Use a PHP script that SELECTs the data in the transformed way.
3.) Just have a SELECT * FROM table statement in PHP, load everything into JSON, read it in JS and transform the data into the desired form.
What is the best practice for transforming the data?
It all depends on the architectural design of your application.
At this time SOA (Service-Oriented Architecture) is a widely used approach. If you use it, the logic tends to live in the services. The database is used as a data repository, and the UI handles the final data in a lightweight format, containing only the information it really needs.
So in this case your option number 2 is the most appropriate.
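In that setup the client stays thin. A rough sketch of what the page-side code might look like for option 2, using jQuery for brevity (the endpoint name and field names are made up):

// The server has already done the SUM/GROUP BY work; the page only displays it.
$.getJSON('report.php', function (rows) {
  rows.forEach(function (row) {
    $('#report').append('<tr><td>' + row.category + '</td><td>' + row.total + '</td></tr>');
  });
});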

Classifier to Predict activities taking place

I have multiple datasets here that I took from Kaggle. There are multiple CSV files, and each CSV file is made specifically for sitting, standing, walking, running, etc. The data comes from sensors like accelerometers and gyroscopes. The values in the datasets are per axis: x, y and z.
Sample Data
Here is a sample dataset of jogging. Now I need to build classifiers in my program so that it can detect by itself whether the data is of jogging, sitting, standing, etc. I want to mix all the datasets into a single CSV file, upload it to my webpage, and then have the JavaScript code detect whether a particular row is of sitting, standing, jogging, etc. I don't want any code help; I just need a little explanation or a way to start coding it. How can I start making such a classifier? I know it is kind of a broad question, but I think I have explained myself in the best way possible. Once my program has detected the activity for every row, it will count all the activities separately and then show them in a table on the webpage.
In order to answer your question properly, it would be very helpful to know your level of understanding and experience with machine learning.
If you are a beginner, I would suggest trying to run and understand a couple of tutorials, which can easily be found on the web.
If you need an idea of the "standard" approach to machine learning development, I will try to give you a general idea of the process.
You can summarize the process in these main steps:
Data pre-processing -> Data splitting -> Feature selection -> Model training -> Validation -> Deployment
Data pre-processing is meant to clean and format the data: removing NA values, deciding how to handle categorical variables, outlier analysis, and so on. This is a complex step that depends on the application. In your case I would start by checking that the data in the different datasets are homogeneous, i.e. that the features have the same meaning across the CSV files and that corresponding features follow the same distribution. While the meaning of each feature should be explained in the description of your CSV, the distributions can easily be checked by plotting box-plots for each feature and CSV. If the distributions of the same feature across different CSV files don't overlap, you should investigate the issue further.
An important step in the design of a good model is the splitting of the data. You should split your data into a training and a validation set (training/validation/test for a more comprehensive approach). This allows you to train your model on the training set and test it on the validation set, giving you an unbiased estimate of your model's performance. I suggest becoming familiar with concepts such as cross-validation, stratified cross-validation, nested cross-validation for hyper-parameter tuning, overfitting and bias. The validation of the model will give you an idea of the performance to expect on unseen data. If you are considering more than one model, you can use the validation results to choose the "best" one; I suggest a comparison using confidence intervals or, if possible, a significance test (e.g. t-test, ANOVA). Before deployment, the model is trained on all the available data.
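As a toy illustration of the plain (non-stratified) splitting step, here is a sketch in JavaScript, since that is where your final code will run; the 80/20 ratio is just a common default:

// Shuffle a copy of the rows (Fisher-Yates), then hold out the tail for validation.
function splitData(rows, trainFraction) {
  var shuffled = rows.slice();
  for (var i = shuffled.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = shuffled[i];
    shuffled[i] = shuffled[j];
    shuffled[j] = tmp;
  }
  var cut = Math.floor(shuffled.length * trainFraction);
  return { train: shuffled.slice(0, cut), validation: shuffled.slice(cut) };
}

var split = splitData(allRows, 0.8); // allRows = your parsed CSV rows; 80% training, 20% validation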
The choice of the model depends on the data you are using: the number of samples, the number of features, the type of variables (numerical, categorical), and so on.
I'm not an expert in JavaScript, but I believe (just a feeling) that Python and R are more common choices for developing machine learning applications. Both have libraries specifically developed for the task, and you can find a lot of material and tutorials around.
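That said, since you specifically want the detection to run in your webpage, a simple baseline such as k-nearest neighbours is easy to write in plain JavaScript. A minimal sketch; it assumes each row is an object with numeric x, y, z fields and, for training rows, an activity label, and in practice you would classify features computed over time windows rather than single samples:

// Classify one unlabeled row by majority vote among its k nearest training rows.
function knnClassify(trainRows, row, k) {
  var neighbours = trainRows
    .map(function (t) {
      var dx = t.x - row.x, dy = t.y - row.y, dz = t.z - row.z;
      return { label: t.activity, dist: dx * dx + dy * dy + dz * dz };
    })
    .sort(function (a, b) { return a.dist - b.dist; })
    .slice(0, k);

  // Count the votes per activity label.
  var votes = {};
  neighbours.forEach(function (n) {
    votes[n.label] = (votes[n.label] || 0) + 1;
  });

  // Return the label with the most votes.
  return Object.keys(votes).sort(function (a, b) { return votes[b] - votes[a]; })[0];
}

Running knnClassify over every unlabeled row and tallying the returned labels would then give you the per-activity counts for your table.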
With a bit more context I think I could be more specific.
I hope it helps.

Latex/Asymptote to Image for Website?

I'm working on a database of math problems right now. All the problems are formatted in LaTeX, but there are Asymptote images as well.
It doesn't look like MathJax supports Asymptote, so what would be the best way to get the images for each math problem? How does Art of Problem Solving's TeXeR handle Asymptote?
Could we send the code and render the image client-side, or should we extract the asy code for each problem, generate an image, save these images to the database, and link each image to its respective problem?
I'm very late here, but I'll suggest something anyway.
What I would do is simply render all the images server-side. You can have Asymptote re-render images whenever the original file changes.
That way you save a lot of bandwidth and client-side processing time. You may have more data to store, but this way you also have a bit more control over the pics.
Asymptote lets you define the output name, so your server can automatically create the images based on the original problem and name them accordingly. That way they are already 'linked'.
This is probably the best answer I can give without seeing your setup.
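For what it's worth, here is a rough sketch of that server-side step in Node; the directory layout and problem ids are invented, while asy -f png -o name is Asymptote's standard way of choosing the output format and name:

// Render the extracted .asy code of one problem to a PNG named after the problem id.
var fs = require('fs');
var execFile = require('child_process').execFile;

function renderProblem(problemId, asyCode, done) {
  var src = 'asy/' + problemId + '.asy'; // hypothetical layout
  fs.writeFile(src, asyCode, function (err) {
    if (err) return done(err);
    // The PNG lands in images/<problemId>.png, so image and problem are 'linked' by name.
    execFile('asy', ['-f', 'png', '-o', 'images/' + problemId, src], done);
  });
}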

Using math in browser using XML data

I'm trying to pull XML or CSV data into an HTML file, then use math to add up the values and show the result on the page (I'm basically trying to display invoices in a web browser).
My skill set is HTML/CSS, and I understand a little JavaScript.
I've managed to pull XML data into HTML using an HTTP request and style that information using XSLT.
What I'm really asking is: what is the best solution for my needs? Is it the above method plus XQuery to add up the values, or should I learn a bit of AJAX and JSON and calculate the values with JavaScript?
You really should learn AJAX in order to fetch and manipulate data instead of fetching presentation parts. That's the approach most people follow, as it allows more responsive interaction with the user and a cleaner architecture when the interactions get complex.
But that doesn't mean you must abandon XML: originally AJAX was built on XML (the X in AJAX), not JSON.
Personally I prefer JSON, and I think it will be easier to manage in the long term, but if the server side is hard to change, you can fetch the XML (look for example at jQuery's ajax function), build JavaScript objects from it, and then update your page using those data. If you later decide to use JSON instead of XML, you'll just have to change the "parsing" part of the client code.
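For instance, if the server can expose an invoice as JSON, the "calculate with JavaScript" route is only a few lines; the URL and field names here are made up:

// Fetch the invoice lines as JSON and total them in the browser.
$.getJSON('invoice.json', function (invoice) {
  var total = invoice.lines.reduce(function (sum, line) {
    return sum + line.quantity * line.unitPrice;
  }, 0);
  $('#total').text(total.toFixed(2));
});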
"I'm trying to make pull XML or Csv data into a HTML file then I want to use math to add up the values and show the result on the page"
You can do this with either XSLT or JavaScript. However, with XSLT things can become pretty complicated, depending on which version you're using. XSLT 1.0 has a pretty limited set of functions for aggregating results. And in any version of XSLT you can't reassign variables, so you'll have to solve many of these things with recursion. In my opinion, not really a comfortable method.
Regardless of the choice between XSLT and JavaScript, I would also question an architecture that puts this kind of logic in the presentation layer in the browser. I think it would be better if the server side performed all the required calculations and the browser's tasks were limited to styling the output.

Easiest way to edit a huge geoJSON?

I'm sitting here with a huge GeoJSON file that I got from an OpenStreetMap shapefile. However, most of the polygons are unnecessary. These could, in theory, easily be singled out based on certain properties.
But how do I query the GeoJSON file to remove certain elements (features)? Or would it be easier to save the shapefile in another format (I'm working in QGIS)?
Link to sample of json-file: http://dl.dropbox.com/u/15955488/hki_test_sample.json (240 kB)
When you say "query the geoJSON", are you talking about having the source you get the GeoJSON from give you a subset of the data? There is no widely implemented standard for "querying" JSON like this, but each site you retrieve from may have its own parameters for reducing the size of the data you get.
If you're talking about paring down the data in client-side code, simply looping through the structure and removing properties (with delete) and array items is what you'd have to do.
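For a GeoJSON FeatureCollection that boils down to filtering the features array on the properties you care about; the property name and value here are invented:

// Keep only the features that match the criterion; drop everything else.
geojson.features = geojson.features.filter(function (feature) {
  return feature.properties.landuse === 'residential'; // hypothetical criterion
});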
Shapefile beats GeoJSON for large (not mega) data. It supports random access to features. To get at the GeoJSON features in a collection you have to read and deserialize the entire file.
Depending on how you want to edit it and what software is available, you have a few options. If you have access to Safe FME, this is by far the best geographic feature manipulation software and will give you tons of options (it can read, write and convert between just about any geographic format). If you're just looking for a text editor that can handle the volume of data, I would look at Notepad++; it can hold a lot of text, and you can do find/replace using regular expressions. Safe FME can be a little pricey, but you might be able to get a trial.
As Jacob says, just iterate and remove the elements you don't want. I like http://documentcloud.github.com/underscore/#reject for convenience.
If you are going to permanently remove fields, just convert it to a shapefile, remove the fields you don't want, and re-export it as GeoJSON.
I realize this question is old, but if anyone comes across this now, I'd recommend TopoJSON.
Convert it to TopoJSON.
By default TopoJSON removes all attributes, but you can flag those you'd like to keep like this:
topojson -o output.topojson -p fieldToKeep,anotherFieldToKeep input.geojson
More info in the TopoJSON command-line reference.
