Converting CSV to JSON and then parsing it in JavaScript

I have a Python script which converts a CSV file to JSON.
import csv
import json

# Convert a CSV file to JSON.
# Takes the two file paths as arguments.
def make_json(csvFilePath, jsonFilePath):
    # Create a dictionary to hold the rows
    data = {}

    # Open a CSV reader called DictReader
    with open(csvFilePath, encoding='utf-8') as csvf:
        csvReader = csv.DictReader(csvf)

        # Convert each row into a dictionary and add it to data,
        # using the 'issue' column as the primary key
        for row in csvReader:
            key = row['issue']
            data[key] = row

    # Open a JSON writer, and use json.dumps() to dump data
    with open(jsonFilePath, 'w', encoding='utf-8') as jsonf:
        jsonf.write(json.dumps(data, indent=4))

# Driver code: set the two file paths according to your system
csvFilePath = r'Names.csv'
jsonFilePath = r'Names.json'

# Call the make_json function
make_json(csvFilePath, jsonFilePath)
My CSV file looks like this:
issue, summary, desc
A1, summ1, desc1
A2, summ2, desc2
Once the script is run, I get the following JSON file:
{
    "A1": {
        "issue": "A1",
        " summary": " summ1",
        " desc": " desc1"
    },
    "A2": {
        "issue": "A2",
        " summary": " summ2",
        " desc": " desc2 "
    }
}
Now, in my JavaScript application, I want to read this JSON file and iterate over it. Sample code in my React application is:
import myData from <jsonfile>
console.log(myData['A1'].summary)
but for this to work I need to know the values A1, A2, etc.
I am unable to work out how to iterate over this. Can someone guide me on what sample JavaScript code should look like to work with this JSON?
The operations I want to perform are: extract all issue fields, extract all summary fields, and so on. Basically, work with this JSON result as if it were an array.

for (const k in myData) {
    console.log(`${myData[k].issue}: ${myData[k][" summary"]}`);
}
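To work with the object as if it were an array, Object.keys(), Object.values(), and Object.entries() are the standard tools; you don't need to know A1, A2, etc. in advance. A minimal sketch, assuming the myData import from the question (note the leading space in " summary", which the next answer explains):

const rows = Object.values(myData);                  // array of row objects
const issues = rows.map(row => row.issue);           // all issue fields
const summaries = rows.map(row => row[" summary"]);  // all summary fields
console.log(issues, summaries);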

You have spaces in your JSON keys:
you have
" summary" instead of "summary", because the CSV header row contains a space after each comma.
Fix that, and it will work. On the Python side, passing skipinitialspace=True to csv.DictReader strips the spaces that follow each delimiter.
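If regenerating the file isn't an option, you can also normalize the keys on the JavaScript side. A minimal sketch (cleaned is a hypothetical name; myData is the import from the question):

const cleaned = {};
for (const [key, row] of Object.entries(myData)) {
    const fixedRow = {};
    for (const [col, val] of Object.entries(row)) {
        // Trim the stray spaces from both column names and values
        fixedRow[col.trim()] = typeof val === "string" ? val.trim() : val;
    }
    cleaned[key] = fixedRow;
}
console.log(cleaned["A1"].summary); // "summ1"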

Related

NodeJS: How to parse a comment from a CSV

I have a CSV file that looks like this:
# Meta Data 1: ...
# Meta Data 2: ...
...
Header1, Header2...
actual data
I'm currently using the fast-csv library in a NodeJS script to parse the actual data part into objects with
const csv = require("fast-csv");
const fs = require("fs");

fs.createReadStream(file)
    .pipe(csv.parse({
        headers: true,
        comment: "#", // this ignores lines that begin with #
        skipLines: 2
    }))
I'm skipping over the comments, or else I won't get nice neat objects with header:data pairs, but I still want some of my metadata. Is there a way to get it? If not with fast-csv, is there another library that could accomplish this?
Thanks!
Edit: My current workaround is to just regex for the specific metadata I want, but this means I have to read the file twice. I don't expect my files to be super big, so this works for now, but I don't think it's the best solution.
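One way to avoid the second read is to load the file once, split out the # lines yourself, and hand the remainder to fast-csv as a string. A minimal sketch, assuming file is the same path variable as in the snippet above and a fast-csv version that exposes parseString (v3+; older versions used fromString):

const fs = require("fs");
const csv = require("fast-csv");

const raw = fs.readFileSync(file, "utf8");
const lines = raw.split(/\r?\n/);
const meta = lines.filter(line => line.startsWith("#"));                 // the "# Meta Data ..." lines
const body = lines.filter(line => !line.startsWith("#")).join("\n");     // headers + actual data

csv.parseString(body, { headers: true })
    .on("error", err => console.error(err))
    .on("data", row => { /* header:data objects, as before */ })
    .on("end", () => console.log(meta));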

Store data in a VS Code URI when the data contains #

I am creating a temporary URI in VS Code. It is needed for the vscode.diff command.
I am following their example from here.
The URI is parsed via the following command:
let uri = vscode.Uri.parse('cowsay:' + what);
and read via the following command (from their examples)
const myProvider = class implements vscode.TextDocumentContentProvider {
    provideTextDocumentContent(uri: vscode.Uri): string {
        return cowsay.say({ text: uri.path });
    }
};
It is stored in uri.path. The problem I am facing is that the data I want to store contains # characters. uri.path ignores all text as soon as the first # is encountered, because everything from # onward is treated as the URI fragment.
Is there a way to store data containing # in a custom URI?
For example, if my code is as below:
let textToStore: string = "print '1'# some comment";
// Storing in URI
let uri = vscode.Uri.parse('cowsay:' + textToStore);
The URI.path would only store print '1' in it, while it should store print '1'# some comment. The characters after # are ignored.
Is there a way to store # in a custom URI scheme in VS Code?
You might be looking for encodeURIComponent(). This function encodes certain characters that cannot be used in URL components, such as '#'. More information can be found here.
let textToStore: string = encodeURIComponent("print '1'# some comment");
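For the example above, the encoded form that actually ends up in the URI looks like this (a sketch; whether you also need a matching decodeURIComponent(uri.path) when reading it back depends on how Uri.parse percent-decodes components, so verify both ends in your extension):

let textToStore: string = encodeURIComponent("print '1'# some comment");
// textToStore is now "print%20'1'%23%20some%20comment" - no raw '#' is left,
// so Uri.parse no longer cuts the string off at a fragment
let uri = vscode.Uri.parse('cowsay:' + textToStore);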

NodeJS JSON parsing issue

I have a .json file where I have people's names stored. I'm reading the content of this file with Node's file system module, converting it to a string, and then parsing it into a JS object. After parsing, the type I get is string instead of object.
Here is the example JSON file:
{
    "21154535154122752": {
        "username": "Stanislav",
        "discriminator": "0001",
        "id": "21154535154122752",
        "avatar": "043bc3d9f7c2655ea2e3bf029b19fa5f",
        "shared_servers": [
            "Reactiflux",
            "Discord Testers",
            "Official Fortnite",
            "Discord API"
        ]
    }
}
and here is the code for processing the data:
const string_data = JSON.stringify(fs.readFileSync('data/users.json', 'utf8'));
const data = JSON.parse(string_data);
console.log(typeof(data)); // <-- this line here shows the type of data as string
const results_array = Object.values(data);
where fs is Node's built-in file system module.
Don't use JSON.stringify here, as it further changes the string representation of the JSON object. An example of what is happening is below.
Imagine you have data in your file as shown below:
{
    "key": "value"
}
When you read the file (using readFileSync) and apply JSON.stringify, it is converted to a new string, as shown below. Notice that the double quotes are now escaped:
"{\"key\": \"value\"}"
Now when you parse it with JSON.parse, instead of getting the desired object you get back the same string you read from the file.
You are basically performing the stringify operation and then undoing it.
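A quick way to see that round trip in code (a sketch, using the data/users.json path from the question):

const fs = require("fs");

const raw = fs.readFileSync('data/users.json', 'utf8'); // already a string
console.log(typeof raw);                                // "string"
console.log(typeof JSON.parse(raw));                    // "object" - what you want
console.log(typeof JSON.parse(JSON.stringify(raw)));    // "string" - the round trip above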
Okay, so fs.readFileSync returns a string, so you don't need to use stringify:
var fs = require('fs');
var read = fs.readFileSync('data/users.json', 'utf8');
console.log(read);
console.log(typeof(read));
const data = JSON.parse(read);
console.log(typeof(data));
You will see it returns an object
This works for me:
const data = JSON.parse(fs.readFileSync('data/users.json', 'utf8'));
console.log(typeof(data)); // <-- this line here shows the type of data as OBJECT
const results_array = Object.values(data);

Upload a CSV file and read it in Bokeh Web app

I have a Bokeh plotting app, and I need to allow the user to upload a CSV file and modify the plots according to the data in it.
Is it possible to do this with the available widgets of Bokeh?
Thank you very much.
Although there is no native Bokeh widget for file input, it is quite doable to extend the current tools provided by Bokeh. This answer will try to guide you through the steps of creating a custom widget and modifying the Bokeh JavaScript to read, parse, and output the file.
First though, a lot of the credit goes to bigreddot's previous answer on creating the widget. I simply extended the CoffeeScript in his answer to add a file-handling function.
We begin by creating a new Bokeh class in Python, which links up to the JavaScript class and holds the information generated by the file input.
models.py
from bokeh.core.properties import List, String, Dict, Int
from bokeh.models import LayoutDOM

class FileInput(LayoutDOM):
    __implementation__ = 'static/js/extensions_file_input.coffee'
    __javascript__ = './input_widget/static/js/papaparse.js'

    value = String(help="""
    Selected input file.
    """)

    file_name = String(help="""
    Name of the input file.
    """)

    accept = String(help="""
    Character string of accepted file types for the input. This should be
    written like normal HTML.
    """)

    data = List(Dict(keys_type=String, values_type=Int), default=[], help="""
    List of dictionaries containing the input data. This is the output of the parser.
    """)
Then we create the CoffeeScript implementation for our new Python class. In this new class there is an added file-handler function, which triggers on change of the file input widget. This file handler uses PapaParse to parse the CSV and then saves the result in the class's data property. The JavaScript for PapaParse can be downloaded from their website.
You can extend and modify the parser for your desired application and data format.
extensions_file_input.coffee
import * as p from "core/properties"
import {WidgetBox, WidgetBoxView} from "models/layouts/widget_box"

export class FileInputView extends WidgetBoxView

  initialize: (options) ->
    super(options)
    input = document.createElement("input")
    input.type = "file"
    input.accept = @model.accept
    input.id = @model.id
    input.style = "width:" + @model.width + "px"
    input.onchange = () =>
      @model.value = input.value
      @model.file_name = input.files[0].name
      @file_handler(input)
    @el.appendChild(input)

  file_handler: (input) ->
    file = input.files[0]
    opts =
      header: true,
      dynamicTyping: true,
      delimiter: ",",
      newline: "\r\n",
      complete: (results) =>
        input.data = results.data
        @model.data = results.data
    Papa.parse(file, opts)

export class FileInput extends WidgetBox
  default_view: FileInputView
  type: "FileInput"
  @define {
    value: [ p.String ]
    file_name: [ p.String ]
    accept: [ p.String ]
    data: [ p.Array ]
  }
Back on the Python side, we can then attach a Bokeh on_change callback to our new input class, triggered when its data property changes. This happens after the CSV parsing is done. The example below showcases the desired interaction.
main.py
from bokeh.layouts import column
from bokeh.models import ColumnDataSource
from bokeh.io import curdoc
from bokeh.plotting import Figure
import pandas as pd

from models import FileInput

# Starting data
x = [1, 2, 3, 4]
y = x
source = ColumnDataSource(data=dict(x=x, y=y))

plot = Figure(plot_width=400, plot_height=400)
plot.circle('x', 'y', source=source, color="navy", alpha=0.5, size=20)

button_input = FileInput(id="fileSelect", accept=".csv")

def change_plot_data(attr, old, new):
    new_df = pd.DataFrame(new)
    source.data = source.from_df(new_df[['x', 'y']])

button_input.on_change('data', change_plot_data)

layout = column(plot, button_input)
curdoc().add_root(layout)
An example of a .csv file for this application is shown below. Make sure there is no extra blank line at the end of the CSV.
x,y
0,2
2,3
6,4
7,5
10,25
To run this example properly, Bokeh must be set up in its proper application file-tree format:
input_widget
|
+---main.py
+---models.py
+---static
    +---js
        +---extensions_file_input.coffee
        +---papaparse.js
To run this example, you need to be in the directory above input_widget and execute bokeh serve input_widget in the terminal.
As far as I know, there is no widget native to Bokeh that will allow a file upload.
It would be helpful if you could clarify your current setup a bit more. Are your plots running on a Bokeh server, or just through a Python script that generates the plots?
Generally though, if you need this exposed through a browser, you'll probably want something like Flask serving a page that lets the user upload a file to a directory, which the Bokeh script can then read and plot.

Python json.dump() to javascript JSON.parse()

Problem summary: I can't parse a string that's formatted as a JSON object from a .json file.
Long version:
I have some tweets I'm processing with Python, from which I create a JSON file that I want to pass into d3.js and parse. I'm writing the tweets to a file, so I have to serialize them with json.dumps() in Python before writing.
Python
def on_data(self, data):
    f = open("tweets.json", "a")
    tweet = json.loads(data)
    d = {
        "created": tweet["created_at"],
        "text": tweet["text"]
    }
    final_tweet = json.dumps(d)
    f.write(final_tweet)
    f.close()
    return True
However, when I fetch the file with d3.json("tweets.json"), it fails, even though the file appears to contain correctly formatted JSON:
{
    tweet:[
        {"key":"value"},
        {"key":"value"}
    ]
}
but I cannot parse the data with the code I'm using because console.log(JSON.parse(data)) does not print out any object value.
d3.text("tweets.json", function(error, data) {
    if (error) return console.warn(error);
    console.log("hello3");
    console.log(JSON.parse(data));
});
EDIT: I edited the file that gets written to by manually adding braces at the top and bottom.
Use this JSON instead, and check your JSON data here:
{
    "keys":[
        {"key":"value"},
        {"key":"value"}
    ]
}
Note that the top-level key is quoted; in your file, tweet is unquoted, which makes the whole document invalid JSON, so JSON.parse fails.
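Once the file on disk is valid JSON, d3 can load and parse it in one step, without the d3.text + JSON.parse detour. A minimal sketch, using the same v4-style callback as the question (d3 v5+ switched d3.json to promises):

d3.json("tweets.json", function(error, data) {
    if (error) return console.warn(error);
    // data is already a parsed object here
    data.keys.forEach(function(d) { console.log(d.key); });
});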
