I am using the CSV package for Node to parse some CSV files in a project. I need to be able to handle files both with and without a header row, so either of:
const withHeader = `h1,h2,h3
d1a,d2a,d3a
d1b,d2b,d3b`;
const withoutHeader = `d1a,d2a,d3a
d1b,d2b,d3b`;
The number of columns and their names are unknown to my application. Either they will be read from the header, or they should be numerically generated, e.g. col0,col1,col2.
This is where I run into a problem. I always want the output of csvParse to be in object literal form. This is easy when the end-user has indicated that the CSV has a header:
> csvParse(withHeader, {columns: true})
[
{ h1: 'd1a', h2: 'd2a', h3: 'd3a' },
{ h1: 'd1b', h2: 'd2b', h3: 'd3b' }
]
But when the user indicates that there is no header row, it doesn't seem to be possible to end up with the data in object-literal form with programmatically generated column headers.
The 3 options for columns are boolean | array | function.
If I supply false, the data returned is an array of arrays, which I would then need to transform into object-literal form myself. Not ideal!
To supply an array of column names, I would already need to know how many columns there are... before it is parsed, which doesn't make sense. I could parse the first row to get the count, then start again supplying the array, but this seems clumsy.
I can supply a function that programmatically generates the column keys, e.g. column => column. This doesn't help because a) no index is supplied, and b) the first line is still consumed as the assumed header row, with its values being transformed into the desired column keys.
Is there a trick to doing this that I've missed? Here are the two approaches I've found, both of which seem clumsier and less efficient than necessary.
Parse 1 row, then parse all
// Obviously in actual use I'd handle edge cases
const colCount = csvParse(withoutHeader, {to_line: 1})[0].length;
// 3
const data = csvParse(withoutHeader, {columns: [...Array(colCount).keys()].map(i => `col${i}`)})
/*
[
{ col0: 'd1a', col1: 'd2a', col2: 'd3a' },
{ col0: 'd1b', col1: 'd2b', col2: 'd3b' }
]
*/
Parse into array of arrays, then convert
csvParse(withoutHeader).map(
row => row.reduce(
(obj, item, index) => {
obj[`col${index}`] = item;
return obj;
},
{}
)
)
/*
[
{ col0: 'd1a', col1: 'd2a', col2: 'd3a' },
{ col0: 'd1b', col1: 'd2b', col2: 'd3b' }
]
*/
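A minimal sketch that wraps the first approach in a reusable helper, so the two-pass clumsiness at least lives in one place (parseWithGeneratedColumns is an invented name; it reuses the to_line option shown above):
// Sketch only: "parse 1 row, then parse all", hidden behind one call
function parseWithGeneratedColumns(input) {
  const firstRow = csvParse(input, { to_line: 1 })[0];
  const columns = firstRow.map((_, i) => `col${i}`);
  return csvParse(input, { columns });
}
// parseWithGeneratedColumns(withoutHeader)
// => [ { col0: 'd1a', col1: 'd2a', col2: 'd3a' }, { col0: 'd1b', col1: 'd2b', col2: 'd3b' } ]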
To me it would be ideal to be able to specify columns as a function that is given the column index as an argument instead of a header row.
I am using MUI Data Grid (Pro version). I want to add some checkboxes to the sidebar and use those checkboxes to filter various columns. For example, imagine I have three columns:
* Column Letters: ['a', 'b', 'c', 'd', etc.]
* Column Numbers: [1, 2, 3, 4, 5, etc.]
* Column Names: ['Bob', 'Bill', 'Jim', 'Joe', etc.]
Now let's also imagine that in my sidebar, I have different checkboxes:
* First 10 letters of the alphabet: []
* Last 10 letters of the alphabet: []
* Even numbers: []
* Odd numbers: []
* Names that begin with 'B': []
* Names that begin with 'J': []
What I want is for someone to check one or more of those checkboxes and have them filter the appropriate columns. What I can't figure out is how to connect the checkboxes to the Data Grid. That is to say, I'm not having an issue with how to set up a filter to filter for only odd numbers or names that begin with B, but rather how to use a custom checkbox to apply such a filter.
I thought of using apiRef: https://mui.com/x/react-data-grid/api-object/
In particular, to use setFilterModel as noted on the Grid Api docs: https://mui.com/x/api/data-grid/grid-api/
Now, I'm not sure this is the right way to do it, but either way it is NOT working. Here is what I tried (using a button rather than a checkbox):
import { DataGridPro, GridToolbar, useGridApiRef } from "@mui/x-data-grid-pro";
import Button from "@mui/material/Button";

export default function Sidebar({ rowsData, columnsData }) {
  const apiRef = useGridApiRef();

  return (
    <>
      <DataGridPro
        apiRef={apiRef}
        components={{ Toolbar: GridToolbar }}
        rows={rowsData}
        columns={columnsData}
      />
      <Button
        variant="contained"
        onClick={() =>
          apiRef.current.setFilterModel([
            {
              id: 1,
              columnField: "roic10y",
              operatorValue: ">=",
              value: 10,
            },
          ])
        }
      >
        Set Filter Model
      </Button>
    </>
  );
}
However, when I click on this button, I get the following error: TypeError: Cannot read properties of undefined (reading 'length')
Am I doing this the right way? I.e., should I be using setFilterModel? If so, what am I doing wrong and how can I fix it? If not, what should I be doing?
Thanks :).
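For reference, a hedged sketch of the call shape I believe MUI X v5 expects here: setFilterModel takes a filter model object with an items array rather than a bare array (the field names are the ones from the snippet above), which may be why reading 'length' fails when an array is passed directly.
// Sketch only, assuming MUI X v5: the filter model is an object with an
// `items` array, not a bare array of filter items
apiRef.current.setFilterModel({
  items: [{ id: 1, columnField: "roic10y", operatorValue: ">=", value: 10 }],
});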
I'm trying to move a filter inside the fetch I'm doing to bring my data from Builder.io, and I'm struggling with one of them. The title search works fine, but the second one doesn't. My objective is to filter the entries and catch only the ones that match at least one of the annotationArray items.
The annotationArray can be, for example:
const annotationArray = ['Video', 'Image', 'GPS']
or just
const annotationArray = ['Video']
or whatever.
And the entries have an annotation field that consists of a string where I pass the annotations, like this:
const entries = [{title: 'First', annotation: 'Video, GPS'}, {title: 'Second', annotation: 'GPS'}, {title: 'Third', annotation: 'Video, Image'}]
So, for example, if the annotationArray is ['Video', 'GPS'], I want to fetch all of them. But if it's ['Video'], only the first and third, and so on.
Currently I have this code:
const sets = await builder.getAll('open-dataset', {
options: { noTargeting: true },
omit: 'data.blocks',
limit: 100,
query: {
data: {
title: { $regex: search, $options: 'i' },
annotation: { $regex: annotationArray && annotationArray.join(' '), $options: 'i' },
}
}
});
The result of annotationArray.join(' ') can be, for example, Video Image GPS or just Image, while an entry's annotation might be 'Video, GPS' or whatever.
So I need to filter the entries and fetch only the ones that contain at least one of the annotationArray strings.
My code is failing because currently it only fetches the ones that have all the annotationArray items, not the ones that have at least one. I don't know how to do it with MongoDB query operators... Previously I had this code in JavaScript and it worked fine:
const filtered = entries.filter(item => annotationArray.some(data => item.annotation.includes(data)));
Can somebody help me? Thanks!
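For comparison, a sketch of the 'at least one' behaviour using a $regex alternation instead of the space-joined string (this assumes the query block is passed through to MongoDB-style operators as in the code above, and that the annotation values contain no regex metacharacters):
// Sketch only: 'Video|Image|GPS' matches entries whose annotation contains
// any one of the items, rather than the whole joined string
const annotationPattern = annotationArray && annotationArray.join('|');
const sets = await builder.getAll('open-dataset', {
  options: { noTargeting: true },
  omit: 'data.blocks',
  limit: 100,
  query: {
    data: {
      title: { $regex: search, $options: 'i' },
      annotation: { $regex: annotationPattern, $options: 'i' },
    }
  }
});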
My product table is sorted with initialSort by the product release month, ascending. I also group my products by a codename, which is determined by the ajax JSON response URL, and rename the groups to readable names with a groupBy function. Now I want to sort my groups individually without losing the month sorting within the groups. How is that possible?
var table = new Tabulator("#tableid", {
ajaxURL: url,
layout: "fitColumns",
groupBy: "codename",
groupBy:function(data){
if (data.codename == "X123") {
return "Productname for X123";
}
if (data.codename == "X124") {
return "Productname for X124";
}
…
…
},
initialSort:[
{column:"month", dir:"asc"}
],
columns: [
{ title: "Product", field: "codename"},
{ title: "Month", field: "month"},
…
…
…
]
});
Not exactly sure what you mean by "sort my groups individually", but is this what you're looking for?
https://jsfiddle.net/r3f7pysw/
initialSort:[
{column:"month", dir:"asc"},
{column:"codename", dir:"asc"}
],
NOTE(1): I don't think you want to return that string from your grouping function when it seems like it's just the group's display string, and you'd still effectively be sorting on "codename" (because the strings are identical apart from the ending, which makes each comparison take a little longer). But maybe you do...
NOTE(2): Adding the second initialSort entry is like Ctrl-clicking a column header to sort by multiple criteria. So if you single-click on, say, Month, remember that this destroys the current sort array and sets it to just Month.
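If the custom function in the question is only there to control the header text, a sketch along the lines of NOTE(1) would be to keep groupBy on the raw codename and do the renaming in Tabulator's groupHeader option (the mapping below is copied from the question; the count suffix is just an example):
// Sketch only: group on the raw codename, rename it in the group header
var table = new Tabulator("#tableid", {
  ajaxURL: url,
  layout: "fitColumns",
  groupBy: "codename",
  groupHeader: function (value, count, data, group) {
    var names = {
      X123: "Productname for X123",
      X124: "Productname for X124"
    };
    return (names[value] || value) + " (" + count + " items)";
  },
  initialSort: [
    { column: "month", dir: "asc" },
    { column: "codename", dir: "asc" }
  ],
  columns: [
    { title: "Product", field: "codename" },
    { title: "Month", field: "month" }
  ]
});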
Trying to build a query for a PostgreSQL DB based on a keyword. LIKE doesn't work, as it matches any row that contains any of the letters. For example:
SELECT * FROM table WHERE column ilike '%jeep%';
This returns any row that has a j, e, or p in the column (and the same row multiple times for some reason), not just rows containing the word 'jeep'.
Below is my query structure, using Knex and querying multiple tables:
searchAllBoardPosts(db, term) {
return db
.select('*')
.from({
a: 'messageboard_posts',
b: 'rentals',
c: 'market_place',
d: 'jobs'
})
.where('a.title', 'ilike', `%${term}%`)
.orWhere('b.title', 'ilike', `%${term}%`)
.orWhere('c.title', 'ilike', `%${term}%`)
.orWhere('d.title', 'ilike', `%${term}%`);
},
Thanks in advance!
UPDATE:
Here is the SQL output:
select *
from "messageboard_posts" as "a",
"rentals" as "b",
"market_place" as "c",
"jobs" as "d"
where "a"."title" ilike '%jeep%'
or "b"."title" ilike '%jeep%'
or "c"."title" ilike '%jeep%'
or "d"."title" ilike '%jeep%'
This query is a cross join (but the Knex syntax masks that a little).
"This returns any row that has a j, e, or p in the column (and the same row multiple times for some reason)."
It's not returning the same row multiple times. It's returning everything from each table named in a CROSS JOIN. This is the behaviour of Postgres when more than one table is named in the FROM clause (see: docs). This:
db
.select('*')
.from({
a: 'table_one',
b: 'table_two'
})
will return the entire row from each of the named tables every time you get an ILIKE match. So at a minimum you'll always get an object consisting of two rows joined together (or however many tables you name in the FROM clause).
The tricky part is that Knex has to map the result's column names onto keys of a JavaScript object. This means that if there are two result columns with the same name, say id or title, the last one will overwrite the first in the resulting object.
Let's illustrate (with wombats)
Here's a migration and seed, just to make it clearer:
table_one
exports.up = knex =>
knex.schema.createTable("table_one", t => {
t.increments("id");
t.string("title");
});
exports.down = knex => knex.schema.dropTable("table_one");
table_two
exports.up = knex =>
knex.schema.createTable("table_two", t => {
t.increments("id");
t.string("title");
});
exports.down = knex => knex.schema.dropTable("table_two");
Seed
exports.seed = knex =>
knex("table_one")
.del()
.then(() => knex("table_two").del())
.then(() =>
knex("table_one").insert([
{ title: "WILLMATCHwombatblahblahblah" },
{ title: "WILLMATCHWOMBAT" }
])
)
.then(() =>
knex("table_two").insert([
{ title: "NEVERMATCHwwwwwww" },
{ title: "wombatWILLMATCH" }
])
);
Query
This allows us to play around a bit with ILIKE matching. Now we need to make the column names really explicit:
return db
.select([
"a.id as a.id",
"a.title as a.title",
"b.id as b.id",
"b.title as b.title"
])
.from({
a: "table_one",
b: "table_two"
})
.where("a.title", "ilike", `%${term}%`)
.orWhere("b.title", "ilike", `%${term}%`);
This produces:
[
{
'a.id': 1,
'a.title': 'WILLMATCHwombatblahblahblah',
'b.id': 1,
'b.title': 'NEVERMATCHwwwwwww'
},
{
'a.id': 1,
'a.title': 'WILLMATCHwombatblahblahblah',
'b.id': 2,
'b.title': 'wombatWILLMATCH'
},
{
'a.id': 2,
'a.title': 'WILLMATCHWOMBAT',
'b.id': 1,
'b.title': 'NEVERMATCHwwwwwww'
},
{
'a.id': 2,
'a.title': 'WILLMATCHWOMBAT',
'b.id': 2,
'b.title': 'wombatWILLMATCH'
}
]
As you can see, it's cross-joining both tables, but I suspect you were only seeing results that appeared not to match (because the match was in the other table, and the title column name was a duplicate).
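To make that last point concrete, here's a tiny sketch of the key overwrite using the values from the first row above (the actual collapse happens when the result rows are built into plain objects, but the effect is the same):
// Sketch only: duplicate column names collapse into one object key,
// and the last one wins, so the matching title from table_one disappears
const fromTableOne = { id: 1, title: "WILLMATCHwombatblahblahblah" };
const fromTableTwo = { id: 1, title: "NEVERMATCHwwwwwww" };
console.log({ ...fromTableOne, ...fromTableTwo });
// => { id: 1, title: 'NEVERMATCHwwwwwww' }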
So, what should the query be?
I think your (or Ry's) plan to use UNION was correct, but it's probably worth using UNION ALL to avoid unnecessary removal of duplicates. Something like this:
return db
.unionAll([
db("market_place")
.select(db.raw("*, 'marketplace' as type"))
.where("title", "ilike", `%${term}%`),
db("messageboard_posts")
.select(db.raw("*, 'post' as type"))
.where("title", "ilike", `%${term}%`),
db("rentals")
.select(db.raw("*, 'rental' as type"))
.where("title", "ilike", `%${term}%`),
db("jobs")
.select(db.raw("*, 'job' as type"))
.where("title", "ilike", `%${term}%`)
]);
A similar query against our test data produces the result set:
[
{ id: 1, title: 'WILLMATCHwombatblahblahblah', type: 'table_one' },
{ id: 2, title: 'WILLMATCHWOMBAT', type: 'table_one' },
{ id: 2, title: 'wombatWILLMATCH', type: 'table_two' }
]
Using .union works and returns the correct values; however, it uses the column keys from the first table in the query. I ended up just making four separate queries in the end, but hope this can help someone else!
searchAllBoardPosts(db, term) {
return db
.union([db
.select('id', 'market_place_cat')
.from('market_place')
.where('title', 'ilike', `%${term}%`)
])
.union([db
.select('id', 'board_id')
.from('messageboard_posts')
.where('title', 'ilike', `%${term}%`)
])
.union([db
.select('id', 'rental_cat')
.from('rentals')
.where('title', 'ilike', `%${term}%`)
])
.union([db
.select('id', 'job_cat')
.from('jobs')
.where('title', 'ilike', `%${term}%`)
]);
},
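If the mismatched keys were the only remaining problem, aliasing each table's second column to a common name would keep the single-query approach workable. A sketch (category is an invented alias, unionAll is used as in the answer above, and it assumes the aliased columns have compatible types):
// Sketch only: every branch returns { id, category, type }, so the union'd
// rows all share the same keys instead of borrowing the first table's
searchAllBoardPosts(db, term) {
  return db.unionAll([
    db('market_place')
      .select('id', 'market_place_cat as category', db.raw("'market_place' as type"))
      .where('title', 'ilike', `%${term}%`),
    db('messageboard_posts')
      .select('id', 'board_id as category', db.raw("'messageboard_posts' as type"))
      .where('title', 'ilike', `%${term}%`),
    db('rentals')
      .select('id', 'rental_cat as category', db.raw("'rentals' as type"))
      .where('title', 'ilike', `%${term}%`),
    db('jobs')
      .select('id', 'job_cat as category', db.raw("'jobs' as type"))
      .where('title', 'ilike', `%${term}%`)
  ]);
},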
This expression:
WHERE column ilike 'jeep'
only matches rows where lower(column) = 'jeep', such as:
JEEP
jeep
JeeP
It does not match any other value.
If you use wildcards:
WHERE column ilike '%jeep%'
then it looks for 'jeep' anywhere in lower(column). It is not searching character by character. For that, you would use regular expressions and character classes:
WHERE column ~* '[jep]'
If you want to find a word in the field, you would normally use regular expressions, not like/ilike.
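For example, a whole-word match written with Knex's whereRaw (a sketch: ~* is Postgres's case-insensitive regex operator and \y its word-boundary escape; the table name is just one of those from the question):
// Sketch only: matches rows whose title contains 'jeep' as a whole word,
// not rows that merely contain a j, e, or p somewhere
db('messageboard_posts')
  .select('*')
  .whereRaw('title ~* ?', ['\\yjeep\\y']);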
I am trying to insert an href into one of my columns in DataTables, but I'm having some issues, since I need the href to contain my slug and the visible link text to be the full company name.
Example of how it should be formatted: <a href="slug">company</a>
Real data: Toyota Cars.
I am using columns.render, which seems to be the correct function, but I can't wrap my head around how to get 'company' between the a tags. The function does not even seem to make use of the "data" specifier; instead it takes the first data field in my ajax file, which in this case is slug.
My DataTable.js file
ajax: '/api/datatable',
columns: [
{ data: 'slug' },
{ data: 'company' },
],
"columnDefs": [
{ targets: [0, 1], visible: true},
{ "targets": 0,
"data": "This doesnt even seem needed?",
"render": function ( data, type, row, meta ) {
return '<a href="' + data + '">full company name</a>';
}
}
],
I will go ahead and give an answer, based on the assumption that you are trying to merge values of two columns. If my assumption is not correct, please update the question to clarify the "expected vs actual" results.
You are using the data parameter in the render function. That parameter is based on the value you specified in columns.data. In your case (with targets === 0), data will contain the value that DataTables got for the { data: 'slug' } column.
If you want to merge values from different columns into one, then the render function is the correct way to do it. However, instead of data, you should use the row parameter, which contains all of the key-value fields for the row.
For example:
// ...
"targets": 0,
"data": null,
"render": function ( data, type, row, meta ) {
return '<a href="' + row.slug + '">' + row.company + '</a>';
// or whatever your row object key-value structure is
}
// ...
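Dropped into the columnDefs from the question, that could look roughly like this (a sketch; it assumes the ajax rows carry slug and company fields as shown above, and that the slug is usable as the href on its own):
"columnDefs": [
  { targets: [0, 1], visible: true },
  {
    targets: 0,
    data: null, // render builds the cell from the whole row object
    render: function (data, type, row, meta) {
      // row.slug becomes the href, row.company the visible link text
      return '<a href="' + row.slug + '">' + row.company + '</a>';
    }
  }
],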