I have a MySQL table with the structure below
+---------+-------+--------+
| comp_id | name  | parent |
+---------+-------+--------+
| 1       | comp1 | NULL   |
| 2       | comp2 | 1      |
| 3       | comp3 | 2      |
| 4       | comp4 | 2      |
+---------+-------+--------+
Assuming the table is empty (no data has been inserted yet), how should I go about traversing the JSON data below for insertion into the table:
{
  "org_name": "paradise island",
  "daughters": [
    {
      "org_name": "banana tree",
      "daughters": [
        { "org_name": "Yellow Banana" },
        { "org_name": "Brown Banana" }
      ]
    },
    {
      "org_name": "big banana tree",
      "daughters": [
        { "org_name": "green banana" },
        { "org_name": "yellow banana" },
        {
          "org_name": "Black banana",
          "daughters": [
            { "org_name": "red spider" }
          ]
        }
      ]
    }
  ]
}
What effective SQL query can I write to insert the JSON above into the MySQL database in one go?
I've researched a host of resources on the adjacency list model and nested set models, but none has been exhaustive on how inserts should be done from JSON input.
If using UUIDv4s as IDs would be OK for you, you could just normalize your object with a recursive function, add UUIDs as IDs, and construct the parent relationships.
After that you just bulk insert your data with whatever database client you are using, as sketched below.
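A minimal sketch of that approach in Node.js, assuming the uuid and mysql2 packages (the table name companies is a placeholder, and comp_id would need to be a CHAR(36) column to hold UUIDs):
const { v4: uuidv4 } = require('uuid');
const mysql = require('mysql2/promise');

// Recursively flatten the tree into [comp_id, name, parent] rows.
function flattenOrgs(node, parentId, rows) {
  rows = rows || [];
  const id = uuidv4();
  rows.push([id, node.org_name, parentId || null]);
  (node.daughters || []).forEach(function (child) {
    flattenOrgs(child, id, rows);
  });
  return rows;
}

async function insertTree(tree) {
  const rows = flattenOrgs(tree);
  const conn = await mysql.createConnection({ /* your connection config */ });
  // One bulk insert; mysql2 expands the nested array into a row list.
  await conn.query('INSERT INTO companies (comp_id, name, parent) VALUES ?', [rows]);
  await conn.end();
}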
This is an ideal use case for UUIDv4s. A collision is very unlikely, so you can consider it production safe.
You just have to make sure that your UUID generator is random enough; bad implementations could lead to much higher probabilities of collisions.
I suggest using the uuid package. You can look up its sources on GitHub. It uses Node's crypto library as its random data generator, which is cryptographically strong according to the documentation.
Nice to know:
MongoDB's ObjectId's are created in a similar way. They also don't provide 100 percent security against collision. But they are so unlikely that they consider this chance irrelevant.
EDIT: I assumed your code runs server side in node.js. Generating uuid's in the browser is not safe, because the crypto api is still experimental and users might try to intentionally cause collisions.
I am a newbie at both JavaScript and Karate. This may not be a Karate-centric question per se; however, I am wondering if the solution can be done natively in Karate.
I have looked at existing questions on here, but they don't seem to work, likely due to my unique input. This answer looked promising, but it didn't work out for me: Adding new key-value pair into json using karate
I have a Java method that produces a payload consisting of a JSON object (which has a nested JSON object in it) for a POST call. The payload looks something like this:
[
  {
    "keyId": "s123",
    "clientId": "c0909",
    "regionInfo": {
      "geoTag": "3s98d238",
      "locationId": 32
    }
  }
]
Now I am doing a test where I have to insert a bogus key/value pair into the payload and make sure it is ignored in the POST call itself and we return a 200. I have tried using karate.merge and karate.append, but they have not worked thus far.
The bogus key/value pairs look like this (the same key with four different values):
{'bogusfield': 'ABC'}, {'bogusfield': '123'}, {'bogusfield': 'abc123'}, {'bogusfield': 'abc123!$%'}
So in total, there will be four POST calls, each with a different value from above. Is there a way to get this done? I apologize if I missed giving any crucial details and/or if this is too newbie of a question. Thank you in advance for all the help!
Here's a simple example that makes 4 requests with the payload edits you want which should get you on your way:
Feature:

Scenario Outline:
  * url 'https://httpbin.org'
  * path 'anything'
  * def body = { foo: 'bar' }
  * body.bogusField = bogusValue
  * request body
  * method post

  Examples:
    | bogusValue |
    | ABC        |
    | 123        |
    | abc123     |
    | abc123!$%  |
I'm a newbie in Splunk. I have this JSON:
"request": {
"headers": [
{
"name": "x-real-ip",
"value": "10.31.68.186"
},
{
"name": "x-forwarded-for",
"value": "10.31.68.186"
},
{
"name": "x-nginx-proxy",
"value": "true"
}
I need to pick out the value where the property name is "x-real-ip".
There are a couple of ways to do this - here's the one I use most often (presuming you also want the value alongside the name):
index=ndx sourcetype=srctp request.headers{}.name="x-real-ip"
| eval combined=mvzip(request.headers{}.name,request.headers{}.value,"|")
| mvexpand combined
| search combined="x-real-ip*"
This skips all events that don't have "x-real-ip" somewhere in the request.headers{}.name multivalue field
Next, it combines the two multivalue fields (name & value) into a single mv field, separated by the | character
Then expand the resultset so you're looking at one line at a time
Finally, you look for only results that have the value "x-real-ip" in them
If you'd like to then extract the value from the combined field, add the following line:
| rex field=combined "\|(?<x_real_ip>.+)"
And, of course, you can do whatever other SPL operations on your data you wish
I tried #Warren's answer but I got the following error:
Error in 'eval' command: The expression is malformed. Expected ).
You need to add a rename, because the {} characters in the field names cause problems inside mvzip.
This is the query that works:
index=ndx sourcetype=srctp request.headers{}.name="x-real-ip"
| rename request.headers{}.name AS headerName, request.headers{}.value AS headerValue
| eval combined=mvzip(headerName, headerValue, "|")
| mvexpand combined
| search combined="x-real-ip*"
your search
| rex max_match=0 "name\":\s\"(?<fieldname>[^\"]+)"
| rex max_match=0 "value\":\s\"(?<fieldvalue>[^\"]+)"
| eval tmp=mvzip(fieldname,fieldvalue,"=")
| rename tmp as _raw
| kv
| fields - _* field*
When you ask a question, please make sure the information you present is accurate; the log sample in your question is truncated.
I'm going to guess this question comes up fairly frequently and is the result of several missteps and design flaws on my part, but I'm kind of stuck now and I don't know where to look, because I'm too much of an SQL novice.
The basic gist is that I started building my application with Sequelize, which has an include function that essentially joins tables as nested objects. But now Sequelize isn't working for me and I need to write raw queries, yet I still have application code relying on the structure Sequelize would give. Basically, I need to recreate the Sequelize include functionality, but less verbosely.
So, to summarize the environment before I get into details, I've got: PostgreSQL and JavaScript with Sequelize and Knex (the latter used to help build queries programmatically).
So I have table foo
id | name | bar_id
---+------+-------
1 | Joe | 1
2 | Jan | 2
table bar
id | pet | vet_id
---+-----+-------
1 | cat | 1
2 | dog | 1
table vet
id | name
---+-----
1 | Dr. Elsey
The ideal result would look something like:
id | name | bar.id | bar.pet | bar.vet.id | bar.vet.name
---+------+--------+---------+------------+-------------
1 | Joe | 1 | cat | 1 | Dr. Elsey
2 | Jan | 2 | dog | 1 | Dr. Elsey
Sequelize achieves this by doing something like this (paraphrasing select):
select
foo.id,
foo.name,
bar.id as "bar.id",
bar.pet as "bar.pet",
"bar->vet".id as "bar.vet.id",
"bar->vet".name as "bar.vet.name"
from foo
left outer join bar on foo.bar_id = bar.id
left outer join vet as "bar->vet" on bar.vet_id = "bar->vet".id;
Is there any way to do this without enumerating all of these select aliases? Or is there some better form of output I could aim for?
Essentially I want to build objects like this:
{
id: 1,
name: 'Joe',
bar: {
id: 1,
pet: 'cat',
vet: {
id: 1,
name: 'Dr. Elsey'
}
}
};
I wouldn't classify it as easier or necessarily better -- it's more complicated for sure -- but if you really wanted to avoid the aliasing, you could try some SQL metaprogramming using Postgres's built-in information_schema.columns view to generate dynamic SQL for you.
Something along the lines of this adapted to your exact needs:
select ' select json_agg(x) from (select '
       || array_to_string(array_agg(
            table_schema || '.' || table_name || '.' || column_name
            || ' as "' || table_schema || '.' || table_name || '.' || column_name || '"'
          ), ', ')
       || ' from public.foo join public.bar using(id)) as x'
from information_schema.columns
where table_schema = 'public'
  and table_name in ('foo', 'bar');
The above query will return each row as a JSON blob with fully-schema-qualified columns. You could use Postgres's other built-in JSON functions to modify this to fit your needs.
Note: It's important to be mindful of the potential for SQL injection when constructing queries like this. That is, be very careful if you end up interpolating other variables from your script into this dynamically generated SQL. If you find yourself needing to do that, you may want to wrap the dynamic SQL inside of a pure SQL function you create that uses a binding variable and takes it as an argument, as described on the Bobby Tables site.
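Alternatively, since you are already in JavaScript with Knex, you could keep the dot-style aliases from your Sequelize-era query and nest the flat rows client-side. A minimal sketch (nestRow is a name I made up; it assumes result keys like 'bar.vet.id' as in your example):
// Turn a flat row with dotted keys into a nested object,
// e.g. { 'bar.vet.id': 1 } -> { bar: { vet: { id: 1 } } }.
function nestRow(row) {
  const result = {};
  for (const [key, value] of Object.entries(row)) {
    const parts = key.split('.');
    let target = result;
    for (const part of parts.slice(0, -1)) {
      target = target[part] = target[part] || {};
    }
    target[parts[parts.length - 1]] = value;
  }
  return result;
}

// Usage: const nested = (await knex.raw(sql)).rows.map(nestRow);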
Initially I was working with a CSV, where each row contained data e.g.
+-----+-------+--------+
| 123 | cat   | dog    |
+-----+-------+--------+
| 456 | cat   | rabbit |
+-----+-------+--------+
| 789 | snake | dog    |
+-----+-------+--------+
I am now getting more data with a different structure, so I can no longer use a CSV. Instead I am using a JSON file, which looks something like this:
[
  [
    { "ID": 123, "animal_one": "cat", "animal_two": "dog" },
    { "ID": 456, "animal_one": "cat", "animal_two": "rabbit" },
    { "ID": 789, "animal_one": "snake", "animal_two": "dog" }
  ],
  [ 2222 ],
  [ 12345 ],
  [ "2012-01-02" ],
  [ "2012-12-20" ]
]
So you can see the additional data. For this part of the application, I only want to work with the first part of the JSON, the part containing animals. The function I have basically works on values only, so I need to make this JSON like the original CSV file, whereby I only have the values, no keys.
So I am loading the JSON file, which is then contained within the variable data
I am then trying something like this
var key = "CUST_ID";
delete data[0][key];
console.log(JSON.stringify(data[0]))
Although this doesn't even work, I think it is the wrong approach anyway. I don't want to define the keys I want removed; I just want it to remove all keys and keep the values.
How would I go about doing this? I am using data[0] to get the first section of the JSON, the part that contains the animals. I am not sure if this is correct either.
Thanks
You can simply do this, if you don't care which keys you are getting:
var collection = "";
data[0].forEach(function(row) {
  var line = [];
  // Collect every value in this row, regardless of key.
  Object.keys(row).forEach(function(key) {
    line.push(row[key]);
  });
  collection = collection + line.join(',') + '\n';
});
You will get a CSV string out of collection.
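With the sample data above, collection should come out as (key order follows insertion order for these string keys in modern JS engines):
123,cat,dog
456,cat,rabbit
789,snake,dog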
"I don't want to define the keys I want removed"
You still will need to define which keys you want the values from. One could just take all the values from the object, in arbitrary order, but that's not robust against changes to the data format. Use something like
const table = data[0].map(value => [value.ID, value.animal_one, value.animal_two]);
// Replace each object in the first section with an array of its values.
data[0].forEach(function(row, index) {
  data[0][index] = Object.values(data[0][index]);
});
I have a service producing objects that are like triples. They will be in this format:
{ country, attribute, value }
Example:
{ country: 'usa', attribute: 'population', value: 100 }
{ country: 'mexico', attribute: 'population', value: 200 }
{ country: 'usa', attribute: 'areaInSqM', value: 3000 }
Ultimately I want to display these as a table. Rows are countries, columns are attributes. So the table would look like:
| country | population | areaInSqM |
|---------|------------|-----------|
| usa     | 100        | 3000      |
| mexico  | 200        |           |
My assumption (possibly wrong) is that I need to create an intermediate data structure that is an array of rows. Such as:
[ { country: 'usa', population: 100, areaInSqM: 3000 }, .... ]
My current solution is a non-RxJS mess of objects where I store a Set containing each attribute type, store a lookup object indexed by country, and convert the lookup object back to the above array at the end. Lots of looping and double storage that I'd prefer to avoid.
Does RxJS have any operators that aid in this type of operation?
Is there a smarter approach?
In this particular case, assumptions are:
The attributes are not known ahead of time
The values are always numeric
A given 'cell' can be null. In this example, mexico areaInSqM is never provided
Edit: Plunkr with solution: https://plnkr.co/edit/FVoeVmmzMN7JGJ3zWFQM?p=preview
There are two components to your question: the data structure part and the data flow part (I suppose you receive the data as a stream, i.e. one by one, hence the use of RxJS).
A simple way to iteratively build your data structure is to use the scan operator. For instance:
myDataStructure$ = dataSource$.scan(function (accDataStructure, triple) {
  accDataStructure[triple.country] = accDataStructure[triple.country] || {};
  accDataStructure[triple.country][triple.attribute] = triple.value;
  return accDataStructure;
}, {});
That makes the assumption that dataSource$ produces objects of the shape { country, attribute, value }. Then myDataStructure$ will emit, for every incoming triple, the iteratively built data structure that you are seeking. If you only want that data structure once it has finished building, just add a .last() to myDataStructure$.
This is not tested, so let me know if it works.
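If you then need the array-of-rows shape from your question for the table, a last step can flatten the accumulated map. A minimal sketch (toRows is a name I made up):
// Flatten { usa: { population: 100, areaInSqM: 3000 }, ... }
// into [ { country: 'usa', population: 100, areaInSqM: 3000 }, ... ].
function toRows(structure) {
  return Object.keys(structure).map(function (country) {
    return Object.assign({ country: country }, structure[country]);
  });
}

// e.g. myDataStructure$.last().map(toRows) would emit the final rows once.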