I am getting a JSON response from a RESTful service in the following format:
{
"comments":{
"columns":[
"clientId",
"treatmentDate",
"comments",
"photo",
"practitioner"
],
"records":[
[
"1",
"2016-09-12",
"Some Coments",
"0",
"Doc 4"
],
[
"1",
"2016-09-11",
"DDD oNE",
"1",
"Docc 3"
]
]
}
}
The response starts with the table name, followed by separate arrays of columns and records. Angular does not accept data in this format. However, if I provide the data in the standard format below, it works perfectly:
[
{
"clientId":"1",
"treatmentDate":"2016-09-12",
"comments":"Some Coments",
"photo":"0",
"practitioner":"Doc 4"
},
{
"clientId":"1",
"treatmentDate":"2016-09-11",
"comments":"DDD oNE",
"photo":"1",
"practitioner":"Docc 3"
}
]
Is there a directive that can do this for me, or shall I create a custom function? Any ideas?
Is there a reason you cannot just manually reshape the data to conform to the form you expect?
var data = json.comments.records.map(function (record) {
  // Pair each positional value in the record with its column name.
  return json.comments.columns.reduce(function (reshaped, columnName, idx) {
    reshaped[columnName] = record[idx];
    return reshaped;
  }, {});
});
Be careful with this, though; it assumes that the length of each array in records is always the same as the number of column names.
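For illustration, a quick usage sketch, assuming the response above has already been parsed into the json variable used by the snippet:
// `data` is produced by the snippet above from the question's sample response.
console.log(data.length); // => 2
console.log(data[0]);
// => { clientId: '1', treatmentDate: '2016-09-12', comments: 'Some Coments',
//      photo: '0', practitioner: 'Doc 4' }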
Sample data for the title field:
actiontype test
booleanTest
test-demo
test_demo
Test new account object
sync accounts data test
Default mapping for title:
"title": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
I tried with this search query:
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "title": "test"
          }
        }
      ]
    }
  }
}
Here is my expectation: with a specific word (e.g. test) it should return the following titles.
Expected:
actiontype test
booleanTest
test-demo
test_demo
Test new account object
sync accounts data test
But I got:
actiontype test
test-demo
test_demo
Test new account object
sync accounts data test
With an exact match (e.g. sync accounts data test) it should return only that record (sync accounts data test), but I got all records that contain any of these words (sync, accounts, data, test).
What should I do to make this happen? Thanks.
I am not sure which ES version you're using, but the following should give you an idea.
Using your mapping, you can get all titles containing test, including booleanTest, with a query_string query. E.g.
GET {index-name}/{mapping}/_search
{
"query": {
"query_string": {
"default_field": "title",
"query": "*test*"
}
}
}
However, for this to work, make sure you give your title field an analyzer with a lowercase filter (see the settings example below). Your current mapping will not work since it's just pure text as is... test != TEST by default.
There are other ways, if you're interested in the workings of ES... E.g. you can also match booleanTest in your match query by adding a custom nGram filter to your index settings. Something like:
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "nGram": {
            "type": "nGram",
            "min_gram": "2",
            "max_gram": "20"
          }
        },
        "analyzer": {
          "ngram_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "nGram"
            ]
          }
        }
      }
    }
  }
}
NB: ngram_analyzer is just a name. You can call it whatever.
min_gram & max_gram: Pick numbers that work for you.
Learn more about the n-gram filter, the good and the bad, here: N-GRAM
Then you can add the analyzer to your field mapping. Note that the analyzer belongs on the text field itself, not on the keyword sub-field:
{
  "title": {
    "type": "text",
    "analyzer": "ngram_analyzer",
    "fields": {
      "keyword": {
        "type": "keyword",
        "ignore_above": 256
      }
    }
  }
}
Lastly, exact matches work on the keyword type. Based on your mapping, you already have the keyword sub-field, so you can use a term query to get the exact match by searching on the title.keyword field:
GET {index-name}/{mapping}/_search
{
"query": {
"term": {
"title.keyword": {
"value": "sync accounts data test"
}
}
}
}
You will want to read and learn more about these solutions and decide on the best one based on your indexing setup and needs. There may be more ways to achieve what you need, but this should be a good start.
I'm trying to fetch time-series data from PostgreSQL, and after successful queries and parsing of the data, I have a problem indexing it. The mistake is probably quite small, but I just can't find it.
After I get the data from PostgreSQL, it looks like this:
[
{ id: 2,
time: 2019-09-12T03:36:04.433Z,
value: 0.311303124694538
},
{ id: 2,
time: 2019-09-12T03:36:03.434Z,
value: 0.13233108292117
},
{ id: 3,
time: 2019-09-12T03:36:03.434Z,
value: 0.13233108292117 }
]
After this step I'm reducing the data by id:
let results = sqlresult.rows.reduce(function (results, row) {
  // Group [time, value] pairs under each row's id.
  (results[row.id] = results[row.id] || []).push([row.time, row.value]);
  return results;
}, {});
let clonedObj = { ...results };
After this step the data is formatted like below:
{ '2':
[ [ 2019-09-12T03:36:04.433Z, 0.311303124694538 ],
[ 2019-09-12T03:36:03.434Z, 0.13233108292117 ],
[ 2019-09-12T03:36:02.432Z, 0.171794173529729 ]
]
}
But once I try to drop it into Highcharts, it won't work. My problem is probably that I didn't fully understand how that reduce function works, and now I'm trying to copy it. If someone could show me how to avoid this step and do it all in the reduce step, I'd be thankful.
for(let i=0; i< Object.keys(clonedObj).length; i++){
highchart[i] = {
name: Object.keys(clonedObj)[i],
data: clonedObj[i]
}
}
I'm expecting a result like the one below:
[{"name":1,"data":[["2019-09-12T03:36:00.433Z",20],["2019-09-12T03:35:38.433Z",-20]]},{"name":2,"data":[["2019-09-12T03:36:04.433Z",0.311303124694538]}]]
From your nicely formatted data listings, it looks like you're using Postgres to package rows of data already. This is something I do all the time, but within some pretty narrow limits. I'd like to get better at this, so I figured I'd give your question a bit of time. To start with, I created a table named "reading" with your data:
CREATE TABLE IF NOT EXISTS reading (
id integer,
"time" text,
"value" real
);
I get back a listing like your top one with this query:
select array_to_json(array_agg(row_to_json(reading_row))) as reading_object
from (select id, time, value from reading) as reading_row
Your target output example doesn't parse right for me, I think you're after this:
[
{
"name":1,
"data":[
[
"2019-09-12T03:36:00.433Z",
20
],
[
"2019-09-12T03:35:38.433Z",
-20
]
]
},
{
"name":2,
"data":[
"2019-09-12T03:36:04.433Z",
0.311303124694538
]
}
]
Fair warning: Yeah, I don't really know how to do that, and I'm hoping someone answers with a simple script to generate exactly the format you want on the Postgres side. But I made a start. Check this out:
select id, json_object_agg(time, value order by time)
from reading
group by id
Here's what I get:
2 "{ ""2019-09-12T03:36:03.434Z"" : 0.132331, ""2019-09-12T03:36:04.433Z"" : 0.311303 }"
3 "{ ""2019-09-12T03:36:03.434Z"" : 0.132331 }"
Here's something that's... not right... but getting closer:
select array_to_json(array_agg(row_to_json(reading_row))) as reading_object
from (
select id, json_object_agg(time, value order by time) as data
from reading
group by id
) as reading_row
Which returns:
[
{
"id":2,
"data":{
"2019-09-12T03:36:03.434Z":0.132331,
"2019-09-12T03:36:04.433Z":0.311303
}
},
{
"id":3,
"data":{
"2019-09-12T03:36:03.434Z":0.132331
}
}
]
I took another crack at it here; this might be what you're after, or close. I noticed you're renaming 'id' as 'name', so that's in the final query:
select array_to_json(array_agg(row_to_json(subquery)))
from (
select id as name,
array_to_json(array_agg(json_build_object('time', time, 'value', value))) as data
from reading
group by id
) subquery
The output, pretty-printed, looks like this:
[
{
"name":2,
"data":[
{
"time":"2019-09-12T03:36:04.433Z",
"value":0.311303
},
{
"time":"2019-09-12T03:36:03.434Z",
"value":0.132331
}
]
},
{
"name":3,
"data":[
{
"time":"2019-09-12T03:36:03.434Z",
"value":0.132331
}
]
}
]
This variant has the same structure, but without labels on the elements within the array:
select array_to_json(array_agg(row_to_json(subquery)))
from (
select id as name,
array_to_json(array_agg(array[time, value::text])) as data
from reading
group by id
) subquery
Apart from the numeric value being cast as text, I think this is what you asked for. The output:
[
{
"name":2,
"data":[
[
"2019-09-12T03:36:04.433Z",
"0.311303"
],
[
"2019-09-12T03:36:03.434Z",
"0.132331"
]
]
},
{
"name":3,
"data":[
[
"2019-09-12T03:36:03.434Z",
"0.132331"
]
]
}
]
Note: I don't see where you're getting your output of 20, -20 in your example.
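One more note on the JavaScript side: the loop in your question looks up clonedObj[i] by numeric position, but after the reduce the keys are the ids ('2', '3'), so the lookup misses. A minimal key-based sketch of what I assume you're after:
// Iterate the id keys directly instead of numeric positions.
var highchart = Object.keys(clonedObj).map(function (key) {
  return {
    name: key,            // the id, e.g. '2'
    data: clonedObj[key]  // the [time, value] pairs built by reduce
  };
});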
Between array_to_json(), row(), array_agg(), and json_build_object(), it looks like you can get most any format you need.
Here's hoping that someone who actually knows what they're doing chimes in.
I am looking for ways of extracting only a portion of a JSON document with a REST API search call in MarkLogic, using JavaScript or XQuery.
I have tried using the extract-document-data query option, but was not successful. I tried checking my extract path using cts.validExtractPath, but that function was not recognized in MarkLogic 9.0-1.
Do I have to use specific search options like constraints or a structured query?
Could you please help out? TIA.
I have a sample document like the one below:
{
"GenreType": {
"Name": "GenreType",
"LongName": "Genre Complex",
"AttributeDataType": "String",
"GenreType Instance Record": [
{
"Name": "GenreType Instance Record",
"Action": "NoChange",
"TitleGenre": [
"Test1"
],
"GenreL": [
"Test1"
],
"GenreSource": [
"ABC"
],
"GenreT": [
"Test1"
]
},
{
"Name": "GenreType Instance Record",
"Action": "NoChange",
"TitleGenre": [
"Test2"
],
"GenreL": [
"Test2"
],
"GenreSource": [
"PQR"
],
"GenreT": [
"Test2"
]
}
]
}
}
in which I need to search for documents by the attribute "TitleGenre" where GenreSource = "ABC", inside the GenreType complex attribute. It's an array in the JSON document.
I was using the search option below (the options are written in XML, but I am searching JSON documents):
<extract-path>/GenreType/"GenreType Instance Record"[#GenreSource="ABC"]</extract-path>
I am still facing the issue. If possible, could you please let me know how JSON documents can be searched for such a specific requirement? @Wagner Michael
You can extract document data by using the extract-document-data option.
xquery version "1.0-ml";
let $doc := object-node {
"GenreType": object-node {
"Name": "GenreType",
"LongName": "Genre Complex",
"AttributeDataType": "String",
"GenreType-Instance-Record": array-node {
object-node {
"TitleGenre": array-node {
"Test1"
},
"GenreSource": array-node {
"ABC"
}
},
object-node {
"TitleGenre": array-node {
"Test2"
},
"GenreSource": array-node {
"PQR"
}
}}
}
}
return xdmp:document-insert("test.xml", $doc);
import module namespace search = "http://marklogic.com/appservices/search"
at "/MarkLogic/appservices/search/search.xqy";
search:search(
"Genre Complex",
<options xmlns="http://marklogic.com/appservices/search">
<extract-document-data>
<extract-path>/GenreType/GenreType-Instance-Record[GenreSource = "ABC"]</extract-path>
</extract-document-data>
</options>
)
In this case /GenreType/GenreType-Instance-Record is the XPath to the extracted element.
Relating to your comment, I also added a predicate [GenreSource = "ABC"]. This way, only GenreType-Instance-Record elements which have a GenreSource of "ABC" are extracted!
Result:
....
<search:extracted kind="array">[{"GenreType-Instance-Record":{"TitleGenre":["Test1"], "GenreSource":["ABC"]}}]
</search:extracted>
....
Note:
You can add multiple <search:extract-path> elements!
I had to change the name of GenreType Instance Record to GenreType-Instance-Record. I am not sure whether you can have property names with whitespace and access them with XPath; I couldn't get it working that way.
Please post your search options, if this does not work for you.
Edit: Added a predicate to the extract-path.
Thank you so much, Wagner, for your prompt trials; they helped me find an accurate solution to my problem for now. I have used the extract path below, as I could not modify the names in the documents: /GenreType/array-node("GenreType Instance Record")/object-node()/TitleGenre[following-sibling::GenreSource="ABC"]
How can we convert the JSON below into the format required by Highcharts? I need to convert the hierarchical JSON into a flat JSON with proper filter values (categories). The series values are to be used to show the sub-bars inside each main bar (categories).
Please refer to the link below for the chart I am trying to build from this JSON:
https://www.highcharts.com/demo/column-basic
JSON to convert (available):
{
"SalesValue": {
"barValue":[{
"type" :"Maruti",
"Drills": [
{"region":"Asia", "value":4},
{"region":"Australia", "value":87},
{"region":"America", "value":12}
]
},{
"type" :"Hyundai",
"Drills": [
{"region":"Asia", "value":33},
{"region":"Australia", "value":5}
]
},{
"type" :"Toyota",
"Drills": [
{"region":"America", "value":33}
]
}]
}
}
Desired JSON (to be made):
{
"Chart Value": {
"Categories":[{
"name" :["Maruti","Hyundai","Toyota"]
}],
"Series":[{
"name":"Asia",
"data":[4,33]
},{
"name":"Australia",
"data":[87, 5]
},{
"name":"America",
"data":[12 , 33]
}]
}
}
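A minimal reshaping sketch in plain JavaScript, assuming every region that appears should become a series and that missing type/region combinations are simply skipped, exactly as in the desired JSON above (toChartValue is a hypothetical helper name):
// Reshape the nested "barValue" structure into categories + series.
function toChartValue(input) {
  var barValue = input.SalesValue.barValue;
  var categories = barValue.map(function (bar) { return bar.type; });
  var seriesByRegion = {};
  barValue.forEach(function (bar) {
    bar.Drills.forEach(function (drill) {
      (seriesByRegion[drill.region] = seriesByRegion[drill.region] || [])
        .push(drill.value);
    });
  });
  return {
    "Chart Value": {
      "Categories": [{ "name": categories }],
      "Series": Object.keys(seriesByRegion).map(function (region) {
        return { "name": region, "data": seriesByRegion[region] };
      })
    }
  };
}
Note that for an actual Highcharts column chart, each series' data should line up with the categories, so you would usually push null instead of skipping when a type has no value for a region.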
I'm working on an application for which I need to map JSON data for storing in Elasticsearch. The number of fields in the JSON data is dynamic, so how can I do the mapping in this scenario?
Mapping snippet:
var fs = uploadedFiles[0].fd;
var xlsxRows = require('xlsx-rows');
var rows = xlsxRows(fs);
console.log(rows);
client.indices.putMapping({
"index": "testindex",
"type": "testtype",
"body": {
"testtype": {
"properties": {
"Field 1": {
"type": "string"
},
"Field 3": {
"type": "string"
},
"Field 2":{
"type":"string"
} .....
//Don't know how many fields are in json object.
}
}
}
}, function (err, response) {
if(err){
console.log("error");
}
console.log("REAPONCE")
console.log(response);
});
This is my sample JSON data:
//result of rows
[
{ Name: 'paranthn', Age: '43', Address: 'trichy' },
{ Name: 'Arthick', Age: '23', Address: 'trichy' },
{ Name: 'vel', Age: '24', Address: 'trichy' } //property fields
]
NOTE: The number of property fields is dynamic.
ES will always make its best effort to infer the correct type for the fields in your documents that have no mapping defined, and it sets sensible defaults.
A mapping is only necessary if you need to apply some special treatment to certain fields beyond those defaults, like specifying an indexing analyzer, a search analyzer, whether string fields need to be analyzed or not, specifying sub-fields with a different analyzer, etc.
If you don't need any of this, then you can let ES infer the mapping of your documents.
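A minimal sketch of that approach, reusing the client and the rows shown in the question (assuming rows really is the array of objects in the sample, and the callback-style legacy elasticsearch.js client):
// Skip putMapping entirely: index the rows and let ES infer the field types.
rows.forEach(function (row, i) {
  client.index({
    index: 'testindex',
    type: 'testtype',
    id: String(i),
    body: row  // e.g. { Name: 'paranthn', Age: '43', Address: 'trichy' }
  }, function (err, response) {
    if (err) {
      console.log('error');
      return;
    }
    console.log(response);
  });
});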