Is it possible to query !equalTo: null on Firebase? - javascript

I'm using this query to check whether data exists in my Firebase database (using AngularFire2):
let aux = this.afData.list("/drivers", { query: { orderByChild: '/accountHistory/approved', equalTo: null } });
This works fine, but I also need the inverse query, like this:
let aux = this.afData.list("/drivers", { query: { orderByChild: '/accountHistory/approved', equalTo: !null } });
The problem is that this second query only returns results if the value is TRUE, and I'm storing a TIMESTAMP at /driveruid/accountHistory/approved.
Is there any way to check only whether the value exists or not?
Thanks!

From the Firebase docs, queries with orderByChild return lists in the following order:
Children with a null value for the specified child key come first.
Children with a value of false for the specified child key come next. If multiple children have a value of false, they are sorted lexicographically by key.
Children with a value of true for the specified child key come next. If multiple children have a value of true, they are sorted lexicographically by key.
Children with a numeric value come next, sorted in ascending order. If multiple children have the same numerical value for the specified child node, they are sorted by key.
Strings come after numbers and are sorted lexicographically in ascending order. If multiple children have the same value for the specified child node, they are ordered lexicographically by key.
Objects come last and are sorted lexicographically by key in ascending order.
While your first query works fine, for your second query you could try the following code.
let aux = this.afData.list("/drivers", { query: { orderByChild: '/accountHistory/approved', startAt: false } });
By doing this, your query results will not contain children whose value for that child node is null.
That said, I would recommend making sure the data at that child node has the same type across all nodes, to minimise the chance of class-cast exceptions and other errors. This is more of a hack.
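For illustration, here is a minimal sketch of the same two queries written against the plain Firebase JS SDK instead of AngularFire2 (assuming a /drivers/<uid>/accountHistory/approved node that holds a timestamp when set and is absent otherwise; adapt the paths to your data):

const ref = firebase.database().ref('/drivers');

// Drivers where accountHistory/approved does NOT exist (its value is null):
ref.orderByChild('accountHistory/approved')
  .equalTo(null)
  .once('value', snap => console.log('not approved:', snap.numChildren()));

// Drivers where accountHistory/approved DOES exist: startAt(false) skips the
// nulls, because null sorts before booleans, numbers and strings.
ref.orderByChild('accountHistory/approved')
  .startAt(false)
  .once('value', snap => console.log('approved:', snap.numChildren()));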

Related

How to delete a child node with a unique key in Firebase Realtime Database in JavaScript

I want to remove a child that has a unique key, which I created using .push(). My database structure is like this:
{
  mobile: {
    +917654387629: {
      -NE0wCPZV-tRi84DHFxs: "gc27dghd07671a91e3"   // -NE0wCPZV-tRi84DHFxs (unique key), "gc27dghd07671a91e3" (its value)
      -NE5-b0YTUVPZ7tad9Dz: "gc2ie23c636gde57a6"   // -NE5-b0YTUVPZ7tad9Dz (unique key)
    }
  }
}
I get a mobile number and a value from the user and want to match both; if the given value exists under the given mobile number, I want to delete only that unique key and its value. I have tried this code:
firebase.database().ref('mobile/'+localStorage.getItem('OldPhoneNo')).once('value', snapshot => {
  snapshot.forEach((data) => {
    console.log(data.val())
    console.log(data.key)
    if (data.val() == qrCodeMessage) {
      firebase.database().ref('mobile/'+localStorage.getItem('OldPhoneNo')+(data.val())).remove();
    }
  })
});
but I am unable to work out how to delete the unique key and its value.
This definitely seems wrong:
firebase.database().ref('mobile/'+localStorage.getItem('OldPhoneNo')+(data.val())).remove();
Two things:
To remove a node, you need to specify its key and not its value.
You need a / between all keys in the path, and you're missing one after OldPhoneNo.
So:
firebase.database().ref('mobile/'+localStorage.getItem('OldPhoneNo')+'/'+data.key).remove();
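For completeness, here is a minimal sketch of the whole loop with that fix applied (assuming OldPhoneNo and qrCodeMessage are defined as in the question). Since data is a DataSnapshot, data.ref points at the matched child, so data.ref.remove() is an equivalent shortcut to rebuilding the path:

firebase.database().ref('mobile/' + localStorage.getItem('OldPhoneNo'))
  .once('value', snapshot => {
    snapshot.forEach(data => {
      if (data.val() === qrCodeMessage) {
        // data.key is the push ID (e.g. -NE0wCPZV-...), so this removes only
        // that unique key and its value
        data.ref.remove();
      }
    });
  });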

Performant way of finding unique values of keys from a list of objects in typescript

I am working on an Angular app that has:
a set of filters
and records presented in a table
Columns in the table correspond to filters. At any time the filters contain the unique values from their corresponding columns as options.
For a record, a column can contain more than one value (i.e. more than one option from the corresponding filter).
When a user selects an option from a filter, the records in the table are filtered and the filtered results (as per the user's selection) are shown to the user.
Once the set of filtered records is derived, unique values for each filter are derived from it by finding the unique values of each column.
Keys of Filter objects correspond to columns of Record objects. I have a list of records and a list of filters, and I want to iterate over both lists and find the unique column values for each key.
I am using the logic below to find the unique options for the filters from my records.
export function filterOptionsService(records: Record[], filters: RecordFilter[]): RecordFilter[] {
  const newFilters: RecordFilter[] = filters.map(filter => {
    // logic to find all values for a column from the set of records
    const filterOptions = records.reduce((uniqueList, record) => uniqueList.concat(record[filter.key]), []);
    // logic to find unique values for a column from the set of records,
    // which act as the options of the corresponding filter
    const uniqueOptions = uniqBy(filterOptions, (opt) => filter.valueFunction ? filter.valueFunction(opt) : opt);
    const dropListOptions: FilterOption[] = uniqueOptions.map(value => {
      return {
        label: filter.labelFunction ? filter.labelFunction(value) : value,
        value: filter.valueFunction ? filter.valueFunction(value) : value,
      };
    });
    filter.options = orderBy(dropListOptions, 'label');
    // here is my logic to find the count of each option present in the filtered records
    filter.options = filter.options.map(option => ({
      ...option,
      count: filter.valueFunction
        ? filterOptions.filter(value => filter.valueFunction(value) === option.value).length
        : filterOptions.filter(value => value === option.value).length
    }));
    return filter;
  });
  return newFilters;
}
interface Filter {
  key: string;
  labelFunction?: Function;
  valueFunction?: Function;
  order: number;
  multiple: boolean;
  options: DropListOption[];
  values: DropListOption[];
}
interface Record {
  column1: string;
  column2: string;
  column3: string | string[];
  ...
  columnN: string;
}
The below logic takes the most time in my code. It takes around 7 seconds for 8k records.
const filterOptions = records.reduce((uniqueList, record) =>
  uniqueList.concat(record[filter.key]), []);
I am unable to make my code perform better. Can you please suggest where I am going wrong?
Here is sample code on the TypeScript playground:
https://www.typescriptlang.org/play?#code/FASwdgLgpgTgZgQwMZQAQDEQBtowPIAOEIA9mKgN7Co201YIBGUWAXKgM4QzgDmA3NTq0AbgiwBXKOwRgAnoOG1kxEVAD87RiRJYosxUtQEEcrCQQATTalkKhSpCQmQbYCQFtmMQzQC+-LTA1AD0IRwIHgR6qGJ6HKhwJDCoANZQcrYwaAQwJASwEHIAjAA0xnkFMEUATKigkLCIKKgASlBOMJaYOLCUDsLpcuxcPGAC9UaoDMxY6C5IxGQ282CLpGC+wnFSq+vL7HtLmwN0yZaw7O5esFt0HhI4INHSqNq6+idT+ccch9i4QjHADaAF07qJxFI-hgAbAgRswVsAkFgGEKvlqiAoAkSHA2h1zqgkAwOAkENlOJE0AgEkNUHiCZ1unCUiTaRw0SFUBAABYgBICzggKJYEBwbGWVDZKo4yAIY6oDyZFQScTE0kJXm0pV2JXJHKVQrYzngXDNNCtEgAd36dFy+UKJRG3D4SIGDqqtRdYwEwACwWATjAXFQ4BAxHEVutMOjYNQAF5UMDTlNhBRPU7iqxgQAiMhQXPlXN87KF0GlTPVOQ1Vi5phIXN+UqptO0DNG6vZvMStRF1C5sDgcuVzve3MXOBNlttqYdx1dnP5sCF4ulqCrge9zeD4e5itV8cNputtPg4JBMIASVsHgZYCwKrJIF4Q-GsShOJ5JDSGQxXsyJIUg4EVonFEAkAVDYgzIUNw0jOZWVjQkuh6XB4yTFNZyUCghjrQ8Sn7c5LjKJVHmIF5WEQLAOCgcofg2P4wXKHYcXYMFm1PNNcIyfCxxrIiukuGpygeJ5KOo2j6KIRicwrD9JDY5NQT8GguKMc9gglXp8Bk2CAGVYBECCoAACngkAoxtDhSgs8Q0NgDgAEpBGCdEb21NRUGKAAWRJWQZRgACsOggTl0WtXkoHIDyEC8gAOVJpRtQKQsWBI+TQWYUuySwJBaTEICgsgeQQdIEgANgAWgAdk4QkwEsDgADpgDgBZFW0wE9JDQyYGMlBTLyGN2DjeSusc0aUJZHSwSc-pTgmmAWo8BACFMpbEwAPjtIwwnMXgIO-fzGtsLAsAU6FEmSWxiV0TxyDgPI70y+qIAZfFsmZBJT2DUMloRWDE2SmNmty-KzNMlwQAARykAAZAUIHKYb5oTHbUGhuGoERrhmuDSCICGm1gSW5qhlBJzyjmwIuN+2CPmag7TIAcgBnqOBZ0p2d+FzsNbPwXP9QMgA
I think the performance problem you're having is that Array.prototype.concat() does not modify an existing array, but instead returns a new array. Immutability is nice, but it doesn't seem relevant to your use case: every uniqueList array you create except for the very last one will be discarded. Object creation is fairly fast in JavaScript, but creating thousands of array objects only to immediately throw them away is slowing things down.
My suggestion would be to replace concat() with something that modifies the existing array, such as Array.prototype.push():
That is, you could change
const filterOptions = rows.reduce(
  (uniqueList, row) => uniqueList.concat(row[filter.key]), []
);
to
const filterOptions: string[] = [];
for (let row of rows) filterOptions.push(...row[filter.key]);
When I run a simulation where I create 6000 rows and 14 filters (see playground link below), the concat() version takes about 7.5 seconds, whereas the push() version takes about 38 milliseconds. Hopefully that factor of ~200 improvement holds in your environment and makes enough of an impact to be sufficient for your needs.
Note that I'm not dealing with any "uniquifying" or "function" aspects that your original problem seems to have, since your reproducible example code doesn't touch that either. Unique strings might be more easily tallied via an object key than an array, or maybe even via a Set. But again, hopefully changing concat() to push() will be enough for you.
Playground link to code
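As a side note, here is a rough sketch (not part of the linked playground) of the Set idea mentioned above: collect the values with push() and de-duplicate in the same pass, assuming record[filter.key] is a string or an array of strings and no valueFunction is involved:

const seen = new Set<string>();
const filterOptions: string[] = [];
for (const record of records) {
  const value = record[filter.key];
  // normalise single values and arrays to one shape, then keep only first occurrences
  for (const v of Array.isArray(value) ? value : [value]) {
    if (!seen.has(v)) {
      seen.add(v);
      filterOptions.push(v);
    }
  }
}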

Mongodb E11000 duplicate key error on array field

I have a quick question about a mongoose schema. Here is the code: https://i.ibb.co/Db8xPMw/5555.png
I tried to create a document without the property "work". It worked the first time, but it failed when I did the same thing a second time.
Do you have any idea?
Basically I create two documents without a "work" property, which causes a duplicate key error, even though I didn't set unique: true.
Error :
"errmsg" : "E11000 duplicate key error collection: test.user index work_1 dup key: { : null }
The message says your collection has an index named work_1, most likely on the field work. Since you've created a document without the work field, you cannot create another document without that field in the same collection: two documents with a missing work field (or with the same value, including null) violate the unique constraint (hence dup key: { : null }). Unique indexes can be created via mongoose schemas or by manually running queries against the database.
Ref : Search for Unique Index and Missing Field in index-unique
So you need to drop the existing index using dropIndex and then, if needed, recreate it using createIndex. MongoDB automatically converts an index into a multikey index (an index on array fields) if at least one existing document has an array value for the indexed field when the index is created, or if an array value is inserted for that field later.
Through code - drop the index: yourSchema.dropIndex({ yourFieldName: 1 }), and create one: yourSchema.index({ yourFieldName: 1 }).
NOTE: If you want a unique index with extra criteria, as in this question where the indexed field may be missing from some documents without that counting as a duplicate insertion, you can use partial indexes (search for "Partial Index with Unique Constraint"), which only index documents where the work field exists.
Example of a partial index:
db.yourCollectionName.createIndex(
  { work: 1 },
  { unique: true, partialFilterExpression: { work: { $exists: true } } }
)
Ref : mongoose-Indexes
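If you just need to clear the offending index quickly, here is a sketch from the mongo shell, assuming the collection and index name from the error message (test.user, work_1):

// drop the existing unique index that treats a missing "work" field as null
db.user.dropIndex("work_1")
// recreate it as a partial unique index so documents without "work" no longer collide
db.user.createIndex(
  { work: 1 },
  { unique: true, partialFilterExpression: { work: { $exists: true } } }
)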

AWS Dynamodb Query - get items from table with condition

There is a DynamoDB table with these fields:
email (primary)
tenant
other stuff
I want to get all the items where email contains 'mike'.
In my Node.js server, I have this code:
const userTableName = 'UserTable';
const db = new aws.DynamoDB();
const email = 'mike.green#abc.com';

const params = {
  TableName: userTableName,
  KeyConditionExpression: '#email = :email',
  ExpressionAttributeNames: {
    '#email': 'email',
  },
  ExpressionAttributeValues: {
    ':email': { S: email },
  },
};

db.query(params, (err, data) => {
  if (err) {
    reject(err);
  } else {
    const processedItems = [...data.Items].sort((a, b) => a.email < b.email ? -1 : 1);
    const processedData = { ...data, Items: processedItems };
    resolve(processedData);
  }
});
This works ^^ only if I search for the entire email mike.green#abc.com.
Question 1 -
If I want to search for mike and return all items where email contains mike, how can I get that?
Question 2 -
If I want to get all the rows where email contains mike and tenant is Canada, how can I get that?
I'm not a Node.js user, but I hope this helps.
Question 1 - If I want to search for mike and return all items where
email contains mike, how can I get that?
Key expressions are restricted to equality constraints. If you want more querying flexibility, you need to use a filter expression. Note that you won't be able to use a filter expression on your partition key. You can find more information at https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html but the most important part is:
Key Condition Expression
To specify the search criteria, you use a key condition expression—a
string that determines the items to be read from the table or index.
You must specify the partition key name and value as an equality
condition.
You can optionally provide a second condition for the sort key (if
present). The sort key condition must use one of the following
comparison operators:
a = b — true if the attribute a is equal to the value b
a < b — true if a is less than b
a <= b — true if a is less than or equal to b
a > b — true if a is greater than b
a >= b — true if a is greater than or equal to b
a BETWEEN b AND c — true if a is greater than or equal to b, and less than or equal to c.
The following function is also supported:
begins_with (a, substr)— true if the value of attribute a begins with a particular substring.
......
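For illustration only: begins_with applies to the sort key, so if your table were laid out differently - say tenant as the partition key and email as the sort key (a hypothetical layout, not the one in the question) - a prefix search would look roughly like this:

const params = {
  TableName: 'UserTable',
  // equality on the partition key, begins_with on the sort key
  KeyConditionExpression: '#tenant = :tenant AND begins_with(#email, :prefix)',
  ExpressionAttributeNames: { '#tenant': 'tenant', '#email': 'email' },
  ExpressionAttributeValues: {
    ':tenant': { S: 'Canada' },
    ':prefix': { S: 'mike' },
  },
};
db.query(params, (err, data) => { /* ... */ });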
Question 2 - If I want to get all the rows where email contains mike and
tenant is Canada, how can I get that?
You can use a filter expression to do that, with one of the available functions: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Syntax. A filter expression is:
If you need to further refine the Query results, you can optionally
provide a filter expression. A filter expression determines which
items within the Query results should be returned to you. All of the
other results are discarded.
A filter expression is applied after a Query finishes, but before the
results are returned. Therefore, a Query will consume the same amount
of read capacity, regardless of whether a filter expression is
present.
A Query operation can retrieve a maximum of 1 MB of data. This limit
applies before the filter expression is evaluated.
A filter expression cannot contain partition key or sort key
attributes. You need to specify those attributes in the key condition
expression, not the filter expression.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html
To wrap up:
If email is your partition key, you cannot apply contains on it - you have to query it by its exact value.
Alternatively, you can do a scan over your table and apply a filter to it (https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html), but I wouldn't do that because of the consumed read capacity and the response time. A scan operates over all rows in the table, so if you have something like hundreds of GB you will likely not get the information in real time, and real-time serving is one of the purposes of DynamoDB.
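If you do accept those costs, here is a hedged sketch of such a scan for "email contains mike and tenant is Canada", using the same aws.DynamoDB() client as in the question (unlike Query, a Scan filter expression may reference key attributes):

const params = {
  TableName: 'UserTable',
  FilterExpression: 'contains(#email, :sub) AND #tenant = :tenant',
  ExpressionAttributeNames: { '#email': 'email', '#tenant': 'tenant' },
  ExpressionAttributeValues: {
    ':sub': { S: 'mike' },
    ':tenant': { S: 'Canada' },
  },
};

db.scan(params, (err, data) => {
  if (err) console.error(err);
  // the filter runs after the read, so read capacity covers the full scan;
  // for large tables you would also follow data.LastEvaluatedKey to page through results
  else console.log(data.Items);
});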

Fetch and sort mixed characters from Firebase (JavaScript)

Firebase Structure
Periods
+Period1
+Period10
+Period2
+Period3
+Period4
I am getting the values as they are, but I want them sorted like Period1, Period2, Period3, ..., Period10.
Referring to :
https://firebase.google.com/docs/database/web/lists-of-data#data-order
When using orderByKey() to sort your data, data is returned in ascending order by key.
Children with a key that can be parsed as a 32-bit integer come first, sorted in ascending order.
Children with a string value as their key come next, sorted lexicographically in ascending order.
You could drop the "Period" prefix as a redundancy, since it is implied by the parent node, and use the numeric id directly (not recommended by Firebase), or use push() to generate keys (ids) natively in Firebase, as sketched below.
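A minimal sketch of the push() approach, assuming a plain Firebase JS SDK reference to the Periods node:

const periodsRef = firebase.database().ref('Periods');
// push() generates chronologically ordered unique keys; the period number lives in the data
periodsRef.push({ Period: 1 });
periodsRef.push({ Period: 2 });
periodsRef.push({ Period: 10 });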
The Firebase Database sorts strings lexicographically. And in alphabetical order Period10 comes before Period2.
To ensure the lexicographical order also matches the numerical order that you're looking for, you can consider padding the numbers:
Periods
+Period001
+Period010
+Period002
+Period003
+Period004
With this padding, the lexicographical order matches the numerical ordering that you're looking for. Of course the amount of padding will depend on the maximum number you realistically expect.
Alternatively you can simply store a Period property with a numeric value in each child node and order on that. In that case you can also use push IDs for the keys, as Alex said:
Periods: {
  -Laaaaaa: {
    Period: 1
  },
  -Laaaaab: {
    Period: 10
  },
  -Laaaaac: {
    Period: 2
  },
  -Laaaaad: {
    Period: 3
  },
  -Laaaaae: {
    Period: 4
  }
}
Now you can query ref.orderByChild("Period") and the children will be returned in the numerical order you want.
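A minimal sketch of that query, assuming the push-ID structure above:

firebase.database().ref('Periods')
  .orderByChild('Period')
  .once('value', snapshot => {
    snapshot.forEach(child => {
      // children arrive in ascending numeric order of Period
      console.log(child.key, child.val().Period);
    });
  });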
