Converting Firebase Realtime Database to a JavaScript Array

I'm currently working on a project that requires a way to pull a set of data from the database and then update the UI accordingly. What I'm trying to do is store an array of items in Firebase, then pull that down for the user and have it convert back to a simple array.
My database is structured as follows:
database {
  master {
    itemarray: ['item1', 'item2', 'item3', 'item4']
  }
}
To pull this data, I use the following code:
database.ref("master/itemarray").on('value', showArray);

function showArray(globalArray) {
  console.log(globalArray.val());
  var simpleArray = Array.from(globalArray.val());
  console.log(simpleArray);
}
When I run this, however, the array seems to break each letter into its own value, so I'm left with an array filled with the letters of each 'item' instead of the whole values.
The console logs for both globalArray.val() and simpleArray are below:
Storing the data in Firebase with explicit indexes, such as:
[0: 'item1', 1: 'item2', 2: 'item3', 3: 'item4']
Leaves me with the same problem.
Any advice would be awesome!
Thanks in advance :)

From the first line of the second screenshot it looks like your master/itemarray is actually a single string value with comma-separated values. It is not an array.
It's impossible to say why it's stored as a single string without seeing the code that writes master/itemarray, but based on what you shared, the problem originates there.
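As a rough sketch of what is likely happening (assuming the stored value is the single string 'item1,item2,item3,item4'), Array.from() iterates a string character by character, which explains the array of letters:
// Assumed stored value: the single string "item1,item2,item3,item4"
Array.from("item1,item2,item3,item4");
// -> ['i', 't', 'e', 'm', '1', ',', 'i', 't', 'e', 'm', '2', ...]

// Quick read-side workaround: split the string on commas instead
var simpleArray = globalArray.val().split(',');

// The proper fix is to write a real array when saving:
database.ref("master/itemarray").set(['item1', 'item2', 'item3', 'item4']);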

Related

How do I check if an object is within an array in JavaScript?

I am getting stuck on a project where I am trying to pull relevant data out of a massive Google Maps Timeline JSON, and running into the problem that the structure is not as orderly as I thought it was. Essentially, I am trying to pull out an address, time, date and mileage for every trip in my car. To use this data, I pasted it into a normal JavaScript file and named it so I can use it as an object. I then take this data and create a string that formats that info like a CSV file.
From going over the structure of the json by looking at only a few trips, I was able to determine the following general structure:
const google = {
  timelineObjects: [
    0: {activitySegment: {A NUMBER OF OBJECTS}},
    1: {placeVisit: {A NUMBER OF OBJECTS}},
    2: {activitySegment: {A NUMBER OF OBJECTS}},
    3: {placeVisit: {A NUMBER OF OBJECTS}}
  ]
}
activitySegment has all the travelling info, like distance, travel times, etc., and placeVisit has info about the destination. In my small sample, I was able to just loop through each using an if statement with i % 2 === 0, and change what I pulled out from each, which worked well.
When I tried adding a larger sample, I found that Google occasionally did not create an activitySegment object and only had a placeVisit, which was throwing "Uncaught TypeError: Cannot read property 'distance' of undefined".
I am sure that the even/odd approach will not work any more. Is there a way to use a conditional to check whether google[timelineObjects][i] is an {activitySegment} or a {placeVisit}? Or is there a better way of figuring out which object is next in the array?
You can test to see if the Object at the particular array index has a given property using Object.prototype.hasOwnProperty():
const google = {
  timelineObjects: [
    {activitySegment: {}},
    {placeVisit: {}},
    {activitySegment: {}},
    {placeVisit: {}}
  ]
};
console.log(google.timelineObjects[0].hasOwnProperty("activitySegment")); // true
console.log(google.timelineObjects[1].hasOwnProperty("activitySegment")); // false
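Applied to the loop from the question, a minimal sketch (assuming the timelineObjects structure shown above) could branch on that check to avoid the TypeError:
google.timelineObjects.forEach((item) => {
  if (item.hasOwnProperty("activitySegment")) {
    // travel info lives here, e.g. item.activitySegment.distance
  } else if (item.hasOwnProperty("placeVisit")) {
    // destination info lives here
  }
});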
If your objective is to see what type of object you get, you can iterate over each element, check what its key is, and process the data depending on the key value. Something like this (iterating the timelineObjects array, since each element holds a single key):
google.timelineObjects.forEach((item) => {
  Object.entries(item).forEach(([key, value]) => {
    if (key === 'activitySegment') {
      // process activitySegment here
    } else {
      // process placeVisit here
    }
  });
});

Insert data in Firebase in Reactjs

While storing an object to my Firebase, I am expecting the structure in the image below, but what I get is a generated running number as the key. This is my code to store the object in Firebase:
var location = [];
location.push({
  ms_jhr_1: {
    name: value
  },
  ...
});
const a = firebase.database().ref('Food/' + id);
a.set(location);
How do I keep my structure without generating the running number?
The problem is that you are using an array to store your data and then setting that array in Firebase. To get the expected result you have to modify your code a little bit.
Use this and remove the other code:
const a = firebase.database().ref('Food/'+id);
a.set({ ms_jhr_1: { name: value } });
So you just need to pass the object you want to store under that id and not the whole array.
Note: if you want to store multiple entries under one id, then you have to put all those entries in an object, not an array.
So it will look something like this:
var location = {};
Now use a loop to insert all your data into this object (remember, you are adding objects inside an object). You don't need an array, because Firebase stores data in a JSON tree format.
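A minimal sketch of such a loop, using hypothetical sample rows (the ms_jhr_* keys and name values here are made up for illustration):
// Hypothetical input rows; in the real app these would come from your state
var rows = [
  { id: 'ms_jhr_1', name: 'Food name 1' },
  { id: 'ms_jhr_2', name: 'Food name 2' }
];

var location = {};
rows.forEach(function (row) {
  // objects inside an object, keyed by the id you want to keep
  location[row.id] = { name: row.name };
});

firebase.database().ref('Food/' + id).set(location);
// Resulting tree: Food/<id>/ms_jhr_1/name, Food/<id>/ms_jhr_2/name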
Hope it helps.

Push to database and increment integer key

So I'm using the Firebase Realtime Database as my backend, and I have a problem that probably has a really simple solution.
I want to push some information to the database, but as things stand, instead of pushing a new object, it simply updates the existing data.
Here is how I'm doing it:
firebase.database().ref('users/' + uid + '/entries/' + $scope.todos.id +
    '/trackers/' + $scope.currTrack.id).update({
  note: individual.note,
  value: individual.value
});
So it's pretty straightforward, but this is what's happening in the db:
...
-KuB-l8OX9zxGYuGw_EK   // this is $scope.currTrack.id
    note: "the note"
    value: "the value"
So instead of creating a new incrementing integer key, it just puts the data right under the id.
What I would like to happen is:
...
-KuB-l8OX9zxGYuGw_EK   // this is $scope.currTrack.id
    0                  // incrementing integer key
        note: "the note"
        value: "the value"
    1
        note: "the 2nd note"
        value: "the 2nd value"
What am I doing wrong? I'm using update, but set works the same as well.
Any thoughts? Thank you!
I'm using update but set works the same as well.
From the firebase.google.com docs, "Ways to Save Data":
set: Write or replace data to a defined path, like messages/users/
update: Update some of the keys for a defined path without replacing all of the data
I think you need to use the push API:
push: Add to a list of data in the database. Every time you push a new node onto a list, your database generates a unique key, like messages/users//
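A minimal sketch of the question's code switched over to push() (note that push() generates unique chronological keys such as -KuB..., rather than 0, 1, 2, but it is the recommended way to append to a list in Firebase):
firebase.database().ref('users/' + uid + '/entries/' + $scope.todos.id +
    '/trackers/' + $scope.currTrack.id).push({
  note: individual.note,
  value: individual.value
});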
According to your code, every time you call update() on
'users/' + uid + '/entries/' + $scope.todos.id + '/trackers/' + $scope.currTrack.id
the note and value stored at that path are overwritten with the new values, i.e. each write replaces the previous entry. If you want the behaviour you described, you can achieve it by:
retrieving all the existing elements first using once() and saving them to an array, say list
pushing your item onto list
then performing an update()
But that approach does not make much sense here.
Another way:
Maintain a count variable.
var count = 0;
...update({
  [count++]: {
    note: individual.note,
    value: individual.value
  }
});
But I would recommend that you either use push() or structure your Firebase db as this link says.

How can I get the pre-sort index of a record in an ExtJS store?

When I load JSON data into an ExtJS store, the index of each record reflects the original order of the objects in the array. However, after applying a sort to the store, the index changes based on the new order.
How can I access the original index as it was when the data was loaded? I need it to perform CRUD operations on the records.
Like FoxMulder900 said, you can add an id to the record itself to have the original index position, but if you don't want or can't change your source, you can add an idGenerator to your model like this:
Ext.define('MyModel', {
    extend: 'Ext.data.Model',
    fields: [
        // your fields here
    ],
    idgen: {
        type: 'sequential',
        id: 'myIdField',
        seed: 0
    }
});
So when you call record.get('myIdField') you will always get your original index.
I've setup a fiddle demonstrating this: https://fiddle.sencha.com/#fiddle/cgd (Double click any record to check the Original index and new index).
That information might be lost once you have sorted the store (not entirely sure). I would just try to include an id in the record itself. The initial index should work as the id if you can include that in the JSON before you send it.
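A minimal sketch of that idea, assuming you can touch the data before loading it (rawRecords and originalIndex are made-up names):
// Tag each record with its position before the store ever sorts it
var tagged = rawRecords.map(function (rec, i) {
    return Ext.apply({ originalIndex: i }, rec);
});
store.loadData(tagged);
// record.get('originalIndex') then returns the pre-sort position,
// provided the model declares an originalIndex field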

Algorithm for data filter

Can you suggest an algorithm for filtering data?
I am using JavaScript and trying to write a filter function that filters an array of data. I have an array of data and an array of filters, so in order to apply each filter to every data item, I have written two nested loops:
foreach (data) {
    foreach (filter) {
        check data with filter
    }
}
This is not the actual code, but in short that is what my function does. The problem is that this takes a huge amount of time; can someone suggest a better method?
I am using the MooTools library, and the data is a JSON array.
Details of the data and filters
The data is a JSON array of, let's say, users:
data = [{"name": "first", "email": "first#first", "age": "20"},
        {"name": "second", "email": "second#second", "age": "21"},
        {"name": "third", "email": "third#third", "age": "22"}]
The array of filters is basically a set of self-defined classes for the different fields of the data:
alFilter[0] = filterName;
alFilter[1] = filterEmail;
alFilter[2] = filterAge;
So when I enter the first for loop, I get a single JSON object (the first row in the above case).
When I enter the second for loop (the filters loop), I have a filter class which extracts the exact field the current filter works on and checks the filter against the appropriate field of the data.
So in my example:
foreach (data) {
    foreach (filter) {
        // loop one   - filter name
        // loop two   - filter email
        // loop three - filter age
    }
}
When the second loop ends, I set a flag denoting whether the data item has been filtered out or not, and depending on it the item is displayed.
You're going to have to give us some more detail about the exact structure of your data and filters to really be able to help you out. Are the filters being used to select a subset of data, or to modify the data? What are the filters doing?
That said, there are a few general suggestions:
Do less work. Is there some way you can limit the amount of data you're working on? Some pre-filter that can run quickly and cut it down before you do your main loop?
Break out of the inner loop as soon as possible. If one of the filters rejects a datum, then break out of the inner loop and move on to the next datum. If this is possible, then you should also try to make the most selective filters come first. (This is assuming that your filters are being used to reject items out of the list, rather than modify them)
Check for redundancy in the computation the filters perform. If each of them performs some complicated calculations that share some subroutines, then perhaps memoization or dynamic programming may be used to avoid redundant computation.
Really, it all boils down to the first point, do less work, at all three levels of your code. Can you do less work by limiting the items in the outer loop? Do less work by stopping after a particular filter and doing the most selective filters first? Do less work by not doing any redundant computation inside of each filter?
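A minimal sketch of the early-exit idea, assuming the filters are plain predicate functions ordered with the most selective first (orderedFilters is a made-up name):
// every() stops calling predicates as soon as one returns false,
// which is exactly the "break out of the inner loop" behaviour above
var filtered = data.filter(function (item) {
    return orderedFilters.every(function (passes) {
        return passes(item);
    });
});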
That's pretty much how you should do it. The trick is to optimize the "check data with filter" part. You need to traverse all your data and check it against all your filters; you're not going to get any faster than that.
Avoid string comparisons, use data models that are as native as possible, try to reduce the data set on each filter pass, etc.
Without further knowledge, it's hard to optimize this for you.
You should sort the application of your filters, so that two things are optimized: expensive checks should come last, and checks that eliminate a lot of data should come first. Then, you should make sure that checking is cut short as soon as an "out" result occurs.
If your filters are looking for specific values, a range, or the start of a text, then jOrder (http://github.com/danstocker/jorder) will fit your problem.
All you need to do is create a jOrder table like this:
var table = jOrder(data)
    .index('name', ['name'], { grouped: true, ordered: true })
    .index('email', ['email'])
    .index('age', ['age'], { grouped: true, ordered: true, type: jOrder.number });
And then call table.where() to filter the table.
When you're looking for exact matches:
filtered = table.where([{name: 'first'}, {name: 'second'}]);
When you're looking for a certain range of one field:
filtered = table.where([{age: {lower: 20, upper: 21}}], {mode: jOrder.range});
Or, when you're looking for values starting with a given string:
filtered = table.where([{name: 'fir'}], {mode: jOrder.startof});
Filtering will be orders of magnitude faster this way than with nested loops.
Supposing that a filter removes the data if it doesn't match, I suggest that you switch the two loops, like so:
foreach (filter) {
    foreach (data) {
        check data with filter
    }
}
By doing so, the second filter doesn't have to process all the data, but only the data that passed the first filter, and so on. Of course the tips above (like doing expensive checks last) are still true and should additionally be considered.
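A minimal sketch of that loop order, again assuming the filters are plain predicate functions (filters is a made-up array name):
// Each pass keeps only the survivors of the previous filter, so later
// filters see progressively fewer items
var result = filters.reduce(function (remaining, passes) {
    return remaining.filter(passes);
}, data);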
