Cesium large number of entity updates - javascript

I am working on a project dealing with sensor data. In my backend everything is stored in a database which is polled by a controller and converted into KML to display on the Cesium globe. This poll happens every 5-10 seconds and contains around 4000-8000 objects (we store up to 1 minute's worth of data, so we are looking at somewhere like 20k-50k points). On top of this I have an update function, running every 5 seconds, which slowly fades the markers out.
To load the kml on the map I use the following function:
var dataSource = new Cesium.KmlDataSource();
dataSource.load('link').then(function(value) {
    viewer.dataSources.add(dataSource);
});
In the color update function I iterate over all of the objects within the data source's entity collection and update them like so (this is very inefficient):
var colorUpdate = Cesium.Color.fromAlpha(newColor, .4);
dataSource.entities.values[i].billboard.color = colorUpdate;
When I do an add or a color update I see a large amount of lag, and I was curious whether there is anything you would suggest to fix this. Generally I get a freeze-up for a few seconds. After 60 seconds of the data being on the map it gets removed like so (just a different if case within the color update loop):
dataSource.entities.remove(dataSource.entities.values[i]);
Is there potentially a way to set a property for an entire entity collection, so that when the collection becomes 30 seconds old it updates the color to a new one? It seems that I just need to find a way to set a property for the entire collection rather than for individual entities. Does anyone know how to do that, or have a suggestion for something better?
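For reference, a sketch of the kind of loop I am running. The suspendEvents/resumeEvents wrapper is something I have seen suggested for batching the collection's change notifications instead of firing one per entity; I have not confirmed how much it helps at this scale:

var entities = dataSource.entities;
var colorUpdate = Cesium.Color.fromAlpha(newColor, 0.4);

entities.suspendEvents(); // batch change notifications
for (var i = 0; i < entities.values.length; i++) {
    var entity = entities.values[i];
    if (entity.billboard) {
        entity.billboard.color = colorUpdate;
    }
}
entities.resumeEvents(); // fire a single collectionChanged event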

Related

Check Validity of Data Before Update Phase in D3.js

I have data which updates every 10 seconds and I would like to check that all the data is valid before progressing with updates. I am currently getting false data intermittently which occurs as a negative number in one of the values. If one of the objects has a negative value then I don't trust the whole set and don't want to update any elements.
Ideally I don't want to update some items and then bail once the incorrect value occurs, but rather determine that the whole set is good before updating anything.
I'm not sure how D3 can manage this, but I've tried the following and it seems to work. It doesn't seem particularly in keeping with the elegance of D3, though, so I think there's probably a more correct way to do it. But maybe not?!
var dataValid = true;
abcItems.each(function (d, i) {
    if (0 > d.Number1 - d.Number2) dataValid = false;
});
if (dataValid) {
    abcItems.each(function (d, i) {
        // updating elements here
    });
} else {
    console.log("negative value occurred");
}
Is there a better way to manage this through D3?
A little bit more context:
The data (JSON provided via a RESTful API) and visualisation (a bar chart) are updating every 10 seconds. The glitch in the API results in incorrect data once every hour or so at the most (sometimes it doesn't happen all day). The effect of the glitch is that the bars all change dramatically whereas the data should only change by ones or twos each iteration. In the next fetch of data 10 seconds later the data is fine and the visualisation comes right.
The data itself is always "well-formed"; it's just that the values provided are incorrect. Therefore even during the glitch it is safe to bind the data to elements.
What I want to do, is skip the entire iteration and update phase if the data contains one of these negative values.
Perhaps also worth noting is that the items in the data are always the same, that is to say the only "enter" phase that occurs is on page load and there are no items that exit (though I do include these operations to capture any unexpected fluctuations in the data). The values for these items do change though.
Looking at your code, it seems you have already bound the dataset to your DOM elements (abcItems.each(...)).
Why not bail out of the update function when the data is not valid?
d3.json("bar-tooltip.json", function(dataset) {
if (!dataset.every(d => d.Number2 <= d.Number1)) return;
// do the update of the graph
});
The example assumes you call d3.json() from a function that is called every update interval, but you can use a different update method.
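To see it in context, here is a minimal sketch of that idea wired into a 10-second poll; updateChart is a hypothetical stand-in for whatever update logic the chart already has:

// Poll every 10 seconds and skip the whole update when any value is invalid.
setInterval(function () {
    d3.json("bar-tooltip.json", function (dataset) {
        if (!dataset || !dataset.every(function (d) { return d.Number2 <= d.Number1; })) {
            console.log("negative value occurred - skipping this update");
            return;
        }
        updateChart(dataset); // placeholder for the existing update code
    });
}, 10000);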

change the domain of my line chart at runtime

In my code I am loading a JSON file of more than 900 records; these represent the data emitted by some machines. I'm drawing a line chart, and the keys in this JSON represent the names of the machines.
This is the structure of my JSON:
{"AF3":3605.1496928113393,"AF4":-6000.4375230516,"F3":1700.3827875419374,"F4":4822.544985821321,"F7":4903.330735023786,"F8":824.4048714773611,"FC5":3259.4071092472655,"FC6":4248.067359141752,"O1":3714.5106599153364,"O2":697.2904723891061,"P7":522.7300768483767,"P8":4050.79490288753,"T7":2939.896657485737,"T8":9.551935316881588}
Each line represents one machine, and I put a space to see each machine separately. I am currently reading the data with the help of a counter called cont. All the data in the JSON is between 0 and 5000, but I have modified some objects of the JSON so that the domain changes, and the new domain must then apply to all of the lines.
For example, on line 106 of the JSON, "AF3":7000 (in this case the domain should be [0, 7000] for all the lines).
On line 300, "AF4":-1000 (in this case the domain should be [-1000, 7000] for all the lines).
I have modified some data on purpose to achieve this change. I would like all lines to be updated to this new domain, if possible with an animation.
How can I do it?
this is my code:
http://plnkr.co/edit/KVVyOYZ4CVjxeei7pd9H?p=preview
To update the domain across all the lines, we need to recalculate the domain before the new data gets pushed in.
Plunker: http://plnkr.co/edit/AHWVM3HT7TDAiINFRlN9?p=preview
// extent of the incoming data point across all machines
var newDomain = d3.extent(ids.map(function(d) {
    return aData[cont][d];
}));
var oldDomain = y.domain();
// only ever widen the domain, never shrink it
newDomain[0] = newDomain[0] < oldDomain[0] ? newDomain[0] : oldDomain[0];
newDomain[1] = newDomain[1] > oldDomain[1] ? newDomain[1] : oldDomain[1];
y.domain(newDomain);
domain.text(y.domain());
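Since the question also asks for an animation when the domain changes, here is a minimal sketch of that part. The ".y.axis" selector and the yAxis and line generators are assumptions and may be named differently in the Plunker:

// Animate the axis and every machine's path to the rescaled domain.
svg.select(".y.axis")
    .transition()
    .duration(500)
    .call(yAxis);      // axis slides to the new [min, max]
svg.selectAll(".line")
    .transition()
    .duration(500)
    .attr("d", line);  // redraw each path against the updated y scale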
With respect to the graph getting trimmed: the data needs to be manipulated (in your case, 14 arrays, a push and shift operation on each array, and a D3 transition) all within 1 ms, which may not be enough. Unfortunately I don't have any resource to back this up; in case anyone can edit this answer to provide proof, please feel free.

Big data amounts with Highcharts / Highstock (async loading)

Since my amount of data grows bigger every day (right now > 200k MySQL rows in one week), the chart is very slow at loading. I guess the async loading method is the right way to go here (http://www.highcharts.com/stock/demo/lazy-loading).
I tried to implement it, but it doesn't work. So far I can provide my data with Python via URL parameters e.g. http://www.url.de/data?start=1482848100&end=1483107000, but there are several things I don't understand in the example code:
If a period of "All data" is chosen in the navigator, then all data is provided by my server and loaded by the chart, so it's the same as what I do right now without lazy loading. What's the difference then?
Why is there a second getJSON() call without any URL parameters in the above-mentioned example code? It's the following URL, which is empty. What do I need it for? I don't understand it:
https://www.highcharts.com/samples/data/from-sql.php?callback=?
And which method of loading the data is better?
This one: chart.series[0].setData(data);
or the code below, which I use so far:
var ohlc = [],
    volume = [],
    dataLength = data.length,
    i = 0;

for (i; i < dataLength; i += 1) {
    ohlc.push([
        data[i]['0'],   // date
        data[i]['1_x'], // open
        data[i]['2_x'], // high
        data[i]['3'],   // low
        data[i]['4']    // close
    ]);
}
The idea behind the lazy loading demo is that you fetch only the number of points that is necessary, so even if your data includes 1.7 million points, you never load that many points into the chart.
Based on the Highcharts demo: instead of loading too many points, you request points that have already been grouped on the server. You have 1.7 million daily points and you set the navigator to 'all' (time range 1998 - 2011); you don't need daily data there, so the response contains monthly points instead. The gains are: fetching a smaller amount of data (12 * 14 = 168 points instead of 1.7 million) and avoiding heavy data processing on the client side (parsing, grouping, etc.) -> lower memory and CPU usage for the client and faster chart loading.
The request for the data is in JSONP format (more information about its advantages here). So the URL actually has 3 params: the mandatory callback=? and the optional start=?&end=?, which indicate the points' time range and density. The first request does not have start/end params because the server has default values already set. After the navigator is moved, more detailed points are requested and loaded into the chart. This is the downside of lazy loading: every time the navigator is moved you request a new set of data -> frequent data requests and possible interruptions due to network failures.
The answer to your last question depends on whether your data is already in the proper format. If it is, you can avoid looping over the data on the client side and load it into the chart directly. If the format is not correct, then you have to preprocess the data so the chart can visualize it correctly. Ideally, you want the data to already be in the right format when you receive it, so if you can, do that work on the server side.
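For reference, a minimal sketch of the lazy-loading wiring from the demo, adapted to the URL in the question. The afterSetExtremes event, showLoading/hideLoading and setData are standard Highstock API; the URL, parameter units and response format are assumptions based on the question:

// Request a freshly grouped data set whenever the selected range changes.
function afterSetExtremes(e) {
    var chart = this.chart; // `this` is the x-axis inside the event handler
    chart.showLoading('Loading data from server...');
    $.getJSON('http://www.url.de/data?start=' + Math.round(e.min) +
              '&end=' + Math.round(e.max), function (data) {
        chart.series[0].setData(data); // server returns points at a suitable density
        chart.hideLoading();
    });
}

// wired up in the chart configuration:
// xAxis: { events: { afterSetExtremes: afterSetExtremes } }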

d3 — Progressively draw a large dataset

I'm using d3.js to plot the contents of an 80,000 row .tsv onto a chart.
The problem I'm having is that since there is so much data, the page becomes unresponsive for approximately 5 seconds while the entire dataset is churned through at once.
Is there an easy way to process the data progressively, spreading the work over a longer period of time?
Ideally the page would remain responsive, and the data would be plotted as it became available, instead of in one big hit at the end.
I think you'll have to chunk your data and display it in groups using setInterval or setTimeout. This will give the UI some breathing room to handle other work between chunks.
The basic approach is:
1) chunk the data set
2) render each chunk separately
3) keep track of each rendered group
Here's an example:
var dataPool = chunkArray(data, 100); // split the dataset into 100-row chunks
var poolPosition = 0;

function updateVisualization() {
    // each chunk gets its own group with its own data join
    group = canvas.append("g").selectAll("circle")
        .data(dataPool[poolPosition])
        .enter()
        .append("circle");
        /* ... presentation stuff .... */
    poolPosition += 1;
    if (poolPosition >= dataPool.length) clearInterval(iterator); // stop when done
}

iterator = setInterval(updateVisualization, 100);
You can see a demo fiddle of this -- done before I had coffee -- here:
http://jsfiddle.net/thudfactor/R42uQ/
Note that I'm making a new group, with its own data join, for each array chunk. If you keep adding to the same data join over time (data(oldData.concat(nextChunk))), the entire data set still gets processed and compared even if you're only using the enter() selection, so it doesn't take long for things to start crawling.
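The example above assumes a chunkArray helper; a minimal version might look like the following (the name and signature are simply what the snippet expects):

// Split an array into consecutive slices of at most `size` elements.
function chunkArray(array, size) {
    var chunks = [];
    for (var i = 0; i < array.length; i += size) {
        chunks.push(array.slice(i, i + size));
    }
    return chunks;
}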

KnockoutJS foreach blocks main thread

When I have a large dataset in my viewModel and I use foreach to loop over an array of objects, rendering each object as a row within a table, KnockoutJS blocks the main thread until it can render, which sometimes takes minutes (!).
Here is a jsFiddle example using a dataset of 2000 objects, each containing a url and a code. Real data will have longer URLs in some cases, plus 4 other columns (only 2 in this example). I also added some simple styles because adding styles also seems to slow things down a bit during the process.
Warning: your browser might break
http://jsfiddle.net/DESC3/7/
I suggest that if you have such large datasets you try an alternative solution. For example slickGrid renders large datasets in a much more efficient way, by only generating HTML elements for the data that is actually visible. We've used this for large datasets, and it performs well.
How about something like this? Say you've got viewModel.items = ko.observableArray() that you'd like to render.
Have a separate non-observable array of all your data: var itemsToRender = functionThatReturnsLargeArray().
Put some portion of your data from itemsToRender into your observable array. Say, 50 elements only.
Keep adding elements into the observable array in portions inside a setTimeout callback.
NOTE 1: You can add some time-tracking into the setTimeout callback and increase/reduce the number of items that you add at each iteration. Your goal is to keep each callback below 50-100 milliseconds so your application still feels responsive.
var batchSize = 50;   // number of items rendered per iteration
var batchOffset = 0;

function render(items, itemsToRender, done) {
    setTimeout(function () {
        var startTime = new Date().getTime();
        // push the next batch into the observable array in one go;
        // valueHasMutated triggers a single re-render for the whole batch
        ko.utils.arrayPushAll(items(), itemsToRender.slice(batchOffset, batchOffset + batchSize));
        items.valueHasMutated();
        batchOffset += batchSize;
        // at this point Knockout has rendered the next batchSize items from itemsToRender
        var endTime = new Date().getTime();
        // update batchSize for the next iteration, aiming at ~50 milliseconds per batch
        batchSize = Math.max(1, Math.round(batchSize * 50 / Math.max(1, endTime - startTime)));
        if (batchOffset < itemsToRender.length) {
            render(items, itemsToRender, done);
        } else {
            done(); // callback if you need one
        }
    }, 0);
}
/* I haven't actually tested the code */
Another batch-size updating strategy could be based on a target FPS. Say you'd like to achieve a 60 fps update rate, and thus 60 calls to setTimeout per 1000 milliseconds; that would take somewhat longer to process the whole collection. You can also use requestAnimationFrame instead of setTimeout and see how that works out.
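A rough sketch of that variant, reusing the batchSize/batchOffset variables from the snippet above (renderOnFrames is just an illustrative name):

// Same batching idea, driven by requestAnimationFrame so each batch
// lands on an animation frame (roughly 60 per second).
function renderOnFrames(items, itemsToRender, done) {
    requestAnimationFrame(function () {
        ko.utils.arrayPushAll(items(), itemsToRender.slice(batchOffset, batchOffset + batchSize));
        items.valueHasMutated();
        batchOffset += batchSize;
        if (batchOffset < itemsToRender.length) {
            renderOnFrames(items, itemsToRender, done);
        } else {
            done();
        }
    });
}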
EDIT: Built-in throttling was added in Knockout JS 1.3 (currently it's in beta but seems pretty stable).
NOTE 2: If some other data on the view depends on viewModel.items, you can still map it to the original array itemsToRender. Say, for example, that you'd like to show the number of items in the collection. If you use viewModel.items().length you'll end up with a size value that keeps changing in the UI while more items get rendered. To avoid that, you can first define your size binding as a dependentObservable based on itemsToRender, not viewModel.items. After you've finished rendering all the items you can re-map it onto viewModel.items if you see fit.
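A tiny sketch of that size binding, assuming the itemsToRender array from above (dependentObservable is the older name for what later became ko.computed):

// Bind the count to the full source array so the UI shows the final size
// immediately, instead of counting up while batches are still rendering.
viewModel.itemCount = ko.dependentObservable(function () {
    return itemsToRender.length;
});
// in the view:  <span data-bind="text: itemCount"></span>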
