I noticed that the data I'm getting from the Leap Motion controller is quite noisy. Apart from the obvious (i.e. the position of the fingers), I've encountered events such as:
fingers moving between hands,
"phantom" hands appearing,
fingers disappearing and reappearing immediately afterwards.
Does the API (in particular the JavaScript API) provide any means of cleaning this data, or is there another way of making it less noisy? All of these events can be handled in user code, of course, but having to do this yourself every time seems less than ideal.
In short, no: at the moment developers have to implement that logic themselves. Be aware that this might not be true in the future; the API changes fast.
I had problems with this as well. I solved it by using a circular queue with a maximum size of (for example) 100 frames, tracking the data for just one pointable. I would then filter out the readings I considered abnormal. For example, width is very unreliable: I would compute the modal value and accept only readings within ±2 of that mode, ignoring everything else. Works rather well :)
In short, as you already mentioned, you need to collect data and filter out the noise. I was told that tool and width precision will change. Search the forum for isTool to see how others found ways to get 'stabilized' data.
For me the solution (for what I wanted, which was to track one pointable and get a reliable width) was to:
Hold a queue of at most X items
Set a tolerance limit
Compare the data in the queue
Filter out whatever fell outside the tolerance (a minimal sketch follows)
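A minimal sketch of that idea in JavaScript, assuming you feed it one width reading per frame (the buffer size, the ±2 tolerance, and all function names here are illustrative, not part of the Leap API):

// Rolling buffer of recent width readings for a single pointable.
const MAX_FRAMES = 100; // queue size (tune to taste)
const TOLERANCE = 2;    // accept modal width +/- 2
const widths = [];

function pushWidth(w) {
  widths.push(Math.round(w));                     // round so the mode is meaningful
  if (widths.length > MAX_FRAMES) widths.shift(); // circular-queue behaviour
}

// Modal (most frequent) value in the buffer.
function modalWidth() {
  const counts = new Map();
  let best = widths[0], bestCount = 0;
  for (const w of widths) {
    const c = (counts.get(w) || 0) + 1;
    counts.set(w, c);
    if (c > bestCount) { best = w; bestCount = c; }
  }
  return best;
}

// Returns the reading if it is close to the mode, or null if it looks like noise.
function filterWidth(raw) {
  pushWidth(raw);
  return Math.abs(raw - modalWidth()) <= TOLERANCE ? raw : null;
}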
I've been doing some research to find the best approach for identifying breakpoints (trend direction changes) in a dataset of x/y coordinate pairs, so that I can identify the trend lines behind my data.
However, I had no luck finding anything that sheds light on this.
The yellow dots in the following image represent the breakpoints I need to detect.
Any suggestion of an article, algorithm, or implementation example (TypeScript preferred) would be very helpful and appreciated.
Usually, people filter the data by looking only at minima (support) or only at maxima (resistance). A trend line could be the average of those. Breakpoints occur where the data crosses the trend line, but this gives a lot of false breakpoints. Because images are better than words, have a look at page 2 of http://www.meacse.org/ijcar/archives/128.pdf.
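As a sketch of that crossing test (the trend array here could be any trend line you compute, e.g. an average of the extrema; the false positives mentioned above still need filtering afterwards):

// ys: raw values; trend: a trend value for each index.
// Candidate breakpoints are the indices where the series crosses the trend.
function crossings(ys, trend) {
  const out = [];
  for (let i = 1; i < ys.length; i++) {
    const above = ys[i] > trend[i];
    const wasAbove = ys[i - 1] > trend[i - 1];
    if (above !== wasAbove) out.push(i); // many of these will be false breakpoints
  }
  return out;
}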
There are a lot of scripts available; look for "ZigZag" on
https://www.tradingview.com/
e.g. https://www.tradingview.com/script/lj8djt1n-ZigZag/ and https://www.tradingview.com/script/prH14cfo-Trend-Direction-Helper-ZigZag-and-S-R-and-HH-LL-labels/
You can also find an interesting blog post here (though the code is in Python):
https://towardsdatascience.com/programmatic-identification-of-support-resistance-trend-lines-with-python-d797a4a90530
with code available: https://pypi.org/project/trendln/
If you can identify trend lines, can't you just identify a breakpoint as the point where the slope changes? If you can't identify trend lines, could you, for example, take a 5-day moving average and see where that changes slope?
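A minimal sketch of that idea (the window size and the {x, y} data shape are assumptions, not from the question):

// points: [{x, y}, ...] sorted by x
function movingAverage(points, window) {
  const out = [];
  for (let i = window - 1; i < points.length; i++) {
    let sum = 0;
    for (let j = i - window + 1; j <= i; j++) sum += points[j].y;
    out.push({ x: points[i].x, y: sum / window });
  }
  return out;
}

// A breakpoint candidate is wherever the smoothed slope changes sign.
function findBreakpoints(points, window = 5) {
  const ma = movingAverage(points, window);
  const breaks = [];
  for (let i = 2; i < ma.length; i++) {
    const prevSlope = ma[i - 1].y - ma[i - 2].y;
    const slope = ma[i].y - ma[i - 1].y;
    if (prevSlope !== 0 && Math.sign(slope) !== Math.sign(prevSlope)) {
      breaks.push(ma[i - 1]); // the local extremum of the smoothed series
    }
  }
  return breaks;
}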
This might sound strange, or even controversial, but: there are no "breakpoints". Even looking at your image, the fourth breakpoint might as well be on the local maximum immediately before its current position. So different people might call different points on the same graph "breakpoints" (and, indeed, they do).
What you have in the numbers are several possible moving averages (calculated over varying intervals, so you might consider MA5 for a five-day average or MA7 for a weekly one) and their first and maybe second derivatives (if you feel fancy, you can experiment with third derivatives). If you plot all these lines, suitably smoothed, over your data, you will notice that the salient points of some of them roughly intersect near the "breakpoints". Those are the parameters your brain weighs when you "see" the breakpoints; it is why you see the breakpoints there and not somewhere else.
Another method that human vision employs to recognize features is trimming outliers: in the above calculations, you discard either any value outside a given tolerance, or a fixed percentage of all values, starting from those farthest from the average. You can also experiment with not trimming values that are outliers for longer-period moving averages but not for shorter ones (this gives better responsiveness but finds more "breakpoints"). Then you run the same calculations on the remaining data.
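A sketch of the trimming idea from the previous paragraph (the window size and tolerance are illustrative):

// Trimmed moving average: drop values farther than `tol` from the window mean,
// then average what remains.
function trimmedAverage(values, tol) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const kept = values.filter(v => Math.abs(v - mean) <= tol);
  return kept.length ? kept.reduce((a, b) => a + b, 0) / kept.length : mean;
}

function trimmedMovingAverage(ys, window = 7, tol = 10) {
  const out = [];
  for (let i = window - 1; i < ys.length; i++) {
    out.push(trimmedAverage(ys.slice(i - window + 1, i + 1), tol));
  }
  return out;
}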
Finally, you can assign each candidate a "breakpoint score" by weighing its distance from nearby salient points. Then choose a desired breakpoint spacing and call the highest-scoring point in that interval a "breakpoint"; repeat for all subsequent intervals. Again, you may want to experiment with different spacings. This yields a conveniently paced breakpoint set.
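For instance, once candidates are scored, the pacing step might look like this (a greedy windowing sketch; the {x, score} shape is an assumption):

// candidates: [{x, score}], minDist: desired spacing between breakpoints.
// Keep only the best-scoring candidate within each window of length minDist.
function paceBreakpoints(candidates, minDist) {
  const sorted = [...candidates].sort((a, b) => a.x - b.x);
  const result = [];
  let i = 0;
  while (i < sorted.length) {
    const end = sorted[i].x + minDist;
    let best = sorted[i];
    while (i < sorted.length && sorted[i].x < end) {
      if (sorted[i].score > best.score) best = sorted[i];
      i++;
    }
    result.push(best);
  }
  return result;
}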
In the end, you will probably notice that different kinds of signal sources have different "best" breakpoint parameters, so there is no one-size-fits-all parameter set.
If you're building an interface to display data, leaving the control of these parameters to the user might be a good idea.
I'm creating an online psychology experiment in JavaScript with the goal of studying people's ability to recognize images quickly (this RSVP, Rapid Serial Visual Presentation, paradigm is a common approach in psychophysics for isolating the feedforward, pre-attentive stages of visual processing). I aim to show a sequence of images for brief durations (e.g. 30 ms, 50 ms, 70 ms, etc.). For valid results, it's important to get the presentation times as precise as possible. Usually, this is done in a lab with specialized software or equipment, but this setup isn't an option for me. I recognize that some slop is an inevitable side effect of doing this experiment in a browser, but I'm hoping to (A) minimize it as much as possible, and (B) quantify, to the extent that I can, the amount of imprecision that actually occurred.
Right now, I'm preloading images, and am using setTimeout() to control stimulus duration. I'm assuming that experiment participants will use a variety of monitors (so refresh rates may differ) and browsers.
By eye, the timings of the images with my current approach seem to be wrong, and, in particular, highly variable. This is consistent with things I've read saying that variability for brief presentation times can be high.
Are there tools available to control timing more precisely in JavaScript? Additionally, are there tools that can measure/estimate the time an image was actually shown on screen? Finally, are there other options I should look at, e.g. compiling the sequence of images into a video, gif, etc.?
setTimeout does not specify when an action will take place. It only guarantees that the action happens no sooner than the timer's expiry; there is no way to know exactly when the callback will fire (https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout).
When working on (psychological) experiments where you have no control over the participant's environment, I would argue (apart from avoiding such an uncontrolled environment altogether in such cases) for claiming the whole tab the participant is using (effectively blocking any interaction while the image is shown) or, as you already mention yourself, using an animated GIF or a sequence of images.
Alternatively, you can monitor and store the time the image was actually shown to your participant (e.g. by checking when the image enters the viewport) and compare that timestamp with the participant's interaction. That would perhaps give the most valid results, as it eliminates the unpredictability of setTimeout.
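One common variant of that idea is to drive the presentation with requestAnimationFrame and record the timestamps it reports, so each trial yields a measured duration rather than an assumed one. A minimal sketch; the element id is an assumption, and the timestamp marks the callback rather than the actual paint, so it is still approximate:

const img = document.getElementById('stimulus'); // assumed stimulus element

function showStimulus(durationMs) {
  return new Promise(resolve => {
    let shownAt = null;
    function frame(now) {
      if (shownAt === null) {
        img.style.visibility = 'visible';
        shownAt = now;                    // frame timestamp when we made it visible
      }
      if (now - shownAt >= durationMs) {
        img.style.visibility = 'hidden';
        resolve(now - shownAt);           // measured (approximate) presentation time
      } else {
        requestAnimationFrame(frame);
      }
    }
    requestAnimationFrame(frame);
  });
}

// Usage: showStimulus(50).then(actual => console.log('shown for ~' + actual + ' ms'));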
I am currently building a web application that acts as a storage tank level dashboard. It parses incoming data from a number of sensors in tanks and stores these values in a database. The application is built using express / node.js. The data is sampled every 5 minutes but is sent to the server every hour (12 samples per transmission).
I am currently trying to expand the application's capabilities to detect changes in the tank level due to filling or emptying. The end goal is to have a daily report that generates a summary of the filling / emptying events with the duration of time and quantity added or removed. This image shows a screenshot of tank capacity during one day - https://imgur.com/a/kZ50N.
My questions are:
What algorithms / functions are available that detect changes in tank level? How would I implement them in my application?
When should the data handling take place? As the data is parsed and saved to the server? Or at the end of the day, with a function that goes through all of that day's data?
Is it worth considering some sort of data cleaning during the parsing stage? I have noticed times when there are random spikes in the data due to noise.
How should I handle events where the tank starts emptying immediately after a delivery completes? The algorithm will need to be robust enough to treat a change in the direction of the slope as the end of an event. An example of this is in the provided image.
I realise that it may be difficult to put together a robust solution. There are times when the tank is being emptied at the same time that it is being filled, which makes those reductions hard to measure. The only way to know that this took place is that the slope during the delivery flatlines for approximately 15 minutes and the delivery total is a fixed amount less than usual.
This has been a fun project to put together. Thanks for any assistance.
You should be able to develop an algorithm that specifies what you mean by a fill or an emptying (a change in tank level). A good place to start is X% in Y seconds. You then calibrate to avoid false positives and false negatives (e.g. showing a fill when there was none vs. missing a fill when it occurs). One potential approach is to average the level over a period of time (say 10 minutes) and compare it with the average for the next 10 minutes. If the difference is above a threshold (say 5%), you can call this a change.
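A minimal sketch of that windowed comparison, assuming one sample every 5 minutes as described (the window size and threshold are illustrative):

// levels: array of % readings, one every 5 minutes (2 samples per 10-minute window)
const SAMPLES_PER_WINDOW = 2;
const THRESHOLD = 5; // % difference considered a real event

function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }

// Returns events as { index, type: 'fill' | 'empty', delta }
function detectEvents(levels) {
  const events = [];
  for (let i = 0; i + 2 * SAMPLES_PER_WINDOW <= levels.length; i += SAMPLES_PER_WINDOW) {
    const prev = mean(levels.slice(i, i + SAMPLES_PER_WINDOW));
    const next = mean(levels.slice(i + SAMPLES_PER_WINDOW, i + 2 * SAMPLES_PER_WINDOW));
    const delta = next - prev;
    if (Math.abs(delta) >= THRESHOLD) {
      events.push({ index: i + SAMPLES_PER_WINDOW, type: delta > 0 ? 'fill' : 'empty', delta });
    }
  }
  return events;
}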
When you process the data depends on when you need it: if users need to be constantly informed of changes, this could be done when the data is queried. Processing the data into level changes as it is written to your datastore might be more efficient (you only do it once), but you lose the ability to tweak your algorithm afterwards. It could well come down to performance; for example, if someone wants to pull a year's worth of data, can the system deal with that?
You will almost certainly need something like a low-pass filter on the incoming data. You don't want to show a tank fill based on a temporary spike in level. This is easy to do with an array of values; as mentioned above, a moving average, say over the last 10 minutes of levels, is one way of smoothing the data. You may never get a 0% false positive rate or a 0% false negative rate; you can only aim for values as low as possible.
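For the spike problem specifically, a small median filter is a common alternative, since a lone spike never survives taking the median of its neighbourhood. A sketch (the window size is an assumption):

// Replace each sample with the median of a small window centred on it.
function medianFilter(levels, window = 3) {
  const half = Math.floor(window / 2);
  return levels.map((_, i) => {
    const slice = levels.slice(Math.max(0, i - half), Math.min(levels.length, i + half + 1));
    const sorted = [...slice].sort((a, b) => a - b);
    return sorted[Math.floor(sorted.length / 2)];
  });
}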
In this case it looks like a fill followed by an emptying of the tank. If you consider these to be two separate events, then you can simply detect changes on the incoming data. I'd suggest you create a graph marking fills and emptyings with symbols; this way you can eyeball the data to check that you are detecting changes correctly. I would also add some unit tests for your calculations, using perhaps Jasmine or Cucumber.js.
Link 1 - http://horebmultimedia.com/Sam3/
Link 2 - http://horebmultimedia.com/Sam5/
In the above links, I have added a set of numbers in separate containers in each file, and you can see the FPS at the top right. The issue is that when I mouse over in Link 1 and click any numbers, the FPS gets slower and slower, making the world on the left rotate more slowly.
In Link 2, by contrast, I added only one to five mouse-over handlers, and there is not much difference in FPS. Why does it lag so much when I have 37 containers? I can share my code if you need it to investigate.
I had a rough look at your code, but digging through an entire project is not a fantastic way to debug an optimization problem.
The first thing to consider: if you have mouseOver enabled on your stage, I would recommend liberal use of mouseChildren = false on interactive elements, and mouseEnabled = mouseChildren = false on anything not interactive. The rollover could be a big cause, as it requires everything to be drawn 20 times per second (in your usage). Text and vectors can be expensive to redraw.
// Non-interactive elements (block all mouse interactions)
element.mouseEnabled = element.mouseChildren = false;
// Interactive elements (reduce mouse-checking children individually)
element.mouseChildren = false;
If they don't change, you might consider caching text elements or button graphics. I think I saw some caching in the source, but it's generally a good thing to consider.
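For reference, a minimal caching sketch, assuming an EaselJS Text instance (the label contents are illustrative):

var label = new createjs.Text("42", "20px Arial", "#FFF");
// Cache the text to an off-screen canvas so it isn't re-rasterized every draw.
// The cache rectangle must cover the drawn area.
var b = label.getBounds();
label.cache(b.x, b.y, b.width, b.height);
// If the text changes later, call label.updateCache() or cache it again.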
--
With that said, debugging optimization can be tough. If removing all the buttons brings your performance up, consider how your buttons are being constructed, and what their cost is.
* Mouse over is expensive
* Vectors and text can be expensive
* Caching can help when used right, but can be expensive if it happens too often.
* Review what is happening on tick(). Sometimes, code is running constantly, which doesn't need to.
--
A few other notes:
This does not do what you think: _oButton.off("mousedown"); -- you need to pass the result of the on() call. If you are just cleaning up, call _oButton.removeAllEventListeners().
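That is, something like this (assuming _oButton is a CreateJS EventDispatcher; the handler name is illustrative):

// on() returns the listener that was actually added, so keep a reference to it:
var listener = _oButton.on("mousedown", handleMouseDown);
// ...later, remove exactly that listener:
_oButton.off("mousedown", listener);
// or, to clean everything up at once:
_oButton.removeAllEventListeners();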
You don't need to set the cursor on mouseover. The cursor will only change when it rolls over -- so just set it once, and then get rid of your buttonover stuff.
It might make sense to just extend EventDispatcher in your custom classes, which gives you things like the on() method, which supports a data param. I might recommend this in place of your addEventListener stuff in CTextButton.
Note that RAF does not support a framerate property (it just uses the browser's RAF rate, which is usually 60fps). Use createjs.Ticker.timingMode instead of the deprecated useRAF.
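For example (RAF_SYNCHED approximates a target framerate while staying synced to the browser's frame clock):

// Tick on every browser animation frame:
createjs.Ticker.timingMode = createjs.Ticker.RAF;
// Or keep a target framerate, synced to RAF:
createjs.Ticker.timingMode = createjs.Ticker.RAF_SYNCHED;
createjs.Ticker.framerate = 30;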
Hope that helps a little.
I have several objects, each with a priority value. The priority can be between 1 (lowest) and 200 (highest). Every value is represented by a color: the lowest value is green, "rgba(0, 255, 0, 1)", and the highest is red, "rgba(255, 0, 0, 1)".
I calculate the color value with a classic equation where every priority value maps to a different color. So in the end there are up to 200 different colors, ranging from green (lowest) through yellow (middle) to red (highest), based on priority.
My question is: given that I'm redrawing all objects on the canvas every 100 ms, is it better to calculate those values every time to get the wanted color, or to generate, once, in an initialization function, an array of 200 colors where array[100] is the color for an object with priority 100?
I expect there won't be a big difference, but one of these approaches must still be better.
Calculating once is the better option in almost every case (the result is classically called a lookup table). Memory is cheaper than CPU cycles: consumer hardware has plenty of RAM but is always short of cycles.
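A minimal sketch of such a table, assuming a simple green-to-yellow-to-red ramp (your own color equation goes in the loop):

// Build the table once at initialization.
const PRIORITY_COLORS = [];
for (let p = 1; p <= 200; p++) {
  const t = (p - 1) / 199;                              // 0 = green .. 1 = red
  const r = Math.round(255 * Math.min(1, 2 * t));       // ramp red up first
  const g = Math.round(255 * Math.min(1, 2 * (1 - t))); // then ramp green down
  PRIORITY_COLORS[p] = "rgba(" + r + ", " + g + ", 0, 1)";
}

// In the 100ms draw loop, just index it:
// ctx.fillStyle = PRIORITY_COLORS[obj.priority];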
In this case you are right: 200 colours every 100 ms is insignificant even at a full frame rate of 16.67 ms (60 fps). But clients will have many applications/tabs/services running on the device, and anything a programmer does to reduce CPU load benefits the client.
There is also an added benefit that programmers tend to forget: CPU cycles require much more power than memory. For a single machine, a few million extra cycles are nothing, but if every programmer wrote in a manner that reduced overall load, the worldwide power savings would be considerable. I am off to hug a tree now; hope that helps.
From the question, I understand that you don't want the color calculation to affect your animation. You can use a Web Worker to run the calculation on a separate thread. One more thing: read all styles at one time and write all styles at one time; don't interleave reads and writes, because that can cause layout thrashing.
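A minimal worker sketch, assuming the priorities are sent over as an array (the file name is illustrative):

// colorWorker.js -- computes the rgba strings off the main thread
onmessage = function (e) {
  var colors = e.data.map(function (p) {
    var t = (p - 1) / 199;
    var r = Math.round(255 * Math.min(1, 2 * t));
    var g = Math.round(255 * Math.min(1, 2 * (1 - t)));
    return "rgba(" + r + ", " + g + ", 0, 1)";
  });
  postMessage(colors);
};

// main.js
var worker = new Worker("colorWorker.js");
worker.onmessage = function (e) { /* e.data is the array of color strings */ };
worker.postMessage([1, 50, 100, 150, 200]); // priorities to convert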