I have this function which parses a value received over MQTT. The value is a timestamp sent by an Arduino and is a number like 1234, 1345, etc.
var parts = msg.payload.trim().split(/[ |]+/);
var update = parts[10];
msg.payload = update;
return msg;
What I actually want, instead of the last value (the update variable in my case), is the difference between the last received value and the previous one.
Basically, if I receive 1234 and then 1345, I want to remember 1234 so that the value returned by the function is 1345 - 1234 = 111.
Thank you
If you want to store a value to compare against later, you need to look at how to use context to store it.
The context is normally an in-memory store for named variables, but it is backed by an API that can be used to persist the context between restarts.
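As a rough sketch (assuming the parsing code from the question runs inside a Node-RED function node), the previous timestamp can be kept in node context and the difference returned:
// Hypothetical function node body, based on the parsing code above.
var parts = msg.payload.trim().split(/[ |]+/);
var update = Number(parts[10]);

// Fetch the previously stored timestamp from node context (undefined on the first run).
var previous = context.get('lastTimestamp');

// Remember the current value for the next message.
context.set('lastTimestamp', update);

if (previous === undefined) {
    // Nothing to compare against yet, so drop the first message.
    return null;
}

msg.payload = update - previous;
return msg;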
I wanted to suggest an alternative approach. Node-RED has a few core nodes that are designed to work across message sequences and, for this purpose, keep an internal buffer. One of these nodes is the batch node. Some use cases, like yours, can take advantage of this functionality to store values, thus avoiding the use of context memory. The flow I share below uses a batch node configured to group two messages into a sequence, meaning it will always send downstream the current payload and the previous one. Then a join node works on that sequence to reduce the payload to a single value: the difference between the timestamps. You need to open the configuration dialog for each node to fully understand how to set them up to achieve the desired goal. I configured the join node to apply a fix-up expression that divides the payload by one thousand, so you get the value in seconds (instead of milliseconds).
Flow:
[{"id":"3121012f.c8a3ce","type":"tab","label":"Flow 1","disabled":false,"info":""},{"id":"2ab0e0ba.9bd5f","type":"batch","z":"3121012f.c8a3ce","name":"","mode":"count","count":"2","overlap":"1","interval":10,"allowEmptySequence":false,"topics":[],"x":310,"y":280,"wires":[["342f97dd.23be08"]]},{"id":"17170419.f6b98c","type":"inject","z":"3121012f.c8a3ce","name":"","topic":"timedif","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":160,"y":280,"wires":[["2ab0e0ba.9bd5f"]]},{"id":"342f97dd.23be08","type":"join","z":"3121012f.c8a3ce","name":"","mode":"reduce","build":"string","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"","reduceRight":false,"reduceExp":"payload-$A","reduceInit":"0","reduceInitType":"num","reduceFixup":"$A/1000","x":450,"y":280,"wires":[["e83170ce.56c08"]]},{"id":"e83170ce.56c08","type":"debug","z":"3121012f.c8a3ce","name":"Debug 1","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":600,"y":280,"wires":[]}]
I would like to create two queries with a pagination option. With the first one I would like to get the first ten records, and with the second one I would like to get all the remaining records:
.startAt(0)
.limit(10)
.startAt(9)
.limit(null)
Can anyone confirm that the above code is correct for both conditions?
Firestore does not support index- or offset-based pagination. Your query will not work with these values.
Please read the documentation on pagination carefully. Pagination requires that you provide a document reference (or field values in that document) that defines the next page to query. This means that your pagination will typically start at the beginning of the query results, then progress through them using the last document you see in the prior page.
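As a minimal sketch of that documented approach (assuming the web client SDK, an initialized Firestore instance db, and a hypothetical cities collection ordered by population):
// Hypothetical document-based pagination: remember the last document of a page
// and start the next page right after it.
const first = db.collection('cities').orderBy('population').limit(10);

first.get().then(function (snapshot) {
    // The last visible document of this page becomes the cursor for the next one.
    const lastVisible = snapshot.docs[snapshot.docs.length - 1];

    const next = db.collection('cities')
        .orderBy('population')
        .startAfter(lastVisible)
        .limit(10);

    return next.get();
});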
From CollectionReference:
offset(offset) → {Query}
Specifies the offset of the returned results.
As Doug mentioned, Firestore does not support index/offset pagination - BUT you can get similar effects using combinations of what it does support.
Firestore has its own internal sort order (usually the document.id), but any query can be sorted with .orderBy(), and the first document will be relative to that sorting - only an orderBy() query has a real concept of a "0" position.
Firestore also allows you to limit the number of documents returned with .limit(n).
.endAt(), .endBefore(), .startAt(), .startAfter() all need either the field values used by the orderBy, or a DocumentSnapshot - NOT an index.
What I would do is create a Query:
const MyOrderedQuery = FirebaseInstance.collection().orderBy()
Then first execute
MyOrderedQuery.limit(n).get()
or
MyOrderedQuery.limit(n).onSnapshot()
which will, one way or the other, return a QuerySnapshot containing an array of DocumentSnapshots. Let's save that array:
let ArrayOfDocumentSnapshots = QuerySnapshot.docs;
Warning, Will Robinson! JavaScript assignment is usually by reference, and even the spread operator copies only shallowly - make sure your code actually copies the full deep structure, or that the reference is kept around!
Then to get the "rest" of the documents as you ask above, I would do:
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).get()
or
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).onSnapshot()
which will start AFTER the last returned document snapshot of the FIRST query. Note the re-use of MyOrderedQuery.
You can get something like "pagination" by saving the ordered Query as above, then repeatedly using the returned QuerySnapshots with the original query:
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).limit(n).get() // page forward
MyOrderedQuery.endBefore(ArrayOfDocumentSnapshots[0]).limit(n).get() // page back
This does make your state management more complex - you have to hold onto the ordered Query, and the last returned QuerySnapshot - but hey, now you're paginating.
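A rough sketch of that state management, showing forward paging only and assuming an initialized Firestore instance db and a hypothetical items collection ordered by a name field:
// Hypothetical forward paging with saved state; paging back works analogously
// with endBefore() on the first document of the current page.
const pageSize = 10;
const orderedQuery = db.collection('items').orderBy('name');

let currentDocs = []; // DocumentSnapshots of the page currently on screen

function showPage(query) {
    return query.get().then(function (snapshot) {
        currentDocs = snapshot.docs; // hold on to the snapshots for the cursors
        return currentDocs.map(function (doc) { return doc.data(); });
    });
}

// First page
showPage(orderedQuery.limit(pageSize));

// Page forward: start after the last document currently shown
function nextPage() {
    return showPage(orderedQuery.startAfter(currentDocs[currentDocs.length - 1]).limit(pageSize));
}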
BIG NOTE
This is not terribly efficient - setting up a listener is fairly "expensive" for Firestore, so you don't want to do it often. Depending on your document size(s), you may want to "listen" to larger sections of your collections and handle more of the paging locally (Redux or whatever) - the Firestore documentation indicates you want your listeners alive for at least 30 seconds for efficiency. For some applications, even pages of 10 can be efficient; for others you may need 500 or more stored locally and paged in smaller chunks.
I have a jQuery autocomplete field that has to search through several thousand items, populated from an IndexedDB query (using the idb wrapper). The following is the autocomplete function called when the user begins typing in the box. hasKW() is a function that finds keywords.
async function siteAutoComplete(request, response) {
    const db = await openDB('AgencySite');
    const hasKW = createKeyWordFunction(request.term);
    const state = "NY";
    const PR = 0;
    const agency_id = 17;
    const range = IDBKeyRange.bound([state, PR, agency_id], [state, PR, agency_id || 9999999]);
    let cursor = await db.transaction('sites').store.index("statePRAgency").openCursor(range);
    let result = [];
    while (cursor) {
        if (hasKW(cursor.value.name)) result.push({
            value: cursor.value.id,
            label: cursor.value.name
        });
        cursor = await cursor.continue();
    }
    response(result);
}
My question is this: I'm not sure if the cursor is making everything slow. Is there a way to get all database rows that match the query without using a cursor? Is building the result array slowing me down? Is there a better way of doing this? Currently it takes 2-3s to show the autocomplete list.
I hope this will be useful to someone else. I removed the cursor and just downloaded the whole DB into a JavaScript array and then used .filter. The speedup was dramatic. It took 2300ms using the approach above and about 21ms using this:
let result = await db.transaction('sites').store.index("statePRAgency").getAll();
response(result.filter(hasKW));
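For completeness, a sketch of what the rewritten function might look like, keeping the same value/label mapping as the cursor version (database, store, index, and helper names are taken from the question):
async function siteAutoComplete(request, response) {
    const db = await openDB('AgencySite');
    const hasKW = createKeyWordFunction(request.term);

    // Pull every record from the index in one call instead of walking a cursor.
    const rows = await db.transaction('sites').store.index("statePRAgency").getAll();

    // Filter and map in memory, as in the original cursor loop.
    response(rows
        .filter(row => hasKW(row.name))
        .map(row => ({ value: row.id, label: row.name })));
}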
You probably want to use an index, where by the term index I mean a custom-built one that represents a search-engine index. You cannot easily and efficiently perform "startsWith"-style queries over one of indexedDB's indices because they are effectively whole-value (or at least lexicographic).
There are many ways to create the search engine index I am suggesting. You probably want something like a prefix-tree, also known informally as a trie.
Here is a nice article by John Resig that you might find helpful: https://johnresig.com/blog/javascript-trie-performance-analysis/. Otherwise, I suggest searching around on Google for trie implementations and then figuring out how to represent a similar data structure within an indexedDB object store or an indexedDB index on an object store.
Essentially, insert the data first without the properties used by the index. Then, in an "indexing step", visit each object and index its value, and set the properties used by the indexedDb index. Or do this at time of insert/update.
From there, you probably want to open a connection shortly after page load and keep it open for the entire duration of the page. Then query against the index every time a character is typed (you probably want to rate-limit this call so you don't query more than n times per second, perhaps using some kind of debounce helper function).
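A minimal debounce helper of the kind being suggested might look like this (a generic sketch; queryIndex is a made-up function standing in for the actual lookup):
// Generic debounce: delays fn until wait ms have passed without another call.
function debounce(fn, wait) {
    let timer;
    return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), wait);
    };
}

// Example: only hit the index 250 ms after the user stops typing.
const debouncedSearch = debounce(term => queryIndex(term), 250);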
On the other hand, I might be a bit rusty on this one, but maybe you can create an index on the string property, then use a lower bound that is the entered characters. A string that is shorter than another string that contains it as a prefix comes earlier in lexicographic order, so maybe it is actually that easy. You would also need to impose an upper bound that is the entered characters so far concatenated with some kind of sentinel value that can never realistically exist in the data, something silly like ZZZZZ.
Try this out in the browser's console:
indexedDB.cmp('test', 'tasting'); // 1
indexedDB.cmp('test', 'testing'); // -1
indexedDB.cmp('test', 'test'); // 0
You essentially want to experiment with a query like this:
const sentinel = 'ZZZ';
const index = store.index('myStore');
// Everything that starts with value sorts between value and value + sentinel.
const bounds = IDBKeyRange.bound(value, value + sentinel);
const request = index.getAll(bounds);
You might need to tweak the sentinel, experiment with the other parameters to IDBKeyRange.bound (the inclusive/exclusive flags), probably store the value in a homogenized case so that the search is case-insensitive, avoid ever sending a query when nothing has been typed, etc.
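Putting those pieces together, a rough sketch using the idb wrapper from the question (the nameLower index on lower-cased names is a made-up example):
// Hypothetical prefix search against an index on a lower-cased name property.
async function prefixSearch(term) {
    if (!term) return [];                   // avoid querying when nothing has been typed

    const db = await openDB('AgencySite');  // ideally opened once, shortly after page load
    const value = term.toLowerCase();       // values must be stored lower-cased too

    // '\uffff' is a common sentinel; anything starting with value falls in this range.
    const range = IDBKeyRange.bound(value, value + '\uffff');

    return db.transaction('sites').store.index('nameLower').getAll(range);
}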
I am trying to adapt a script I already have to run using .csv data input. When the script is run without the .csv, it runs perfectly for any configuration I choose to use. When it runs using the .csv, whatever scenario is in the first row runs perfectly, but everything from there on fails. The failures happen because some of my variables are being reused from the first iteration, and I don't know how to stop this from happening.
This is what my script looks like:
HTTP Request - GET ${url} (url is declared in the CSV data input, and changes each run)
-> postprocessor that extracts Variable_1, Variable_2 and Variable_3
Sampler1
-> JSR223 preprocessor: creates payloadSampler1 using JavaScript, for example:
var payloadSampler1 = { };
payloadSampler1.age = vars.get("Variable_2");
payloadSampler1.birthDate = "1980-01-01";
payloadSampler1.phone = {};
payloadSampler1.phone.number = "555-555-5555";
vars.put("payloadSampler1", JSON.stringify(payloadSampler1));
Sampler2
-> JSR223 preprocessor: creates payloadSampler2 using JavaScript (same as above but for different values)
Sampler3
-> JSR223 preprocessor: creates payloadSampler3 using JavaScript (same as above but for different values)
Sampler4
-> JSR223 preprocessor: creates payloadSampler4 using JavaScript (same as above but for different values)
HTTP Request - POST ${url}/${Variable_1}/submit
-> JSR223 preprocessor: creates payloadSubmit using JavaScript, mixing and matching the results from the above samplers - like so:
var payloadSubmit = { };
if (vars.get("someVar") != "value" && vars.get("someVar") != "value2" && vars.get("differentVar") != "true") {
    payloadSubmit.ageInfo = [${payloadSampler1}];
}
if (vars.get("someVar2") != "true") {
    payloadSubmit.paymentInfo = [${payloadSampler2}];
}
payloadSubmit.emailInfo = [${payloadSampler3}];
payloadSubmit.country = vars.get("Variable_3");
vars.put("payloadSubmit", JSON.stringify(payloadSubmit));
-> Body Data as shown in the screenshot (image: request)
I have a Debug PostProcessor to see the values of all these variables I am creating. For the first iteration of my script, everything is perfect. For the second one, however, the Debug PostProcessor shows the values for all the payloadSamplers and all the Variables correctly changed to match the new row data (from the csv), but the final variable, payloadSubmit, just reuses whatever the values were for the first thread iteration.
Example:
Debug PostProcessor at the end of first iteration shows:
Variable_1=ABC
Variable_2=DEF
Variable_3=GHI
payloadSampler1={"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"}
payloadSampler3={"email":"tes#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"GHI"}
But at the end of the 2nd iteration it shows:
Variable_1=123
Variable_2=456
Variable_3=789
payloadSampler1={"age":"95","email":null,"name":{"firstName":"Sam"}},{"age":"12","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"DC"}},"amount":"19.99","currency":"USD"}
payloadSampler3={"email":"tes2#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"USA"}
I can also see that the final HTTP Request is indeed sending the old values.
My very limited understanding is that because I am invoking the variables like "${payloadSampler1}", it will use the value that was set the first time the sampler was run (back in the 1st thread iteration). These are the things I have tried:
If I use vars.get("payloadSubmit") in the body of an HTTP Sampler, I get an error, so that is not an option. If I use vars.get("payloadSampler1") in the Samplers that create the variables, extra escape characters are added, which breaks my JSON. I have tried adding a counter to the end of the variable name and having that counter increase on each thread iteration, but the result is the same. All the variables and samplers other than the last one have updated values, but the last one always reuses the variables from the first thread iteration.
I also tried to use ${__javaScript(vars.get("payloadSubmit_"+vars.get("ThreadIteration")))}, but the results are always the same.
And I have also tried using the ${__counter(,)} element, but if I set it to TRUE, it will always be 1 for each thread iteration, and if I set it to FALSE, it starts at 2 (I am assuming this is because I use a counter in another sampler within this thread - but even after removing that counter this still happens).
I am obviously doing something (or many things) wrong.
If anyone can spot what my mistakes are, I would really appreciate hearing your thoughts. Or even being pointed to some resource I can read for an approach I can use for this. My knowledge of both javascript and jmeter is not great, so I am always open to learn more and correct my mistakes.
Finally, thanks a lot for reading through this wall of text and trying to make sense of it.
It's hard to tell where exactly your problem is without seeing the values of these someVar and payload variables; most probably something cannot be parsed as valid JSON, therefore on the 2nd iteration your last JSR223 script fails to run to the end and, as a result, your payloadSubmit variable value doesn't get updated. Take a closer look at the JMeter GUI: there is a yellow triangle with an exclamation sign which indicates the number of errors in your scripts, and it opens the JMeter Log Viewer on click.
If there is a red number next to the triangle, you obviously have a problem and you will need to check the jmeter.log file for the details.
Since JMeter 3.1 it is recommended to use the Groovy language for any form of scripting, mainly because Groovy has higher performance compared to the other scripting options. Check out the Parsing and producing JSON guide to learn more about how to work with JSON data in Groovy.
I want to have an array in Redis (using Node) where I can add values and specify how long they should stay in there. After that time limit they should be deleted, and ideally I'd be able to call something so I know what just left. For example, I may get a request with 120s, so I want to add that value to a map for that long and then have it deleted.
Is there a better way to do this? I thought of using EXPIRE, but that seems to apply only to keys, not to elements in an array?
Any thoughts would be great.
This is what I am doing:
app.get('/session/:length', function(req, res) {
    var length = parseInt(req.param('length'), 10);
    var ip = req.connection.remoteAddress;
    addToArray(length, ip);
    res.json({ip: ip, length: length});
});
Basically, when I add a value to the array I want it kept there only for the time that is passed in. So if you say 30 seconds, it's in that array for 30s, then it's gone and a callback is called. Maybe there is a better way to solve this problem?
What I do now is keep the IP and the time it was added in an array, and periodically loop through the array checking and deleting, but I thought it might be possible for Redis to do this automatically.
While there isn't an automatic way to do that in Redis, the common approach to this kind of problem is to use a Redis sorted set. In your case, set the IP as the member's value and the expiry time (now + time to live) as the score, using epoch representation.
Instead of looping periodically, you can just call ZREMRANGEBYSCORE every once in a while.
Since set members are unique, however, that means that you'll only be able to save each IP once. If that's OK, just update the score for an IP with every hit from it, otherwise make the member value unique by concatenating the IP with the timestamp.
Lastly, to get the IPs that haven't "expired", use ZRANGEBYSCORE to get members that have scores (expiry times) higher than now. Similarly and before deleting with ZREMRANGEBYSCORE, get the keys that expired for the callback logic that you mentioned.
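A rough sketch of that approach with the classic callback-style node_redis client (the expiring_ips key name is made up):
var redis = require('redis');
var client = redis.createClient();

var KEY = 'expiring_ips'; // sorted set holding IPs scored by expiry time

// Add an IP with its expiry time (now + ttl in seconds) as the score.
function addWithTTL(ip, ttlSeconds) {
    var expiresAt = Date.now() + ttlSeconds * 1000;
    client.zadd(KEY, expiresAt, ip);
}

// Every once in a while: collect what has expired (for the callback logic), then remove it.
function reap(onExpired) {
    var now = Date.now();
    client.zrangebyscore(KEY, 0, now, function (err, expired) {
        if (err || expired.length === 0) return;
        client.zremrangebyscore(KEY, 0, now, function () {
            onExpired(expired);
        });
    });
}

// IPs that haven't "expired" yet have scores greater than now.
function getActive(callback) {
    client.zrangebyscore(KEY, Date.now(), '+inf', callback);
}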
Just a quick question about saving an app's state using local storage. I'm about to start work on an iOS web app and I'm wondering if there may be any advantages or disadvantages to either of these models. Also, is there any major performance hit to saving every tiny change of the app state into local storage?
Number 1
Save the entire app state object as a JSON string to a single local storage key-value pair.
var appstate = {
    string: 'string of text',
    somebool: true,
    someint: 16
};
localStorage.setItem('appState', JSON.stringify(appstate));
Number 2
Save each variable of the app state to its own key-value pair in local storage.
var appstate = {
    string: 'string of text',
    somebool: true,
    someint: 16
};
localStorage.setItem('string', appstate.string);
localStorage.setItem('bool', appstate.somebool);
localStorage.setItem('int', appstate.someint);
The only reason I would think it could be more efficient to store values separately would be if you anticipate values changing independently of each other. If they are stored separately you could refresh one set of values without touching another, and therefore have better handling of value expiration. If, for example, somebool changes frequently but the rest does not, it might be better to store it separately. Group together data with similar expiration and volatility.
Otherwise, if you just want to save the entire application state, I would think that a single string would be fine.
Consider reads vs. writes (changes). For frequent reads, it doesn't really matter because JavaScript objects are hashes with constant (O(1)) response time (see Javascript big-O property access performance). For writes, as nullability says, there is a difference. For the single-object design, frequent writes could get slower if you end up with a large number of (or large-sized) properties in the object.
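A small sketch of the hybrid approach implied above (splitting the frequently-changing value out of the blob; the key names are made up):
// Stable settings: written rarely, so one serialized blob is fine.
var settings = { string: 'string of text', someint: 16 };
localStorage.setItem('appSettings', JSON.stringify(settings));

// Volatile flag: changes frequently, so give it its own key to avoid
// re-serializing the whole state object on every change.
function setSomeBool(value) {
    localStorage.setItem('somebool', String(value));
}

function loadState() {
    var stored = JSON.parse(localStorage.getItem('appSettings') || '{}');
    stored.somebool = localStorage.getItem('somebool') === 'true';
    return stored;
}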