I have a scenario on my web application and I would like suggestions on how I could better design it.
I have two steps in my application: Collection and Analysis.
When a collection is happening, the user needs to be kept informed that it is going on, and the same goes for an analysis. The system also shows the 10 last collections and analyses performed by the user.
When the user is interacting with the system, the collections and analyses in progress (and, therefore, the last collections/analyses) change very frequently. So, after considering different ways of storing this information in order to display it properly, given how dynamic it is, I chose to use HTML5's localStorage, and I am doing everything with JavaScript.
Here is how they are stored:
Collection in Progress: (set by a function called addItem that receives ITEMNAME)
Key: c_ITEMNAME_Storage
Value: c_ITEMNAME
Collection Finished or Error: (set by a function called editItem that also receives ITEMNAME and changes the value of the corresponding key)
Key: c_ITEMNAME_Storage
Value: c_Finished_ITEMNAME or c_Error_ITEMNAME
Collection in the 10 last Collections (set by a function called addItemLastCollections that receives ITEMNAME and prepares the key with the current date and time)
Key: ORDERNUMBER_c_ITEMNAME_DATE_TIME
Value: c_ITEMNAME
Note: The order number is from 0 to 9, and when each collection finishes, it receives the number 0. At the same time, the number 9 is deleted when the addItemLastCollections function is called.
For the analysis it is pretty much the same; the only thing that changes is that the "c" becomes an "a".
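For reference, here is roughly what the collection functions might look like (a simplified sketch based on the description above; the hasError flag is only illustrative):
// Simplified sketch of the current approach (collection side only).
function addItem(itemName) {
    // Collection in progress
    localStorage.setItem('c_' + itemName + '_Storage', 'c_' + itemName);
}
function editItem(itemName, hasError) {
    // Collection finished or error (hasError is illustrative; the real function just changes the value)
    var status = hasError ? 'Error' : 'Finished';
    localStorage.setItem('c_' + itemName + '_Storage', 'c_' + status + '_' + itemName);
}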
Anyway, I guess you understood the idea, but if anything is unclear, let me know.
What I want is opinions on and suggestions of other approaches, as I consider this inefficient and impractical, even though it is working fine. I want something easily maintained. I think that sticking with localStorage is probably best, but not this way. I am not very familiar with using design patterns in JavaScript, although I use some of them very frequently in Java. If anyone can give me a hand with that, it would be good.
EDIT:
It is a bit hard even for me to explain exactly why I feel it is inefficient. I guess the main reason is that for each case (Progress, Finished, Error, Last Collections) I have to call a method and modify the string (adding underscores and more information), and to access any piece of data (say, the name or the date) of any of them I need to test which case it is and then keep using split('_'). I know this is not very straightforward, but I guess this whole approach could be better designed. As I am working alone on this part of the software, I don't have anyone to discuss things with, so I thought this would be a good place to exchange ideas :)
Thanks in advance!
I'm not exactly sure what you are looking for. Generally I use localStorage just to store stringified versions of objects that fit my application. Rather than setting up all sorts of different keys for each variable within localStorage, I just dump stringified versions of my objects into one key in localStorage. That way the data has the same structure whether it comes from the server as JSON or I pull it from local storage.
You can quickly save or retrieve deeply nested objects/arrays using JSON.stringify( object) and JSON.parse( 'string from store');
Example:
My app object as sent from the server as JSON (I realize this isn't properly quoted JSON):
var data = {
    foo:  { bar: [1, 2, 3], baz: [4, 5, 6, 7] },
    foo2: { bar: [1, 2, 3], baz: [4, 5, 6, 7] }
};
saveObjLocal('app_analysis', data);
function saveObjLocal(key, obj) {
    localStorage.setItem(key, JSON.stringify(obj));
}
function getlocalObj(key) {
    return JSON.parse(localStorage.getItem(key));
}
var analysisObj = getlocalObj('app_analysis');
alert(analysisObj.foo.bar[2]); // 3
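Applied to your collection/analysis scenario, you could keep one object per concern instead of encoding everything into key strings. A sketch (structure and field names are purely illustrative):
// Sketch only: structure and field names are illustrative.
var collections = {
    inProgress: { someItem: { startedAt: '2013-05-01 10:00' } },
    last: [
        { name: 'someItem', status: 'Finished', date: '2013-05-01', time: '10:05' }
        // ... up to 10 entries, newest first
    ]
};
saveObjLocal('app_collections', collections);
// Later: no string splitting needed to read any field.
var stored = getlocalObj('app_collections');
alert(stored.last[0].status); // "Finished"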
I have certain requirements, and I want to do the following in the quickest way possible.
I have thousands of objects like the ones below:
{id: 1, value: "value1"} ... {id: 1000, value: "value1000"}
I want to access the above objects by id.
I want to clean out the objects with ids lower than a certain value every few minutes (because my high-frequency algorithm generates thousands of objects every second).
I can clean easily by using this:
myArray = myArray.filter(function (obj) {
    return obj.id > cleanSize;
});
I can find an object by id using:
myArray.find(x => x.id === 45);
The problem is that find feels a little slow when there are larger sets of data, so I created an object of objects like below:
const id = 22;
myArray["x" + id] = {};
myArray["x" + id] = { id: id, value:"test" };
so I can access my item easily with myArray["x22"];, but the problem is I am not able to find a way to remove older items by id.
Can someone guide me to a better way to achieve the three points I mentioned above using arrays or objects?
The trouble with your question is, you're asking for a way to finish an algorithm that is supposed to solve a problem of yours, but I think there's something fundamentally wrong with the problem to begin with :)
If you store a sizeable amount of data records, each associated with an ID, and allow your code to access them freely, then you cannot have another part of your code dump some of them to the bin out of the blue (say, from within some timer callback) just because they are becoming "too old". You must be sure nobody is still working on them (and will ever need to) before deleting any of them.
If you don't explicitly synchronize the creation and deletion of your records, you might end up with a code that happens to work (because your objects happen to be processed quickly enough never to be deleted too early), but will be likely to break anytime (if your processing time increases and your data becomes "too old" before being fully processed).
This is especially true in the context of a browser. Your code is supposed to run on any computer connected to the Internet, which could have dozens of reasons to be running 10 or 100 times slower than the machine you test your code on. So making assumptions about the processing time of thousands of records is asking for serious trouble.
Without further specification, it seems to me answering your question would be like helping you finish a gun that would only allow you to shoot yourself in the foot :)
All this being said, any JavaScript object inherently does exactly what you ask for, provided you're okay with using strings for IDs, since an object property name can also be used as an index in an associative array.
var associative_array = {}
var bob = { id:1456, name:"Bob" }
var ted = { id:2375, name:"Ted" }
// store some data with arbitrary ids
associative_array[bob.id] = bob
associative_array[ted.id] = ted
console.log(JSON.stringify(associative_array)) // Bob and Ted
// access data by id
var some_guy = associative_array[2375] // index will be converted to string anyway
console.log(JSON.stringify(some_guy)) // Ted
var some_other_guy = associative_array["1456"]
console.log(JSON.stringify(some_other_guy)) // Bob
var some_AWOL_guy = associative_array[9999]
console.log(JSON.stringify(some_AWOL_guy)) // undefined
// delete data by id
delete associative_array[bob.id] // so long, Bob
console.log(JSON.stringify(associative_array)) // only Ted left
Though I doubt speed will really be an issue, this mechanism is about as fast as you will ever get JavaScript to run, since the underlying data structure is a hash table, theoretically O(1).
Anything involving array methods like find() or filter() will run in at least O(n).
Besides, each invocation of filter() would waste memory and CPU recreating the array to no avail.
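For completeness, the periodic cleanup from the question could then look like this (a sketch, assuming the numeric ids are used directly as the object keys):
// Sketch: remove every record whose id is below cleanSize.
function cleanOldRecords(store, cleanSize) {
    Object.keys(store).forEach(function (key) {
        if (Number(key) < cleanSize) {
            delete store[key]
        }
    })
}
cleanOldRecords(associative_array, 2000) // keeps only records with id >= 2000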
I am in the process of making a WordPress-based application where a student can take an examination in his web browser. The questions will be randomly selected and served from a question bank stored in the WordPress CMS.
In this regard, the following is important to share:
- each examination can have as many as 100 multiple-choice questions;
- each question can have images, and each choice can have associated images;
- since the examination is time bound, I cannot send a request to the server every time the student completes a question.
My query is: how do I send the questions from the server?
- Should I send the whole question set in one go and then have JavaScript parse all the questions and choices on the client side,
or
- should the client repeatedly request the questions from the server in the background, in chunks of, say, 5 questions each? If this is the better approach, I am not sure how to implement it. Any pointers, please.
Or is there a third approach that I am not aware of?
Any comments and solutions for this problem would be appreciated.
Thanks in advance.
It depends on the user's selection: send the appropriate JSON data to the client and render it dynamically. But if you want to use XML, then let's talk about it:
I should mention that this comparison is really from the perspective of using them in a browser with JavaScript. It's not the way either data format has to be used, and there are plenty of good parsers which will change the details to make what I'm saying not quite valid.
JSON is both more compact and (in my view) more readable - in transmission it can be "faster" simply because less data is transferred.
In parsing, it depends on your parser. A parser turning the code (be it JSON or XML) into a data structure (like a map) may benefit from the strict nature of XML (XML Schemas disambiguate the data structure nicely); however, in JSON the type of an item (String/Number/nested JSON object) can be inferred syntactically, e.g.:
myJSON = {"age" : 12,
"name" : "Danielle"}
The parser doesn't need any special intelligence to realise that 12 represents a number (and that Danielle is a string like any other). So in JavaScript we can do:
anObject = JSON.parse(myJSON);
anObject.age === 12 // True
anObject.name == "Danielle" // True
anObject.age === "12" // False
In XML we'd have to do something like the following:
<person>
    <age>12</age>
    <name>Danielle</name>
</person>
(as an aside, this illustrates the point that XML is rather more verbose; a concern for data transmission). To use this data, we'd run it through a parser, then we'd have to call something like:
myObject = parseThatXMLPlease();
thePeople = myObject.getChildren("person");
thePerson = thePeople[0];
thePerson.getChildren("name")[0].value() == "Danielle" // True
thePerson.getChildren("age")[0].value() == "12" // True
Actually, a good parser might well type the age for you (on the other hand, you might well not want it to). What's going on when we access this data is - instead of doing an attribute lookup like in the JSON example above - we're doing a map lookup on the key name. It might be more intuitive to form the XML like this:
<person name="Danielle" age="12" />
But we'd still have to do map lookups to access our data:
myObject = parseThatXMLPlease();
age = myObject.getChildren("person")[0].getAttr("age");
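Coming back to the chunked-loading part of the question: if you go that route, a minimal sketch could look like the following (the endpoint URL and its parameters are hypothetical; you would expose something like it yourself, e.g. through the WordPress REST API):
// Sketch only: the endpoint and its parameters are made up.
var CHUNK_SIZE = 5;
var questionBuffer = [];
function prefetchChunk(offset) {
    // Runs in the background while the student answers the current questions.
    fetch('/wp-json/exam/v1/questions?offset=' + offset + '&limit=' + CHUNK_SIZE)
        .then(function (response) { return response.json(); })
        .then(function (questions) {
            questionBuffer = questionBuffer.concat(questions);
        });
}
prefetchChunk(0);          // first chunk
prefetchChunk(CHUNK_SIZE); // next chunk, fetched ahead of time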
Say you have a very simple data structure:
(personId, name)
...and you want to store a number of these in a JavaScript variable. As I see it, you have three options:
// a single object
var people = {
1 : 'Joe',
3 : 'Sam',
8 : 'Eve'
};
// or, an array of objects
var people = [
{ id: 1, name: 'Joe'},
{ id: 3, name: 'Sam'},
{ id: 8, name: 'Eve'}
];
// or, a combination of the two
var people = {
1 : { id: 1, name: 'Joe'},
3 : { id: 3, name: 'Sam'},
8 : { id: 8, name: 'Eve'}
};
The second or third option is obviously the way to go if you have (or expect that you might have) more than one "value" part to store (eg, adding in their age or something), so, for the sake of argument, let's assume that there's never ever going to be any more data values needed in this structure. Which one do you choose and why?
Edit: The example now shows the most common situation: non-sequential ids.
Each solution has its use cases.
I think the first solution is good if you're trying to define a one-to-one relationship (such as a simple mapping), especially if you need to use the key as a lookup key.
The second solution feels the most robust to me in general, and I'd probably use it if I didn't need a fast lookup key:
- It's self-describing, so you don't have to depend on anyone using people to know that the key is the id of the user.
- Each object comes self-contained, which is better for passing the data elsewhere - instead of two parameters (id and name) you just pass around people.
- This is a rare problem, but sometimes the key values may not be valid to use as keys. For example, I once wanted to map string conversions (e.g., ":" to ">"), but since ":" isn't a valid variable name I had to use the second method.
- It's easily extensible, in case somewhere along the line you need to add more data to some (or all) users. (Sorry, I know about your "for argument's sake" but this is an important aspect.)
The third would be good if you need fast lookup time + some of the advantages listed above (passing the data around, self-describing). However, if you don't need the fast lookup time, it's a lot more cumbersome. Also, either way, you run the risk of error if the id in the object somehow varies from the id in people.
Actually, there is a fourth option:
var people = ['Joe', 'Sam', 'Eve'];
since your values happen to be consecutive. (Of course, you'll have to add/subtract one --- or just put undefined as the first element).
Personally, I'd go with your (1) or (3), because those will be the quickest for looking someone up by ID (O(log n) at worst). If you have to find id 3 in (2), you can either look it up by index (in which case my (4) is OK) or you have to search - O(n).
Clarification: I say O(log n) is the worst it could be because, AFAIK, an implementation could decide to use a balanced tree instead of a hash table. A hash table would be O(1), assuming minimal collisions.
Edit from nickf: I've since changed the example in the OP, so this answer may not make as much sense any more. Apologies.
Post-edit
Ok, post-edit, I'd pick option (3). It is extensible (easy to add new attributes), features fast lookups, and can be iterated as well. It also allows you to go from entry back to ID, should you need to.
Option (1) would be useful if (a) you need to save memory; (b) you never need to go from object back to id; (c) you will never extend the data stored (e.g., you can't add the person's last name)
Option (2) is good if you (a) need to preserve ordering; (b) need to iterate all elements; (c) do not need to look up elements by id, unless it is sorted by id (then you can do a binary search in O(log n)). Note, of course, that if you need to keep it sorted then you'll pay a cost on insert.
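To illustrate (c), a binary search over option (2) might look like this (a sketch, assuming the array is kept sorted by id):
// Sketch: O(log n) lookup in an array of {id, name} objects sorted by id.
function findById(people, id) {
    var lo = 0, hi = people.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1;
        if (people[mid].id === id) return people[mid];
        if (people[mid].id < id) lo = mid + 1;
        else hi = mid - 1;
    }
    return undefined;
}
findById([{id: 1, name: 'Joe'}, {id: 3, name: 'Sam'}, {id: 8, name: 'Eve'}], 3); // {id: 3, name: 'Sam'}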
Assuming the data will never change, the first (single object) option is the best.
The simplicity of the structure means it's the quickest to parse, and in the case of small, seldom (or never) changing data sets such as this one, I can only imagine that it will be frequently executed - in which case minimal overhead is the way to go.
I created a little library to manage key value pairs.
https://github.com/scaraveos/keyval.js#readme
It uses:
- an object to store the keys, which allows for fast delete and value-retrieval operations, and
- a linked list to allow for really fast value iteration.
Hope it helps :)
The third option is the best for any forward-looking application. You will probably wish to add more fields to your person record, so the first option is unsuitable. Also, it is very likely that you will have a large number of persons to store, and will want to look up records quickly - thus dumping them into a simple array (as is done in option #2) is not a good idea either.
The third pattern gives you the option to use any string as an ID, have complex Person structures and get and set person records in a constant time. It's definitely the way to go.
One thing that option #3 lacks is a stable deterministic ordering (which is the upside of option #2). If you need this, I would recommend keeping an ordered array of person IDs as a separate structure for when you need to list persons in order. The advantage would be that you can keep multiple such arrays, for different orderings of the same data set.
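A minimal sketch of that combination (option (3) for lookups plus a separate id array for ordering):
// Option (3) for fast lookups by id...
var peopleById = {
    1: { id: 1, name: 'Joe' },
    3: { id: 3, name: 'Sam' },
    8: { id: 8, name: 'Eve' }
};
// ...plus a separate ordered array of ids for a stable, deterministic ordering.
var peopleOrder = [3, 1, 8]; // any ordering you need; you can keep several of these
peopleOrder.forEach(function (id) {
    console.log(peopleById[id].name); // Sam, Joe, Eve
});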
Given your constraint that you will only ever have name as the value, I would pick the first option. It's the cleanest, has the least overhead and the fastest look up.
As can be seen in the example at https://plnkr.co/edit/YyTPVQ?p=preview (once the app has loaded, click on any of the names in the left bar):
If I modify any user's scale, it also modifies the other users too.
Don't ask me why, but I somehow managed to fix it by deleting all the .map and .js files and committing the fix to git; I woke up this morning and now it doesn't work AGAIN (yes, it is as miraculous as it sounds!) https://github.com/thurft/appraisal
My problem is as follows: employees.component.ts handles the logic of employees.component.html.
When I rate a question, it modifies the same question on all of this.employees instead of only doing it for this.selectedEmployee. This is triggered by updateQuestionRequest(question), and there is a console.log(this.employees) to show the objects being modified.
Nowhere do I modify the this.employees array, yet Angular somehow knows that it needs to modify the object in the array. But it also modifies all objects in that array that have the same question.
The question/problem is: how can I save the selectedEmployee's rated question in the selectedEmployee object instead of the value being saved across all employee objects?
I can't tell if this is a bug in Angular or a problem in my code, as sometimes it works and sometimes it doesn't, with no consistency.
You have to clone TECHNICALQUESTIONS, otherwise all employees will share the same reference:
employee[i].technicalQuestions = TECHNICALQUESTIONS.map(q => Object.assign({}, q));
Because Chrome has not yet implemented Object.assign, the workaround is to go through a JSON string to force the creation of a new object: employee[i].technicalQuestions = JSON.parse(JSON.stringify(TECHNICALQUESTIONS));
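To make the underlying issue concrete, here is a small sketch (with made-up data) of why assigning the same array gives every employee a reference to the same question objects, and why cloning fixes it:
// Sketch with made-up data: both employees point at the very same question objects.
var TECHNICALQUESTIONS = [{ text: 'Q1', rating: null }];
var alice = { technicalQuestions: TECHNICALQUESTIONS };
var bob = { technicalQuestions: TECHNICALQUESTIONS };
alice.technicalQuestions[0].rating = 5;
console.log(bob.technicalQuestions[0].rating); // 5 - the rating "leaks" to bob
// With per-employee clones, each rating stays where it was set.
var carol = { technicalQuestions: TECHNICALQUESTIONS.map(q => Object.assign({}, q)) };
carol.technicalQuestions[0].rating = 1;
console.log(bob.technicalQuestions[0].rating); // still 5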
I want to query objects from the Parse DB through JavaScript that have only one of a specific related object. How can this criterion be achieved?
So I tried something like this; the equalTo() acts as a "contains" and it's not what I'm looking for. My code so far, which doesn't work:
var query = new Parse.Query("Item");
query.equalTo("relatedItems", someItem);
query.lessThan("relatedItems", 2);
It seems Parse does not provide an easy way to do this.
Without any other fields, if you know all the items then you could do the following:
var innerQuery = new Parse.Query('Item');
innerQuery.containedIn('relatedItems', [all items except someItem]);
var query = new Parse.Query('Item');
query.equalTo('relatedItems', someItem);
query.doesNotMatchKeyInQuery('objectId', 'objectId', innerQuery);
...
Otherwise, you might need to get all records and do filtering.
Update
Because of the Relation data type, there is no way to include the relation content in the results; you need to do another query to get the relation content.
A workaround would be to add an itemCount column, keep it updated whenever the item relation is modified, and then do:
query.equalTo('relatedItems', someItem);
query.equalTo('itemCount', 1);
There are a couple of ways you could do this.
I'm working on a project now where I have cells composed of users.
I currently have an afterSave trigger that does this:
const count = await cell.relation("members").query().count();
cell.set("memberCount", count);
This works pretty well.
There are other ways that I've considered in theory, but I've not used them yet.
The right way would be to hack the ability to use select with dot notation to grab a virtual field called relatedItems.length in the query, but that would probably only work for me because I use Postgres ... Mongo seems to be extremely limited in its ability to do this sort of thing, which is why I would never make a database out of blobs of JSON in the first place.
You could do a similar thing with an afterFind trigger; I'm experimenting with that now. I'm not sure if it will confuse Parse to get back an attribute which does not exist in its schema, but I'll find out by the end of today. I have found that if I jam an artificial attribute into the objects in the trigger, they are returned along with the other data. What I'm not sure about is whether Parse will decide that the object is dirty, or, worse, decide that I'm creating a new attribute and store it to the database ... which could be filtered out with a beforeSave trigger, but not until after the data had all been sent to the cloud.
There is also a place where I had to do several queries from several tables and would have ended up with a lot of redundant data. So I wrote a cloud function which did the queries and then returned a couple of lists of objects, plus a few lists of objectId strings which served as indexes. This worked pretty well for me. And tracking the last load time and sending it back when I needed to update my data allowed me to limit myself to objects which had changed since my last query.
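As a rough illustration of that last pattern (class, field, and function names here are made up, and this assumes a recent Parse Server where cloud functions can be async):
// Sketch only: "fetchDashboard" and "Item" are hypothetical names.
Parse.Cloud.define("fetchDashboard", async (request) => {
    const { lastLoad } = request.params;
    const itemQuery = new Parse.Query("Item");
    if (lastLoad) {
        // Only return objects changed since the client's last load.
        itemQuery.greaterThan("updatedAt", new Date(lastLoad));
    }
    const items = await itemQuery.find({ useMasterKey: true });
    // Return the objects plus a list of objectId strings to use as an index client-side.
    return {
        items: items,
        itemIds: items.map((item) => item.id),
        loadedAt: new Date().toISOString()
    };
});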