I am attempting to create an object from JSON.
Part of my object looks like
a: {
name: "A name",
b: {
c: {
d: []
}
}
}
To save bandwidth, empty arrays and empty objects are stripped from the JSON, so the JSON that I receive is
{ "a": { "name": "A name" }}
If I just convert the JSON to an object, this leads to errors when I try to do things like a.b.c.d.length, as undefined does not have a length.
At the moment, I am populating the missing fields by something like
const json = getJson();
const obj = {
...json,
a: {
...json.a,
b: {
...json.a.b,
c: {
...json.a.b.c,
d: json.a.b.c.d || []
}
}
}
};
This is pretty verbose, and is going to get even uglier when there are several fields that need default values.
One obvious solution is to not strip out the empty arrays from the transmitted JSON. Assuming that that is not possible, is there a better way to handle this case?
In the real case, there are multiple fields at each level, so all of the spread operators are necessary. (Even if they were not, I would want to include them as the structure is likely to grow in the future.)
Edit: I am not using JQuery. I am using whatwg-fetch to retrieve data.
I would suggest some sort of solution in which you establish a pre-determined schema, and continue to omit empty arrays/values. You can then do something like
_.get(object, 'property.path.here', defaultValue)
with lodash.get(), where defaultValue is determined by your schema as an empty string "", 0, null, etc.
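To illustrate the idea without pulling in lodash, here is a minimal sketch with a hand-rolled get helper standing in for _.get; the path string and the sample defaults are hypothetical:

```javascript
// Minimal stand-in for lodash's _.get: walk a dot-separated path,
// returning defaultValue when any step along the way is missing.
function get(obj, path, defaultValue) {
  const result = path.split('.').reduce(
    (acc, key) => (acc == null ? undefined : acc[key]),
    obj
  );
  return result === undefined ? defaultValue : result;
}

const json = { a: { name: "A name" } }; // stripped response from the question

const d = get(json, 'a.b.c.d', []); // schema says d defaults to []
console.log(d.length); // 0 — no TypeError even though b/c/d were stripped
```

The same call works unchanged whether or not the server stripped the nested objects, which is the main benefit of the schema-plus-defaults approach.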
Removing empty objects/arrays from the response is not good practice, because these can be valid values.
It is premature optimization with no real performance benefit. In return, it will give you a lot of headaches when you need to special-case each response.
But if you still think it is worth it, then I recommend lodash's get function.
_.get(object, 'property.path.here')
As a best practice, I think you should not employ a sophisticated approach or syntax.
Just use JavaScript's common idiom for default values:
var _d = a.b.c.d || [];
// do something with _d.length
You can achieve what you are trying to do, but the code quality suffers. Thinking about the problem differently may lead to a better approach.
Moreover, I think declaring the original a object explicitly with all of its properties is better than removing empty properties as you do.
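Note that a.b.c.d || [] still throws if an intermediate level such as a.b was stripped. In newer environments (ES2020+), optional chaining with nullish coalescing avoids that; a small sketch, assuming the same stripped shape as in the question:

```javascript
const a = { name: "A name" }; // b, c, d were stripped from the response

// a.b.c.d || [] would throw here, because a.b is already undefined.
// Optional chaining short-circuits to undefined instead of throwing:
const _d = a.b?.c?.d ?? [];
console.log(_d.length); // 0
```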
Related
This is a question about Javascript(and possibly other languages) fundamentals.
I was developing a project and at some point I realized I had defined the same key more than once in an object, but no error appeared in the console. After some research I couldn't find a clear/official statement on what happens in this situation or how it affects the program. All I could find is the output in a simple example:
let obj = {key1:1, key1:2, key1:3};
console.log(obj.key1);// >> output = 3
console.log(Object.keys(obj));// >> output = ['key1']
My question is: The value is just redefined and all previous declarations are erased? Is this a limitation or some kind of error of Javascript?
The value is just redefined and all previous declarations are erased?
Yes. When an object literal contains duplicate keys, only the final key in the object literal will exist on the object after the object is evaluated.
Is this a limitation or some kind of error of Javascript?
It's permitted, but it's nonsense. Like lots of things in programming, there are things which are syntactically allowed but don't make any sense in code.
But you can use a linter to prevent yourself from making these sorts of mistakes, e.g. ESLint's no-dupe-keys rule.
There's one case where something very similar to duplicate keys can be common, which is when using object spread, eg:
const obj = { foo: 'foo', bar: 'bar' };
// Now say we want to create a new object containing the properties of `obj`
// plus an updated value for `foo`:
const newObj = { ...obj, foo: 'newFoo' }
This sort of approach is very common when working with data structures immutably, like in React. It's not exactly the same thing as an outright duplicate key in the source code, but it's interpreted the same way: whatever value is evaluated last in the key-value list (regardless of whether the key is spread or static) will be the value the final object contains at that key.
I have several objects. They are structured many different ways. For example:
var obj1 = {
'key1':'value',
'key2':[{
'somekey':'somevalue',
'nestedObject': [{
'something':'{{THIS STRING}}'
}]
}]
}
var obj2 = {
'key5':'some text {{THIS STRING}} some more text',
'name':[{
'somekey':'somevalue'
}]
}
There are many more objects than this, and their structures can be infinitely different.
I am looking for a way to find {{THIS STRING}}, no matter where it appears in the object, and no matter what other text surrounds it. All I need to know is a true/false of if it appears anywhere at all in the values of any given object, regardless of how deeply-nested in the object it is.
Thank you!
Note: This is a quick method indeed, but it does not work for all use cases; e.g., if your keys may contain the desired string, it will give the wrong output. See comments below.
Not the cleanest of solutions, but you can turn your object into a JSON string using JSON.stringify(), and then look for the string you want inside that string.
var obj1_str = JSON.stringify(obj1);
var isInFile = obj1_str.includes("your_string"); //true if your string is there, false otherwise.
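To address the caveat about keys matching the search string, here is a sketch of a recursive search that inspects only values, never keys; the helper name valuesContain is made up for this example:

```javascript
// Recursively search only the *values* of an object/array for a substring,
// avoiding the key false-positives that matching against JSON.stringify
// output can produce.
function valuesContain(value, needle) {
  if (typeof value === 'string') return value.includes(needle);
  if (Array.isArray(value)) return value.some(v => valuesContain(v, needle));
  if (value !== null && typeof value === 'object') {
    return Object.values(value).some(v => valuesContain(v, needle));
  }
  return false; // numbers, booleans, null, etc.
}

const obj2 = {
  key5: 'some text {{THIS STRING}} some more text',
  name: [{ somekey: 'somevalue' }]
};

console.log(valuesContain(obj2, '{{THIS STRING}}')); // true
console.log(valuesContain({ '{{THIS STRING}}': 'x' }, '{{THIS STRING}}')); // false — keys ignored
```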
I'm trying to get my head around how to use Immutables in JavaScript/TypeScript without taking all day about it. I'm not quite ready to take the dive into Immutable.js, because it seems to leave you high and dry as far as type safety.
So let's take an example where I have an Array where the elements are all of Type MyType. In my Class, I have a method that searches the Array and returns a copy of a matching element so we don't edit the original. Say now that at a later time, I need to look and see if the object is in the Array, but what I have is the copy, not the original.
What is the standard method of handling this? Any method I can think of to determine whether I already have this item is going to take some form of looping through the collection and visiting each element and then doing a clunky equality match, whether that's turning both of them to strings or using a third-party library.
I'd like to use Immutables, but I keep running into situations like this that make them look pretty unattractive. What am I missing?
I suspect that my solution is not "...the standard method of handling this." However, I think it at least is a way of doing what I think you're asking.
You write that you have a method that "...returns a copy of a matching element so we don't edit the original". Could you change that method so that it instead returns both the original and a copy?
As an example, the strategy below involves retrieving both an original element from the array (which can later be used to search by reference) as well as a clone (which can be manipulated as needed without affecting the original). There is still the cost of cloning the original during retrieval, but at least you don't have to do such conversions for every element in the array when you later search the array. Moreover, it even allows you to differentiate between array elements that are identical-by-value, something that would be impossible if you only originally retrieved a copy of an element. The code below demonstrates this by making every array element identical-by-value (but, by definition of what objects are, different-by-reference).
I don't know if this violates other immutability best practices by, e.g., keeping copies of references to elements (which, I suppose, leaves the code open to future violations of immutability even if they are not currently being violated...though you could deep-freeze the original to prevent future mutations). However it at least allows you to keep everything technically immutable while still being able to search by reference. Thus you can mutate your clone as much as you want but still always hold onto an associated copy-by-reference of the original.
const retrieveDerivative = (array, elmtNum) => {
const orig = array[elmtNum];
const clone = JSON.parse(JSON.stringify(orig));
return {orig, clone};
};
const getIndexOfElmt = (array, derivativeOfElement) => {
return array.indexOf(derivativeOfElement.orig);
};
const obj1 = {a: {b: 1}}; // Object #s are irrelevant.
const obj3 = {a: {b: 1}}; // Note that all objects are identical
const obj5 = {a: {b: 1}}; // by value and thus can only be
const obj8 = {a: {b: 1}}; // differentiated by reference.
const myArr = [obj3, obj5, obj1, obj8];
const derivedFromSomeElmt = retrieveDerivative(myArr, 2);
const indexOfSomeElmt = getIndexOfElmt(myArr, derivedFromSomeElmt);
console.log(indexOfSomeElmt);
The situation you've described is one where a mutable datastructure has obvious advantages, but if you otherwise benefit from using immutables there are better approaches.
While keeping it immutable means that your new updated object is completely new, that cuts both ways: you may have a new object, but you also still have access to the original object! You can do a lot of neat things with this, e.g. chain your objects so you have an undo-history, and can go back in time to roll back changes.
So don't use some hacky looking-up-the-properties in the array. The problem with your example is because you're building a new object at the wrong time: don't have a function return a copy of the object. Have the function return the original object, and call your update using the original object as an index.
let myThings = [new MyType(), new MyType(), new MyType()];
// We update by taking the thing, and replacing with a new one.
// I'll keep the array immutable too
function replaceThing(oldThing, newThing) {
const oldIndex = myThings.indexOf(oldThing);
myThings = myThings.slice();
myThings[oldIndex] = newThing;
return myThings;
}
// then when I want to update it
// Keep immutable by spreading
const redThing = myThings.find(({ red }) => red);
if (redThing) {
// In this example, there is a 'clone' method
replaceThing(redThing, Object.assign(redThing.clone(), {
newProperty: 'a new value in my immutable!',
}));
}
All that said, classes make this a whole lot more complex. It's much easier to keep simple objects immutable, since you can simply spread the old object into the new one, e.g. { ...redThing, newProperty: 'a new value' }. Once your objects are more than one level deep, you may find Immutable.js far more useful, since you can mergeDeep.
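For plain objects, the nested-spread pattern looks like this; the state shape here is invented purely for illustration:

```javascript
// Updating a nested plain object immutably: every level along the path
// to the change must be spread into a new object; untouched branches
// are simply carried over by reference.
const state = { user: { name: 'Ada', prefs: { theme: 'light', lang: 'en' } } };

const next = {
  ...state,
  user: {
    ...state.user,
    prefs: { ...state.user.prefs, theme: 'dark' }
  }
};

console.log(next.user.prefs.theme);  // 'dark'
console.log(state.user.prefs.theme); // 'light' — original is untouched
console.log(next.user.prefs.lang);   // 'en' — untouched values carried over
```

This is exactly the kind of boilerplate that mergeDeep (or a library like Immer) eliminates once the nesting gets deep.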
After a lot of bug-hunting, I managed to narrow my problem down to this bit of code:
dup = {a: [1]}
chrome.storage.local.set({x: [dup, dup]});
chrome.storage.local.get(["x"], function(o) {console.log(JSON.stringify(o['x']));});
This prints out: [{"a":[1]},null]
Which I find to be a pretty strange behaviour. So my questions are:
Is this intentional? Is it documented?
Can you recommend a good solution to bypass this limitation?
My current idea is to use JSON.stringify (which handles this case correctly) and later parse the string. But that just seems wasteful.
Thanks.
No, it is not intentional, and it should be reported as a bug: https://crbug.com/606955 (and it is now fixed as of Chrome 52!).
As I explained in the bug report, the cause of the bug is that the objects are identical. If your object dup only contains simple values (i.e. no nested arrays or objects, only primitive values such as strings, numbers, booleans, null, ...), then a shallow clone of the object is sufficient:
dup = {a: [1]}
dup2 = Object.assign({}, dup);
chrome.storage.local.set({x: [dup, dup2]});
If you need support for nested objects, then you have to make a deep clone. There are many existing libraries or code snippets for that, so I won't repeat it here. A simple way to prepare values for chrome.storage is by serializing it to JSON and then parsing it again (then all objects are unique).
dup = {a: [1]}
var valueToSave = JSON.parse(JSON.stringify([dup, dup]));
chrome.storage.local.set({x: valueToSave});
// Or:
var valueToSave = [ dup, JSON.parse(JSON.stringify(dup)) ];
chrome.storage.local.set({x: valueToSave});
I want to return some errors to my jQuery method. What is happening is I am doing a post (with a type of "json"), and if there are errors I want to display them back to the user. I am doing client-side validation, but some of these are server-related errors (i.e. the database died or something and the update could not happen).
Anyway, there could be a few errors and I want to return them all in one go.
So the only way I really know how is to use JSON, but now that I get the JSON object I want to get all the fields out of it. I don't want to access them by name, though, since I want to use the same method everywhere and each response has different names.
So if I could access them by index there would be a lot less typing.
Can I do this?
Since you are using jQuery, you could use $.each to iterate over object properties, for example:
var obj = { one:1, two:2, three:3, four:4, five:5 };
jQuery.each(obj, function(key, val) {
console.log(key,val);
});
For objects, jQuery internally executes a for...in statement, which does not iterate over built-in properties; however, you can have problems if Object.prototype has been extended, since those extended members will also be iterated.
It is not common practice to extend Object.prototype, but to avoid problems you can use the hasOwnProperty function to ensure that the property exists directly on the object being iterated:
for ( var key in obj) {
if (obj.hasOwnProperty(key)) {
console.log(key,obj[key]);
}
}
JSON is nothing more than yet another markup language for describing complex data structures. JSON is parsed into JavaScript data structures and can represent objects, arrays, or just a string, in theoretically unlimited depth.
Without knowing whether your JSON structure consists of arrays, objects, or a mix of both, it's hard to say whether you can.
However, you could have a look at:
var dataObject = {
key1: "error1",
key2: "error2"
};
for (var key in dataObject) {
alert(key + " : " + dataObject[key]);
}
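In modern JavaScript (no jQuery needed), Object.keys, Object.values, and Object.entries give you indexed access to properties whose names you don't know in advance; a small sketch, with the error object invented for illustration:

```javascript
// Indexed access to an object's properties without knowing their names.
const errors = { one: 'error1', two: 'error2' };

const keys = Object.keys(errors);     // ['one', 'two']
const values = Object.values(errors); // ['error1', 'error2']

console.log(values[0]); // 'error1' — "by index", no property name needed

// Or iterate over key/value pairs directly:
for (const [key, value] of Object.entries(errors)) {
  console.log(key + ' : ' + value);
}
```

Note that for non-integer string keys the iteration order follows insertion order, so the indices are stable for a given response shape.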