Are there any reliable techniques for storing prototype-based libraries/frameworks in MongoDB's system.js? I came across this issue when trying to use dateJS formats within a map-reduce. JIRA ticket SERVER-770 explains that objects' closures - including their prototypes - are lost when serialized to the system.js collection, and that this is the expected behavior. Unfortunately, this excludes a lot of great frameworks such as Dojo, Google Closure, and jQuery.
Is there a way to somehow convert or contain such libraries so that they don't rely on prototyping? There's some promise in initializing them before the map-reduce and passing them in through the scope object, but I haven't had much luck so far. If my approach is flawed, what is a better way to enable server-side JavaScript reuse in Mongo?
Every query using JS may reuse an existing JS context or get a brand-new one, into which stored JS objects are loaded.
In order to do what you want, you need either:
mongod to run the stored code automatically when installing it
mapreduce to have an init method
The first is definitely the more interesting feature.
It turns out that the MongoDB V8 build does this automatically (though it is not officially supported), but the official SpiderMonkey build does not.
Say you store code like:
db.system.js.save({ _id: "mylib", value: "myprint = function() { print('installed'); return 'installed'; }" })
Then in V8 you can use myprint() freely in your code, but with SpiderMonkey you would need to call mylib() explicitly.
As a workaround you can create another method:
db.system.js.save({ _id: "installLib", value: "if (typeof libLoaded === 'undefined') { mylib(); libLoaded = true; }" })
And call it from your map() function.
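The load-once guard can be sketched outside mongod as plain JavaScript. This is only an illustration of the pattern, not the mongod API; the names (mylib, myprint, libLoaded) mirror the answer, and globalThis stands in for the shell's global scope:

```javascript
// mylib installs library members on the global object,
// the way stored server-side code would
function mylib() {
  globalThis.myprint = function () {
    return "installed";
  };
}

// installLib runs the installer at most once per JS context
function installLib() {
  // typeof avoids a ReferenceError the first time, when the
  // guard variable does not exist yet in this context
  if (typeof globalThis.libLoaded === "undefined") {
    mylib();
    globalThis.libLoaded = true;
  }
}

// every map() would start by calling the guard; repeated calls are no-ops
installLib();
installLib();
console.log(myprint()); // "installed"
```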
I created a ticket to standardize the engines and allow automatic loading:
https://jira.mongodb.org/browse/SERVER-4450
I have several functions that I'd like to reuse across various requests and folders within the same collection (I'm using it mostly as a template, so it'll be pulling in variables externally).
There are many different suggestions in the Postman documentation but what's the best way to re-use code for such a use case?
What I've been doing lately is adding functions to my collection level pre-request script like so
collectionUtils = {
    clearEnvData: function (pm) {
        // some useful code
    },
    // called after every request to ensure server coverage during smoke testing
    cycleCurrentServer: function (serverCount, pm) {
        // some useful code
    }
};
Then wherever I want to use these methods I do something like this
collectionUtils.cycleCurrentServer(index, pm);
I think generally the best way is to externalize the code into a library. You'll then make changes to the library and those changes will be reflected everywhere. There are several ways to implement this; I'll leave you with the two that make sense for your use case:
Load your library from a remote site
If you are using some sort of development workflow that publishes your changes upstream, and you have your library published somewhere, you can load it at runtime:
pm.sendRequest("https://cdnjs.cloudflare.com/ajax/libs/dayjs/1.11.0/dayjs.min.js", (err, res) => {
    // convert the response to text and save it as a collection variable
    pm.collectionVariables.set("dayjs_library", res.text());
    // eval will evaluate the JavaScript code and initialize the min.js
    eval(pm.collectionVariables.get("dayjs_library"));
    // methods from the CDN file are then available via the this keyword
    let today = new Date();
    console.log("today=", today);
    console.log(this.dayjs(today).format());
});
Store your code on a collection variable and load it that way
A less pretty way, but more convenient for a lot of folks, is to drop the whole library into a collection variable (e.g. paste the minified source into a dayjs_code collection variable):
Then you can load it when you need it:
const dayjs_code = pm.collectionVariables.get('dayjs_code');
// Invoke an anonymous function to get access to the dayjs library methods
(new Function(dayjs_code))();
let today = new Date();
console.log(dayjs(today).format())
In both cases, when you update your library you either have to republish it or copy-paste it again into the collection variable. However, that surely beats copy-pasting a piece of code into 20-odd places and then figuring out what's updated and what's not while fighting bugs along the way.
I'm using a query on both server and client (pub/sub). So I have something like this at a few different locations.
const FOO = 'bar';
Collection.find({property:FOO})
FOO may potentially change, and rather than having to update my code at separate locations, I was thinking it may be worth abstracting this away into a global variable that is visible to both client and server.
I created a new file 'lib/constants.js' and simply did FOO = 'bar'; (note: no keyword). This seems to work just fine. I found this solution in the accepted answer to How can I access constants in the lib/constants.js file in Meteor?
My question is if this a desired pattern in Meteor and even general JS.
I understand I can abstract this away into a module, but that may be overkill in this case. I also think using session/reactive vars is unsafe as it can kinda lead to action at a distance. I'm not even gonna consider using settings.json as that should only be for environment variables.
Any insights?
Yes. If you are using an older version of Meteor you can use settings.json, but for newer versions we have the import option.
I don't think the pattern is that bad. I would put that file in /imports though and explicitly import it.
Alternatively, you can write into Meteor.settings.public from the server, e.g., on start-up, and those values will be available on the client in the same location. You can do this without having a settings file, which is nice because it doesn't require you to make any changes between development and production.
Server:
Meteor.startup(() => {
// code to run on server at startup
Meteor.settings.public.FOO = 'bar';
});
Client:
> console.log(Meteor.settings.public.FOO);
bar
This is actually a b̶a̶d̶ unfavoured pattern, because with global variables you cannot track what changed, and in general constructing modular and replaceable components is much better. This pattern was only made possible in Meteor's early days, when the imports directory pattern was not yet supported and your entire code was split between both server and client.
https://docs.meteor.com/changelog.html#v13220160415
You can find many write-ups about it online, and even Stack Overflow answers, so I don't want to restate the obvious.
Using a settings.json variable is not an option since the value may change dynamically, so what are our options? For me I'd say:
Store it in the database and either publish it or retrieve it using methods, with proper access scoping of course. You can also modify it dynamically using methods that author DB changes.
Or, you may try using Meteor.EnvironmentVariable. I'd be lying if I said I know how to use it properly, but I've seen it used in a couple of Meteor projects to tackle a similar situation.
https://www.eventedmind.com/items/meteor-dynamic-scoping-with-environment-variables
Why are global variables considered bad practice?
I'd like something like Python's defaultdict in Javascript, except without using any libraries. I realize this won't exist in pure Javascript. However, is there a way to define such a type in a reasonable amount of code (not just copying-and-pasting some large library into my source file) that won't hit undesirable corner cases later?
I want to be able to write the following code:
var m = defaultdict(function() { return [] });
m["asdf"].push(0);
m["qwer"].push("foo");
Object.keys(m).forEach(function(key) {
// Should give me "asdf" -> [0] and "qwer" -> ["foo"]
});
I need this to work on recent versions of Firefox, Chrome, Safari, and ideally Edge.
Again, I do not want to use a library if at all possible. I want a way to do this in a way that minimizes dependencies.
Reasons why previous answers don't work:
This answer uses a library, so it fails my initial criteria. Also, the defaultdict it provides doesn't actually behave like a Javascript object. I'm not looking to write Python in Javascript, I'm looking to make my Javascript code less painful.
This answer suggests defining get. But you can't use this to define a defaultdict over collection types (e.g. a map of lists). And I don't think the approach will work with Object.keys either.
This answer mentions Proxy, but it's not obvious to me how many methods you have to implement to avoid having holes that would lead to bad corner cases later. Writing all of the Proxy methods certainly seems like a pain, but if you skip any methods you might cause painful bugs for yourself down the road if you try to use something you didn't implement a handler for. (Bonus question: What is the minimal set of Proxy methods you'd need to implement to avoid any such holes?) On the other hand, the suggested getter approach doesn't follow standard object syntax, and you also can't do things like Object.keys.
You really seem to be looking for a proxy. It is available in the modern browsers you mention, is not a library, and is the only technology that lets you keep the standard object syntax. Using a proxy is actually quite simple; all you need to implement is the get trap, which automatically creates non-existing properties:
function defaultDict(createValue) {
return new Proxy(Object.create(null), {
get(storage, property) {
if (!(property in storage))
storage[property] = createValue(property);
return storage[property];
}
});
}
var m = defaultDict(function() { return [] });
m["asdf"].push(0);
m["qwer"].push("foo");
Object.keys(m).forEach(console.log);
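The factory argument makes this flexible: swapping in a zero-returning function gives the analogue of Python's defaultdict(int). A small sketch (the counter example is mine, not from the question) using the same defaultDict as above:

```javascript
// Same proxy-based defaultDict as in the answer above
function defaultDict(createValue) {
  return new Proxy(Object.create(null), {
    get(storage, property) {
      if (!(property in storage))
        storage[property] = createValue(property);
      return storage[property];
    }
  });
}

// A zero-returning factory turns it into a counter
const counts = defaultDict(() => 0);
for (const word of ["asdf", "qwer", "asdf"])
  counts[word] += 1; // missing keys start at 0

console.log(counts["asdf"]); // 2
console.log(counts["qwer"]); // 1
console.log(Object.keys(counts)); // the two keys, asdf and qwer
```

Note that Object.keys works here because the get trap lazily populates the underlying storage object, which the default ownKeys behavior then enumerates.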
This question is regarding a best practice for structuring data objects using JS Modules on the server for consumption by another module.
We have many modules for a web application, like a login view and form handlers, that contain disparate fragments of data, like user state, application state, etc., which we need to send to an analytics suite that requires a specific object format. Where should we map the data? (Pick the things we want to send, rename keys, delete unwanted values.)
In each module. E.g.: Login knows about analytics and its formatting requirements.
In an analytics module. Now analytics has to know about each and every module's source format.
In separate [module]-analytics modules. Then we'll have dozens of files which don't have much context to debug and understand.
My team is split on what is the right design. I'm curious if there is some authoritative voice on the subject that can help us settle this.
Thanks in advance!
For example,
var objectForAnalytics = {
logged_in: user.get('isLoggedIn'),
app_context: application.get('environment')
};
analytics.send(objectForAnalytics);
This short sample script uses functions from 3 modules. Where should it exist in a well-organized app?
JS doesn't do marshaling, in the traditional sense.
Since the language encourages duck typing and runs all loaded modules in a single VM, each module can simply pass the data and let a consumer choose the fields that interest them.
Marshaling has two primary purposes, historically:
Delivering another module the data that it expects, since C-style structures and objects do not support extra data (usually).
Transferring data between two languages or modules built on two compilers, which may be using different memory layouts, calling conventions, or ABIs.
JavaScript solves the second problem using JSON, but the first is inherently solved with dictionary-style objects. Passing an object with 1000 keys is just as fast as passing one with 2, so you can (and often are encouraged to) simply give the consumer what you have and allow them to decide what they need.
This is further reinforced in languages like TypeScript, where the contract on a parameter type is simply a minimum set of requirements. TS allows you to pass an object that exceeds those requirements (by having other fields), only verifying that you are aware of what the consumer states it requires in its contract and have met that.
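To make that concrete, here is a minimal sketch (all names are illustrative, not from the question's codebase) of a consumer that picks only the fields it needs from whatever the producer hands over:

```javascript
// The analytics consumer duck-types its input: it reads only the
// fields it cares about and ignores everything else
function sendToAnalytics(data) {
  const payload = {
    logged_in: Boolean(data.isLoggedIn),
    app_context: data.environment
  };
  return payload; // a real app would post this to the analytics suite
}

// A producer module can pass its full state without pre-trimming it
const fullModuleState = {
  isLoggedIn: true,
  environment: "prod",
  theme: "dark",       // extra fields are simply ignored
  sessionId: "abc123"
};
console.log(sendToAnalytics(fullModuleState));
// { logged_in: true, app_context: 'prod' }
```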
When you do need to transform an object, perhaps because two libraries use the same data with different keys, creating a new object with shallow references is pretty easy:
let foo = {
bar: old.baz,
baz: old.bin
};
This does not change or copy the underlying data, so any changes to the original (or the copy) will propagate to the other. This does not apply to primitive values, which are copied; reassigning a primitive on one object will not propagate to the other.
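A quick check of that propagation behavior, using the same re-keying shape as above (baz is made an object here so the sharing is visible):

```javascript
// Shallow re-keying: object values are shared by reference,
// primitive values are copied
const old = { baz: { n: 1 }, bin: "x" };
const foo = {
  bar: old.baz, // reference to the same object
  baz: old.bin  // copy of the primitive
};

old.baz.n = 2;          // mutate the shared object...
console.log(foo.bar.n); // 2, visible through the new key

old.bin = "y";          // reassign the primitive...
console.log(foo.baz);   // "x", the copy is unaffected
```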
You can extend native objects in JavaScript. For example, sugar.js extends Array, String, and Function, among other things. Native-object extensions can be very useful, but they inherently break encapsulation: if someone uses the same extension name (overwriting another extension), things will break.
It would be incredibly nice if you could extend objects for a particular scope. E.g. being able to do something like this in node.js:
// myExtension1.js
Object.prototype.x = 5
exports.speak = function() {
var six = ({}.x+1)
console.log("6 equals: "+six)
}
// myExtension2.js
Object.prototype.x = 20
exports.speak = function() {
var twenty1 = ({}.x+1)
console.log("21 equals: "+twenty1)
}
and have this work right:
// test.js
var one = require('myExtension1')
var two = require('myExtension2')
one.speak(); // 6 equals: 6
two.speak(); // 21 equals: 21
Of course in reality, this will print out "6 equals: 21" for the first one.
Is there any way, via any mechanism, to do something where this is possible? The mechanisms I'm interested in hearing about include:
Pure javascript
Node.js
C++ extensions to Node.js
Unfortunately you cannot do that currently in node, because node shares the same built-in objects across modules.
This is bad because it can lead to unexpected side effects, as has happened in browser history, and that's why everyone is now yelling "don't extend built-in objects".
Other CommonJS environments follow the original CommonJS specs more closely, so built-in objects are not shared; every module has its own. For instance, Jetpack, the Mozilla SDK for building Firefox add-ons, works this way: built-in objects are per module, so if you extend one you can't clash.
Anyway, in general I believe that extending built-in objects nowadays is not really necessary and should be avoided.
This is not possible, since a native type only has a single source for its prototype. In general I would discourage mucking with the prototype of native types. Not only are you limiting your portability (as you pointed out), but you may also be unknowingly overwriting existing or future properties. This also creates a lot of "magic" in your code that a future maintainer will have a hard time tracking down. The only real exception to this rule is polyfills: if your environment has not implemented a new feature yet, a polyfill can provide it.
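One pure-JavaScript workaround the answers don't cover (a sketch of mine, not a per-scope prototype): key each module's extension off a unique Symbol. Both "modules" still extend the shared Object.prototype, but because the keys are distinct symbols rather than the shared name x, they cannot clash:

```javascript
// Each module creates its own Symbol key, so extensions never collide
// even though Object.prototype is shared across modules
const x1 = Symbol("x"); // would live in myExtension1.js
Object.prototype[x1] = 5;

const x2 = Symbol("x"); // would live in myExtension2.js
Object.prototype[x2] = 20;

// Both values coexist on the same prototype
console.log(({})[x1] + 1); // 6
console.log(({})[x2] + 1); // 21
```

The trade-off is that consumers need access to the symbol to use the extension, so this loses the transparent `{}.x` syntax the question asks for.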