I have this splice line, with a debug line either side:
const obj = { x:[1,2], y:{t:"!!!"} }
const lenBefore = S.groupings.length
const ret = S.groupings.splice(gix, 0, obj)
console.log(`${lenBefore} --> ${S.groupings.length}, gix=${gix}, ret=${JSON.stringify(ret)}`)
It gives this output, which I believe is impossible:
3 --> 3, gix=2, ret=[]
I.e. the splice did not add or remove anything.
Another run, on different data, gave:
18 --> 18, gix=2, ret=[]
This function is fairly complex, but I have some unit tests to cover the main ways it is used. I left the console.log() lines in when running them, and those tests give me:
1 --> 2, gix=0, ret=[]
3 --> 4, gix=1, ret=[]
I.e. exactly what I'd expect. So, something about the environment when run for real is causing this.
(It is an electron app, this code is running in the front-end part, i.e. effectively in Chrome, and it is with Chrome developer tools that I'm looking at the console. The unit tests are running in mocha 3.2.0, node 6.11.4. The real environment is Electron 1.8.1, which is Chrome 59, i.e. slightly newer.)
Has anyone any idea what external context could possibly cause splice() to not do its job?
UPDATE:
If I instead use this, then everything works both in the live code and in the unit tests:
S.groupings = S.groupings.slice(0,gix).concat(obj, S.groupings.slice(gix))
There is obviously something about S.groupings that stops it being mutated, but just in my complex live code, not in a unit test! That in itself is quite interesting, as I thought it was not possible to make immutable JavaScript objects...
BTW, the following code:
console.log(`S type=${typeof S}; isArray=`+ Array.isArray(S))
console.log(`S.groupings type=${typeof S.groupings}; isArray=`+ Array.isArray(S.groupings))
tells me identical results in live and unit test code:
S type=object; isArray=false
S.groupings type=object; isArray=true
And I also tried this, near the top of the function:
S.groupings = Array.from(S.groupings)
It made no difference. I.e. all the evidence points to S.groupings being a normal JavaScript array.
UPDATE 2: Not frozen or sealed:
Object.isFrozen(S.groupings) //false
Object.isSealed(S.groupings) //false
Object.isExtensible(S.groupings) //true
By the way, to try to narrow it down I made the following three increasingly complex simplifications of the real code, as mocha tests. They all pass perfectly. Of course they do. I decided to include them here, as they give you more context than the one line I posted above, and also show some things that are obviously not the explanation.
it("1", function(){
const S = {
groupings:[ {a:1,b:2}, {a:2,b:"xxx"}, {a:3,b:false} ],
tom:"hello",
dick:[1,2,3],
harry:null
}
const obj = {a:2.5, b:"INSERT ME"}
let gix = 2
assert.equal(S.groupings.length, 3)
S.groupings.splice(gix, 0, obj)
assert.equal(S.groupings.length, 4)
})
//--------------------------------
it("2", function(){
const S = {
groupings:[ {a:1,b:2}, {a:2,b:"xxx"}, {a:3,b:false} ],
tom:"hello",
dick:[1,2,3],
harry:null
}
const CG = [ {z:1}, {z:2}, {z:3} ]
const obj = {a:2.5, b:"INSERT ME"}
for(let gix = 0;gix < CG.length; ++gix){
const g = CG[gix]
if(g.z < obj.a)continue
assert.equal(S.groupings.length, 3)
S.groupings.splice(gix, 0, obj)
assert.equal(S.groupings.length, 4)
break
}
})
//--------------------------------
it("3", function(){
const data = {
"1":{},
"2":{
groupings:[ {a:1,b:2}, {a:2,b:"xxx"}, {a:3,b:false} ],
tom:"hello",
dick:[1,2,3],
harry:null
}}
const CG_outer = [ {z:1}, {z:2}, {z:3} ]
function inner(CG, txt){
const S = data["2"]
const obj = {a:2.5, b:txt}
for(let gix = 0;gix < CG.length; ++gix){
const g = CG[gix]
if(g.z < obj.a)continue
assert.equal(S.groupings.length, 3)
S.groupings.splice(gix, 0, obj)
assert.equal(S.groupings.length, 4)
break
}
}
inner(CG_outer, "INSERT ME")
assert.deepEqual(data["2"].groupings,
[ {a:1,b:2}, {a:2,b:"xxx"}, {a:2.5, b:"INSERT ME"}, {a:3,b:false} ] )
})
I've finally tracked it down. Electron apps have distinct back-end/front-end processes, with message passing between them. I was doing this:
S = mainProcess.getLatestData(...)
If I change that to:
const ret = mainProcess.getLatestData(...)
Object.assign(S, ret)
then everything, including splice() works as expected.
I cannot really explain what is going on, though. ret is not frozen or sealed, and is extensible. It appears to be a plain JS object in all respects, and ret.groupings appears to be a simple JS array.
And S was originally data that came from the front-end. (However that was sent in an event from the front-end, rather than requested, and those are two different mechanisms in Electron.)
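In case it helps anyone else, the pattern that works is to copy whatever comes back from the main process into plain local data before mutating it. A minimal sketch, reusing the getLatestData call from above; the JSON round-trip variant is just an illustrative alternative to Object.assign, assuming the data is JSON-safe:
const ret = mainProcess.getLatestData(/* ... */)
// Merge into an existing local object (the fix described above):
Object.assign(S, ret)
// ...or make a fully detached local deep copy (illustrative alternative,
// assumes the data is plain JSON):
const localS = JSON.parse(JSON.stringify(ret))
localS.groupings.splice(gix, 0, obj)   // mutates the local copy as expected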
I'm currently trying to implement some basic Prolog queries in Tau-Prolog. Although I have working queries in SWI-Prolog, I can't get them to work in Tau-Prolog.
I would like to return the name of all Robots that are in the database and have the Interface "B".
Is there something important I am missing here? I think that sub_string/5 might be the reason why it's not working. It also won't work when I paste the code into the trial interpreter on http://tau-prolog.org/
Does anyone know a way to fix this query so it could work in Tau-Prolog? Thanks in advance!
<script>
var session = pl.create(1000)
var database = `
robot('Roboter1','A', 1, 550).
robot('Roboter2','C', 2, 340).
robot('Roboter3','B', 2, 430).
robot('Roboter4','A', 2, 200).
robot('Roboter5','B', 3, 260).
`
function start_query_RwB(){
query_RwB();
}
function query_RwB(){
var queryRwB = "queryRwB :-write('Interface B has: '),robot(Name, Interface,_,_),sub_string(Interface,_,_,_,'B'),write(Name),nl, fail."
var code_pl = database.concat(queryRwB);
var parsed = session.consult(code_pl)
var query = session.query('queryRwB.')
function inform(msg) {
show_result4.innerHTML += msg
}
session.current_output.stream.put = inform;
var callback = function(answer) {
}
session.answer(callback);
}
</script>
Use sub_atom/5 instead of sub_string/5 in the definition of the queryRwB variable as you use atoms, not strings, in the definition of the predicate robot/4:
var queryRwB = "queryRwB :-write('Interface B has: '),robot(Name, Interface,_,_), sub_atom(Interface,_,_,_,'B'),write(Name),nl, fail."
Note that sub_atom/5 is a standard predicate (that's implemented by Tau Prolog) while sub_string/5 is a proprietary predicate only found in some Prolog systems like ECLiPSe and SWI-Prolog.
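As a side note: since the interface stored in robot/4 is just the atom 'B', plain unification (robot(Name, 'B', _, _)) also works here. And if the goal is to collect the names in JavaScript rather than print them with write/1, something along these lines should do; this is only a sketch reusing the session consulted above, with findall/3 (a standard predicate implemented by Tau Prolog) and pl.format_answer for display:
session.query("findall(Name, robot(Name, 'B', _, _), Names).");
session.answer(function(answer) {
  if (answer) {
    // prints something like: Names = ['Roboter3','Roboter5']
    console.log(pl.format_answer(answer));
  }
});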
I have an input file which may potentially contain up to 1M records, and each record would look like this:
field 1 field 2 field3 \n
I want to read this input file and sort it based on field3 before writing it to another file.
Here is what I have so far:
var fs = require('fs'),
readline = require('readline'),
stream = require('stream');
var start = Date.now();
var outstream = new stream;
outstream.readable = true;
outstream.writable = true;
var rl = readline.createInterface({
input: fs.createReadStream('cross.txt'),
output: outstream,
terminal: false
});
rl.on('line', function(line) {
//var tmp = line.split("\t").reverse().join('\t') + '\n';
//fs.appendFileSync("op_rev.txt", tmp );
// this logic to reverse and then sort is too slow
});
rl.on('close', function() {
var closetime = Date.now();
console.log('Read entire file. ', (closetime - start)/1000, ' secs');
});
I am basically stuck at this point; all I have is the ability to read from one file and write to another. Is there a way to efficiently sort this data before writing it?
DB and sort-stream are fine solutions, but a DB might be overkill, and I think sort-stream eventually just sorts the entire file in an in-memory array (in its through stream's end callback), so I expect performance to be roughly the same as the original solution.
(But I haven't run any benchmarks, so I might be wrong.)
So, just for the heck of it, I'll throw in another solution :)
EDIT:
I was curious to see how big a difference this would be, so I ran some benchmarks.
The results were surprising even to me: it turns out the sort -k3,3 solution is better by far, about 10x faster than the original solution (a simple in-memory array sort), while the nedb and sort-stream solutions are at least 18x slower than the original solution (i.e. at least 180x slower than sort -k3,3).
(See benchmark results below)
If on a *nix machine (Unix, Linux, Mac, ...) you can simply use
sort -k 3,3 yourInputFile > op_rev.txt and let the OS do the sorting for you.
You'll probably get better performance, since sorting is done natively.
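(If field3 is numeric, add the n flag, i.e. sort -k3,3n, so the comparison is numeric rather than lexicographic.)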
Or, if you want to process the sorted output in Node:
var util = require('util'),
spawn = require('child_process').spawn,
sort = spawn('sort', ['-k3,3', './test.tsv']);
sort.stdout.on('data', function (data) {
// process data
data.toString()
.split('\n')
.map(line => line.split("\t"))
.forEach(record => console.info(`Record: ${record}`));
});
sort.on('exit', function (code) {
if (code) {
// handle error
}
console.log('Done');
});
// optional
sort.stderr.on('data', function (data) {
// handle error...
console.log('stderr: ' + data);
});
Hope this helps :)
EDIT: Adding some benchmark details.
Here are the results (running on a MacBook Pro):
sort1 uses a straightforward approach, sorting the records in an in-memory array.
Avg time: 35.6s (baseline)
sort2 uses sort-stream, as suggested by Joe Krill.
Avg time: 11.1m (about 18.7x slower)
(I wonder why. I didn't dig in.)
sort3 uses nedb, as suggested by Tamas Hegedus.
Time: about 16m (about 27x slower)
sort4 sorts only by executing sort -k 3,3 input.txt > out4.txt in a terminal
Avg time: 1.2s (about 30x faster)
sort5 uses sort -k3,3 and processes the output sent to stdout
Avg time: 3.65s (about 9.7x faster)
You can take advantage of streams for something like this. There are a few npm modules that will be helpful -- first install them by running
npm install sort-stream csv-parse stream-transform
from the command line.
Then:
var fs = require('fs');
var sort = require('sort-stream');
var parse = require('csv-parse');
var transform = require('stream-transform');
// Create a readable stream from the input file.
fs.createReadStream('./cross.txt')
// Use `csv-parse` to parse the input using a tab character (\t) as the
// delimiter. This produces a record for each row which is an array of
// field values.
.pipe(parse({
delimiter: '\t'
}))
// Use `sort-stream` to sort the parsed records on the third field.
.pipe(sort(function (a, b) {
return a[2].localeCompare(b[2]);
}))
// Use `stream-transform` to transform each record (an array of fields) into
// a single tab-delimited string to be output to our destination text file.
.pipe(transform(function(row) {
return row.join('\t') + '\n';
}))
// And finally, output those strings to our destination file.
.pipe(fs.createWriteStream('./cross_sorted.txt'));
You have two options, depending on how much data is being processed. (1M record count with 3 columns doesn't say much about the amount of actual data)
Load the data in memory, sort in place
var lines = [];
rl.on('line', function(line) {
lines.push(line.split("\t").reverse());
});
rl.on('close', function() {
lines.sort(function(a, b) { return compare(a[0], b[0]); });
// write however you want
fs.writeFileSync(
fileName,
lines.map(function(x) { return x.join("\t"); }).join("\n")
);
function compare(a, b) {
if (a < b) return -1;
if (a > b) return 1;
return 0;
}
});
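One caveat with the compare function above: it is a plain string comparison, so "10" sorts before "9". If field3 is numeric you probably want something like this instead (a hypothetical variant, assuming the field parses as a number):
function compareNumeric(a, b) {
  return parseFloat(a) - parseFloat(b);
}
// then: lines.sort(function(a, b) { return compareNumeric(a[0], b[0]); });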
Load the data in a persistent database, read ordered
Using a database engine of your choice (for example nedb, a pure javascript db for nodejs)
EDIT: It seems that NeDB keeps the whole database in memory; the file is only a persistent copy of the data. We'll have to search for another implementation. TingoDB looks promising.
// This code is only to give an idea, not tested in any way
var Datastore = require('nedb');
var db = new Datastore({
filename: 'path/to/temp/datafile',
autoload: true
});
rl.on('line', function(line) {
var tmp = line.split("\t").reverse();
db.insert({
field0: tmp[0],
field1: tmp[1],
field2: tmp[2]
});
});
rl.on('close', function() {
var cursor = db.find({})
.sort({ field0: 1 }); // sort by field0, ascending
var PAGE_SIZE = 1000;
paginate(0);
function paginate(i) {
cursor.skip(i).limit(PAGE_SIZE).exec(function(err, docs) {
// handle errors
var tmp = docs.map(function(o) {
return o.field0 + "\t" + o.field1 + "\t" + o.field2 + "\n";
});
fs.appendFileSync("op_rev.txt", tmp.join(""));
if (docs.length >= PAGE_SIZE) {
paginate(i + PAGE_SIZE);
} else {
// cleanup temp database
}
});
}
});
I had quite a similar issue and needed to perform an external sort.
I figured out, after wasting some time on it, that I could load the data into a database and then query the desired data back out of it.
It doesn't even matter if the inserts aren't ordered, as long as the query result is.
Hope it can work for you too.
For inserting your data into a database, there are plenty of tools in Node to perform such a task. I have a pet project which does a similar job.
I'm also sure that if you search the subject, you'll find much more info.
Good luck.
I'm working with a large dataset, and my Mongo queries need to be efficient. The application uses the Ford-Fulkerson algorithm to calculate recommendations and runs in polynomial time, so efficiency is extremely important. The syntax is ES6, but everything is basically the same.
This is an approximation of the data I'm working with. An array of items and one item being matched up against the other items:
let items = ["pen", "marker", "crayon", "pencil"];
let match = "sharpie";
Eventually, we will iterate over the items and increase the weight of each pairing with match by 1. So, after going through the function, my ideal data looks like this:
{
sharpie: {
pen: 1,
marker: 1,
crayon: 1,
pencil: 1
}
}
To further elaborate, the value next to each key is the weight of that relationship, which is to say, the number of times those items have been paired together. What I would like to have happen is something like this:
// For each in the items array, check to see if the pairing already
// exists. If it does, increment. If it does not, create it.
_.each(items, function(item, i) {
Database.upsert({ match: { $exist: true }}, { match: { $inc: { item: 1 } } });
})
The problem, of course, is that Mongo does not allow bracket notation, nor does it allow for variable names as keys (match). The other problem, as I've learned, is that Mongo also has problems with deeply nested $inc operators ('The dollar ($) prefixed field \'$inc\' in \'3LhmpJMe9Es6r5HLs.$inc\' is not valid for storage.' }).
Is there anything I can do to make this in as few queries as possible? I'm open to suggestions.
EDIT
I attempted to create objects to pass into the Mongo query:
_.each(items, function(item, i) {
let selector = {};
selector[match] = {};
selector[match][item] = {};
let modifier = {};
modifier[match] = {};
modifier[match]["$inc"] = {};
modifier[match]["$inc"][item] = 1
Database.upsert(selector, modifier);
Unfortunately, it still doesn't work. The $inc breaks the query and it won't let me go more than 1 level deep to change anything.
Solution
This is the function I ended up implementing. It works like a charm! Thanks Matt.
_.each(items, function(item, i) {
let incMod = {$inc:{}};
let matchMod = {$inc:{}};
matchMod.$inc[match] = 1;
incMod.$inc[item] = 1;
Database.upsert({node: item}, matchMod);
Database.upsert({node: match}, incMod);
});
I think the trouble comes from your ER model. A sharpie isn't a standalone entity; a sharpie is an item. The relationship between 1 item and other items is such that 1 item has many items (1:M recursive), and each item-pairing has a weight.
Fully normalized, you'd have an items table & a weights table. The items table would have the items. The weights table would have something like item1, item2, weight (in doing so, you can have asymmetrical weighting, e.g. sharpie:pencil = 1, pencil:sharpie = .5, which is useful when calculating pushback in the FFA, but I don't think that applies in your case).
Great, now let's mongotize it.
When we say 1 item has many items, that "many" is probably not going to exceed a few thousand (think 16MB document cap). That means it's actually 1-to-few, which means we can nest the data, either using subdocs or fields.
So, let's check out that schema!
doc =
{
_id: "sharpie",
crayon: 1,
pencil: 1
}
What do we see? sharpie isn't a key, it's a value. This makes everything easy. We leave the items as fields. The reason we don't use an array of objects is because this is faster & cleaner (no need to iterate over the array to find the matching _id).
var match = "sharpie";
var items = ["pen", "marker", "crayon", "pencil"];
var incMod = {$inc:{}};
var matchMod = {$inc:{}};
matchMod.$inc[match] = 1;
for (var i = 0; i < items.length; i++) {
Collection.upsert({_id: items[i]}, matchMod);
incMod.$inc[items[i]] = 1;
}
Collection.upsert({_id: match}, incMod);
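If you want to sanity-check the result, reading the weights back for one item is just a lookup by _id. A small sketch, assuming the same Collection object as above and the items/match values from the question:
Collection.findOne({_id: "sharpie"});
// => { _id: "sharpie", pen: 1, marker: 1, crayon: 1, pencil: 1 }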
That's the easy part. The hard part is figuring out why you want to use an FFA for a suggestion engine :-P.
I'm trying to implement the answer to this question about monitoring a Windows filesystem asynchronously. I'm using js-ctypes within a ChromeWorker as part of a XULRunner application, but I assume this would be the same if I implemented it as a Firefox add-on.
As part of the task, I have tried to declare the function ReadDirectoryChangesW as follows (based on my limited knowledge of js ctypes and the MSDN documentation).
const BOOL = ctypes.bool;
const DWORD = ctypes.uint32_t;
const LPDWORD = ctypes.uint32_t.ptr;
const HANDLE = ctypes.int32_t;
const LPVOID = ctypes.voidptr_t;
var library = self.library = ctypes.open("Kernel32.dll");
ReadDirectoryChangesW = library.declare(
"ReadDirectoryChangesW"
, ctypes.winapi_abi
, BOOL // return type
, HANDLE // hDirectory
, LPVOID // lpBuffer
, DWORD // nBufferLength
, BOOL // bWatchSubtree
, DWORD // dwNotifyFilter
, LPDWORD // lpBytesReturned
);
In addition (not featured here), I have declared function mappings for FindFirstChangeNotification() and WaitForSingleObject() which seem to work fine.
The problem I have is that when a filesystem event occurs, I have no idea what I'm supposed to pass in to the lpBuffer argument, or how to interpret the result.
All of the C++ examples seem to use a DWORD array and then cast out the results. My attempt at that is as follows:
const DWORD_ARRAY = new ctypes.ArrayType(DWORD);
var lBuffer = new DWORD_ARRAY(4000);
var lBufferSize = DWORD.size * 4000;
var lBytesOut = new LPDWORD();
ReadDirectoryChangesW(lHandle, lBuffer.address(), lBufferSize, true, WATCH_ALL, lBytesOut)
This seems to just crash XULRunner every time.
Can anyone suggest what I should pass in for the lpBuffer argument and/or how to get results back from ReadDirectoryChangesW()? All I can find online is C++ examples and they're not a lot of help. Thanks.
Here is a cleaner solution: read the comments, lots of learning there. For type definitions, see here.
var path = OS.Constants.Path.desktopDir; // path to monitor
var hDirectory = ostypes.API('CreateFile')(path, ostypes.CONST.FILE_LIST_DIRECTORY | ostypes.CONST.GENERIC_READ, ostypes.CONST.FILE_SHARE_READ | ostypes.CONST.FILE_SHARE_WRITE, null, ostypes.CONST.OPEN_EXISTING, ostypes.CONST.FILE_FLAG_BACKUP_SEMANTICS | ostypes.CONST.FILE_FLAG_OVERLAPPED, null);
console.info('hDirectory:', hDirectory.toString(), uneval(hDirectory));
if (ctypes.winLastError != 0) { // was: cutils.jscEqual(hDirectory, ostypes.CONST.INVALID_HANDLE_VALUE) -- commented that out because hDirectory is returned as `ctypes.voidptr_t(ctypes.UInt64("0xb18"))` and I don't know what it will be when it returns -1; the returned value, when put through jscEqual, gives `"breaking as no targetType.size on obj level:" "ctypes.voidptr_t(ctypes.UInt64("0xb18"))"`
console.error('Failed hDirectory, winLastError:', ctypes.winLastError);
throw new Error({
name: 'os-api-error',
message: 'Failed to CreateFile',
});
}
var dummyForSize = ostypes.TYPE.FILE_NOTIFY_INFORMATION.array(1)(); // accept a max of 1 notification at a time (in a real application you should set this higher, like 50, as it is very possible for more than 1 notification to be reported in one call to ReadDirectoryChangesW)
console.log('dummyForSize.constructor.size:', dummyForSize.constructor.size);
console.log('ostypes.TYPE.DWORD.size:', ostypes.TYPE.DWORD.size);
var dummyForSize_DIVIDED_BY_DwordSize = dummyForSize.constructor.size / ostypes.TYPE.DWORD.size;
console.log('dummyForSize.constructor.size / ostypes.TYPE.DWORD.size:', dummyForSize_DIVIDED_BY_DwordSize, Math.ceil(dummyForSize_DIVIDED_BY_DwordSize)); // should be whole int but lets round up with Math.ceil just in case
var temp_buffer = ostypes.TYPE.DWORD.array(Math.ceil(dummyForSize_DIVIDED_BY_DwordSize))();
var temp_buffer_size = temp_buffer.constructor.size; // obeys length of .array
console.info('temp_buffer.constructor.size:', temp_buffer.constructor.size); // will be Math.ceil(dummyForSize_DIVIDED_BY_DwordSize)
var bytes_returned = ostypes.TYPE.DWORD();
var changes_to_watch = ostypes.CONST.FILE_NOTIFY_CHANGE_LAST_WRITE | ostypes.CONST.FILE_NOTIFY_CHANGE_FILE_NAME | ostypes.CONST.FILE_NOTIFY_CHANGE_DIR_NAME; //ostypes.TYPE.DWORD(ostypes.CONST.FILE_NOTIFY_CHANGE_LAST_WRITE | ostypes.CONST.FILE_NOTIFY_CHANGE_FILE_NAME | ostypes.CONST.FILE_NOTIFY_CHANGE_DIR_NAME);
console.error('start hang');
var rez_RDC = ostypes.API('ReadDirectoryChanges')(hDirectory, temp_buffer.address(), temp_buffer_size, true, changes_to_watch, bytes_returned.address(), null, null);
var cntNotifications = 0;
var cOffset = 0;
while (cOffset < bytes_returned.value) {
cntNotifications++;
var cNotif = ctypes.cast(temp_buffer.addressOfElement(cOffset), ostypes.TYPE.FILE_NOTIFY_INFORMATION.ptr).contents; // cannot use `temp_buffer[cOffset]` here, as that is equivalent to `temp_buffer.addressOfElement(cOffset).contents`, and cast needs a ptr
console.info('cNotif:', cNotif.toString());
cOffset += cNotif.NextEntryOffset; // same as doing cNotif.getAddressOfField('NextEntryOffset').contents -- also note that the .contents getter returns a primitive value, so a DWORD defined as ctypes.unsigned_long will not come back as the ctypes.UInt64 you might expect; it is a primitive (due to the .contents getter), so there is no need for the usual `var blah = ctypes.unsigned_long(10); var number = blah.value.toString();` dance
}
console.info('total notifications:', cntNotifications);
I'm working on getting the async version working but having a tricky time.
Here's what I learned while working on doing the same; still in progress:
You have to create a buffer of DWORDs, i.e. var buf = ctypes.ArrayType(DWORD, BUFSIZE), as it needs to be aligned on a DWORD boundary (whatever that means).
I don't know exactly what BUFSIZE should be, but I have seen 2048 and 4096, and also 1024*64; no idea why.
Then, after successfully running ReadDirectoryChangesW, cast this buffer to FILE_NOTIFY_INFORMATION and read its contents (a sketch of that struct follows below).
Pass null to the final 2 arguments only if you don't want async; we want async, so we are going to use the LPOVERLAPPED struct and pass it there.
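For reference, a FILE_NOTIFY_INFORMATION definition in js-ctypes might look roughly like this. This is only a sketch: the field widths follow the MSDN layout, and the WCHAR[1] tail is variable-length in practice, so the file name is always read via FileNameLength:
var FILE_NOTIFY_INFORMATION = ctypes.StructType("FILE_NOTIFY_INFORMATION", [
  { NextEntryOffset: ctypes.uint32_t },               // DWORD: byte offset to the next record, 0 = last
  { Action: ctypes.uint32_t },                        // DWORD: FILE_ACTION_* code
  { FileNameLength: ctypes.uint32_t },                // DWORD: length of FileName in bytes
  { FileName: ctypes.ArrayType(ctypes.char16_t, 1) }  // WCHAR[1]: actually variable length
]);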
Edit
Here's the solution for sync: it successfully reads one event. If you have more, you have to move over in temp_buff by NextEntryOffset and cast again, see here. Install that addon and make a new folder on your desktop or something, and it will log to the browser console.
I'm working on the async version, having some trouble with that.
I am actually new to both the SpiderMonkey API and this mailing list. I was trying to create an array like objectA.arrayA, and the callback code goes like this:
char *value[] = {"abc", "xyz", "efg"};
int count = 0;
JSObject* val = JS_NewArrayObject(pContext, 0, NULL);
JSString* jstr;
while(count < 3) {
jstr = JS_NewStringCopyZ(pContext, value[count]);
JS_DefineElement(pContext, val, count++, STRING_TO_JSVAL(jstr),
NULL, NULL, JSPROP_ENUMERATE | JSPROP_READONLY | JSPROP_PERMANENT);
}
vJs->DefineProperty(pObject, "arrayA", OBJECT_TO_JSVAL(val));
I am getting the proper value for objectA.arrayA, but when I do objectA.arrayA.length, it says arrayA does not have any property. Can you tell me what I am doing wrong? I am facing the same issue even when I am creating a string.
Your first apparent problem is:
JS_NewArrayObject(pContext, 0, NULL);
Where you have ZERO should be the desired length of your array.
It is pretty apparent to me that you don't know how to use the API. I believe the documentation relevant to your question can be found at:
https://developer.mozilla.org/en/SpiderMonkey/JSAPI_Reference/JS_NewArrayObject
https://developer.mozilla.org/en/SpiderMonkey/JSAPI_Reference/JS_DefineProperty
https://developer.mozilla.org/en/SpiderMonkey/JSAPI_Reference/JS_DefineElement
and:
https://developer.mozilla.org/en/SpiderMonkey/JSAPI_Reference/JSClass.addProperty
https://developer.mozilla.org/en/SpiderMonkey/JSAPI_Reference/JS_PropertyStub
These five pages have all the info you should need to crack the code.