I have some inline JavaScript containing large datasets which are hard-coded into my PHP site:
var statsData = {
"times" : [1369008000,1369094400,1369180800,],
"counts" : [49,479,516,]
};
I'd like to refactor my code so that my variables are served with this structure:
[
[1369008000, 49],
[1369094400, 479],
[1369180800, 516],
]
However, I have many files to update - are there any tools that would help automate this process?
Just create a new array, then loop through the original one and place the values according to their indexes:
var statsData = {"times":[1369008000,1369094400,1369180800,],"counts":[49,479,516,]};
var result = []; // Create a new array for the results.
for (var i = 0; i < statsData.times.length; ++i) { // Loop over the original object's times from 0 to its length.
  result.push([statsData.times[i], statsData.counts[i]]); // Push a [time, count] pair built from the values at index i.
}
console.log(result);
console.log(result);
Also, in your code you have two opening [ in your object's counts attribute but only one ] closing it.
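For what it's worth, the same reshaping can be written more compactly with Array.prototype.map - a minimal sketch, assuming times and counts are always the same length as in your example:
var result = statsData.times.map(function(t, i) {
  // Pair each timestamp with the count at the same index.
  return [t, statsData.counts[i]];
});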
Carrying on from the comments: trying to parse JS from a mix of PHP/HTML is horrible, so if you are prepared to do some copying and pasting then - if it were me - I'd opt for a simple command-line tool. As your JavaScript won't validate as JSON, it doesn't make much sense to try to parse it in any other language.
I've knocked up a quick script to work with your current example (I'll leave it up to you to extend it further as needed). To run it you will need to install Node.js.
Next, save the following wherever you like to organise your files - let's call it statsData.js:
process.stdin.resume();
process.stdin.setEncoding('utf8');

process.stdin.on('data', function(data){
  try {
    // Evaluate the pasted statement, then expose statsData globally for processData().
    eval(data+';global.data=statsData');
    processData();
  } catch(e) {
    process.stdout.write('Error: Invalid Javascript\n');
  }
});
function processData(){
  try {
    var i, out = [];
    // Shift each timestamp off times and pair it with the matching count (0 if missing).
    while(i = data.times.shift())
      out.push([i, data.counts.shift()||0]);
    process.stdout.write('var statsData='+JSON.stringify(out)+';\n');
  } catch(e) {
    process.stdout.write('Error: Unexpected Javascript\n');
  }
}
Now you have a CLI tool that works with standard I/O. To use it, open a terminal window and run:
$ node path/to/statsData.js
It will then sit and wait for you to copy and paste valid JavaScript statements into the terminal. Alternatively, you could pipe the input stream from a file where you have copied and pasted your JS:
$ cat inputFile.js | node path/to/statsData.js > outputFile.js
cat is a Unix command - if you are working on a Windows machine I think the equivalent is type - but I'm unable to test that right now.
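For the example data in the question, outputFile.js would then contain a single line, produced by the JSON.stringify call above:
var statsData=[[1369008000,49],[1369094400,479],[1369180800,516]];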
Related
First of all, yes, I know this question has been asked before, but I still cannot figure out how to make it work. I believe the problem is that I am running the files individually through node.js in my Mac terminal, sorta like applications.
Here is the deal. I have one file, bitt1.js, that contains var mid = 293.03;.
In my other file, otherFile.js, I have an if/else statement that depends on the variable mid (which is in bitt1.js):
if (mid <= 290) {
  trade = true;
} else {
  trade = false;
}
The issue is that, in the terminal, I run bitt1.js first and otherFile.js afterwards. Because of this, otherFile.js can't receive the mid variable from bitt1.js, and it comes up as undefined.
How can I solve this issue? I've only found solutions for HTML pages and the like, where the variables are always "available".
I'm new to JS and this whole thing, so some of the stuff I said may be incorrect... and I could also just be being dumb and the answer is obvious, but please help me out... I've thought about creating a JSON file and writing/reading data from it in the two other files, but I feel there's a better way...
Thanks!
Developer NodeJS's code works if you don't want to modify the value of the variable - if you just want to share the initial value of the variable, it works perfectly.
But if you intend to mutate the value of mid during runtime execution of bitt1.js and want to use that value, perhaps you can use a Unix pipe to plug its value into the stdin of bitt1.js.
E.g.
// bitt1.js
var mid = 299;
console.log("<mid>%d</mid>", mid); // this is piped to stdin
// otherFile.js
var stdin = process.openStdin();
var data = "";

stdin.on('data', function(chunk) {
  data += chunk;
});

stdin.on('end', function() {
  var match = data.match(/<mid>([\s\S]+)<\/mid>/i); // keep the match result (the original discarded it)
  var mid = +match[1];
  console.log(mid);
});
Then running:
node bitt1.js | node otherFile.js
would print 299 from within otherFile.js.
This is a rough solution though: it still needs an undefined check on the match result, and of course piping doesn't allow you to print anything directly to the console from bitt1.js - you'd have to reprint everything in otherFile.js, which leads to duplicate code.
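For example, the end handler could guard against a missing match - a minimal sketch:
stdin.on('end', function() {
  var match = data.match(/<mid>([\s\S]+)<\/mid>/i);
  if (match) { // only parse the value if the <mid> tag was actually found
    console.log(+match[1]);
  }
});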
But it could be a solution that works for you; it all depends on your requirements! Hope this helps.
Node.js allows imports and exports between files.
Say bitt1.js has:
var mid = 299
console.log(mid)
// Here is where you export the desired value
//export default mid
module.exports.mid = mid
Then, in your otherFile.js
// you import the value from bitt1.js
var mid = require('./bitt1').mid // grab the exported property, not the whole module object
console.log(mid) // Outputs 299
That's it.
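If you end up exporting more values later, destructuring keeps the call site terse - for example:
var { mid } = require('./bitt1') // same as require('./bitt1').mid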
Edit: updated answer
So, I'm a big fan of creating global namespaces in JavaScript. If my app is named Xyz, I normally have an object XYZ which I fill with properties and nested objects, for example:
XYZ.Resources.ErrorMessage // = "An error while making request, please try again"
XYZ.DAL.City // = { getAll: function() { ... }, getById: function(id) { .. } }
XYZ.ViewModels.City // = { .... }
XYZ.Models.City // = { .... }
I sort of picked this up while working on a project with Knockout, and I really like it because there are no wild references to objects declared god-knows-where. Everything is in one place.
Now, this is OK for the front-end; however, I'm currently developing a basic skeleton for a project which will start in a month, and it uses Node.
What I wanted was, instead of all the requires in .js files, I'd have a single object ('XYZ') which would hold all requires in one place. For example:
Instead of:
// route.js file
var cityModel = require('./models/city');
var cityService = require('./services/city');
app.get('/city', function() { ...........});
I would make an object:
XYZ.Models.City = require('./models/city');
XYZ.DAL.City = require('./services/city');
And use it like:
// route.js file
var cityModel = XYZ.Models.City;
var cityService = XYZ.DAL.City;
app.get('/city', function() { ...........});
I don't really have in-depth knowledge, but all requires get cached and, once cached, are served from memory, so re-requiring the same module in multiple files isn't a problem.
Is this an ok workflow, or should I just stick to the standard procedure of referencing dependencies?
edit: I forgot to ask: would this sort-of-factory pattern block the main thread or delay the start of the server? I just need to know what the downsides are... I don't mind the requires in code, but I just renamed a single folder and had to go through five files to change the paths... which is really inconvenient.
I think that's a bad idea, because you are going to load a ton of modules every single time, and you may not always need them. Your namespaced object will get quite monstrous. require will check the module cache first, so I'd use standard requires for each request / script that you need on the server.
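To see that caching in action, note that requiring the same module twice hands back the very same object - a small illustration, using the question's ./models/city path as the example:
var a = require('./models/city');
var b = require('./models/city');
console.log(a === b); // true - the second require is served from require.cache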
How do I get the file name of the currently executed spec?
For example:
I run: protractor conf.js --specs ./spec/first_spec.js,./spec/second_spec.js
so I want to retrieve the array ['first_spec','second_spec'], because I want to show it in a report.html. Is this a good way of thinking, or is there a built-in function for the file names of the latest run? I'm new to protractor and angular, and I have only found a way to extract each individual describe, which doesn't really help me. I'm building this on top of protractor-angular-screenshot-reporter.
This is one way of doing it: read the test cases passed in the CLI arguments and use them however you need.
onPrepare: function() {
  var testCaseArr;
  for (var i = 0; i < process.argv.length; i++) {
    var argument = process.argv[i];
    // if "specs" is found we know that the immediately following argument is the comma-separated list of spec files
    if (argument.indexOf('specs') > 0) {
      testCaseArr = process.argv[i + 1].split(',');
      break;
    }
  }
  // For the example command this logs ['./spec/first_spec.js', './spec/second_spec.js']
  console.log(testCaseArr);
},
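To get the bare names the question asks for (['first_spec','second_spec']), you could additionally strip the directory and extension - a sketch, assuming the paths look like the ones in the question:
var specNames = testCaseArr.map(function(p) {
  // './spec/first_spec.js' -> 'first_spec'
  return p.split('/').pop().replace(/\.js$/, '');
});
console.log(specNames); // ['first_spec', 'second_spec']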
Please refer to my blog post for more details.
I have a binary application which generates a continuous stream of JSON objects (not an array of JSON objects). A JSON object can sometimes span multiple lines (still being a valid JSON object, just prettified).
I can connect to this stream and read it without problems, like so:
var child = require('child_process').spawn('binary', ['arg','arg']);
child.stdout.on('data', data => {
console.log(data);
});
Streams are buffers and emit data events whenever they please, so I played with the readline module to parse the buffers into lines; this works (I'm able to JSON.parse() the line) for JSON objects which don't span multiple lines.
The optimal solution would be to listen for events which return a single JSON object, something like:
child.on('json', object => {
});
I have noticed the objectMode option in the Node streams documentation, however I'm getting a stream in Buffer format, so I believe I'm unable to use it.
I had a look on npm at pixl-json-stream and json-stream, but in my opinion neither fits the purpose. There is clarinet-object-stream, but it would require building the JSON object from the ground up based on the events.
I'm not in control of the JSON object stream; most of the time one object is on one line, but 10-20% of the time a JSON object spans multiple lines (\n as EOL) without a separator between objects. Each new object always starts on a new line.
Sample stream:
{ "a": "a", "b":"b" }
{ "a": "x",
"b": "y", "c": "z"
}
{ "a": "a", "b":"b" }
There must be a solution already; I'm just missing something obvious. I'd rather find an appropriate module than hack together a stream parser with regexps to handle this scenario.
I'd recommend trying to parse every line:
const readline = require('readline');

const rl = readline.createInterface({
  input: child.stdout
});

var tmp = '';
rl.on('line', function(line) {
  tmp += line;
  try {
    var obj = JSON.parse(tmp);
    child.emit('json', obj);
    tmp = '';
  } catch(_) {
    // JSON.parse may fail if JSON is not complete yet
  }
});

child.on('json', function(obj) {
  console.log(obj);
});
As the child is an EventEmitter, one can just call child.emit('json', obj).
Having the same requirement, I was uncomfortable enforcing a requirement for newlines to support readline, I needed to be able to handle starting the read in the middle of a stream (possibly in the middle of a JSON document), and I didn't like constantly parsing and checking for errors (it seemed inefficient).
As such, I preferred to use the clarinet sax parser, collecting the documents as I went and emitting doc events once whole JSON documents had been parsed.
I just published this class to NPM
https://www.npmjs.com/package/json-doc-stream
I noticed that if I execute a JavaScript script using the mongo command, the script can treat a cursor object as if it were an array.
var conn = new Mongo('localhost:27017');
var db = conn.getDB('learn');
db.test.remove({});
db.test.insert({foo: 'bar'});
var cur = db.test.find();
print(cur[0].foo); //prints: bar
print(cur[1]); // prints: undefined
This seems like it should be beyond the capabilities of the JavaScript language, since there is no way to "overload the subscript operator". So how does this actually work?
As the documentation says, it is a special ability of the shell. It automagically converts cursor[0] to cursor.toArray()[0]. You can prove it by overriding toArray() with a print function, or with new Error().stack to get the call stack back. Here it is:
at DBQuery.a.toArray ((shell):1:32)
at DBQuery.arrayAccess (src/mongo/shell/query.js:290:17)
at (shell):1:2
As you can see, indexing calls arrayAccess. How? Here we have the dbQueryIndexAccess function, which calls arrayAccess:
v8::Handle<v8::Value> arrayAccess = info.This()->GetPrototype()->ToObject()->Get(
    v8::String::New("arrayAccess"));
...
v8::Handle<v8::Function> f = arrayAccess.As<v8::Function>();
...
return f->Call(info.This(), 1, argv);
And here we have the code which sets the indexed property handler to this function. WOW, the v8 API gives us the ability to add such a handler!
DBQueryFT()->InstanceTemplate()->SetIndexedPropertyHandler(dbQueryIndexAccess);
... and injects it into the JS cursor class, which is originally defined in JS.
injectV8Function("DBQuery", DBQueryFT(), _global);
Tl;dr: it is hacked into the C++ source code of the mongo shell.
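As an aside, in modern plain JavaScript you could emulate the same trick with a Proxy. This is not how the mongo shell does it (its hook predates Proxy and lives in C++ as shown above); it is just a sketch of the equivalent idea:
function wrapCursor(cursor) {
  return new Proxy(cursor, {
    get: function(target, prop, receiver) {
      // Intercept numeric property access, like the shell's arrayAccess hook.
      if (typeof prop === 'string' && /^\d+$/.test(prop)) {
        return target.toArray()[Number(prop)];
      }
      return Reflect.get(target, prop, receiver);
    }
  });
}
// var cur = wrapCursor(db.test.find());
// cur[0] now routes through toArray(), just like in the mongo shell.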