Using ssh fingerprint in ssh2-sftp-client - javascript

OK, this might need a little bit of explanation up front.
I am currently working on an automation project using node-red.
I want to upload and download files from a remote server using ssh. For this task I use a node-red package called node-red-contrib-sftpc. I rewrote the library a little bit, so that I can hand over credentials for the sftp connection via the payload which is passed to the node.
To establish a connection, the sftp.connect method of the ssh2-sftp-client is used:
await sftp.connect({
  host: node.server.host,
  port: node.server.port,
  username: node.server.username,
  password: node.server.password
});
In the documentation you can find that connect can also be provided with the parameters hostHash and hostVerifier. The documentation of the ssh2 module, on which ssh2-sftp-client is based, states:
hostHash - string - Any valid hash algorithm supported by node. The host's key is hashed using this algorithm and passed to the hostVerifier function as a hex string. Default: (none)
hostVerifier - function - Function with parameters (hashedKey[, callback]) where hashedKey is a string hex hash of the host's key for verification purposes. Return true to continue with the handshake or false to reject and disconnect, or call callback() with true or false if you need to perform asynchronous verification. Default: (auto-accept if hostVerifier is not set)
So here is my problem: how do I write the hostVerifier function? I want to pass in hashedKey and also the fingerprint, so that I can return true or false depending on whether the handshake should go ahead or not.
I want to check whether the given server key fingerprint is the "right" one, i.e. that I am connecting to the correct server.
As far as I understood, the second parameter will be a callback function, but I do not know how to use it so that it verifies the handshake.
This was my attempt, or at least how I tried to do it:
node.server.hostVerifier = function (hashedKey, (hashedKey, msg.fingerprint)=> {
if (hashedKey = msg.fingerprint) return true;
else return false
}){};
await sftp.connect({
  host: node.server.host,
  port: node.server.port,
  username: node.server.username,
  password: node.server.password,
  hostHash: 'someHashAlgo',
  hostVerifier: node.server.hostVerifier,
});
I know that this is completely wrong, but I am about to go crazy, because I have no idea how to properly check the ssh host key fingerprint.

So I found a solution myself and want to share it with you.
I defined an arrow function as the hostVerifier function, which implicitly takes the value of the fingerprint through the msg.fingerprint variable. I only do this if node.server.fingerprint has a value, so if I do not have a fingerprint at hand, the connection will still be established.
node.server.fingerprint = msg.fingerprint;
if (!!node.server.fingerprint) {
  node.server.hostHash = 'md5';
  node.server.hostVerifier = (hashedKey) => {
    return (hashedKey === msg.fingerprint);
  };
  node.server.algorithms = { serverHostKey: ['ssh-rsa'] };
}
For that I also declare node.server.algorithms. Getting that right took a little bit of trial and error.
So I put everything together here:
await sftp.connect({
  host: node.server.host,
  port: node.server.port,
  username: node.server.username,
  password: node.server.password,
  hostHash: node.server.hostHash,
  hostVerifier: node.server.hostVerifier,
  algorithms: node.server.algorithms,
});
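For completeness, the ssh2 documentation quoted above also allows asynchronous verification via the optional callback parameter. A minimal sketch of that form, assuming hostHash is still set to 'md5' and msg.fingerprint holds the expected hash as a hex string:

node.server.hostVerifier = (hashedKey, callback) => {
  // look the expected fingerprint up (here it is already in msg.fingerprint),
  // then report the result asynchronously instead of returning a boolean
  setImmediate(() => callback(hashedKey === msg.fingerprint));
};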

Related

Is it possible to link a random html site with node javascript?

Is it possible to link a random site with node.js? When I say that, I mean: is it possible to link it with only a URL? If not, then I'm guessing it means having the file.html inside the javascript directory. I really wanna know if it's possible, because the HTML is not mine and I can't add the line of code to link it with JS that goes something like (not 100% sure) <src = file.html>.
I tried doing document = require('./page.html'); and ('./page'), but it didn't work, and when I removed the .html at the end of require it would say module not found.
My key point is that the site shows the player count on some servers, and I wanna get that number by linking it with JS and then use it in some code I already have (tested in the inspect element console), but I don't know how to link it properly to JS.
If you wanna take a look at the site here it is: https://portal.srbultras.info/#servers
If you have any ideas how to link a stranger's HTML with JS, I'd really appreciate hearing them!
You cannot require HTML files unless you use something like Webpack with html-loader, but even in this case you can only require local files. What you can do, however, is to send an HTTP Request to the website. This way you get the same HTML your browser receives whenever you open a webpage. After that you will have to parse the HTML in order to get the data you need. The jsdom package can be used for both steps:
const { JSDOM } = require('jsdom');

JSDOM.fromURL('https://portal.srbultras.info/')
  .then(({ window: { document } }) => {
    const servers = Array.from(
      document.querySelectorAll('#servers tbody>tr')
    ).map(({ children }) => {
      const name = children[3].textContent;
      const [ip, port] = children[4]
        .firstElementChild
        .textContent
        .split(':');
      const [playersnum, maxplayers] = children[5]
        .lastChild
        .textContent
        .split('/')
        .map(n => Number.parseInt(n));
      return { name, ip, port, playersnum, maxplayers };
    });
    console.log(servers);
    /* Your code here */
  });
However, grabbing the server information from a random website is not really what you want to do, because there is a way to get it directly from the servers. Counter Strike 1.6 servers seem to use the GoldSrc / Source Server Protocol that lets us retrieve information about the servers. You can read more about the protocol here, but we are just going to use the source-server-query package to send queries:
const query = require('source-server-query');

const servers = [
  { ip: '51.195.60.135', port: 27015 },
  { ip: '51.195.60.135', port: 27017 },
  { ip: '185.119.89.86', port: 27021 },
  { ip: '178.32.137.193', port: 27500 },
  { ip: '51.195.60.135', port: 27018 },
  { ip: '51.195.60.135', port: 27016 }
];
const timeout = 5000;

Promise.all(servers.map(server => {
  return query
    .info(server.ip, server.port, timeout)
    .then(info => Object.assign(server, info))
    .catch(console.error);
})).then(() => {
  query.destroy();
  console.log(servers);
  /* Your code here */
});
Update
servers is just a normal JavaScript array consisting of objects that describe servers, and you can see its structure when it is logged into the console after the information has been received, so it should not be hard to work with. For example, you can access the playersnum property of the third server in the list by writing servers[2].playersnum. Or you can loop through all the servers and do something with each of them by using functions like map and forEach, or just a normal for loop.
But note that in order to use the data you get from the servers, you have to put your code in the callback function passed to the then method of Promise.all(...), i.e. where console.log(servers) is located. This has to do with the fact that it takes some time to get the responses from the servers, and for that reason server queries are normally asynchronous, meaning that the script continues execution even though it has not received the responses yet. So if you try to access the information in the global scope instead of the callback function, it is not going to be there just yet. You should read about JavaScript Promises if you want to understand how this works.
Another thing you may want to do is to filter out the servers that did not respond to the query. This can happen if a server is offline, for example. In the solution I have provided, such servers are still in the servers array, but they only have the ip and port properties they had originally. You could use filter in order to get rid of them. Do you see how? Tell me if you still need help.
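If it helps, here is one way to do that filtering, sticking to the code above: since unresponsive servers never get extra fields assigned by Object.assign, you can keep only the entries that gained properties beyond ip and port. A sketch, to be placed inside the final .then() callback:

const onlineServers = servers.filter(
  server => Object.keys(server).length > 2
);
console.log(onlineServers);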

How to block an incoming socket connection if the client already has one

So I noticed that you can run 'io()' in the console on the client side.
I'm worried that if someone were to loop it, it would crash the node.js server.
Does anybody know how to prevent multiple connections from the same user?
It is a fairly complicated process to do that properly.
But on that same note, people won't be able to crash your server with socket.io as easily as you might think.
Node.js can handle a ton of connections at once, and the same goes for socket.io. Obviously both depend on what your server actually is; but even a Raspberry Pi can handle a significant number of connections.
But if you truly must implement this, I'd recommend checking out this issue and making a counter-based dictionary of IPs, disconnecting sockets if their IP goes above a specific number (a rough sketch follows below).
Get the client's IP address in socket.io
Very crude, but it would do what you need.
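A rough sketch of that counter-based idea, assuming a standard socket.io server instance named io and using socket.handshake.address for the client IP (the limit of 5 is just a placeholder):

const MAX_CONNECTIONS_PER_IP = 5; // placeholder limit
const connectionsPerIp = {};

io.on('connection', (socket) => {
  const ip = socket.handshake.address;
  const current = connectionsPerIp[ip] || 0;

  if (current >= MAX_CONNECTIONS_PER_IP) {
    // this IP already has too many sockets open; refuse the new one
    socket.disconnect(true);
    return;
  }
  connectionsPerIp[ip] = current + 1;

  socket.on('disconnect', () => {
    connectionsPerIp[ip] -= 1;
    if (connectionsPerIp[ip] <= 0) {
      delete connectionsPerIp[ip];
    }
  });
});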
You need a helper function on the server side.
Get the user IP with this package:
npm install request-ip
Create an array of users:
let users = [];
Validate and add the user to the array on each new join request:
const requestIp = require('request-ip');

const addUser = (req) => {
  const clientIp = requestIp.getClientIp(req);
  const existingUser = users.find(user => user.clientIp === clientIp);
  if (existingUser) {
    return false;
  }
  const newUser = { clientIp };
  users.push(newUser);
  return true;
};
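A possible way to hook the helper into socket.io (assuming io is your server instance and that socket.request exposes the underlying HTTP request that request-ip can read):

io.on('connection', (socket) => {
  if (!addUser(socket.request)) {
    // this IP already has a connection; refuse the extra socket
    socket.disconnect(true);
  }
  // remember to remove the entry from `users` again on 'disconnect'
});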

How to properly configure MAIL_URL?

The following SMTP URL is giving me an error:
process.env.MAIL_URL="smtp://mail_name#outlook.com:Password#smtp.outlook.com:457";
What am I doing wrong?
For starters, your issue is that your user name (and perhaps your password) contain a character that cannot be placed in a URL as-is, and therefore needs to be encoded.
I want to take this opportunity to provide a little more in-depth answer to the issue of configuring the MAIL_URL environment variable.
If you simply need a quick string that will work, do:
process.env.MAIL_URL="smtp://"+encodeURIComponent("mail_name#outlook.com")+":"+encodeURIComponent("Password")+"#smtp.outlook.com:457";
Also take into account that you may need to use smtps for secure connection, and if it uses TLS, your connection may fail.
I recommend reading the rest if you need anything more robust.
URL
A URL has the following structure:
scheme:[//[user[:password]@]host[:port]][/path][?query][#fragment]
The scheme would be either smtp or smtps (for secure connection), and in this scenario you will also set the user, password, host and (most likely) port.
Each of the parts needs to be encoded in a way that is suitable for use in a URL, but since hosts (domains) are normally already appropriate, you only need to make sure that your user name/password are encoded.
In ECMAScript, encodeURIComponent can be used for this.
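For example, applied to the address from the question:

encodeURIComponent('mail_name@outlook.com'); // 'mail_name%40outlook.com'
encodeURIComponent('Password');              // 'Password' (nothing to escape)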
MAIL_URL and node environment variables
Meteor checks the value of process.env.MAIL_URL when sending an email.
process.env is populated by node.js with the environment variables available to it on startup.
It is possible to add properties to it at runtime, so setting process.env.MAIL_URL before sending an email will work. However, you should do so wisely to prevent your secrets from leaking.
I would suggest 2 methods for setting it up, either using settings.json or using the environment variable itself.
using settings.json
Create a json file in your project. It is highly recommended not to commit it into source control with the rest of your code.
for example: config/development/settings.json
{
  "smtp": {
    "isSecure": true,
    "userName": "your_username",
    "password": "your_password",
    "host": "smtp.gmail.com",
    "port": 465
  }
}
And somewhere in your server code:
Meteor.startup(function() {
  if (Meteor.settings && Meteor.settings.smtp) {
    const { userName, password, host, port, isSecure } = Meteor.settings.smtp;
    const scheme = isSecure ? 'smtps' : 'smtp';
    process.env.MAIL_URL = `${scheme}://${encodeURIComponent(userName)}:${encodeURIComponent(password)}@${host}:${port}`;
  }
});
Then you can run Meteor with the --settings switch.
meteor run --settings config/development/settings.json
using an environment variable
You can set the environment variable to the encoded string. If you want a utility script (for zsh on *nix) that will convert it (depends on node):
mail_url.sh
#!/bin/zsh
alias urlencode='node -e "console.log(encodeURIComponent(process.argv[1]))"'
ENC_USER=`urlencode $2`
ENC_PASS=`urlencode $3`
MAIL_URL="$1://$ENC_USER:$ENC_PASS#$4"
echo $MAIL_URL
which can be used as follows:
$ chmod +x mail_url.sh
$ MAIL_SCHEME=smtps
$ MAIL_USER=foo@bar.baz
$ MAIL_PASSWORD=p@$$w0rd
$ MAIL_HOST=smtp.gmail.com:465
$ export MAIL_URL=$(./mail_url.sh $MAIL_SCHEME $MAIL_USER $MAIL_PASSWORD $MAIL_HOST)
$ echo $MAIL_URL
smtps://foo%40bar.baz:p%4015766w0rd#smtp.gmail.com:465

running a mysql query using mysql-npm on AWS

Hi guys, I have a problem that I don't really know how to solve. It's also a bit strange :/
Basically I have created this Lambda function to connect to a MySQL DB using the node package 'mysql'.
If I run the function from the command line on my PC using the command 'sls function run function1' and make different queries, everything is fine.
But when I call the function from a web browser using the link, I have to refresh the page 2 times to get the right result, because at the first refresh the server responds with the old result.
I have noticed that from the command line I always get a different thread ID, while from the web browser it is always the same.
Also, I don't close the connection in the Lambda function code, because everything is fine if I run the function from the command line, but from the browser I can only make 2 queries and then I get a message saying that I cannot use a closed connection.
So it seems like Lambda stores the old query result when I call it from a web browser.
Obviously I'm making some stupid mistake, but I don't know how to solve it.
Does anyone have an idea?
Thanks :)
'use strict';
//npm packages
var mysql = require('mysql');
var deasync = require('deasync');

//variables
var goNext = false;   // used to synchronize deasync
var error = false;    // becomes TRUE if an error occurred during the connection to the DB
var dataColumnTable;  // the data that you extract from the query to the DB
var errorMessage;

//----------------------------------------------------------------------------------------------------------------
//always same credentials
var connection = mysql.createConnection({
  host     : 'hostAddress',
  user     : 'Puser',
  password : 'password',
  port     : '3306',
  database : 'database1',
});

//----------------------------------------------------------------------------------------------------------------
module.exports.handler = function(event, context, callback) {
  var Email = event.email;
  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', Email, function(err, rows) {
    if (err) {
      console.log("Cannot connect to DB");
      console.log(err);
      error = true;
      errorMessage = err;
    }
    else {
      console.log("data from column acquired!");
      dataColumnTable = rows;
    }
    //connection.end(function(err) {
    //  connection.destroy();
    //});
    //console.log("Connection closed!");
    goNext = true;
  });
  deasync.loopWhile(function() { return goNext != true; });
  //----------------------------------------------------------------------------------------------------------------
  if (error == true)
    return callback('Error ' + errorMessage);
  else
    return callback(null, dataColumnTable); // returns a JSON file
  // end of handler
};
Disclaimer: I'm not very familiar with AWS and/or AWS Lambda.
http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html states (emphasis mine):
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service. Requiring functions to be stateless enables AWS Lambda to launch as many copies of a function as needed to scale to the incoming rate of events and requests. These functions may not always run on the same compute instance from request to request, and a given instance of your Lambda function may be used more than once by AWS Lambda.
Opening a connection and storing it in a variable outside your handler function is state. The connection will likely be closed between requests or even before your first request. Your lambda function may be reused (hence identical thread ids).
My suggestion (and an attempt to solve this problem) would be to create the connection on every request (i.e., inside your handler) and not to expect any values to still be initialized from a previous request (except for constants, probably).
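To illustrate the idea (not tested against your setup): create the connection inside the handler, use the callback parameter Lambda passes in, and close the connection before returning, so no state survives between invocations and deasync is no longer needed:

'use strict';
var mysql = require('mysql');

module.exports.handler = function(event, context, callback) {
  // a fresh connection per invocation: no state shared between requests
  var connection = mysql.createConnection({
    host     : 'hostAddress',
    user     : 'Puser',
    password : 'password',
    port     : '3306',
    database : 'database1',
  });

  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', [event.email], function(err, rows) {
    // close the connection before handing the result back
    connection.end(function() {
      if (err) {
        return callback('Error ' + err);
      }
      return callback(null, rows);
    });
  });
};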

Mongoose difference between .save() and using update()

To modify a field in an existing entry in mongoose, what is the difference between using
model = new Model([...])
model.field = 'new value';
model.save();
and this
Model.update({[...]}, {$set: {field: 'new value'}});
The reason I'm asking this question is because of someone's suggestion to an issue I posted yesterday: NodeJS and Mongo - Unexpected behaviors when multiple users send requests simultaneously. The person suggested using update instead of save, and I'm not yet completely sure why it would make a difference.
Thanks!
Two concepts first: your application is the Client, MongoDB is the Server.
The main difference is that with .save() you already have an object in your client side code (or had to retrieve the data from the server first), and you are writing back the whole thing.
On the other hand, .update() does not require the data to be loaded to the client from the server. All of the interaction happens server side, without retrieving data to the client. So .update() can be very efficient when you are adding content to existing documents.
In addition, there is the multi parameter to .update() that allows the actions to be performed on more than one document that matches the query condition.
There are some things in convenience methods that you lose when using .update() as a call, but the benefits for certain operations is the "trade-off" you have to bear. For more information on this, and the options available, see the documentation.
In short .save() is a client side interface, .update() is server side.
Some differences:
As noted elsewhere, update is more efficient than find followed by save because it avoids loading the whole document.
A Mongoose update translates into a MongoDB update but a Mongoose save is converted into either a MongoDB insert (for a new document) or an update.
It's important to note that on save, Mongoose internally diffs the document and only sends the fields that have actually changed. This is good for atomicity.
By default validation is not run on update but it can be enabled (see the sketch after this list).
The middleware API (pre and post hooks) is different.
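Regarding the validation point above, here is a sketch of how it can be enabled on an update (the model, filter and field names are just placeholders):

Model.update(
  { _id: someId },                  // placeholder filter
  { $set: { field: 'new value' } },
  { runValidators: true },          // opt in to running schema validators on update
  function(err, result) {
    // handle err / result here
  }
);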
There is a useful feature in Mongoose called Middleware. There are 'pre' and 'post' middleware. The middleware gets executed when you do a 'save', but not during 'update'. For example, if you want to hash a password in the User schema every time the password is modified, you can use a pre hook to do it as follows. Another useful example is setting lastModified for each document. The documentation can be found at http://mongoosejs.com/docs/middleware.html
// assumes: const bcrypt = require('bcrypt'); and a defined UserSchema
UserSchema.pre('save', function(next) {
  var user = this;

  // only hash the password if it has been modified (or is new)
  if (!user.isModified('password')) {
    console.log('password not modified');
    return next();
  }
  console.log('password modified');

  // generate a salt
  bcrypt.genSalt(10, function(err, salt) {
    if (err) {
      return next(err);
    }

    // hash the password along with our new salt
    bcrypt.hash(user.password, salt, function(err, hash) {
      if (err) {
        return next(err);
      }

      // override the cleartext password with the hashed one
      user.password = hash;
      next();
    });
  });
});
One detail that should not be taken lightly: concurrency
As previously mentioned, when doing a doc.save(), you have to load a document into memory first, then modify it, and finally, doc.save() the changes to the MongoDB server.
The issue arises when a document is edited that way concurrently:
Person A loads the document (v1)
Person B loads the document (v1)
Person B saves changes to the document (it is now v2)
Person A saves changes to an outdated (v1) document
Person A will see Mongoose throw a VersionError because the document has changed since last loaded from the collection
Concurrency is not an issue when doing atomic operations like Model.updateOne(), because the operation is done entirely in the MongoDB server, which performs a certain degree of concurrency control.
Therefore, beware!
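As a small illustration of the atomic alternative (hypothetical Person model and visits field), the read-modify-write window disappears because the increment happens entirely on the server:

Person.updateOne(
  { _id: personId },        // hypothetical id
  { $inc: { visits: 1 } }   // atomic server-side increment
).then(result => console.log(result));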
