Making .local resolve to IP address AND port (mdns) - javascript

I'm using the multicast-dns Node module to try to make this work.
Looking up custom.local in the browser gives me the console message I set up, but I'm unable to see my actual server running (which is doing so at localhost:12345, where 12345 is a dynamic number). I want to be able to see my local server when visiting custom.local. Is this possible?
Here's some code:
mdns.on("query", query => {
  if (query.questions[0] && query.questions[0].name === "custom.local") {
    console.log(query);
    mdns.respond({
      answers: [
        {
          name: "custom.local",
          type: "SRV",
          data: {
            port: n.get("p"), // dynamic port
            weight: 0,
            priority: 10,
            target: ip // local IP
          }
        }, {
          name: "custom.local",
          type: "A",
          data: ip,
          ttl: 300
        }
      ]
    });
  }
});
EDIT: I can connect to my local server just fine, that wasn't an issue.

Quoting cfreak:
You can't put port numbers in DNS. DNS is only for looking up an IP by name. For your browser to see it by the name alone you need a proxy program in front of your service or you need to run the service itself on port 80. Port numbers really shouldn't be dynamic. You should specify it in the setup of your service.
That answers my question and offers next steps. Thanks!
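For reference, here is a minimal sketch of the "proxy program in front of your service" idea using only Node's core http module; the backend port below is hypothetical and stands in for whatever port your service actually ended up listening on.

const http = require("http");

const backendPort = 12345; // hypothetical: the dynamic port your service is using

// Listen on port 80 so http://custom.local/ needs no port, and forward
// every request to the backend running on the dynamic port.
http.createServer((clientReq, clientRes) => {
  const proxyReq = http.request(
    {
      host: "127.0.0.1",
      port: backendPort,
      path: clientReq.url,
      method: clientReq.method,
      headers: clientReq.headers
    },
    proxyRes => {
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  clientReq.pipe(proxyReq);
}).listen(80);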
UPDATE: Figured out what I was trying to do. Here's some code!
FOUND A SOLUTION, WOOP WOOP!
I'm using this module, but tweaked the source a bit (only because I have dynamic ports, because I feel like it).
/* jshint undef: true, unused: true, esversion: 6, node: true */
"use strict";
//
// G E T
// P A C K A G E S
import express from "express";
import http from "http";
import local from "./server/local";

const n = express();

n.get("/", (req, res) => {
  res.send("Welcome home");
});

//
// L A U N C H
const server = http.createServer(n);
server.listen(0, () => {
  const port = server.address().port;
  local.add(port, "custom.local");
});
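The ./server/local module above isn't shown, so here is a hypothetical sketch of what its add(port, name) helper could look like, assuming it answers mDNS queries with the multicast-dns package; the ip package and the record layout are assumptions for illustration, not the actual module's source.

const mdns = require("multicast-dns")();
const ip = require("ip"); // assumption: using the ip package for the local address

function add(port, name) {
  mdns.on("query", query => {
    if (!query.questions.some(q => q.name === name)) return;
    mdns.respond({
      answers: [
        // A record so the name resolves to this machine
        { name, type: "A", ttl: 300, data: ip.address() },
        // SRV record carrying the dynamic port, for clients that look it up
        { name, type: "SRV", data: { port, weight: 0, priority: 10, target: name } }
      ]
    });
  });
}

module.exports = { add };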
Hope this helps you as well, future Internet searcher! :D
Don't let negative folks on other SE sites bring you down. :virtual fist bump:

Related

Why won't Nuxt's proxy work in production mode with my setup?

To avoid exposing to users which endpoint we are requesting data from, we are using @nuxtjs/proxy.
This is the config in nuxt.config.js:
const deployTarget = process.env.NUXTJS_DEPLOY_TARGET || 'server'
const deploySSR = (process.env.NUXTJS_SSR === 'true') || (process.env.NUXTJS_SSR === true)
And the proxy settings
proxy: {
  '/api/**/**': {
    changeOrigin: true,
    target: process.env.VUE_APP_API_URL,
    secure: true,
    ws: false,
    pathRewrite: { '^/api/': '' }
  }
},
Also we deploy like so
NUXTJS_DEPLOY_TARGET=server NUXTJS_SSR=false nuxt build && NUXTJS_DEPLOY_TARGET=server NUXTJS_SSR=false nuxt start
Also, the httpClient constructor, which normally is:
constructor (basePath, defaultTimeout, fetch, AbortController) {
  this.basePath = basePath
  this.defaultTimeout = parseInt(defaultTimeout, 10) || 1000
  this.isLocalhost = !this.basePath || this.basePath.includes('localhost')
  this.fetch = fetch
  this.AbortController = AbortController
}
has been modified like so:
constructor (basePath, defaultTimeout, fetch, AbortController) {
  this.basePath = '/api'
  this.defaultTimeout = parseInt(defaultTimeout, 10) || 1000
  this.isLocalhost = !this.basePath || this.basePath.includes('localhost')
  this.fetch = fetch
  this.AbortController = AbortController
}
The fetch options are
_getOpts (method, options) {
  const opts = Object.assign({}, options)
  opts.method = opts.method || method
  opts.cache = opts.cache || 'no-cache'
  opts.redirect = opts.redirect || 'follow'
  opts.referrerPolicy = opts.referrerPolicy || 'no-referrer'
  opts.credentials = opts.credentials || 'same-origin'
  opts.headers = opts.headers || {}
  opts.headers['Content-Type'] = opts.headers['Content-Type'] || 'application/json'
  if (typeof (opts.timeout) === 'undefined') {
    opts.timeout = this.defaultTimeout
  }
  return opts
}
So that's making a request to https://api.anothersite.com/api/?request..
On localhost with npm run dev it's working just fine; it requests and fetches the desired data.
But somehow, when we deploy it to the staging environment, all those requests return
{ "code": 401, "data": "{'statusCode':401,'error':'Unauthorized','message':'Invalid token.'}", "json": { "statusCode": 401, "error": "Unauthorized", "message": "Invalid token." }, "_isJSON": true }
Note that:
the front end is deployed to example.com, which requires basic HTTP authentication, and we are properly authenticated;
the requests, both locally and on staging, go to api.example.com, which doesn't require HTTP auth and serves its data from a Strapi instance that doesn't need any token at all.
Is it possible that the response we are getting is because the requests come from the proxy and are therefore not HTTP authenticated?
You should find somebody who knows the deployment details, because you will need those for this project.
Especially because here, you're hosting your app somewhere and that platform is probably missing an environment variable, hence the quite self-explanatory error
401,'error':'Unauthorized','message':'Invalid token
That also explains why it works locally (you probably have an .env file) but not once pushed.
You could try to create a repro on an SSR-ready VPS, but I'm pretty sure that @nuxtjs/proxy is working fine.
Otherwise, double checking the network requests in your browser devtools is still the way to go regarding the correct configuration of the module.
Anyway, further details are needed from your side here.
As a good practice, you should also have the following in your nuxt.config.js file
ssr: true,
target: 'server'
rather than using inline variables for those; it's safer and self-explanatory for everybody that way (on top of being less error-prone IMO). Or you can use an env variable for the key itself.
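For illustration, a minimal nuxt.config.js sketch of that suggestion, keeping the deployment keys in the file and leaving only the values to the environment; the option names mirror the ones shown above, and everything else is an assumption.

export default {
  // Hardcode the deployment keys (or read just their values from env)
  ssr: process.env.NUXTJS_SSR !== 'false', // or simply: ssr: true
  target: 'server',

  modules: ['@nuxtjs/proxy'],

  proxy: {
    '/api/': {
      // VUE_APP_API_URL must also be defined on the hosting platform,
      // otherwise the proxy has no target there and requests will fail
      target: process.env.VUE_APP_API_URL,
      changeOrigin: true,
      pathRewrite: { '^/api/': '' }
    }
  }
}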

Adding a rule to the security group which is created automatically

I am using the AWS CDK to create an ApplicationLoadBalancer which has port 80 accepting external connections.
I want to use port 8080 on the target as the health check port.
const lb = new elb.ApplicationLoadBalancer(this, "LB", {
  vpc: cluster.vpc,
  loadBalancerName: loadBalancerName,
  internetFacing: true,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
});

const listener = lb.addListener("Listener", { port: 80 });

const targetGroup = listener.addTargets("ECS", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 80,
  targets: [ecsAdminService]
});

targetGroup.configureHealthCheck({
  path: "/",
  port: "8080"
})
In this case ApplicationLoadBalancer makes the security group automatically.
However, it has an outbound rule for port 80 only. I want to add an outbound rule for port 8080.
How can I change the security group that is automatically generated?
When you create a Load Balancer with the CDK, if a security group isn't provided, the CDK will automatically create a Security Group for you.
So, if you want to manage the Security Group rules, you can create a Security Group with the rules that you need and attach it to the created ALB:
const securityGroup1 = new ec2.SecurityGroup(this, 'SecurityGroup1', { vpc });

securityGroup1.addIngressRule(
  ec2.Peer.anyIpv4(),
  ec2.Port.tcp(80),
  'allow HTTP traffic from anywhere',
);

const lb = new elbv2.ApplicationLoadBalancer(this, 'LB', {
  vpc,
  internetFacing: true,
  securityGroup: securityGroup1, // Optional - will be automatically created otherwise
});
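Since the question is specifically about the outbound side, here is a possible sketch for opening port 8080 so the health check can reach the targets; the CIDR scope and the rule description are assumptions, and you could instead peer the rule to the service's security group.

securityGroup1.addEgressRule(
  ec2.Peer.ipv4(vpc.vpcCidrBlock),   // assumption: restrict to the VPC range
  ec2.Port.tcp(8080),
  'allow health checks to reach targets on 8080',
);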

Can the tinylicious server be launched at a port other than 3000?

Can the tinylicious server be launched at a port other than 3000? I've tried something like "PORT=4100 tinylicious" and I can see the terminal log saying:
@federation/shell-app: [1] info: Listening on port 4100 {"label":"winston","timestamp":"2021-03-08T19:23:37.861Z"}
but later it fails within my code, indicating something went wrong with the service call:
main.js:15815 ERROR TypeError: Cannot read property 'shapeClicked' of undefined
at Layer.onClick [as zzClickFunc] (collabmap.component.js:45)
at JS:24817
at Array.<anonymous> (JS:8190)
at window.<computed> (JS:1111)
at Object.<anonymous> (JS:51778)
at j (JS:51777)
and indeed, the Network tab reveals it's still posting via 3000:
Request URL: http://localhost:3000/documents/tinylicious
Referrer Policy: strict-origin-when-cross-origin
I know tinylicious is not the full Fluid server and it's just for testing purposes, so it might have been hardwired to 3000, but maybe someone has an idea how to launch it on a different port.
The Tinylicious server port is definitely configurable.
If you override their libraries, you will be able to run your app on any port you like.
You must've noticed this function:
getTinyliciousContainer();
Within its libraries (get-tinylicious-container and tinylicious-driver) you will see one of the files in tinylicious-driver:
insecureTinyliciousUrlResolver.ts, in which every damn host:port is hardcoded to localhost:3000.
Therefore, just copy the code from getTinyliciousContainer and tinylicious-driver and make your own version of getTinyliciousContainer. You will need to copy this code to configure for Routerlicious eventually anyway, since Tinylicious is very lightweight and recommended just for testing purposes.
The file you need to modify in @fluidframework/tinylicious-driver is insecureTinyliciousUrlResolver.ts:
// Imports as in the original file; exact package paths may vary with your Fluid Framework version.
import { IRequest } from "@fluidframework/core-interfaces";
import { IFluidResolvedUrl, IResolvedUrl, IUrlResolver } from "@fluidframework/driver-definitions";
import { ITokenClaims } from "@fluidframework/protocol-definitions";
import { KJUR as jsrsasign } from "jsrsasign";
import { v4 as uuid } from "uuid";

// Hoisted to module scope so both resolve() and getAbsoluteUrl() can see them.
const serviceHostName = "YOUR-PREFERRED-HOST-NAME";
const servicePort = "YOUR-PREFERRED-PORT";

export class InsecureTinyliciousUrlResolver implements IUrlResolver {
    public async resolve(request: IRequest): Promise<IResolvedUrl> {
        const url = request.url.replace(`http://${serviceHostName}:${servicePort}/`, "");
        const documentId = url.split("/")[0];
        const encodedDocId = encodeURIComponent(documentId);
        const documentRelativePath = url.slice(documentId.length);

        const documentUrl = `fluid://${serviceHostName}:${servicePort}/tinylicious/${encodedDocId}${documentRelativePath}`;
        const deltaStorageUrl = `http://${serviceHostName}:${servicePort}/deltas/tinylicious/${encodedDocId}`;
        const storageUrl = `http://${serviceHostName}:${servicePort}/repos/tinylicious`;

        const response: IFluidResolvedUrl = {
            endpoints: {
                deltaStorageUrl,
                ordererUrl: `http://${serviceHostName}:${servicePort}`,
                storageUrl,
            },
            tokens: { jwt: this.auth(documentId) },
            type: "fluid",
            url: documentUrl,
        };
        return response;
    }

    public async getAbsoluteUrl(resolvedUrl: IFluidResolvedUrl, relativeUrl: string): Promise<string> {
        const documentId = decodeURIComponent(
            resolvedUrl.url.replace(`fluid://${serviceHostName}:${servicePort}/tinylicious/`, ""),
        );
        /*
         * The detached container flow will ultimately call getAbsoluteUrl() with the resolved.url produced by
         * resolve(). The container expects getAbsoluteUrl's return value to be a URL that can then be roundtripped
         * back through resolve() again, and get the same result again. So we'll return a "URL" with the same format
         * described above.
         */
        return `${documentId}/${relativeUrl}`;
    }

    private auth(documentId: string) {
        const claims: ITokenClaims = {
            documentId,
            scopes: ["doc:read", "doc:write", "summary:write"],
            tenantId: "tinylicious",
            user: { id: uuid() },
            // @ts-ignore
            iat: Math.round(new Date().getTime() / 1000),
            exp: Math.round(new Date().getTime() / 1000) + 60 * 60, // 1 hour expiration
            ver: "1.0",
        };
        const utf8Key = { utf8: "12345" };
        return jsrsasign.jws.JWS.sign(null, JSON.stringify({ alg: "HS256", typ: "JWT" }), claims, utf8Key);
    }
}

export const createTinyliciousCreateNewRequest =
    (documentId: string): IRequest => ({
        url: documentId,
        headers: {
            createNew: true,
        },
    });
Then you just run this React app standalone (instead of concurrently) and without the built-in Tinylicious server.
Go to their GitHub, clone Tinylicious from the FluidFramework/server repo, and run it on whatever port you want.
And there you go: now you can run Tinylicious on any host and any port you want.
The Tinylicious port is now configurable. More details in https://github.com/microsoft/FluidFramework/issues/5415
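If you are on a newer driver version, patching the file may no longer be necessary; here is a minimal sketch, assuming the resolver now accepts the port as a constructor argument (verify against the typings of the @fluidframework/tinylicious-driver version you have installed).

// Sketch only: the constructor argument is an assumption based on the linked issue.
const { InsecureTinyliciousUrlResolver } = require("@fluidframework/tinylicious-driver");

const tinyliciousPort = 4100; // the same value passed as PORT when launching tinylicious
const urlResolver = new InsecureTinyliciousUrlResolver(tinyliciousPort);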

How to create an offline MySQL database in JavaScript?

I have created a sample.js file with the following code
var mysql = require('mysql');
Typically, I would connect to my online database using:
var pool = mysql.createPool({
  host: 'den1.mysql5.gear.host',
  user: 'myst',
  password: 'hidden',
  database: "myst"
});
and then do
var connection = pool.getConnection(function(err, connection) {
  // do whatever, like connection.query
});
How can I create a local database file and access that, instead of using server side databases?
Edit: USING ONLY MySQL!
If you do not know, please do not answer. I am not looking for an alternative (since most alternatives cause node to delete packages needed by discord.js for some reason).
MySQL is quite heavy for a front-end database, as there are size and speed limitations; I'd prefer to use it on the back end only. But if you want a database on the front end, you can use db.js. IndexedDB is present in most modern browsers, and db.js is a wrapper that consumes it to implement a database on the front end. Here's the sample provided in the documentation:
<script src='/scripts/db.js'></script>

var server;
db.open({
  server: 'my-app',
  version: 1,
  schema: {
    people: {
      key: { keyPath: 'id', autoIncrement: true },
      // Optionally add indexes
      indexes: {
        firstName: { },
        answer: { unique: true }
      }
    }
  }
}).done(function (s) {
  server = s;
});
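To round out the sample, here is a short sketch of how records could then be added and queried with db.js once open() has resolved; the field values are made up, and you should check the db.js documentation for the exact API of the version you load.

// server is the handle assigned in the done() callback above
server.people.add({ firstName: 'Aaron', answer: 42 }).done(function (item) {
  // item is the stored record, now with its auto-incremented id
});

server.people.query().filter('firstName', 'Aaron').execute().done(function (results) {
  // results is an array of matching records
});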

How to get Electron + rxdb to work?

I want to learn and develop a desktop app using Electron + RxDB.
My file structure:
main.js (the main process of electron)
/js-server/db.js (all about rxdb database, include creation)
/js-client/ui.js (renderer process of electron)
index.html (html home page)
main.js code:
const electron = require('electron')
const dbjs = require('./js-server/db.js')
const {ipcMain} = require('electron')

ipcMain.on('search-person', (event, userInput) => {
  event.returnValue = dbjs.searchPerson(userInput);
})
db.js code:
var rxdb = require('rxdb');
var rxjs = require('rxjs');
rxdb.plugin(require('pouchdb-adapter-idb'));

const personSchema = {
  title: 'person schema',
  description: 'describes a single person',
  version: 0,
  type: 'object',
  properties: {
    Name: {type: 'string', primary: true},
    Age: {type: 'string'},
  },
  required: ['Age']
};

var pdb;
rxdb.create({
  name: 'persondb',
  password: '123456789',
  adapter: 'idb',
  multiInstance: false
}).then(function(db) {
  pdb = db;
  return pdb.collection({name: 'persons', schema: personSchema})
});

function searchPerson(userInput) {
  pdb.persons.findOne().where('Name').eq(userInput)
    .exec().then(function(doc){return doc.Age});
}

module.exports = {
  searchPerson: searchPerson
}
ui.js code:
const {ipcRenderer} = require('electron');

function getFormValue() {
  let userInput = document.getElementById('searchbox').value;
  displayResults(ipcRenderer.sendSync("search-person", userInput));
  document.getElementById('searchbox').value = "";
}
Whenever I run this app, I get these errors:
(node:6084) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: RxError:
RxDatabase.create(): Adapter not added. (I am sure I've installed the pouchdb-adapter-idb module successfully.)
Type error, cannot read property "persons" of undefined. (This error pops up when I search and hit Enter on the form in index.html.)
I am new to programming, especially js, I've been stuck on these errors for a week, just can't get it to work. Any help? Thanks.
The problem is that this line is in main.js:
const dbjs = require('./js-server/db.js')
Why? Because you're requiring RxDB inside the main process and using the IndexedDB adapter. IndexedDB is a browser API and thus can only be used in a renderer process. In Electron, the main process is a pure Node/Electron environment with no access to the Chromium APIs.
Option #1
If you want to keep your database in a separate thread then consider spawning a new hidden browser window:
import {BrowserWindow} from 'electron'
const dbWindow = new BrowserWindow({..., show: false})
And then use IPC to communicate between the two windows similarly to how you have already done.
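For example, the main process could relay the renderer's query to the hidden database window and pass the reply back; the channel names below are illustrative, not part of your existing code.

// Sketch only: relay a query from the UI window to the hidden DB window via main.
const { ipcMain } = require('electron');

ipcMain.on('search-person', (event, userInput) => {
  // forward the request to the hidden window that owns the RxDB instance
  dbWindow.webContents.send('search-person', userInput);
  // send the result back to the UI window once the DB window replies
  ipcMain.once('search-person-result', (_dbEvent, result) => {
    event.sender.send('search-person-result', result);
  });
});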
Option #2
Use a LevelDB adapter that only requires Node.js APIs, so you can keep your database in the main process.
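A rough sketch of that option, assuming the pouchdb-adapter-leveldb and leveldown packages are installed; the adapter wiring follows the RxDB documentation of that era, so double-check it against the RxDB version you actually use.

// Sketch: RxDB in the Electron main process, backed by LevelDB instead of IndexedDB.
var rxdb = require('rxdb');
rxdb.plugin(require('pouchdb-adapter-leveldb'));
var leveldown = require('leveldown');

rxdb.create({
  name: 'persondb',        // stored on disk via leveldown
  password: '123456789',
  adapter: leveldown,      // pass the leveldown module itself as the adapter
  multiInstance: false
}).then(function (db) {
  // create collections and query exactly as before
});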
