pino logger as fastify plugin

I have created my own options and stream for the Fastify logger:
const logger = pino(
  {
    level: 'info',
    ...ecsFormat,
  },
  pinoMultiStream.multistream([
    { stream: streamToElastic },
    {
      stream: pretty({
        colorize: true,
        sync: true,
        ignore: 'pid',
      }),
    },
  ])
)
const fastify = Fastify({ logger })
Now I want to extract these options into a Fastify plugin. How can I do that? If that's impossible, what else can I do to extract this code?

You can't encapsulate this code in a Fastify plugin, because Fastify's logger has already been created by the time plugins run.
In this case you need to define your own logic to build the Fastify server's configuration, for example with a decorator pattern.
The user experience you will get would be something like:
const decorateLogger = require('my-logger-module')
const applicationConfig = loadAppConfig()
decorateLogger(applicationConfig, options)
const app = Fastify(applicationConfig)
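For illustration, a minimal sketch of what 'my-logger-module' could export, assuming pino v7+ and pino-pretty; the function name and options shape are hypothetical, not an established API:
const pino = require('pino')
const pretty = require('pino-pretty')

// Hypothetical module: builds the pino logger and attaches it to the
// config object that will later be passed to Fastify().
function decorateLogger (applicationConfig, options = {}) {
  applicationConfig.logger = pino(
    { level: options.level || 'info' },
    pino.multistream([
      // extra destination streams could be passed in via options
      { stream: options.stream || process.stdout },
      { stream: pretty({ colorize: true, sync: true, ignore: 'pid' }) },
    ])
  )
}

module.exports = decorateLogger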


Stream into a file using the destination option in pino-multi-stream

I'm using this code to stream into a file, but the created file is empty. Is there something wrong with my code?
const fileStream = pinoms.prettyStream(
  {
    prettyPrint: {
      colorize: true,
      levelFirst: true,
      translateTime: "yyyy-dd-mm, h:MM:ss TT",
    },
  },
  pinoms.destination({
    dest: './my-file', // omit for stdout
    minLength: 4096,   // buffer before writing
    sync: true,        // synchronous writes
  })
)
const streams = [
  { stream: fileStream }
]
const logger = pinoms(pinoms.multistream(streams))
logger.info('HELLO %s!', 'World')
In the documentation it says:
const prettyStream = pinoms.prettyStream(
  {
    prettyPrint: {
      colorize: true,
      translateTime: "SYS:standard",
      ignore: "hostname,pid" // add 'time' to remove timestamp
    },
    prettifier: require('pino-pretty') // not required, just an example of setting prettifier
    // it is also possible to set the destination option here
  }
);
So it should be possible.
PS: I know there is the option to pass in a write stream created with fs, but I want the time formatted.
I found a workable solution for my case. Since v7, pino provides the multistream function itself. Now I can do everything I wanted: use destination and also make the timestamp pretty.
const pino = require('pino')
const pretty = require('pino-pretty')

const streams = [
  { stream: pino.destination('test.log') },
  {
    stream: pretty({
      colorize: true,
      sync: true,
    }),
  },
]
const logger = pino(
  { level: 'info', timestamp: pino.stdTimeFunctions.isoTime },
  pino.multistream(streams)
)
logger.info('HELLO %s!', 'World')

How can I get dynamic data from within gatsby-config.js?

Consider the following code within gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-fetch`,
      options: {
        name: `brands`,
        type: `brands`,
        url: `${dynamicURL}`, // This is the part I need to be dynamic at run/build time.
        method: `get`,
        axiosConfig: {
          headers: { Accept: "text/csv" },
        },
        saveTo: `${__dirname}/src/data/brands-summary.csv`,
        createNodes: false,
      },
    },
  ],
}
As you can see above, the URL for the source plugin is something that I need to be dynamic. The reason for this is that the file URL will change every time it's updated in the CMS. I need to query the CMS for that field and get its CDN URL before passing to the plugin.
I tried adding the following to the top of gatsby-config.js but I'm getting errors.
const axios = require("axios")

let dynamicURL = ""

const getBrands = async () => {
  return await axios({
    method: "get",
    url: "https://some-proxy-url-that-returns-json-with-the-csv-file-url",
  })
}

;(async () => {
  const brands = await getBrands()
  dynamicURL = brands.data.summary.url
})()
I'm assuming this doesn't work because the config is not waiting for the request above to resolve and therefore, all we get is a blank URL.
Is there any better way to do this? I can't simply supply the source plugin with a fixed/known URL ahead of time.
Any help greatly appreciated. I'm normally a Vue.js guy, but I'm having to work with React/Gatsby here, so I'm not entirely familiar with it.
I had a similar requirement where I needed to set the siteId of gatsby-plugin-matomo dynamically by fetching data from an async API. After searching through a lot of documentation on the Gatsby build lifecycle, I found a solution.
Here is my approach:
gatsby-config.js
module.exports = {
  siteMetadata: {
    ...
  },
  plugins: [
    {
      resolve: 'gatsby-plugin-matomo',
      options: {
        siteId: '',
        matomoUrl: 'MATOMO_URL',
        siteUrl: 'GATSBY_SITE_URL',
        dev: true
      }
    }
  ]
};
Here siteId is blank because I need to put it dynamically.
gatsby-node.js
exports.onPreInit = async ({ actions, store }) => {
  const { setPluginStatus } = actions;
  const state = store.getState();
  const plugin = state.flattenedPlugins.find(
    (plugin) => plugin.name === 'gatsby-plugin-matomo'
  );
  if (plugin) {
    const matomo_site_id = await fetchMatomoSiteId('API_ENDPOINT_URL');
    plugin.pluginOptions = { ...plugin.pluginOptions, siteId: matomo_site_id };
    setPluginStatus({ pluginOptions: plugin.pluginOptions }, plugin);
  }
};

exports.createPages = async function createPages({ actions, graphql }) {
  /* Create page code */
};
onPreInit is a Gatsby lifecycle hook that executes just after plugins are loaded from the config, and it receives some built-in helpers.
store is the Redux store where Gatsby keeps all the information required for the build process.
setPluginStatus is a Redux action through which plugin data can be modified in Gatsby's Redux store.
The important thing here is that the onPreInit hook has to be declared async so it can await the API call.
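Note that fetchMatomoSiteId above is my own helper, not a Gatsby API. A minimal sketch using axios, with an assumed endpoint and response shape, could be:
const axios = require('axios');

// Hypothetical helper for the onPreInit hook above; the response
// shape (data.siteId) is an illustrative assumption.
async function fetchMatomoSiteId(endpointUrl) {
  const response = await axios.get(endpointUrl);
  return response.data.siteId;
}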
Hope this helps someone in future.
Another approach that may work for you is using environment variables: if the URL is known ahead of time, you can add it to a .env file rather than fetching it at build time.
By default, Gatsby uses .env.development for gatsby develop and .env.production for gatsby build, so you will need to create two files in the root of your project.
In both .env.development and .env.production, just add:
DYNAMIC_URL=https://yourUrl.com
Since gatsby-config.js runs on your Node server, you don't need to prefix the variable with GATSBY_ the way client-side variables require. So, in your gatsby-config.js:
// Load the right .env file for the current environment
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
})

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-fetch`,
      options: {
        name: `brands`,
        type: `brands`,
        url: process.env.DYNAMIC_URL, // resolved at build time from the .env file
        method: `get`,
        axiosConfig: {
          headers: { Accept: "text/csv" },
        },
        saveTo: `${__dirname}/src/data/brands-summary.csv`,
        createNodes: false,
      },
    },
  ],
}
It's important to avoid tracking those files in your Git repository since you don't want to expose this type of data.

Using the node.js google cloud speech to text, how can I get the status of a current job?

I managed to trigger a job with:
const config = {
  languageCode: 'en-US',
  enableSpeakerDiarization: true,
  audioChannelCount: 2,
  enableSeparateRecognitionPerChannel: true,
  useEnhanced: true,
  profanityFilter: false,
  enableAutomaticPunctuation: true,
};
const audio = {
  uri: `gs://${filePath}`,
};
const requestObj = {
  config: config,
  audio: audio,
};
return speechClient.longRunningRecognize(requestObj);
I get back an object with a name. I want to use that with https://cloud.google.com/speech-to-text/docs/reference/rest/v1/LongRunningRecognizeMetadata (via the node.js package) to get the current status.
How do I do it?
return speechClient.longrunning.Operation()
seems not to exist.
Looks like you can do it with:
return speechClient.operationsClient.getOperation({ name: googleName })
This is not super well documented.
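For example, a minimal polling sketch; the exact return shape comes from the auto-generated gax client, so treat this as an assumption and double-check against your installed version:
const speech = require('@google-cloud/speech');
const speechClient = new speech.SpeechClient();

// `operationName` is the `name` field returned by longRunningRecognize
async function getJobStatus(operationName) {
  // getOperation resolves to an array whose first element is the raw
  // google.longrunning.Operation
  const [operation] = await speechClient.operationsClient.getOperation({
    name: operationName,
  });
  // `done` flips to true when the job finishes; `metadata` carries the
  // LongRunningRecognizeMetadata (e.g. progressPercent)
  console.log('done:', operation.done);
  return operation;
}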

@sentry/node integration to wrap bunyan log calls as breadcrumbs

Sentry by default has an integration for console.log that makes its calls part of breadcrumbs (import name: Sentry.Integrations.Console).
How can we make this work for the bunyan logger as well? For example:
const Koa = require('koa');
const app = new Koa();
const bunyan = require('bunyan');
const log = bunyan.createLogger({
  name: 'app',
  // ..... other settings go here ....
});
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: MY_DSN_HERE,
  integrations: integrations => {
    // should anything be handled here & how?
    return [...integrations];
  },
  release: 'xxxx-xx-xx'
});

app.on('error', (err) => {
  Sentry.captureException(err);
});

// I am trying to get all of these to be part of Sentry breadcrumbs,
// but only console.log('foo') is working
console.log('foo');
log.info('bar');
log.warn('baz');
log.debug('any');
log.error('many');
throw new Error('help!');
P.S. I have already tried bunyan-sentry-stream, but had no success with @sentry/node; it just pushes entries as events instead of treating them as breadcrumbs.
Bunyan supports custom streams, and a custom stream is just an object with a write method. See https://github.com/trentm/node-bunyan#streams
Below is an example custom stream that simply writes to the console. It would be straightforward to adapt this example to instead write to the Sentry module, likely calling Sentry.addBreadcrumb({}) or a similar function (see the sketch after the example).
Please note that the variable record in my example below is a JSON string, so you will likely want to parse it to get the log level, message, and other data out of it for submission to Sentry.
{
  level: 'debug',
  stream: (function () {
    return {
      write: function (record) {
        console.log('Hello: ' + record);
      }
    };
  })()
}
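Building on that, here is a sketch of a stream that forwards records to Sentry as breadcrumbs; it assumes Sentry.init() has already run, and the level mapping is illustrative only:
const Sentry = require('@sentry/node');
const bunyan = require('bunyan');

const sentryBreadcrumbStream = {
  write: function (record) {
    // record is a JSON string; parse it to pull out level and message
    const data = JSON.parse(record);
    Sentry.addBreadcrumb({
      category: 'bunyan',
      message: data.msg,
      // bunyan levels are numeric (30 = info, 40 = warn, ...);
      // this crude mapping is just an example
      level: data.level >= 40 ? 'warning' : 'info',
    });
  }
};

const log = bunyan.createLogger({
  name: 'app',
  streams: [{ level: 'debug', stream: sentryBreadcrumbStream }],
});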

How to get Electron + rxdb to work?

I want to learn and develop a desktop app using Electron + RxDB.
My file structure:
main.js (the main process of electron)
/js-server/db.js (all about rxdb database, include creation)
/js-client/ui.js (renderer process of electron)
index.html (html home page)
main.js code:
const electron = require('electron')
const dbjs = require('./js-server/db.js')
const { ipcMain } = require('electron')

ipcMain.on('search-person', (event, userInput) => {
  event.returnValue = dbjs.searchPerson(userInput);
})
db.js code:
var rxdb = require('rxdb');
var rxjs = require('rxjs');
rxdb.plugin(require('pouchdb-adapter-idb'));

const personSchema = {
  title: 'person schema',
  description: 'describes a single person',
  version: 0,
  type: 'object',
  properties: {
    Name: { type: 'string', primary: true },
    Age: { type: 'string' },
  },
  required: ['Age']
};

var pdb;
rxdb.create({
  name: 'persondb',
  password: '123456789',
  adapter: 'idb',
  multiInstance: false
}).then(function (db) {
  pdb = db;
  return pdb.collection({ name: 'persons', schema: personSchema });
});

function searchPerson(userInput) {
  pdb.persons.findOne().where('Name').eq(userInput)
    .exec().then(function (doc) { return doc.Age; });
}

module.exports = {
  searchPerson: searchPerson
};
ui.js code:
const { ipcRenderer } = require('electron');

function getFormValue() {
  let userInput = document.getElementById('searchbox').value;
  displayResults(ipcRenderer.sendSync('search-person', userInput));
  document.getElementById('searchbox').value = "";
}
Whenever I run this app, I get these errors:
(node:6084) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: RxError: RxDatabase.create(): Adapter not added. (I am sure I've installed the pouchdb-adapter-idb module successfully.)
Type error: cannot read property "persons" of undefined. (This error pops up when I search and hit Enter in the form in index.html.)
I am new to programming, especially JS. I've been stuck on these errors for a week and just can't get it to work. Any help? Thanks.
The problem is that this line is in main.js:
const dbjs = require('./js-server/db.js')
Why? Because you're requiring RxDB inside the main process and using the IndexedDB adapter. IndexedDB is a browser API and can therefore only be used in a renderer process. In Electron, the main process is a pure Node/Electron environment with no access to the Chromium APIs.
Option #1
If you want to keep your database in a separate process, consider spawning a new hidden browser window:
import {BrowserWindow} from 'electron'
const dbWindow = new BrowserWindow({..., show: false})
And then use IPC to communicate between the two windows similarly to how you have already done.
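For illustration, a rough sketch of the hidden-window setup in main.js; db.html is a hypothetical page that loads your js-server/db.js, and nodeIntegration is assumed here for brevity:
const { BrowserWindow, ipcMain } = require('electron')

// Hidden renderer window that hosts RxDB (and thus has IndexedDB)
const dbWindow = new BrowserWindow({
  show: false,
  webPreferences: { nodeIntegration: true },
})
dbWindow.loadFile('db.html')

// Forward search requests from the UI to the hidden DB window
ipcMain.on('search-person', (event, userInput) => {
  dbWindow.webContents.send('search-person', userInput)
})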
Option #2
Use a LevelDB adapter that only requires Node.js APIs, so you can keep your database in the main process; a rough sketch follows.
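This sketch assumes the pouchdb-adapter-leveldb and leveldown packages are installed; the adapter wiring follows RxDB's LevelDB documentation, so verify it against your RxDB version:
var rxdb = require('rxdb');
rxdb.plugin(require('pouchdb-adapter-leveldb'));
var leveldown = require('leveldown');

rxdb.create({
  name: 'persondb',     // stored as a folder on disk
  adapter: leveldown,   // pure Node.js adapter, no browser APIs required
  multiInstance: false
}).then(function (db) {
  // create collections here, same as with the idb adapter
});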
