Blurring image with canvas nodeJS - javascript

It seems like ctx.filter = "blur(amount)" doesn't work. Here's my code:
const { body } = await request.get(url);
const data = await Canvas.loadImage(body);
ctx.filter = "blur(50px)";
ctx.drawImage(data, 0, 0, canvas.width, canvas.height);
const final = new Discord.MessageAttachment(canvas.toBuffer(), "blurred.png");
Yeah it's for Discord.

Indeed, node-canvas still doesn't support this feature.
Here is the issue tracking the feature request.
Apparently, these filters are not built into Cairo, the rendering engine used by node-canvas. However, Skia, another rendering engine that node-canvas could switch to in the near future, apparently does support them. So there is hope it comes one day, but for the time being we have to implement it ourselves.
For Gaussian blur, you can look at the StackBlur.js project, which was the de facto blur filter library before browsers implemented it natively. Here is a Glitch project as an example.
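A minimal sketch of that approach, assuming the stackblur-canvas npm package (which exposes StackBlur's imageDataRGBA helper) alongside node-canvas:
const { createCanvas, loadImage } = require("canvas");
const StackBlur = require("stackblur-canvas");

async function blurToBuffer(source, radius = 50) {
  // Draw the source image (buffer, path, or URL) onto an offscreen canvas.
  const image = await loadImage(source);
  const canvas = createCanvas(image.width, image.height);
  const ctx = canvas.getContext("2d");
  ctx.drawImage(image, 0, 0, image.width, image.height);

  // Blur the raw pixels in place, then write them back to the canvas.
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  StackBlur.imageDataRGBA(imageData, 0, 0, canvas.width, canvas.height, radius);
  ctx.putImageData(imageData, 0, 0);

  return canvas.toBuffer(); // ready for new Discord.MessageAttachment(...)
}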

Related


Different predictions if running in Node instead of Browser (using the same model_web - python converted model)

Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model into different formats, including tfjs (model_web).
That website also provides boilerplate for running the model within a browser (a React app); it is probably the same code, I didn't spend enough time checking.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave, and a good prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have just one video to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as-is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So no big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed, although I still get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node. But the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do tfjs and tfjs-node have different implementations that make different use of the same model? I don't think it can be a problem of input because, after a long search and fight, I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs');
const inkjet = require('inkjet');
const { createCanvas, loadImage } = require('canvas');

// Option 1: decode the JPG with the pure-JS inkjet decoder.
const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.promises.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    });
  });
};

// Option 2: draw the JPG on a node-canvas and read the pixels back.
const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return { data: new Uint8Array(data.data), width: data.width, height: data.height };
  });
};
And that's the code that uses the loaded model to give predictions (the same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js). It doesn't work on Node as-is; I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read.
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];
  const result = await graph.executeAsync(batched);
  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();
  // Clean up the input and output tensors.
  batched.dispose();
  tf.dispose(result);
  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );
  const prevBackend = tf.getBackend();
  // Run the post-processing on the CPU backend.
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20,  // maxNumBoxes
      0.5, // iou_threshold
      0.5  // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // Restore the previous backend.
  tf.setBackend(prevBackend);
  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
Do tfjs and tfjs-node have different implementations that make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the predictions will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ, resulting in different tensors being used for the prediction and thus causing the output to differ.
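For example, a minimal sketch of the kind of mismatch meant here (the normalization step is illustrative, not taken from the model above):
const raw = tf.browser.fromPixels(input); // int32 tensor, values 0..255
const scaled = raw.toFloat().div(255);    // float32 tensor, values 0..1
// Feeding `raw` in one runtime and `scaled` in the other gives the same
// model two different tensors, and therefore two different predictions.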
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system's graphics libraries to create a browser-like canvas environment that Node.js can use. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to create a tensor directly from a Node.js buffer.
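A minimal sketch of that direct route, assuming @tensorflow/tfjs-node and a JPG on disk, with no canvas involved:
const tf = require("@tensorflow/tfjs-node");
const fs = require("fs");

const buffer = fs.readFileSync("photo.jpg");  // raw JPG bytes
const input = tf.node.decodeImage(buffer, 3); // [height, width, 3] int32 tensor
const batched = input.expandDims(0);          // add the batch dimension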

How to get time of page's first paint

While it is easy enough to get firstPaint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the Paint Timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
const fcp = paintTimings.find(({ name }) => name === 'first-contentful-paint');
console.log(`First contentful paint at ${fcp.startTime}ms`);
Recently, new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) in JavaScript.
I made a JavaScript library called Perfume.js, which works with a few lines of code:
const perfume = new Perfume({
  firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is that FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a canvas element.
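Both metrics arrive as 'paint' entries, so you can read them side by side once the page has painted (a minimal sketch):
const [fp] = performance.getEntriesByName('first-paint');
const [fcp] = performance.getEntriesByName('first-contentful-paint');
console.log(`FP:  ${fp && fp.startTime}ms`);
console.log(`FCP: ${fcp && fcp.startTime}ms`);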
You can also observe paint entries as they happen:
if (typeof PerformanceObserver !== 'undefined') { // only if the browser supports it
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.entryType);
      console.log(entry.startTime); // timestamp of the paint, in ms
      console.log(entry.duration);
    }
  });
  observer.observe({ entryTypes: ['paint'] });
}
This will help: just paste this code at the start of your JS app, before everything else.

How do I get microphone data from AudioContext

So, I just found out that you can record sound using JavaScript. That's just awesome!
I instantly created a new project to do something on my own. However, as soon as I opened the source code of the example script, I found out that there are no explanatory comments at all.
I started googling and found a long and interesting article about AudioContext that doesn't seem to be aware of recording at all (it only mentions remixing sounds), and an MDN article that contains all the information while successfully hiding the part I'm after.
I'm also aware of existing frameworks that deal with this (somehow, maybe). But if I just wanted a sound recorder I'd download one; I'm really curious how the thing works.
Now, not only am I unfamiliar with the coding part of this, I'm also curious how the whole thing works: do I get intensity at a specific time, much like on an oscilloscope?
Or can I already get a spectral analysis of the sample?
So, just to avoid any mistakes: could anyone please explain the simplest and most straightforward way to get the input data using the above-mentioned API, and ideally provide code with explanatory comments?
If you just want to use mic input as a source for the Web Audio API, the following code worked for me. It was based on: https://gist.github.com/jarlg/250decbbc50ce091f79e
navigator.getUserMedia = navigator.getUserMedia
  || navigator.webkitGetUserMedia
  || navigator.mozGetUserMedia;
navigator.getUserMedia({ video: false, audio: true }, callback, console.log);

function callback(stream) {
  ctx = new AudioContext();
  mic = ctx.createMediaStreamSource(stream); // the microphone as an audio node
  spe = ctx.createAnalyser();                // analyser node for inspecting samples
  spe.fftSize = 256;
  bufferLength = spe.frequencyBinCount;
  dataArray = new Uint8Array(bufferLength);
  spe.getByteTimeDomainData(dataArray);      // fills dataArray with waveform samples
  mic.connect(spe);
  spe.connect(ctx.destination);
  draw();
}
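navigator.getUserMedia is deprecated; for newer browsers, here is a minimal sketch of the same idea using the promise-based navigator.mediaDevices.getUserMedia (it needs a secure context, i.e. HTTPS or localhost). It also answers the oscilloscope/spectrum question, since the analyser exposes both views:
async function listenToMic() {
  // Ask the user for microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const mic = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  mic.connect(analyser);

  // Time-domain samples: intensity over time, like an oscilloscope.
  const waveform = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteTimeDomainData(waveform);

  // Frequency-domain bins: a ready-made spectral analysis.
  const spectrum = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(spectrum);
}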

enable webGL automatically?

I am using the glfx.js plugin for adjusting an image's hue/saturation, but many browsers don't support WebGL.
Is there a way to automatically enable WebGL in the browser if a website uses it?
I know that you need to enable it manually in Safari, and there is a plugin for IE.
Or is there any way to know at page load time that WebGL is disabled?
There isn't a way to automatically enable WebGL in a browser; it's in the hands of the browser vendor to enable it by default. Safari and IE are the two latecomers in that area: WebGL is coming in IE11 (source), and I would imagine the next release of Safari will have it on by default (it's been over a year since the last release).
You can however detect if WebGL is enabled.
As you're probably aware, glfx.js spits out an error message if WebGL isn't enabled. It sounds like you're trying to take some other action if that's the case (no WebGL support).
Looking at the glfx.js documentation, they provide this example:
window.onload = function() {
  // Try to create a WebGL canvas (will fail if WebGL isn't supported).
  try {
    var canvas = fx.canvas();
  } catch (e) {
    alert(e);
    // Here is where you want to take some other action,
    // for example redirecting to another page with window.location.href = ...
    return;
  }
}
For those reading this who aren't using glfx.js, the easiest check is !!window.WebGLRenderingContext, but that doesn't always work (some browsers return false positives). Your best bet is to create a canvas element and check for the WebGL context:
function webGLSupport() {
  var canvas = document.createElement('canvas');
  var webgl = false;
  try {
    // Older browsers only expose the prefixed 'experimental-webgl' context.
    webgl = !!(canvas.getContext('webgl') || canvas.getContext('experimental-webgl'));
  } catch (e) {}
  return webgl;
}
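With that helper in place, a page can branch before initializing glfx.js (fallbackToCanvas is a hypothetical stand-in for whatever degraded path you choose):
if (webGLSupport()) {
  var canvas = fx.canvas(); // safe to initialize glfx.js
} else {
  fallbackToCanvas();       // hypothetical fallback, e.g. plain 2D canvas filters
}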
