Puppeteer + Pixelmatch: Comparing screenshots on Mac and TravisCI Linux

I'm trying to implement JS testing by loading pages and taking screenshots of elements with Puppeteer. So far so good: everything works perfectly locally (after I fixed a snag between a normal screen and a retina display), but when I run the same tests on TravisCI I get small text differences that I can't get around. Does anyone have a clue what is going on?
This is how I configure my browser instance:
browser = await puppeteer.launch({
  headless: true,
  args: [
    '--hide-scrollbars',
    '--enable-font-antialiasing',
    '--force-device-scale-factor=1', '--high-dpi-support=1',
    '--no-sandbox', '--disable-setuid-sandbox', // Props for TravisCI
  ]
});
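A minimal sketch of one way to handle the normal-screen vs. retina difference per page (the viewport dimensions are placeholders, and this is not necessarily how the snag above was fixed):
// Fix the viewport and device scale factor so local (retina) and CI runs
// produce screenshots of identical dimensions.
const page = await browser.newPage();
await page.setViewport({
  width: 1280,          // placeholder width
  height: 800,          // placeholder height
  deviceScaleFactor: 1, // force 1x rendering regardless of the host display
});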
And here is how I compare the screenshots:
const compareScreenshots = (fileName) => {
  return new Promise((resolve) => {
    const base = fs.createReadStream(`${BASE_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
    const live = fs.createReadStream(`${WORKING_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
    let filesRead = 0;

    function doneReading() {
      // Wait until both files are read.
      if (++filesRead < 2) {
        return;
      }

      // Do the visual diff.
      const diff = new PNG({width: base.width, height: base.height});
      const mismatchedPixels = pixelmatch(
          base.data, live.data, diff.data, base.width, base.height,
          {threshold: 0.1});
      resolve({
        mismatchedPixels,
        diff,
      });
    }
  });
};
Here is an example of the diff that this is generating:

I had a similar problem. I put in a delay of 400 ms before snapping the screenshot and it seems to have fixed the problem. If you come up with something better I'd love to hear it.
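For reference, a small sketch of that workaround (the 400 ms value and the selector are placeholders; newer Puppeteer versions expose page.waitForTimeout, but a plain Promise-based sleep works everywhere):
// Give fonts and layout a moment to settle before capturing the element.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const element = await page.$('#my-element'); // placeholder selector
await sleep(400);                            // arbitrary settle delay
await element.screenshot({ path: `${WORKING_IMAGES_PATH}/my-element.png` });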

Fonts can be rendered slightly differently on different OSes. This can cause the artifacts along the edges of the text that you are experiencing. You have a few options:
apply a slight Gaussian blur to the images before comparison (or use a CSS blur). This will smooth away the differences along hard edges in the images.
increase the 'threshold' property to make the anti-aliasing filtering less sensitive.
allow a certain number of pixel differences in your comparison by using a percentage of the total image (width * height * percentage_threshold); this number will be influenced by how much text is on the screen at any given point (see the sketch after this list).
use standardized web fonts for all text; this could help get the fonts close to identical, given that you are using the same browser on both systems.
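As a rough illustration of the percentage-based option, here is a minimal sketch built on the compareScreenshots helper above (the 0.5% budget and the 'homepage' name are placeholders to tune per page, and it assumes you are inside an async test):
// Hypothetical tolerance check: fail only when the mismatch exceeds a small
// share of the total pixel count.
const PERCENTAGE_THRESHOLD = 0.005; // 0.5% of the image, an arbitrary budget

const { mismatchedPixels, diff } = await compareScreenshots('homepage');
const allowedMismatch = diff.width * diff.height * PERCENTAGE_THRESHOLD;
if (mismatchedPixels > allowedMismatch) {
  throw new Error(`Too many mismatched pixels: ${mismatchedPixels} (allowed ${allowedMismatch})`);
}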

I had a similar issue and ended up running my snapshot tests "locally" inside a Docker container. I also mounted the project folder, so whenever snapshots had to be updated they were updated inside the container and in the host OS as well.

Related

Missing images depending on scale in html2canvas (html2pdf)

I am trying to make a PDF of a reasonable amount of text and graphs in my HTML using html2pdf. So far so good: the PDF gets made and it looks fine. It is about 20 pages.
However, multiple graphs are missing. Some relevant information:
Most of the missing graphs are at the end of the document. However, the final graph is rendered, so it is not an explicit cut off at the end
The text accompanying the graphs is there, but the graph itself is not
The missing graphs are all of the same type. Other graphs of this type are rendered and look fine. It does not seem to be a problem with the type
If I reduce the scale on the html2canvas configuration to about 0.8, every graph gets rendered (but of course quality is reduced). I'd like the scale to be 2.
The fact that scale influences whether they are rendered or not gives me the idea that something like timing / timeouts is the problem here. A larger scale obviously means a longer rendering time, but it does not seem to wait for it to be done. Or something like that.
Below is the majority of the code that makes the PDF.
The onclone is necessary for the graphs to be rendered correctly. If it is removed, the problem described above still occurs (but the graphs that are rendered are ugly).
const options = {
  filename: 'test.pdf',
  margin: [15, 0, 15, 0],
  image: { type: 'jpeg', quality: 1 },
  html2canvas: {
    scale: 2,
    scrollY: 0,
    onclone: (element) => {
      const svgElements = element.body.querySelectorAll('svg');
      Array.from(svgElements).forEach((item: any) => {
        item.setAttribute('width', item.getBoundingClientRect().width.toString());
        item.setAttribute('height', item.getBoundingClientRect().height.toString());
        item.style.width = null;
        item.style.height = null;
      });
    }
  },
  jsPDF: { orientation: 'portrait', format: 'a4' }
};

setTimeout(() => {
  const pdfElement = document.getElementById('contentToConvert');
  html2pdf().from(pdfElement).set(options).save()
    .catch((err) => this.errorHandlerService.handleError(err));
}, 100);
It sounds like you may be exceeding the maximum canvas size of your browser. This varies by browser (and browser version). Try the demo from here to check your browser's limit. If you can find two browsers with different limits (on my desktop, Safari and Chrome have the same limit, but the max area in Firefox is a bit lower, and much lower in Safari on an iPhone), try pushing your scale down on the one with the larger limit until it succeeds, and then see if that still fails on the one with the lower limit.
There are other limits in your browser (e.g. max heap size) which may come into play. If this is the case, I don't have a good solution for you: it's usually impractical to get clients to reconfigure their browsers (and most of these limits are hard anyway). For obvious reasons, browsers don't allow a website to make arbitrary changes to memory limits. If you are using Node.js, you may have more success in dealing with memory limits. Either way (Node or otherwise), it's sometimes better to send things back to the server when you are pushing the limits of the client.
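If you want to probe the limit yourself without the demo, something along these lines works (a rough sketch; the sizes tested are arbitrary and real limits also depend on total area, not just a single dimension):
// Rough probe: a canvas over the browser's limit silently drops draws (or throws),
// so paint one pixel in the far corner and read it back to see if the size "took".
function canvasSizeSupported(width, height) {
  try {
    const canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext('2d');
    ctx.fillStyle = '#ff0000';
    ctx.fillRect(width - 1, height - 1, 1, 1);
    return ctx.getImageData(width - 1, height - 1, 1, 1).data[3] !== 0;
  } catch (e) {
    return false;
  }
}

console.log(canvasSizeSupported(8192, 8192));   // fine on most desktop browsers
console.log(canvasSizeSupported(40000, 40000)); // likely beyond the limit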

Different predictions if running in Node instead of Browser (using the same model_web - python converted model)

Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model into different formats, including tfjs (model_web).
That website also provides boilerplate for running the model within a browser (a React app)... just like you do - probably it is the same code, I didn't spend enough time on it.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So, not a big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Does tfjs have different implementations of the libraries (tfjs and tfjs-node) that make different use of the same model? I don't think it can be a problem of input because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
two different ways to convert a JPG and make it work with tf.browser.fromPixels()
const fs = require('fs').promises; // promise-based fs so fs.readFile() returns a Promise
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    });
  });
};

const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
And that's the code that uses the loaded model to give predictions - the same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as is, so I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read.
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];

  const result = await graph.executeAsync(batched);

  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();

  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);

  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );

  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20,  // maxNumBoxes
      0.5, // iou_threshold
      0.5  // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();

  // restore previous backend
  tf.setBackend(prevBackend);

  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
Does tfjs have different implementations of the libraries (tfjs and tfjs-node) that make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from the image to the tensor might differ, resulting in different tensors being used for the prediction and thus causing the output to differ.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics to create a browser-like canvas environment that can be used by Node.js. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to use a Node.js buffer directly to create a tensor.
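For example, with @tensorflow/tfjs-node you can skip the canvas/inkjet route entirely and decode a JPEG buffer straight into a tensor (a minimal sketch; the file path is a placeholder and tf.node.decodeImage is assumed to return an int32 tensor of shape [height, width, channels]):
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode the JPEG directly into a tensor; no browser-like canvas is needed.
const buffer = fs.readFileSync('path/to/frame.jpg'); // placeholder path
const imageTensor = tf.node.decodeImage(buffer, 3);  // force 3 channels (RGB)
const batched = imageTensor.expandDims(0);           // add the batch dimension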

Get the pixel screen size in Spark AR studio (for Facebook)

I am starting to work with Spark AR Studio and I am looking to get the screen size in pixels to compare against the coordinates obtained from gesture.location on Tap.
TouchGestures.onTap().subscribe((gesture) => {
  // ! The location is always specified in the screen coordinates
  Diagnostics.log(`Screen touch in pixel = { x: ${gesture.location.x}, y: ${gesture.location.y} }`);
  // ????
});
The gesture.location is in pixels (screen coordinates) and I would like to compare it with the screen size to determine which side of the screen is touched.
Maybe using the Camera.focalPlane could be a good idea...
Update
I tried two new things to get the screen size:
const CameraInfo = require('CameraInfo');
Diagnostics.log(CameraInfo.previewSize.height.pinLastValue());
const focalPlane = Scene.root.find('Camera').focalPlane;
Diagnostics.log(focalPlane.height.pinLastValue());
But both return 0
This answer might be a bit late, but it might be a nice addition for people looking for a solution where the values can easily be used in a script. I came across this code (not mine, I forgot to save the link):
var screen_height = 0;
Scene.root.find('screenCanvas').bounds.height.monitor({fireOnInitialValue: true}).subscribe(function (height) {
  screen_height = height.newValue;
});

var screen_width = 0;
Scene.root.find('screenCanvas').bounds.width.monitor({fireOnInitialValue: true}).subscribe(function (width) {
  screen_width = width.newValue;
});
This worked well for me since I couldn't figure out how to use Diagnostics.log with the data instead of Diagnostics.watch.
Finally, using the Device Info patch in the Patch Editor and passing the values to the script works!
First, add a variable "to script" in the editor:
Then, create that in patch editor:
And you can grab that with this script:
const Patches = require('Patches');
const screenSize = Patches.getPoint2DValue('screenSize');
My mistake was to use Diagnostics.log() to check whether my variable worked.
Instead, use Diagnostics.watch():
Diagnostics.watch('screenSize.x', screenSize.x);
Diagnostics.watch('screenSize.y', screenSize.y);
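With the screen size available in the script, the original left/right check could then look roughly like this (a sketch only; it assumes Patches.getPoint2DValue returns the Point2DSignal synchronously, which depends on the SDK version, and that gesture.location holds plain pixel numbers as the log output above suggests):
const Patches = require('Patches');
const TouchGestures = require('TouchGestures');
const Diagnostics = require('Diagnostics');

const screenSize = Patches.getPoint2DValue('screenSize');

TouchGestures.onTap().subscribe((gesture) => {
  // Compare the tap's x coordinate against half of the screen width.
  const half = screenSize.x.pinLastValue() / 2;
  const side = gesture.location.x < half ? 'left' : 'right';
  Diagnostics.log(`Tapped on the ${side} side of the screen`);
});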
Screen size is available via the Device Info patch output, after dragging it to patch editor from the Scene section.
Now in the open beta (as of this post) you can drag Device from the scene sidebar into the patch editor to get a patch that outputs screen size, screen scale, and safe area insets, as well as the self Object.
The Device patch
The device size can be used in scripts via CameraInfo.previewSize.width and CameraInfo.previewSize.height respectively. For instance, if you wanted to get 2D points representing the min/max points on the screen, this would do the trick:
const CameraInfo = require('CameraInfo')
const Reactive = require('Reactive')
const min = Reactive.point2d(
  Reactive.val(0),
  Reactive.val(0)
)
const max = Reactive.point2d(
  CameraInfo.previewSize.width,
  CameraInfo.previewSize.height
)
(The point I want to emphasize is that CameraInfo.previewSize.width and CameraInfo.previewSize.height are ScalarSignals, not number literals.)
Edit: Here's a link to the documentation: https://sparkar.facebook.com/ar-studio/learn/documentation/reference/classes/camerainfomodule

How to get time of page's first paint

While it is easy enough to get firstPaint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the paint timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
const fcp = paintTimings.find(({ name }) => name === "first-contentful-paint");
console.log(`First contentful paint at ${fcp.startTime}ms`);
Recently, new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) from JavaScript.
I made a JavaScript library called Perfume.js which does this with a few lines of code:
const perfume = new Perfume({
  firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is that FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a canvas element.
if (typeof PerformanceObserver !== 'undefined') { // if the browser supports it
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.entryType);
      console.log(entry.startTime);
      console.log(entry.duration);
    }
  });
  observer.observe({entryTypes: ['paint']});
}
This will help; just paste this code at the start of your JS app, before everything else.

PIXI.js AnimatedSprite lag on first play

I need some help understanding what the best practice is for creating a PIXI.extras.AnimatedSprite from spritesheet(s). I am currently loading 3 medium-sized spritesheets for one animation, created by TexturePacker; I collect all the frames and then play. However, the first time the animation plays it is very unsmooth and almost jumps immediately to the end; from then on it plays really smoothly. I have read a bit and I can see 2 possible causes. 1) The lag might be caused by the time taken to upload the textures to the GPU. There is a PIXI plugin called prepare (renderer.plugins.prepare.upload) which should let me upload them before playing and possibly smooth out the initial loop. 2) Having an AnimatedSprite built from more than one texture/image is not ideal and could be the cause.
Question 1: Should I use the PIXI Prepare plugin, will this help, and if so how do I actually use it. Documentation for it is incredibly limited.
Question 2: Is having frames across multiple textures a bad idea, could it be the cause & why?
A summarised example of what I am doing:
function loadSpriteSheet(callback){
let loader = new PIXI.loaders.Loader()
loader.add('http://mysite.fake/sprite1.json')
loader.add('http://mysite.fake/sprite2.json')
loader.add('http://mysite.fake/sprite3.json')
loader.once('complete', callback)
loader.load()
}
loadSpriteSheet(function(resource){
// helper function to get all the frames from multiple textures
let frameArray = getFrameFromResource(resource)
let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
stage.addChild(animSprite)
animSprite.play()
})
Question 1
So I have found a solution, possibly not the solution, but it works well for me. The prepare plugin was the right idea but never worked for me. WebGL needs the entire texture(s) uploaded, not the individual frames. The way textures are uploaded to the GPU is via renderer.bindTexture(texture). When the PIXI loader receives a sprite atlas URL, e.g. my_sprites.json, it automatically downloads the image file and names it my_sprites.json_image in the loader's resources. So you need to grab that, make a texture from it, and upload it to the GPU. Here is the updated code:
function loadSpriteSheet(callback) {
  let loader = new PIXI.loaders.Loader()
  loader.add('http://mysite.fake/sprite1.json')
  loader.add('http://mysite.fake/sprite2.json')
  loader.add('http://mysite.fake/sprite3.json')
  loader.once('complete', callback)
  loader.load()
}

function uploadToGPU(resourceName) {
  // The loader stores the atlas image under '<atlas name>_image'.
  resourceName = resourceName + '_image'
  let texture = new PIXI.Texture.fromImage(resourceName)
  this.renderer.bindTexture(texture)
}

loadSpriteSheet(function(resource) {
  uploadToGPU('http://mysite.fake/sprite1.json')
  uploadToGPU('http://mysite.fake/sprite2.json')
  uploadToGPU('http://mysite.fake/sprite3.json')
  // helper function to get all the frames from multiple textures
  let frameArray = getFrameFromResource(resource)
  let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
  this.stage.addChild(animSprite)
  animSprite.play()
})
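For reference, the prepare plugin the question asks about is typically invoked like this in PIXI v4 (a sketch under the assumption of a standard renderer instance; as noted above it did not pan out in my case, but it may still be the cleaner route for others):
// Upload everything reachable from the sprite (its textures) to the GPU,
// then start playback only once the upload callback fires.
let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
stage.addChild(animSprite)
renderer.plugins.prepare.upload(animSprite, function () {
  animSprite.play()
})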
Question 2
I never really discovered an answer, but the solution to Question 1 has made my animations perfectly smooth, so in my case I see no performance issues.
