While it is easy enough to get firstPaint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the paint timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
// Note: the entry only exists once the paint has actually happened
const fcp = paintTimings.find(({ name }) => name === 'first-contentful-paint');
console.log(`First contentful paint at ${fcp.startTime}ms`);
Recently new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) by Javascript.
I made a JavaScript library called Perfume.js, which reports it with just a few lines of code:
const perfume = new Perfume({
    firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is FP marks the point
when the browser renders anything that is visually different from what
was on the screen prior to navigation. By contrast, FCP is the point
when the browser renders the first bit of content from the DOM, which
may be text, an image, SVG, or even a canvas element.
// Only run if the browser supports PerformanceObserver
if (typeof PerformanceObserver !== 'undefined') {
    const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
            console.log(entry.entryType);
            console.log(entry.startTime);
            console.log(entry.duration);
        }
    });
    observer.observe({entryTypes: ['paint']});
}
This should help; just paste this code at the start of your JS app, before everything else.
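If you only care about the two named paint metrics, the same observer can pick them out by entry name:

if (typeof PerformanceObserver !== 'undefined') {
    const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
            // Paint entries are named 'first-paint' and 'first-contentful-paint'
            if (entry.name === 'first-paint') {
                console.log(`First Paint at ${entry.startTime}ms`);
            } else if (entry.name === 'first-contentful-paint') {
                console.log(`First Contentful Paint at ${entry.startTime}ms`);
            }
        }
    });
    observer.observe({entryTypes: ['paint']});
}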
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train a model and convert it to different formats, including tfjs (model_web).
That website also provides boilerplates for running the model within a browser (a React app)... just like you do - it is probably the same code, I didn't spend enough time checking.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The bounding box it gives is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So, no big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I still get the "Hello there, use tfjs-node to gain speed" banner when I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Does tfjs have different implementations of the libraries (tfjs and tfjs-node) that make different use of the same model? I don't think it can be a problem of input because - after a long search and fight - I figured out two ways to feed the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
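For illustration, a minimal sketch of how that load call might sit in a Node entry point (the path is a placeholder, assuming @tensorflow/tfjs-node is installed):

// Sketch only: load the converted model_web graph under Node
const tf = require('@tensorflow/tfjs-node'); // instead of @tensorflow/tfjs

const loadModel = async () => {
    // tfjs-node registers a file:// handler, so a local path works here
    return tf.loadGraphModel('file://path/to/model_web/model.json');
};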
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises; // promise-based fs, needed for fs.readFile(...).then(...)
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

// Option 1: decode the JPG with inkjet
const decodeJPGInkjet = (file) => {
    return new Promise((rs, rj) => {
        fs.readFile(file).then((buffer) => {
            inkjet.decode(buffer, (err, decoded) => {
                if (err) {
                    rj(err);
                } else {
                    rs(decoded);
                }
            });
        });
    });
};

// Option 2: decode the JPG via node-canvas
const decodeJPGCanvas = (file) => {
    return loadImage(file).then((image) => {
        const canvas = createCanvas(image.width, image.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(image, 0, 0, image.width, image.height);
        const data = ctx.getImageData(0, 0, image.width, image.height);
        return {data: new Uint8Array(data.data), width: data.width, height: data.height};
    });
};
And that's the code that uses the loaded model to give predictions - the same code for Node and the browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as is, so I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read:
const runObjectDetectionPrediction = async (graph, labels, input) => {
    const batched = tf.tidy(() => {
        const img = tf.browser.fromPixels(input);
        // Reshape to a single-element batch so we can pass it to executeAsync.
        return img.expandDims(0);
    });
    const height = batched.shape[1];
    const width = batched.shape[2];
    const result = await graph.executeAsync(batched);
    const scores = result[0].dataSync();
    const boxes = result[1].dataSync();
    // clean the webgl tensors
    batched.dispose();
    tf.dispose(result);
    const [maxScores, classes] = calculateMaxScores(
        scores,
        result[0].shape[1],
        result[0].shape[2]
    );
    const prevBackend = tf.getBackend();
    // run post process in cpu
    tf.setBackend("cpu");
    const indexTensor = tf.tidy(() => {
        const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
        return tf.image.nonMaxSuppression(
            boxes2,
            maxScores,
            20, // maxNumBoxes
            0.5, // iou_threshold
            0.5 // score_threshold
        );
    });
    const indexes = indexTensor.dataSync();
    indexTensor.dispose();
    // restore previous backend
    tf.setBackend(prevBackend);
    return buildDetectedObjects(
        width,
        height,
        boxes,
        maxScores,
        indexes,
        classes,
        labels
    );
};
Do the different implementations of the libraries (tfjs and tfjs-node) make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from the image to the tensor might differ, resulting in different tensors being used for the prediction and thus causing the output to be different.
"I figured out two ways to feed the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)"
The canvas package uses the system graphics to create a browser-like canvas environment that can be used by Node.js. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to create a tensor directly from a Node.js buffer.
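For instance, a minimal sketch using tfjs-node's tf.node.decodeImage to build the input tensor straight from a file buffer, skipping the canvas/inkjet step (the path is a placeholder, and your model may still expect its own preprocessing):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode the JPG directly into an int32 tensor of shape [height, width, 3]
const imageBuffer = fs.readFileSync('path/to/frame.jpg');
const imgTensor = tf.node.decodeImage(imageBuffer, 3);

// Add the batch dimension, mirroring what runObjectDetectionPrediction does
const batched = imgTensor.expandDims(0);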
I used https://github.com/Autodesk-Forge/viewer-react-express-headless as a starting point for my Forge React application and I modified viewer = new Autodesk.Viewing.Viewer3D(viewerElement, {}); to viewer = new Autodesk.Viewing.Private.GuiViewer3D(viewerElement, {}); to change it back from a headless to a classic viewer.
I can load my model, but it appears without edges. When I go to Settings -> Performance -> Display edges it is off by default, and when I try to turn it back on the edges stay invisible.
From my non-working viewer:
When I try the same operation with the same model loaded on Autodesk Viewer it works as expected and I can toggle the visibility of the edges.
From the Autodesk Viewer
I found another seemingly related question on stackoverflow, but I tried viewer.js?v=v4.2, viewer.js?v=v5.0 and viewer.js?v=v6.3.1 and I still have the invisible edges issue.
I also posted a GitHub issue.
Thank you for your help.
Alexis
OK, if you are creating the viewer instance via Autodesk.Viewing.Private.GuiViewer3D directly, rather than through Autodesk.Viewing.ViewingApplication, then there is a configuration parameter you will need to apply when initializing the Forge viewer so that the edges will appear.
To fix it, an extra option isAEC: true must be passed into the modelOptions in your code; see below:
var modelOptions = {
    placementTransform: mat,
    globalOffset: {x: 0, y: 0, z: 0},
    sharedPropertyDbPath: doc.getPropertyDbPath(),
    isAEC: true // !<<< Here is the missing line
};

viewer.loadModel(svfUrl, modelOptions, onLoadModelSuccess, onLoadModelError);
I'm trying to implement JS testing by loading pages and taking screenshots of the elements with Puppeteer. So far so good: everything works perfectly locally (after I fixed a snag between a normal screen and a Retina display), but when I run the same tests on TravisCI I get small text differences that I can't get around. Does anyone have a clue what is going on?
This is how I configure my browser instance:
browser = await puppeteer.launch({
    headless: true,
    args: [
        '--hide-scrollbars',
        '--enable-font-antialiasing',
        '--force-device-scale-factor=1', '--high-dpi-support=1',
        '--no-sandbox', '--disable-setuid-sandbox', // Props for TravisCI
    ]
});
And here is how I compare the screenshots:
const fs = require('fs');
const {PNG} = require('pngjs');
const pixelmatch = require('pixelmatch');

// BASE_IMAGES_PATH and WORKING_IMAGES_PATH are defined elsewhere in the test setup
const compareScreenshots = (fileName) => {
    return new Promise((resolve) => {
        const base = fs.createReadStream(`${BASE_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
        const live = fs.createReadStream(`${WORKING_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
        let filesRead = 0;

        function doneReading() {
            // Wait until both files are read.
            if (++filesRead < 2) {
                return;
            }
            // Do the visual diff.
            const diff = new PNG({width: base.width, height: base.height});
            const mismatchedPixels = pixelmatch(
                base.data, live.data, diff.data, base.width, base.height,
                {threshold: 0.1});
            resolve({
                mismatchedPixels,
                diff,
            });
        }
    });
};
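For reference, a hypothetical call site, assuming a Jest/Mocha-style runner and a 'header' baseline image (both are placeholders):

it('header renders without visual regressions', async () => {
    // 'header' is a placeholder screenshot file name
    const {mismatchedPixels} = await compareScreenshots('header');
    expect(mismatchedPixels).toBe(0);
});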
Here is an example of the diff that this is generating:
I had a similar problem. I put in a delay of 400ms before snapping the screenshot and it seems to have fixed the problem. If you come up with something better I'd love to hear it.
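For example (page, the selector, and the path constant are placeholders from a setup like the one above):

await page.goto(url, {waitUntil: 'networkidle0'});
const element = await page.$('.header'); // placeholder selector
// Give fonts and layout ~400ms to settle before snapping
await new Promise((resolve) => setTimeout(resolve, 400));
await element.screenshot({path: `${WORKING_IMAGES_PATH}/header.png`});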
Fonts can be rendered slightly differently on different OSes. This can cause the artifacts along the edges of the text that you are experiencing. You have a few options:
- Apply a slight Gaussian blur to the images before comparison (or use CSS blur). This will smooth away the differences between hard edges in the images.
- Increase the 'threshold' property to make the anti-aliasing filtering less sensitive.
- Allow a certain number of pixel differences in your comparison by using a percentage of the total image (width * height * percentage_threshold); see the sketch after this list. This number will be influenced by how much text is on the screen at any given point.
- Use standardized web fonts for all text - this could help get the fonts close to identical given that you are using the same browser on both systems.
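A rough sketch of the percentage option, reusing the compareScreenshots helper above (the 0.5% figure is just an assumption to tune):

const checkScreenshot = async (fileName) => {
    const percentageThreshold = 0.005; // allow up to 0.5% of pixels to differ
    const {mismatchedPixels, diff} = await compareScreenshots(fileName);
    const allowedMismatch = diff.width * diff.height * percentageThreshold;
    if (mismatchedPixels > allowedMismatch) {
        throw new Error(`${fileName}: too many mismatched pixels (${mismatchedPixels} > ${allowedMismatch})`);
    }
};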
I had a similar issue, and I ended up running my snapshot tests "locally" inside a Docker container. I also mounted the project folder, so whenever snapshots had to be updated they were updated inside the container and also in the host OS.
I need some help understanding what the best practice is for creating a PIXI.extras.AnimatedSprite from spritesheet(s). I am currently loading 3 medium-sized spritesheets for one animation, created by TexturePacker; I collect all the frames and then play. However, the first time the animation plays it is very unsmooth and almost jumps immediately to the end; from then on it plays really smoothly. I have read a bit and I can see 2 possible causes.
1) The lag might be caused by the time taken to upload the textures to the GPU. There is a PIXI prepare plugin (renderer.plugins.prepare.upload) which should enable me to upload them before playing and possibly smooth out the initial loop.
2) Having an AnimatedSprite built from more than one texture/image is not ideal and could be the cause.
Question 1: Should I use the PIXI prepare plugin? Will this help, and if so, how do I actually use it? Documentation for it is incredibly limited.
Question 2: Is having frames across multiple textures a bad idea? Could it be the cause, and why?
A summarised example of what I am doing:
function loadSpriteSheet(callback){
    let loader = new PIXI.loaders.Loader()
    loader.add('http://mysite.fake/sprite1.json')
    loader.add('http://mysite.fake/sprite2.json')
    loader.add('http://mysite.fake/sprite3.json')
    loader.once('complete', callback)
    loader.load()
}

loadSpriteSheet(function(resource){
    // helper function to get all the frames from multiple textures
    let frameArray = getFrameFromResource(resource)
    let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
    stage.addChild(animSprite)
    animSprite.play()
})
Question 1
So I have found a solution - possibly not the solution, but it works well for me. The prepare plugin was the right solution but never worked for me: WebGL needs the entire texture(s) uploaded, not the individual frames. The way textures are uploaded to the GPU is via renderer.bindTexture(texture). When the PIXI loader receives a sprite atlas URL, e.g. my_sprites.json, it automatically downloads the image file and names it my_sprites.json_image in the loader's resources. So you need to grab that, make a texture and upload it to the GPU. Here is the updated code:
function loadSpriteSheet(callback){
    let loader = new PIXI.loaders.Loader()
    loader.add('http://mysite.fake/sprite1.json')
    loader.add('http://mysite.fake/sprite2.json')
    loader.add('http://mysite.fake/sprite3.json')
    loader.once('complete', callback)
    loader.load()
}

function uploadToGPU(resourceName){
    // The loader stores the atlas image under '<atlas url>_image'
    resourceName = resourceName + '_image'
    let texture = PIXI.Texture.fromImage(resourceName)
    this.renderer.bindTexture(texture)
}

loadSpriteSheet(function(resource){
    uploadToGPU('http://mysite.fake/sprite1.json')
    uploadToGPU('http://mysite.fake/sprite2.json')
    uploadToGPU('http://mysite.fake/sprite3.json')
    // helper function to get all the frames from multiple textures
    let frameArray = getFrameFromResource(resource)
    let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
    this.stage.addChild(animSprite)
    animSprite.play()
})
Question 2
I never really discovered an answer, but the solution to Question 1 has made my animations perfectly smooth, so in my case I see no performance issues.
I am making an HTML5 game with Phaser. I want to take the player's name as input and save it for later use. How can I do that with Phaser?
I've never seen a Flash game with HTML overlaid to handle text. I expect that as JavaScript game engines grow, they will ultimately find they also need to handle rendering text inside the canvas. The good news is that the work has already begun: https://github.com/goldfire/CanvasInput
<script src="CanvasInput.min.js"></script>
<script>
    var bmd = this.add.bitmapData(400, 50);
    var myInput = this.game.add.sprite(15, 15, bmd);

    myInput.canvasInput = new CanvasInput({
        canvas: bmd.canvas,
    });

    myInput.inputEnabled = true;
    myInput.input.useHandCursor = true;
</script>
A Phaser and CanvasInput demo can be seen here, http://codepen.io/jdnichollsc/pen/waVMdB?editors=001
One downside is that you must use canvas rendering in Phaser using Phaser.CANVAS and not Phaser.AUTO.
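For example, forcing the canvas renderer when creating the game (a Phaser 2-style constructor; the element id and state functions are placeholders):

// Phaser.CANVAS instead of Phaser.AUTO so CanvasInput can draw into the same canvas
var game = new Phaser.Game(800, 600, Phaser.CANVAS, 'game-container', {
    preload: preload,
    create: create,
    update: update
});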
UPDATE: someone made this too https://github.com/orange-games/phaser-input
There are a lot of options, but what about the window prompt?
var player = prompt("Please enter your name", "name");
Then you can save it via local storage:
localStorage.setItem("playerName", player);
If you want to use it later:
localStorage.getItem("playerName");
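Putting it together, a small sketch that only prompts when no name has been stored yet:

// Reuse a previously saved name, otherwise ask once and persist it
var playerName = localStorage.getItem("playerName");
if (!playerName) {
    playerName = prompt("Please enter your name", "name");
    localStorage.setItem("playerName", playerName);
}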
In my game I used a modal (http://getbootstrap.com/javascript/#modals) to let the user input a name. I also implemented a leaderboard. To save users and scores (+ times) I integrated with Parse (https://www.parse.com/).
Working example: http://majery.pl/memo/
Consider using the CanvasInput library. It creates native canvas-based inputs and is very easy to utilize. You add it to your create method and continue from there. Here's an example of it being used with Phaser.