takePicture = async function() {
    if (this.camera) {
        const options = { quality: 0.5, base64: true, pauseAfterCapture: true };
        const data = await this.camera.takePictureAsync(options);
        this.setState({ path: data.uri });
    }
}
takePicture is my function for capturing the image. When I don't use pauseAfterCapture in the options, it takes about 3 seconds for the image to be captured, and the camera preview stays active during those 3 seconds. When I do use pauseAfterCapture, the capture takes around 1.5 seconds, with the preview still active during that time.
I've also tried skipProcessing, which gives me a fast capture, but I don't want to lose the other information such as base64, width, quality, mirrorImage, exif, etc., as described on the react-native-camera GitHub page.
Does this have something to do with takePictureAsync taking time to resolve? If so, how do I deal with it?
Also, if there is no way around the delay, how can I show an ActivityIndicator while the image is being captured?
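Something along these lines is what I have in mind, in case that's the right direction (isCapturing is a hypothetical state flag, not something I have working yet):
takePicture = async function() {
    if (this.camera) {
        this.setState({ isCapturing: true }); // hypothetical flag driving the spinner
        const options = { quality: 0.5, base64: true, pauseAfterCapture: true };
        const data = await this.camera.takePictureAsync(options);
        this.setState({ path: data.uri, isCapturing: false });
    }
}

// ...and in render():
// {this.state.isCapturing && <ActivityIndicator size="large" />}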
P.S. - I know this question has been asked a lot, but I'm unable to find any solution. I hope we can all come up with a solution so that it helps other people in the future.
I'm trying to implement JS testing by loading pages and taking screenshots of elements with Puppeteer. So far so good: everything works perfectly locally (after I fixed a snag between a normal screen and a Retina display), but when I run the same tests on TravisCI I get small text differences that I can't get around. Does anyone have any clue what is going on?
This is how I configure my browser instance:
browser = await puppeteer.launch({
    headless: true,
    args: [
        '--hide-scrollbars',
        '--enable-font-antialiasing',
        '--force-device-scale-factor=1', '--high-dpi-support=1',
        '--no-sandbox', '--disable-setuid-sandbox', // Props for TravisCI
    ]
});
And here is how I compare the screenshots:
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const compareScreenshots = (fileName) => {
    return new Promise((resolve) => {
        const base = fs.createReadStream(`${BASE_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
        const live = fs.createReadStream(`${WORKING_IMAGES_PATH}/${fileName}.png`).pipe(new PNG()).on('parsed', doneReading);
        let filesRead = 0;
        function doneReading() {
            // Wait until both files are read.
            if (++filesRead < 2) {
                return;
            }
            // Do the visual diff.
            const diff = new PNG({ width: base.width, height: base.height });
            const mismatchedPixels = pixelmatch(
                base.data, live.data, diff.data, base.width, base.height,
                { threshold: 0.1 });
            resolve({
                mismatchedPixels,
                diff,
            });
        }
    });
};
Here is an example of the diff that this is generating:
I had a similar problem. I put in a delay of 400 ms before snapping the screenshot and it seems to have fixed it. If you come up with something better, I'd love to hear it.
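For reference, the delay was nothing fancier than something like this right before taking the screenshot (page is the Puppeteer page, and 400 ms is an arbitrary value that happened to work for me):
// Give fonts/rendering a moment to settle before capturing (400 ms is arbitrary).
await new Promise((resolve) => setTimeout(resolve, 400));
await page.screenshot({ path: `${WORKING_IMAGES_PATH}/${fileName}.png` });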
Fonts can be rendered slightly differently on different OSes. This can cause the artifacts along the edges of the text that you are experiencing. You have a few options:
- Apply a slight Gaussian blur to the images before comparison (or use a CSS blur). This will smooth away the differences between hard edges in the images.
- Increase the 'threshold' property to make the anti-aliasing filtering less sensitive.
- Allow a certain number of pixel differences in your comparison by using a percentage of the total image (width * height * percentage_threshold); see the sketch after this list. This number will be influenced by how much text is on the screen at any given point.
- Use standardized web fonts for all text - this could help get the fonts close to identical, given that you are using the same browser on both systems.
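As a sketch of the percentage-based option, reusing the compareScreenshots helper from the question (the 0.1% allowance is just an example value):
// Inside an async test: allow a small fraction of mismatched pixels instead of an exact match.
const { mismatchedPixels, diff } = await compareScreenshots(fileName);
const allowedMismatch = diff.width * diff.height * 0.001; // 0.1% of all pixels, as an example
if (mismatchedPixels > allowedMismatch) {
    throw new Error(`Screenshot diff too large: ${mismatchedPixels} mismatched pixels`);
}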
I had a similar issue and I ended up running my snapshot tests "locally" inside a Docker container. I also mounted the project folder, so whenever snapshots had to be updated they were updated inside the container and also on the host OS.
While it is easy enough to get firstPaint times from dev tools, is there a way to get the timing from JS?
Yes, this is part of the paint timing API.
You probably want the timing for first-contentful-paint, which you can get using:
const paintTimings = performance.getEntriesByType('paint');
const fmp = paintTimings.find(({ name }) => name === "first-contentful-paint");
console.log(`First contentful paint at ${fmp.startTime}ms`);
Recently, new browser APIs like PerformanceObserver and PerformancePaintTiming have made it easier to retrieve First Contentful Paint (FCP) from JavaScript.
I made a JavaScript library called Perfume.js which does this with a few lines of code:
const perfume = new Perfume({
    firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
I realize you asked about First Paint (FP) but would suggest using First Contentful Paint (FCP) instead.
The primary difference between the two metrics is that FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a canvas element.
if (typeof PerformanceObserver !== 'undefined') { // only if the browser supports it
    const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
            console.log(entry.entryType);
            console.log(entry.startTime);
            console.log(entry.duration);
        }
    });
    observer.observe({ entryTypes: ['paint'] });
}
This will help you; just paste this code at the start of your JS app, before everything else.
I need some help understanding what the best practice is for creating a PIXI.extras.AnimatedSprite from spritesheet(s). I am currently loading 3 medium-sized spritesheets for one animation, created by TexturePacker; I collect all the frames and then play. However, the first time the animation plays it is very unsmooth and almost jumps immediately to the end; from then on it plays really smoothly. I have read a bit and I can see two possible causes:
1) The lag might be caused by the time taken to upload the textures to the GPU. There is a PIXI plugin called prepare (renderer.plugins.prepare.upload) which should enable me to upload them before playing and possibly smooth out the initial loop.
2) Having an AnimatedSprite built from more than one texture/image is not ideal and could be the cause.
Question 1: Should I use the PIXI Prepare plugin? Will it help, and if so, how do I actually use it? Documentation for it is incredibly limited.
Question 2: Is having frames across multiple textures a bad idea? Could it be the cause, and why?
A summarised example of what I am doing:
function loadSpriteSheet(callback) {
    let loader = new PIXI.loaders.Loader()
    loader.add('http://mysite.fake/sprite1.json')
    loader.add('http://mysite.fake/sprite2.json')
    loader.add('http://mysite.fake/sprite3.json')
    loader.once('complete', callback)
    loader.load()
}
loadSpriteSheet(function(resource) {
    // helper function to get all the frames from multiple textures
    let frameArray = getFrameFromResource(resource)
    let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
    stage.addChild(animSprite)
    animSprite.play()
})
Question 1
So I have found a solution; possibly not the solution, but it works well for me. The prepare plugin was the right idea, but it never worked for me: WebGL needs the entire texture(s) uploaded, not the individual frames. The way textures are uploaded to the GPU is via renderer.bindTexture(texture). When the PIXI loader receives a sprite atlas URL, e.g. my_sprites.json, it automatically downloads the image file and names it my_sprites.json_image in the loader's resources. So you need to grab that, make a texture from it and upload it to the GPU. Here is the updated code:
let loader = new PIXI.loaders.Loader()
loader.add('http://mysite.fake/sprite1.json')
loader.add('http://mysite.fake/sprite2.json')
loader.add('http://mysite.fake/sprite3.json')
loader.once('complete', callback)
loader.load()

function uploadToGPU(resourceName) {
    // the loader stores the atlas image under '<atlas url>_image'
    resourceName = resourceName + '_image'
    let texture = PIXI.Texture.fromImage(resourceName)
    this.renderer.bindTexture(texture)
}
loadSpriteSheet(function(resource) {
    uploadToGPU('http://mysite.fake/sprite1.json')
    uploadToGPU('http://mysite.fake/sprite2.json')
    uploadToGPU('http://mysite.fake/sprite3.json')
    // helper function to get all the frames from multiple textures
    let frameArray = getFrameFromResource(resource)
    let animSprite = new PIXI.extras.AnimatedSprite(frameArray)
    this.stage.addChild(animSprite)
    animSprite.play()
})
Question 2
I never really discovered an answer, but the solution to Question 1 has made my animations perfectly smooth, so in my case I see no performance issues.
see the example here
I guess it depends on your machine, but for me, after the first song the framerate just drops like crazy. I make sure there are no more sprites than necessary (4: 2 images and 2 displacement maps).
Is this a PIXI thing, perhaps WebGL? I'm not sure how to improve it or where to look for better performance.
OK, I found the issue. You are adding displacementTexture to the stage (stage.addChild(displacementTexture)) again and again and never actually removing it, so your totalSpritesOnStage count does not work correctly.
How about adding something like this:
if (stage.children.length > 4) {
    // let's destroy the sprite now
    stage.removeChildren(4);
}
At a quick look it seems to work with that too, though I didn't check the functionality very thoroughly.
Also, this bothered me personally, as the sounds were downloaded again and again :)
function preload(song) {
    console.log('preloading song: ' + currentSong);
    console.log(song.filename);
    if (allSounds[song]) {
        sound = allSounds[song];
        sound.setVolume(volume);
        sound.play();
        return;
    }
    allSounds[song] = sound = new p5.SoundFile('songs/' + song.filename,
        onMusicLoaded,
        h.onError
    );
    // The volume is reset (to 1) when a new song is loaded, so we force it
    sound.setVolume(volume);
}
I've built this HTML5 video player where I load the video into a canvas to manipulate it and then draw it back onto another canvas to display it. The video starts out quite slow and the frame rate only gets worse each time it is played. All I am currently manipulating is the color value when the video is paused, but eventually I will be doing real-time manipulation throughout videos that will be posted in the future.
I used the tutorial below to learn this trick: https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may possibly be interference coming from elsewhere, so feel free to check the source code at the link at the bottom.
var v = document.getElementById('video');
var color = "#DA7AC1";
var processes = {
    timerCallback: function() {
        if (this.v2.paused || this.v2.ended) {
            return;
        }
        this.ctxIn.drawImage(this.v2, 0, 0, this.width, this.height);
        this.pixelScan();
        var self = this;
        setTimeout(function() {
            self.timerCallback();
        }, 0);
    },
    doLoad: function() {
        this.v2 = document.getElementById("video");
        this.cIn = document.getElementById("cIn");
        this.ctxIn = this.cIn.getContext("2d");
        this.cOut = document.getElementById("cOut");
        this.ctxOut = this.cOut.getContext("2d");
        var self = this;
        this.v2.addEventListener("playing", function() {
            self.width = self.v2.videoWidth;
            self.height = self.v2.videoHeight;
            cIn.width = self.v2.videoWidth;
            cIn.height = self.v2.videoHeight;
            cOut.width = self.v2.videoWidth;
            cOut.height = self.v2.videoHeight;
            self.timerCallback();
        }, false);
    },
    pixelScan: function() {
        var frame = this.ctxIn.getImageData(0, 0, this.width, this.height);
        for (var i = 0; i < frame.data.length; i += 4) {
            // standard luma weights for converting RGB to grayscale
            var grayscale = frame.data[i] * .3 + frame.data[i + 1] * .59 + frame.data[i + 2] * .11;
            frame.data[i] = grayscale;
            frame.data[i + 1] = grayscale;
            frame.data[i + 2] = grayscale;
        }
        this.ctxOut.putImageData(frame, 0, 0);
        return;
    }
}
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try to adjust your timer, avoiding 0 as the timeout value:
setTimeout(function() {
    self.timerCallback();
}, 34);
34 ms is plenty, as video frame rate is typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30. If you use 0 you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34 ms you end up processing the same frame twice or more, which of course is unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to keep 34 ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JS to process in real time (for a typical consumer computer). Try to reduce the video size so a normally spec'ed computer will be able to process the data.
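If re-encoding the video at a lower resolution isn't practical, a related workaround (my addition, not part of the point above) is to shrink the working canvases so the per-frame pixel loop touches far fewer pixels, e.g. in the "playing" handler from the question (the 0.5 factor is just an example):
// Process at half resolution: drawImage in timerCallback scales the frame down,
// so getImageData/putImageData in pixelScan walk roughly a quarter of the pixels.
var scale = 0.5; // arbitrary example factor
self.width = self.v2.videoWidth * scale;
self.height = self.v2.videoHeight * scale;
self.cIn.width = self.width;
self.cIn.height = self.height;
self.cOut.width = self.width;
self.cOut.height = self.height;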
Reason 3 (in part)
You don't need two on-screen canvases or even an on-screen video. Try to create these tags dynamically without inserting them into the DOM. Use a single on-screen canvas and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a toggle to reduce the FPS to, for example, 30, as you don't need to process each frame twice (ref. the 30 FPS video frame rate); see the sketch below.
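As a rough sketch of what that could look like, reusing the processes object, v2 video and pixelScan from the question (the every-other-frame toggle is one simple way to approximate 30 FPS on a 60 Hz display; treat it as illustrative, not a drop-in):
// Drive the processing loop with requestAnimationFrame instead of setTimeout.
// skipFrame drops every other callback, approximating 30 FPS on a 60 Hz display.
var skipFrame = false;
function rafLoop() {
    if (processes.v2.paused || processes.v2.ended) {
        return;
    }
    skipFrame = !skipFrame;
    if (!skipFrame) {
        processes.ctxIn.drawImage(processes.v2, 0, 0, processes.width, processes.height);
        processes.pixelScan();
    }
    requestAnimationFrame(rafLoop);
}
requestAnimationFrame(rafLoop);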
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
    video = document.createElement('video'),
    ctx = canvas.getContext('2d');

video.preload = 'auto';
video.addEventListener('canplay', start, false);

if (video.canPlayType('video/mp4')) {
    video.src = 'videoUrl.mp4';
} else if ...etc.
Then when the video has loaded enough data (on metadata or canplay) you set the off-screen (and on-screen) canvas element to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while it's playing, process its buffer and copy the result to the on-screen canvas you defined before.
You don't have to have an off-screen canvas - I merely mention this because in your original code you used an in and an out canvas, IIRC. You can simply use a single on-screen canvas plus the off-screen video: draw the video frame to the canvas, process it, and put back the processed data. That should work fine in this case too.
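To make that concrete, here is roughly how the pieces could fit together; the body of start and the placeholder processing step are illustrative, not taken from your code:
// start() is the "canplay" handler wired up earlier; it sizes the single visible
// canvas to the video and kicks off the processing loop.
function start() {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    document.body.appendChild(canvas); // only the canvas ends up in the DOM
    video.play();
    requestAnimationFrame(processFrame);
}

function processFrame() {
    if (video.paused || video.ended) {
        return;
    }
    // Draw the current frame of the off-screen video onto the visible canvas,
    // grab its pixels, manipulate them, and put the result straight back.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // (per-pixel manipulation would go here)
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(processFrame);
}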
I ran a profile in Chrome and it points to line 46 as taking up the most CPU.
setTimeout(function() {
    self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issues and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So, if you are having this slow render issue, it is probably an issue with the video encoding. The easiest fix is to get a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a huge learning curve and poor public documentation.