I've set up a split-screen layout, currently with only one canvas on the left side of the screen. I want to display my webcam output on that left-hand side, and my current solution is to match the dimensions of the canvas to the webcam output. However, although this fits the canvas, it also shrinks the width of the output. I was wondering if there's any way to map the capture directly onto the canvas rather than having to match the dimensions and positions on screen. I've looked through the available functions on a createCapture object, but there don't seem to be any that help. Any help would be appreciated. My code is below:
let webcam;   // p5 capture element
let recorder; // MediaRecorder for the webcam stream

function setup() {
  createCanvas(windowWidth / 2, windowHeight);
  webcam = createCapture(VIDEO, function (stream) {
    recorder = new MediaRecorder(stream, {});
  });
  // Resizing the capture element to fit the canvas is what squashes the image.
  webcam.size(windowWidth / 2, windowHeight);
}

function draw() {
  clear();
  image(webcam, 0, 0, windowWidth, windowHeight);
}
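For reference, a minimal sketch (my own addition, not from the original post) of one way to map the capture onto the canvas without distorting it. It assumes the nine-argument image() overload in recent p5.js versions, which takes a destination rectangle followed by a source rectangle, and crops the capture to the canvas's aspect ratio instead of resizing the capture element:

let webcam;

function setup() {
  createCanvas(windowWidth / 2, windowHeight);
  webcam = createCapture(VIDEO);
  webcam.hide(); // we draw the capture ourselves instead of showing its element
}

function draw() {
  clear();
  if (webcam.width === 0) return; // capture metadata not loaded yet
  // Largest source rectangle with the same aspect ratio as the canvas.
  const s = Math.min(webcam.width / width, webcam.height / height);
  const sw = width * s;
  const sh = height * s;
  const sx = (webcam.width - sw) / 2;  // center the crop horizontally
  const sy = (webcam.height - sh) / 2; // and vertically
  image(webcam, 0, 0, width, height, sx, sy, sw, sh);
}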
I am currently trying to figure out how to change the color of a bitmapped image within InDesign using basil.js. Ideally I would like to place the image and use some sort of post-styling to change its color.
var myImage = image('image_0009_10.psd', 0, 0, width, height);
property(myImage, "fillColor", "RISOBlue");
Right now I am using fillColor, but that only changes the color of the frame that the bitmap lives within. Anyone got any ideas as to how to edit the contents of a graphic frame? Specifically a bitmap?
fabianmoronzirfas is correct that you have to target the graphic of the image frame; I just want to suggest a slightly different syntax, a bit more basil-like, to achieve the same thing:
// #include ~/Documents/basiljs/basil.js

function draw() {
  var myImage = image('~/Desktop/someImage.psd', 0, 0);
  var myGraphics = graphics(myImage);
  property(myGraphics[0], 'fillColor', color(0, 0, 255));
}
Note the use of the graphics() function to get the actual graphics within an image rectangle.
Welcome to Stack Overflow 👋 🎉 🎈
You are currently setting the fillColor of the Rectangle that contains the image. You will have to select the image explicitly, since it is a child object of that Rectangle. See the InDesign scripting DOM entries for Rectangle, Images, and Image.
The code below is tested with InDesign 14.0.1 and the current Basil.js Develop version 2.0.0-beta
// #include "./basil.js"
function setup() {
var doc = app.activeDocument;
var imageFile = file(new File($.fileName).parent + "/img.bmp");
var img = image(imageFile, 0, 0, 100, 100);
img.images[0].fillColor = doc.swatches[4];
}
graphics() is perfect for this (as #mdomino mentioned) – but you can also just grab that property of the image:
var myImage = image('image_0009_10.psd', 0, 0, width, height);
property(myImage.graphics[0], "fillColor", "RISOBlue");
Running inspect(myImage) will give a long laundry list of available properties.
I was trying to make a basic media recorder with the MediaRecorder API, which is fairly straightforward: get the stream from getDisplayMedia, then record it.
The problem: this only records at the screen's own size, and no more. So if my screen is 1280x720, it will not record 1920x1080.
This may seem quite obvious, but my intent is that it should record the smaller resolution inside of the bigger one. Picture a red rectangle representing what my screen actually records, surrounded by a black rectangle of empty space: the entire video is now a higher resolution, 1920x1080, which is useful for YouTube, since YouTube scales down anything between 720 and 1080, which is a problem.
Anyway, I tried simply attaching the stream from getDisplayMedia to a video element (vid.srcObject = stream), then made a new canvas with a resolution of 1920x1080, and in the animate loop just did ctx.drawImage(vid, offsetX, offsetY). Outside the loop, where the MediaRecorder was made, I simply did newStream = myCanvas.captureStream() as per the documentation of the API and passed that to the MediaRecorder. However, because of the huge canvas overhead, everything is really slow and the framerate is absolutely terrible (I don't have a video example, but just test it yourself).
So, is there some way to optimize the canvas so it doesn't hurt the framerate? (I tried looking into OffscreenCanvas, but I couldn't find a way to get a stream from it to use with MediaRecorder, so it didn't really help.) Is there a better way to capture and record the canvas, or to record the screen inside a larger resolution, in client-side JavaScript? If not with client-side JavaScript, is there some kind of real-time video encoder (ffmpeg is too slow) that could run on a server, with each frame of the canvas sent to the server and saved there? Is there some better way to make a video recorder with any kind of JavaScript -- client or server or both?
I don't know what your code looks like, but I managed to get a smooth experience with this piece of code (you will also find very good examples here: https://mozdevs.github.io/MediaRecorder-examples/):
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="script.js"></script>
  </head>
  <body>
    <canvas id="canvas" style="background: black"></canvas>
  </body>
</html>
// DISCLAIMER: The structure of this code is largely based on the examples
// given here: https://mozdevs.github.io/MediaRecorder-examples/.
window.onload = function () {
  navigator.mediaDevices.getDisplayMedia({
    video: true
  })
    .then(function (stream) {
      var video = document.createElement('video');
      // Use "video.srcObject = stream;" instead of
      // "video.src = URL.createObjectURL(stream);" to avoid errors in the
      // examples of https://mozdevs.github.io/MediaRecorder-examples/
      // credits to https://stackoverflow.com/a/53821674/5203275
      video.srcObject = stream;
      video.addEventListener('loadedmetadata', function () {
        initCanvas(video);
      });
      video.play();
    });
};

function initCanvas(video) {
  var canvas = document.getElementById('canvas');
  // Margins around the video inside the canvas.
  var xMargin = 100;
  var yMargin = 100;
  var videoWidth = video.videoWidth;
  var videoHeight = video.videoHeight;
  canvas.width = videoWidth + 2 * xMargin;
  canvas.height = videoHeight + 2 * yMargin;
  var context = canvas.getContext('2d');
  var draw = function () {
    // requestAnimationFrame(draw) will render the canvas as fast as possible;
    // if you want to limit the framerate to a particular value take a look at
    // https://stackoverflow.com/questions/19764018/controlling-fps-with-requestanimationframe
    requestAnimationFrame(draw);
    context.drawImage(video, xMargin, yMargin, videoWidth, videoHeight);
  };
  requestAnimationFrame(draw);
}
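The snippet above only draws the stream into the canvas; the recording half is not shown. Below is a minimal sketch of that part (my addition; the names startRecording and recordedChunks are illustrative), assuming browser support for canvas.captureStream() and MediaRecorder:

function startRecording(canvas) {
  var recordedChunks = [];
  var stream = canvas.captureStream(30); // capture the canvas at up to 30 fps
  var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = function (e) {
    if (e.data.size > 0) recordedChunks.push(e.data);
  };
  recorder.onstop = function () {
    // Assemble the chunks and offer them as a downloadable webm file.
    var blob = new Blob(recordedChunks, { type: 'video/webm' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'recording.webm';
    a.click();
  };
  recorder.start();
  return recorder; // call recorder.stop() to finish the recording
}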
I wanted to display the same video in two areas of the application. Using a canvas this works fine, but the quality of the original video drops, while the canvas copy's quality is fine.
var canvas = document.getElementById('shrinkVideo');
var context = canvas.getContext('2d');
var video = document.getElementById('mainVideo');

video.addEventListener('play', () => {
  // canvas.width = 270;
  // canvas.height = 480;
  draw(video, context, canvas.width, canvas.height);
}, false);

function draw(v, c, w, h) {
  if (v.paused || v.ended) return false;
  c.drawImage(v, 0, 0, w, h);
  setTimeout(draw, 20, v, c, w, h);
}
This is my code to sync the two videos, and it works fine, but the quality of 'mainVideo' drops.
If I remove all the canvas code and just play 'mainVideo', the quality is maintained; it is only with the canvas code that it drops.
Expected result: the video output before the canvas code is added (screenshot omitted here).
Actual result: the output after adding the canvas code (screenshot omitted here).
Thanks In Advance
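A common first check for this symptom (a sketch, offered with the caveat that it may not be the asker's exact issue): make sure the canvas backing store matches the video's intrinsic resolution rather than its CSS size, since drawing into a smaller backing store and letting CSS stretch it will blur the result. This assumes the same element ids as in the question:

var video = document.getElementById('mainVideo');
var canvas = document.getElementById('shrinkVideo');
var context = canvas.getContext('2d');

video.addEventListener('play', function () {
  // Intrinsic resolution of the source, not the CSS display size.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  requestAnimationFrame(function draw() {
    if (video.paused || video.ended) return;
    context.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  });
});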
I came to this answer because I thought I was experiencing the same issue.
I have a 1080p source on a <video> element (HD content from an HDMI capture device, which registers as a webcam in the browser).
I had a 1920x1080 canvas and I was using ctx.drawImage(video, 0, 0, 1920, 1080). As mentioned by a commenter above, I've found it crucial that you draw only at clean multiples of the original height/width values.
I tried with and without imageSmoothingEnabled and various imageSmoothingQuality settings in Chrome/Brave; ultimately I saw no real difference with these settings.
The canvas in my web app was still coming out extremely blurry: I was unable to read even ~24pt font on the screen, and basically couldn't use the video at all.
I was frustrated by my blurry video, so I recreated a full test in a "clean suite", and now I experience no scaling issues anymore. I don't know what my main application is doing differently yet, but in this example you can attach any 1080p/720p device and see it scaled quite nicely to 1080p (change the resolution in the JS file if you want to scale to 720p):
https://playcode.io/543520
const WIDTH = 1920;
const HEIGHT = 1080;

const video = document.getElementById('video1');        // Video element
const broadcast = document.getElementById('broadcast'); // Canvas element

broadcast.width = video.width = WIDTH;
broadcast.height = video.height = HEIGHT;

const ctx = broadcast.getContext('2d');

const onFrame = function () {
  ctx.drawImage(video, 0, 0, broadcast.width, broadcast.height);
  window.requestAnimationFrame(onFrame);
};

navigator.mediaDevices
  .getUserMedia({ video: true })
  .then(stream => {
    window.stream = stream;
    console.log('got UM');
    video.srcObject = stream;
    onFrame();
  });
Below you can see my viewport, with a Video and a Canvas element (both 1080p, scaled to ~45% so they fit), using requestAnimationFrame to draw the content. The content is zoomed out, so you can see anti-aliasing. But if you load the example and click on the Canvas, it goes fullscreen and the quality is pretty good: I played a 1080p YouTube video on my source machine and couldn't see any difference on my fullscreen 1080p canvas element.
What I am doing here is covering the entire loaded web page with a canvas layer. The user clicks on that canvas, and each click creates a dot; at that point the createDot() function gets called.
Inside createDot() the dot is drawn onto the canvas, and then a screenshot request is sent to the background script.
Now the problem is that when I click on the canvas and the screenshot is taken, the dot does not appear in the screenshot.
What's weird is that when I click a second dot onto the same canvas, the screenshot that comes back now has the first dot I clicked, but not the second one.
So the screenshot is not capturing the current dot, only the previous dots.
I checked that the canvas drawing functions are all blocking, so the request cannot be sent to the background script before the drawing is complete. I also confirmed this by logging to the console.
Content script:
function createDot(canvas) {
  var context = canvas.getContext("2d");
  var canvasOffset = $("#canvas").offset();
  var offsetX = canvasOffset.left;
  var offsetY = canvasOffset.top;
  var pointX, pointY;

  function handleMouseDown(e) {
    // Coordinates of the click in the DOM.
    pointX = parseInt(e.pageX - offsetX);
    pointY = parseInt(e.pageY - offsetY);
    // Draw the dot.
    var radius = 10;
    context.beginPath();
    context.arc(pointX, pointY, radius, 0, 2 * Math.PI);
    context.fillStyle = 'red';
    context.fill();
    context.lineWidth = 2;
    context.strokeStyle = '#003300';
    context.stroke();
    console.log("drawn everything");
    takeDotScreenshot(); // called once the canvas has finished drawing
  }

  $("#canvas").mousedown(function (e) {
    handleMouseDown(e);
  });
}

function takeDotScreenshot() {
  console.log("sending request to Background script");
  chrome.runtime.sendMessage({ "capture": true }, function (response) {
    var img_src = response.screenshot;
  });
  // Logic for using that screenshot and appending it to the content page...
}
Background script:
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.capture) {
    chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
      chrome.tabs.captureVisibleTab(null, {}, function (dataUrl) {
        if (dataUrl) {
          sendResponse({ "screenshot": dataUrl });
        }
      });
    });
  }
  // Keep the message channel open so sendResponse can be called asynchronously.
  return true;
});
UPDATE:
If I use a setTimeout of 1 ms to call the takeDotScreenshot function, everything works fine. WHY!
setTimeout(takeDotScreenshot, 1);
There is a very detailed, great answer on why setTimeout helps here:
https://stackoverflow.com/a/4575011/10574621
The DOM update event for your canvas (which actually makes the dot visible) can end up behind your take-screenshot event, even if the canvas has received all the data it needs from your stroke() call.
A screenshot is a snapshot of what the user can see. While you are inside a rendering function, the canvas is not presented to the display until the execution stack is empty (you have returned all the way out of your functions).
Depending on how you render (using requestAnimationFrame, or directly from mouse, keyboard, or timer events), the canvas will still take time to be presented to the display.
When you set the timeout to zero, you add a call onto the call stack that runs before the canvas is presented to the display. Any code you run there blocks the page, preventing the canvas from being displayed, and hence the screenshot misses the dot.
I would set the timeout from render to capture to at least 17 ms (just over one frame at 60fps). Nobody will notice that delay, and it gives plenty of time for the canvas to be presented to the display.
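Putting those two answers together, a small sketch (the helper name captureAfterPresent is made up here): wait two requestAnimationFrame callbacks, so the frame containing the freshly drawn dot has been handed to the display, then add the ~17 ms margin before requesting the capture:

function captureAfterPresent() {
  requestAnimationFrame(function () {
    // First callback: the dot's frame is about to be rendered.
    requestAnimationFrame(function () {
      // Second callback: the previous frame has been presented.
      setTimeout(takeDotScreenshot, 17); // safety margin, as suggested above
    });
  });
}

Calling captureAfterPresent() at the end of handleMouseDown, instead of calling takeDotScreenshot() directly, would replace the bare 1 ms timeout from the update.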
I am attempting to use a Chrome extension to take a screenshot of the current page and then draw some shapes on it. After I have done that to the image, I turn the whole thing into a canvas so it is all together; the divs I have drawn on it are now baked into the 'image', and they are one and the same. After doing this, I want to turn the canvas back into a PNG image I can push to a service I have. But when I use canvas.toDataURL() to do so, the image source it creates is completely transparent. If I do it as a JPEG, it is completely black.
I read something about the canvas being 'dirtied' because I have drawn an image to it, and that this won't work in Chrome, but that doesn't make sense to me, as I have gotten it to work before; I am just unable to use my previous method. Below is the code snippet that isn't working. I am just making a canvas element, and I have drawn an image to it before this point.
var passes = rectangles.length;
var run = 0;
var context = hiDefCanvas.getContext('2d');

while (run < passes) {
  var rect = rectangles[run];
  // Set the stroke and fill color.
  context.strokeStyle = 'rgba(0,255,130,0.7)';
  context.fillStyle = 'rgba(0,0,255,0.1)';
  context.rect(rect.left, rect.top, rect.width, rect.height);
  context.setLineDash([2, 1]);
  context.lineWidth = 2;
  run++;
} // end of the while loop

screencapImage.className = 'hide';
context.fill();
context.stroke();
console.log(hiDefCanvas.toDataURL());
And the image data that it returns is: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAACWAAAAVGCAYAAAAaGIAxAAAgAElEQ…ECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAQBVKBUe32pNYAAAAAElFTkSuQmCC which is a blank, transparent image.
Is there something special I need to do with Chrome? Is there something that I am missing? Thanks, I appreciate the time and help.
Had the same problem, and found the solution here:
https://bugzilla.mozilla.org/show_bug.cgi?id=749824
"I can confirm, that it works if you set preserveDrawingBuffer.
var glContextAttributes = { preserveDrawingBuffer: true };
var gl = canvas.getContext("experimental-webgl", glContextAttributes);"
After getting the context with preserveDrawingBuffer, toDataURL just works as expected, no completely transparent or black image.
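For completeness, a self-contained sketch of that fix (my addition). Note that preserveDrawingBuffer is a WebGL context attribute; a plain 2D context always preserves its buffer:

var canvas = document.createElement('canvas');
canvas.width = canvas.height = 256;
// Keep the drawing buffer so toDataURL() can read pixels back later.
var gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });
gl.clearColor(0.0, 0.5, 1.0, 1.0); // opaque blue
gl.clear(gl.COLOR_BUFFER_BIT);
console.log(canvas.toDataURL()); // a blue square, not a transparent image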
Having a similar problem, I found a solution thanks to the following post:
https://github.com/iddan/react-native-canvas/issues/29
These methods return a promise, so you need to wait for it to resolve before the variable can be populated.
My solution was to use an async function and await the result:
const canvas = <HTMLCanvasElement>document.getElementById("myCanvas");
let ctx = canvas.getContext('2d');
let img = new Image();
let b64Code;
img.onload = async (e) => {
  ctx.drawImage(img, 0, 0);
  ctx.font = "165px Arial";
  ctx.fillStyle = "white";
  // In react-native-canvas, toDataURL() returns a Promise, hence the await.
  b64Code = await (<any>canvas).toDataURL();
};