I am working on image processing projects.
I want to capture images (faces) from the webcam using a web browser, and I am not allowed to install any additional plug-ins.
Currently, I am using Flex/Flash to capture the images. As per the requirements, I am also not allowed to use Flex/Flash.
Is there any package to capture images from the webcam?
Note: I actually want to apply some image processing algorithms, such as face detection and filters, at the browser end. It would be good if the same package used for image capture also came with some image processing algorithms.
Updated by author of the post:
Can anybody tell me whether this is possible with WebGL?
If you want to let users take a snapshot of themselves with the webcam, that's possible in one line using the HTML Media Capture capture=camera attribute; for live access to the camera, HTML5's getUserMedia/Stream API is the way to go. Here is the one-liner:
<input type="file" accept="image/*;capture=camera">
Supported in the latest Chrome, Firefox and Opera. For other browsers, Flash or other plugins are the only option.
Link: can I use webcam
As Gaurish mentions, getUserMedia is what you can use for this, although its cross-browser support is not guaranteed.
To see how you can use this, and canvas to get the image, take a look at Filtering a webcam using getUserMedia and HTML5 canvas which uses it.
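A minimal sketch of that getUserMedia-plus-canvas approach (assuming the modern navigator.mediaDevices API; the helper names here are my own):

```javascript
// Scale (srcW, srcH) to fit within maxSize, preserving aspect ratio.
// Downscaling the snapshot keeps later in-browser processing cheap.
function fitWithin(srcW, srcH, maxSize) {
  const scale = Math.min(1, maxSize / Math.max(srcW, srcH));
  return { w: Math.round(srcW * scale), h: Math.round(srcH * scale) };
}

// Grab one webcam frame into a canvas and return it as a PNG data URL.
async function captureFrame(maxSize = 640) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  const { w, h } = fitWithin(video.videoWidth, video.videoHeight, maxSize);
  const canvas = document.createElement('canvas');
  canvas.width = w;
  canvas.height = h;
  canvas.getContext('2d').drawImage(video, 0, 0, w, h);

  stream.getTracks().forEach(track => track.stop()); // release the camera
  return canvas.toDataURL('image/png');
}
```

Once the frame is on a canvas, getImageData() gives you raw pixels for face detection or filtering right in the browser.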
Related
I am trying to build screen sharing in the browser. I am trying to find the best native implementation, and did some initial online research:
MediaDevices.getUserMedia() - available in Firefox. In Chrome it's a little weird
WebRTC Tab Content Capture - I see it's in the proposal stage
Screensharing a browser tab in HTML5 - A blog explaining other methods
Everything in the research above seems to be from around the 2012 time frame, and I want to know what is the latest.
Question: which current technologies/JavaScript APIs can I use, and what is their support across browsers?
Screensharing is alive and kicking in Firefox, but at the moment it requires the user to modify about:config. See my answer to another question for how. I believe they're working on removing that obstacle.
Chrome is similar but not quite the same, and AFAIK requires the user to install an extension.
I don't believe other browsers support this natively yet.
You can draw the HTML document onto a <canvas>, or into a <foreignObject> of an <svg> element, then send the data URL, ArrayBuffer or Blob of the <canvas> or <svg>; alternatively, send the HTML document as an encoded string.
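A sketch of the canvas half of that idea (the helper and endpoint names are my own): parsing the data URL shows what actually goes over the wire, while toBlob() avoids base64 overhead for larger payloads.

```javascript
// Split a base64 data URL into its MIME type and payload,
// e.g. before POSTing the payload to a server.
function parseDataUrl(url) {
  const match = /^data:([^;,]+);base64,(.*)$/.exec(url);
  if (!match) throw new Error('not a base64 data URL');
  return { mime: match[1], data: match[2] };
}

// Browser side: rasterize what you drew onto the canvas and ship it
// to a (hypothetical) endpoint as a binary Blob.
function sendCanvas(canvas, endpoint) {
  return new Promise((resolve, reject) => {
    canvas.toBlob(blob => {
      if (!blob) return reject(new Error('rasterization failed'));
      resolve(fetch(endpoint, { method: 'POST', body: blob }));
    }, 'image/png');
  });
}
```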
I want to implement a video texture in Three.js following the method used in this example: http://stemkoski.github.io/Three.js/Video.html . However, I'd also like to use my site with the Google Cardboard Chrome API, as detailed here: https://vr.chromeexperiments.com/ , yet when I test the program on Chrome for Android, I get a 'S3TC textures not supported' error. Is there a way around this error?
It's not an error as such; it just means S3TC is not supported. It's a file-format problem - like asking a program that only supports JPG to read a PNG. You need to either convert the video to an accepted format, or see whether there's a way of getting the Chrome API to turn on OpenGL extensions; the one you want is EXT_texture_compression_s3tc - if you can access this, then you can read S3TC files. Compressed-texture formats like S3TC are targeted at devices where file size is critical. If it's just for a web browser, you might want to investigate using a more standardized video format, like H.264.
You can also look for a library that supports the formats you want to use; if it can decode a frame, you can pass it directly to the graphics API.
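A quick way to probe for the extension mentioned above (the vendor-prefixed names are the ones browsers have historically exposed for S3TC in WebGL):

```javascript
// Return the first S3TC texture-compression extension the WebGL
// context exposes, or null if the GPU/browser doesn't support it.
function findS3tcExtension(gl) {
  const names = [
    'WEBGL_compressed_texture_s3tc',
    'MOZ_WEBGL_compressed_texture_s3tc',
    'WEBKIT_WEBGL_compressed_texture_s3tc',
  ];
  for (const name of names) {
    const ext = gl.getExtension(name);
    if (ext) return ext;
  }
  return null;
}
```

If this returns null (as on the Chrome-for-Android setup in the question), fall back to an uncompressed texture or a differently encoded video.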
I use WebRTC in a scenario in which the client video stream is recorded on a third-party server https://tokbox.com/. I would like to put some kind of watermark in the recorded video.
Investigation brought me to this page http://w3c.github.io/webrtc-pc/#mediastreamtrack and it seems that it is technically possible since it says that:
A MediaStream acquired using getUserMedia() is, by default, accessible to an application. This means that the application is able to access the contents of tracks, modify their content, and send that media to any peer it chooses.
This is exactly what I need, but I didn't find any examples or explanation of this function. I'd like to get some advice from WebRTC experts.
You need to route the video from getUserMedia into a canvas, modify it there, and then use canvas.captureStream() to turn it back into a MediaStream. This is great - except that canvas.captureStream(), while agreed to in the WG, hasn't actually been included in the spec yet. (There's a pull request with the proposed verbiage that Mozilla wrote.)
As far as implementations: the initial implementation of captureStream() just landed in Firefox Nightly (41), and it's still behind a pref until a bug or two is fixed. You can enable it with canvas.capturestream.enabled in about:config. You can see a demo at Mozilla's test page for captureStream().
Doing it without canvas.captureStream() would be tough; your best bet would be to pipe getUserMedia into a video element and then use video.captureStream() (or captureStreamUntilEnded()) - however, video.captureStream() is also waiting for formal acceptance. Mozilla has had video.captureStream() for some time, though, and I think it works in FF 38 (current release).
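Putting the pieces above together, a sketch of the watermark loop (this assumes canvas.captureStream() is available, e.g. behind the pref described above; the placement helper is my own):

```javascript
// Bottom-right position for a watermark, inset by `margin` pixels.
function watermarkPosition(canvasW, canvasH, textW, textH, margin = 10) {
  return { x: canvasW - textW - margin, y: canvasH - textH - margin };
}

// Draw the camera video into a canvas, stamp a watermark on every
// frame, and return the canvas's stream to hand to the peer
// connection instead of the raw camera stream.
async function watermarkedStream(text) {
  const camera = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = camera;
  await video.play();

  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');

  (function draw() {
    ctx.drawImage(video, 0, 0);
    ctx.font = '24px sans-serif';
    ctx.fillStyle = 'rgba(255,255,255,0.7)';
    const { x, y } = watermarkPosition(canvas.width, canvas.height,
                                       ctx.measureText(text).width, 24);
    ctx.fillText(text, x, y + 24); // fillText's y is the text baseline
    requestAnimationFrame(draw);
  })();

  return canvas.captureStream(30); // 30 fps MediaStream
}
```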
http://www.html5rocks.com/en/tutorials/dnd/basics/
Following this example, I need to retrieve the URL of an image instead of the image content itself, even if it's coming from the desktop (because the web app I'm writing is not necessarily meant to work online).
You can use data URLs; refer to this for an example: http://www.html5rocks.com/en/tutorials/file/dndfiles/#toc-selecting-files-dnd
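A sketch of a drop handler along the lines of that tutorial, assuming you only care about image files (handler names are mine):

```javascript
// Keep only image files from a drop event's file list.
function imageFiles(files) {
  return Array.from(files).filter(f => /^image\//.test(f.type));
}

// Read each dropped image as a data URL, which the (possibly offline)
// app can use directly as an <img> src.
function handleDrop(event, onImage) {
  event.preventDefault();
  for (const file of imageFiles(event.dataTransfer.files)) {
    const reader = new FileReader();
    reader.onload = e => onImage(e.target.result); // a data: URL
    reader.readAsDataURL(file);
  }
}
```

The data URL stands in for a filesystem path: it works offline and sidesteps the security restriction on real file paths mentioned below.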
AFAIK, the filepath of the image won't be available as it would be a security issue. It used to be possible on older versions of Firefox, but that's not going to help if you're looking for HTML5 DND support.
It might be worthwhile to look into the Filesystem API to support offline use. (By which I mean, after an image is dropped, take the raw image data and use FileWriter to store it in the filesystem sandbox.)
We developed a complex reporting application using flot. However, something I did not foresee is that we would have an issue with the output: although everything looks and works fine, the business users cannot copy the chart into their presentations because the output is a canvas element.
I've looked at several approaches. I even found a way to convert the canvas to an image, but this only works on HTML5-capable browsers, and unfortunately our users use IE7. I know it is very old, but there is nothing I can do (trust me, we tried), so I have to come up with a solution to export the graph to an image format.
My last attempt was using fxcanvas and flashcanvas to emulate the toDataURL method, but it turns out there is a 32 KB buffer, whereas my images are at least 300 KB.
The business users (upper management) are pushing for a solution, and they clearly don't understand that there are technical boundaries here. I'm open to any solution that does not involve the following:
Upgrading / changing the browser
Installing plugins like Chrome Frame or similar
Installing CAB files on the server or the users' machines
However, I'm open to any ActiveX solution, or any export option that does not require installing third-party programs (except for MS libraries, where they don't have to perform any additional steps like registering libraries).
The basic way to do it is canvas.toDataURL("image/png"); but I also found this link for you:
http://nihilogic.dk/labs/canvas2image/
I've not tested it.
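For what it's worth, here is what the toDataURL route looks like on an HTML5-capable browser (it won't help on IE7, as noted; the helper names are my own):

```javascript
// Strip the "data:image/png;base64," header, leaving the base64
// payload that a server-side script could decode and save as a file.
function dataUrlPayload(url) {
  const i = url.indexOf('base64,');
  if (i === -1) throw new Error('expected a base64 data URL');
  return url.slice(i + 'base64,'.length);
}

// Offer the flot chart's canvas as a downloadable PNG.
function downloadChart(canvas, filename = 'chart.png') {
  const link = document.createElement('a');
  link.href = canvas.toDataURL('image/png');
  link.download = filename;
  link.click();
}
```

Posting dataUrlPayload() to the server and decoding it there is one way to get around client-side limits like the 32 KB flashcanvas buffer mentioned above.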
If you want the users to get the same result (or at least as close as possible), why not rely a bit on the backend (assuming there is one)? Just generate the image on the backend and output a standard JPG or PNG.
I can't really think of another solution; even SVG is only supported natively in IE9 and later. The closest solution without using the backend would be Flash: it's a third party, but it's well known and built into modern browsers.
Relying on the client side always involves asking users to upgrade the browser or use a third-party plugin, so keep an eye on that when planning future projects; compatibility issues are, in my opinion, among the most annoying in web development.