I am new to frontend/JavaScript development. Is there a simple way to stream an HTML5 MediaDevices stream to Flask? I have a simple Flask server running which does image processing via OpenCV and outputs the result as a JPEG stream.
Currently I am using a USB camera as the input device, but I want a web-based solution. Overall, the input should be captured in the browser, sent to the server for processing, and finally transferred back to the client.
https://www.kirupa.com/html5/accessing_your_webcam_in_html5.htm
Well, this will give you a little idea of how to proceed; I found your question while searching for a solution myself.
Further steps will include sending frames to the Flask server -> processing the image -> returning the image -> displaying frames with JavaScript/HTML as shown in the article above.
Hope this helps.
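To make the steps above concrete, here is a minimal browser-side sketch: grab a frame from the webcam `<video>` element via a canvas, strip the data-URL prefix, and POST the base64 JPEG to the Flask server in a loop. The `/process` route name and the 100 ms interval are assumptions; adjust both to your setup.

```javascript
// Grab one JPEG frame from a playing <video> element via a canvas.
function grabFrame(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  // Returns a data URL: "data:image/jpeg;base64,...."
  return canvas.toDataURL('image/jpeg', 0.8);
}

// Strip the data-URL prefix so the server receives plain base64.
function dataUrlToBase64(dataUrl) {
  return dataUrl.slice(dataUrl.indexOf(',') + 1);
}

// Ask for the webcam, send a frame to Flask every 100 ms,
// and show the processed JPEG the server returns.
async function startStreaming(videoEl, imgEl) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  videoEl.srcObject = stream;
  await videoEl.play();
  setInterval(async () => {
    const resp = await fetch('/process', {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain' },
      body: dataUrlToBase64(grabFrame(videoEl)),
    });
    imgEl.src = URL.createObjectURL(await resp.blob());
  }, 100);
}
```

On the Flask side you would base64-decode the body, feed it to OpenCV via `cv2.imdecode`, and return the processed image as `image/jpeg`.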
I need to make some simple cross-platform apps, so I think Electron is the way to go since I'm already familiar with HTML, CSS and JavaScript.
Which Node modules do I need for this? I read the Electron docs, but they only cover screen recording, not screenshots.
And what do I need to do to encode the image to PNG and then send it to a server as a file via HTTP POST?
The app I'm making is a simple anti-cheat solution.
How can I capture an image from a client-side webcam and save it to the server, please?
I am using Python with Flask, running on a Heroku instance.
I have made my code run perfectly with OpenCV (cv2) locally on my laptop.
The problem is that cv2 cannot see the client-side webcam: there are no webcams in the data center, and the data center is not where the user is!
The original code I wrote can be seen below:
https://github.com/iainonline/ImageRecognitionAndWebsiteLookup
The part I am struggling with is capturing a snapshot from the users webcam, so I can do the text recognition on the saved file.
I want to be able to have the webcam capture a snapshot and create a file 'image.jpg' on the server.
import io
import os

# The name of the image file to annotate
file_name = os.path.abspath('image.jpg')
# Loads the image into memory
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()
I have been able to find Javascript that will initialise the users webcam and take a snapshot.
You can see that running here:
https://iainonline.github.io/
The code is here:
https://github.com/iainonline/iainonline.github.io
The problem I have is that I cannot seem to find a way to take the snapshot from the webcam and save it as a file on the Heroku server. I have searched hard for a solution, but I cannot find this exact problem addressed anywhere else. I am open to using something other than JavaScript if that is simpler.
Thank you for reading this far and any help is greatly appreciated.
I found an answer which works locally; however, I cannot yet make it work on Heroku. See the Stack Overflow link below:
How do i take picture from client side(html) and save it to server side(Python)
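The upload half of that approach can be sketched as follows: turn the snapshot canvas into a JPEG `Blob` and POST it as a multipart file, so the Flask side can save it with something like `request.files['image'].save('image.jpg')` and then run the existing text-recognition code on it. The `/upload` route and the `image` field name are assumptions.

```javascript
// Convert a snapshot canvas into a compressed JPEG Blob.
function canvasToJpegBlob(canvas, quality = 0.9) {
  return new Promise((resolve) =>
    canvas.toBlob(resolve, 'image/jpeg', quality));
}

// POST the snapshot as a multipart file named "image.jpg";
// the server reads it from the "image" form field.
async function uploadSnapshot(canvas) {
  const blob = await canvasToJpegBlob(canvas);
  const form = new FormData();
  form.append('image', blob, 'image.jpg');
  const resp = await fetch('/upload', { method: 'POST', body: form });
  return resp.ok;
}
```

Multipart upload avoids base64 inflation and works the same locally and on Heroku; just note that Heroku's filesystem is ephemeral, so the saved file should be processed immediately.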
I am working on video conferencing with WebRTC (JavaScript/PHP). I want to record the whole screen, i.e. all videos in a single video, and store it on the server. I am able to record a single video at a time, but not all videos at once (the whole screen). Can I achieve this?
And one big issue is remote audio recording! Is there any solution for recording remote audio?
I have taken the code from here.
I do not think PHP is going to make a difference here; I can see only two ways.
The Easy Way:
Use an MCU for recording (even as an alternative to a mesh network for conferences). You can try Kurento, Licode or Intel CS.
The Hard Way:
If Firefox: use the MediaRecorder API to record each remote stream, send them to the server and merge them together (maybe with ffmpeg), then provide a link for the user to view/download...
If Chrome: you can record each remote video stream through a canvas (which is what RecordRTC does internally), simultaneously ask the remote peers to record their own audio on their side, upload everything to the server and provide a link... yeah, good luck syncing them all.
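The per-stream recording step in the hard way can be sketched like this: record one remote `MediaStream` with MediaRecorder, collect the chunks into a WebM blob, and upload it tagged with the peer's ID so the server can merge the pieces (e.g. with ffmpeg). The `/recordings` endpoint is an assumption.

```javascript
// Record one remote MediaStream for `ms` milliseconds and resolve
// with the resulting WebM Blob.
function recordStream(stream, ms) {
  return new Promise((resolve) => {
    const chunks = [];
    const rec = new MediaRecorder(stream, { mimeType: 'video/webm' });
    rec.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
    rec.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    rec.start();
    setTimeout(() => rec.stop(), ms);
  });
}

// Upload one peer's recording; the server merges all peers later.
async function uploadRecording(blob, peerId) {
  const form = new FormData();
  form.append('video', blob, `${peerId}.webm`);
  await fetch('/recordings', { method: 'POST', body: form });
}
```

You would run `recordStream` once per remote peer connection, which is exactly why synchronizing the resulting files server-side is the painful part.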
I am working on an application that involves sending the sound output from a series of webpages to a central page that plays all the outputs simultaneously. Right now, I'm using p5 (the Processing JavaScript library, which is basically just a wrapper on the WebAudio API), node.js, and socket.io.
I know it's possible to stream the microphone input to a server and then back to the client, but how would I do this with audio created in the browser with the WebAudio API? How do I turn the AudioContext output into binary data, stream it through the socket, and then play it back client-side? Any help would be greatly appreciated!
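One possible approach (a sketch, not a tested answer): route the AudioContext graph into a `MediaStreamAudioDestinationNode`, record that stream with MediaRecorder, and emit each compressed binary chunk over the existing socket.io connection. The `'audio-chunk'` event name and the 250 ms timeslice are assumptions.

```javascript
// Tap an AudioContext's output and stream it over a socket.io socket
// as compressed WebM/Opus chunks.
function streamAudioContext(audioCtx, sourceNode, socket) {
  // A destination node exposes the graph's output as a MediaStream.
  const dest = audioCtx.createMediaStreamDestination();
  sourceNode.connect(dest);
  const rec = new MediaRecorder(dest.stream, { mimeType: 'audio/webm' });
  // With a timeslice, ondataavailable fires every 250 ms instead of
  // only once at stop, so chunks can be sent while recording.
  rec.ondataavailable = (e) => socket.emit('audio-chunk', e.data);
  rec.start(250);
  return rec;
}
```

On the central page you would collect the chunks per client and play them back, e.g. by appending them to a MediaSource buffer.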
I am trying to capture the microphone and send the recording to my server. I tried this method here, but it records only one big WAV, and the upload can be slow sometimes.
Is there a way to capture the voice and compress it on the client side?
The best method would be to send the recording while recording, but I have no idea if this is possible. (It works for YouTube Live webcam recording, so it must work for audio only too.)
Hey, check out this post where I replied to someone with a similar question to yours:
How do I embed a Flash audio recorder in my site
I don't know about client-side compression (I have looked into it before and couldn't find anything), but I know you can severely reduce the size of the file by limiting the recording rate via these numbers here, where, if I recall correctly, 16 means 16 kHz recording:
recorder = new MicRecorder(wavencoder,null,50,16);
Also, sending to the server is not that hard; just look up how to POST data, because the WAV file is essentially binary data.
You can compress the file on the client side using libmp3lame.js: https://github.com/akrennmair/libmp3lame-js
There is already a GitHub project that uses this library to record audio and save it in MP3 format directly in the browser:
https://github.com/nusofthq/Recordmp3js
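A newer, Flash-free take on both asks in this thread: MediaRecorder with the Opus codec compresses on the client (far smaller than WAV), and passing a timeslice to `start()` delivers chunks while recording, so the upload can happen live. The `/audio-chunk` endpoint and the 1-second timeslice are assumptions.

```javascript
// Record the microphone as compressed WebM/Opus and upload each
// chunk as soon as it is produced, instead of one big WAV at the end.
async function recordAndUploadLive(timesliceMs = 1000) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const rec = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
  rec.ondataavailable = (e) => {
    // Each chunk is already compressed; ship it immediately.
    fetch('/audio-chunk', { method: 'POST', body: e.data });
  };
  rec.start(timesliceMs); // emit a chunk every timesliceMs
  return rec;             // call rec.stop() to finish
}
```

The server then concatenates the chunks in arrival order to reconstruct the full recording.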