I use this QR code scanner:
https://github.com/mebjas/html5-qrcode
This is the code I use to scan a QR code:
function onScanSuccess(decodedText, decodedResult) {
  // handle the scanned code as you like, for example:
  document.getElementById('text').value = decodedText;
  console.log(`Code matched = ${decodedText}`, decodedResult);
}

function onScanFailure(error) {
  // handle scan failure, usually better to ignore and keep scanning.
  // for example:
  console.warn(`Code scan error = ${error}`);
}

let html5QrcodeScanner = new Html5QrcodeScanner(
  "reader", { fps: 10, qrbox: 250 }, /* verbose= */ false);
html5QrcodeScanner.render(onScanSuccess, onScanFailure);
Is it possible to set the smartphone's back camera as the default? I want to run the scanner with the back camera already selected on startup.
Thanks for any help.
I accomplished it using this bit of code when I launch the scanner:
facingMode: { exact: "environment"}
Maybe this will help point you in the right direction. Below is the way the setting is used when I launch the scanner:
/** load scanner using back camera **/
html5QrCode.start({ facingMode: { exact: "environment"} }, config, qrCodeSuccessCallback);
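For context, here is how that setting could be combined with the rest of the scanner setup. This is only a sketch: the "reader" element id, the fps/qrbox config, and the callback names are carried over from the question, and the error handling around start() is my own assumption.

// A minimal sketch, assuming an element with id="reader" and the callbacks from the question.
// facingMode: { exact: "environment" } asks the browser for the rear camera on startup.
const html5QrCode = new Html5Qrcode("reader");
const config = { fps: 10, qrbox: 250 };

html5QrCode.start(
  { facingMode: { exact: "environment" } }, // back camera
  config,
  onScanSuccess,   // success callback from the question
  onScanFailure    // optional error callback
).catch(err => console.error("Unable to start scanning:", err));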
As part of a React web app, we use the ZXing library to perform barcode and QR code scans. However, we are running into a problem with the iPhone 13, which sets the zoom to 1x by default; this results in a blurred image when we get closer to the elements to be scanned. We would like to configure the zoom to 0.5x (as is possible in the native iPhone camera app), but I can't find a solution compatible with iOS. If you have any ideas, I'm all ears.
Thanks in advance.
// Inside a React effect: request the rear camera and attach the stream to the video element
if (!navigator?.mediaDevices?.getUserMedia) {
  onError && onError('Cannot stream camera')
  return
}
let userMediaStream: MediaStream
navigator.mediaDevices.getUserMedia({ audio: false, video: { facingMode: 'environment' } })
  .then(stream => {
    userMediaStream = stream
    if (!videoRef?.current) {
      onError && onError('video ref missing')
      return
    }
    videoRef.current.srcObject = stream
  })
// cleanup: stop all camera tracks when the effect is torn down
return () => {
  if (userMediaStream) {
    userMediaStream.getTracks().forEach(t => t.stop())
  }
}
I've already tried listing the supportedConstraints:
const constraintList = new Array();
const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
for (const constraint of Object.keys(supportedConstraints)) {
  constraintList.push(constraint);
}
console.log(constraintList);
But I get no entry that would allow me to modify the zoom or the focus: ['aspectRatio', 'deviceId', 'echoCancellation', 'facingMode', 'frameRate', 'groupId', 'height', 'sampleRate', 'sampleSize', 'volume', 'width']
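For what it's worth, on browsers that do expose a zoom capability (currently Chrome on Android rather than Safari on iOS), the usual pattern is to read the track's capabilities and then apply the constraint. Below is only a hedged sketch, assuming stream is the camera MediaStream obtained from getUserMedia; on iOS it will simply report that zoom is unsupported.

// Hedged sketch: zoom is only adjustable where the video track reports it as a capability.
const [track] = stream.getVideoTracks();
const capabilities = track.getCapabilities ? track.getCapabilities() : {};

if ('zoom' in capabilities) {
  // clamp the desired 0.5x zoom to the range the device actually supports
  const zoom = Math.min(Math.max(0.5, capabilities.zoom.min), capabilities.zoom.max);
  track.applyConstraints({ advanced: [{ zoom }] })
    .catch(err => console.warn('Could not apply zoom constraint', err));
} else {
  console.warn('Zoom is not supported by this device/browser');
}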
I am developing a 4-peer WebRTC video chat.
Everything was fine up to this point, so I added a screen-sharing feature to the website.
Whenever I press screen share, the connection becomes very slow. I thought it was because of the 4-peer connection, but this happens only when I share my screen.
I tried to use the removeStream function to remove the camera stream, but the streams are still lagging.
This is the function that runs after I press the screen-share button:
async function startCapture() {
  var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  var audioTrack = audioStream.getAudioTracks()[0];
  let captureStream = null;
  try {
    captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
    captureStream.addTrack(audioTrack);
  } catch (err) {
    console.error("Error: " + err);
  }
  // return captureStream;
  if (rtcPeerConn) {
    rtcPeerConn.removeStream(myStream);
    rtcPeerConn.addStream(captureStream);
  }
  if (rtcPeerConn1) {
    rtcPeerConn1.removeStream(myStream);
    rtcPeerConn1.addStream(captureStream);
  }
  if (rtcPeerConn2) {
    rtcPeerConn2.removeStream(myStream);
    rtcPeerConn2.addStream(captureStream);
  }
  if (rtcPeerConn3) {
    rtcPeerConn3.removeStream(myStream);
    rtcPeerConn3.addStream(captureStream);
  }
  myStream.getTracks().forEach(function (track) {
    track.stop();
  });
  myStream = captureStream;
  success(myStream);
}
I even tried stopping the tracks of the first stream before replacing it, like this:
async function startCapture() {
  myStream.getTracks().forEach(function (track) {
    track.stop();
  });
  var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  var audioTrack = audioStream.getAudioTracks()[0];
  let captureStream = null;
  try {
    captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
    captureStream.addTrack(audioTrack);
  } catch (err) {
    console.error("Error: " + err);
  }
  if (rtcPeerConn) {
    rtcPeerConn.removeStream(myStream);
    rtcPeerConn.addStream(captureStream);
  }
  if (rtcPeerConn1) {
    rtcPeerConn1.removeStream(myStream);
    rtcPeerConn1.addStream(captureStream);
  }
  if (rtcPeerConn2) {
    rtcPeerConn2.removeStream(myStream);
    rtcPeerConn2.addStream(captureStream);
  }
  if (rtcPeerConn3) {
    rtcPeerConn3.removeStream(myStream);
    rtcPeerConn3.addStream(captureStream);
  }
  myStream = captureStream;
  success(myStream);
}
As you can see, I used the removeStream function to avoid sending unused streams, but still nothing changed.
What are the constraints you are placing on getDisplayMedia? Perhaps you are sending "too much" video content, and thus slowing everything down.
[edit]
According to your comment, you are recording audio from the screen, and also audio from the mic. Perhaps remove the audio track from the screen recording?
You can also use options to reduce the size of the video (this requires using getUserMedia instead of getDisplayMedia):
video: {
  width: { min: 100, ideal: width, max: 1920 },
  height: { min: 100, ideal: height, max: 1080 },
  frameRate: { ideal: framerate }
}
Perhaps a lower framerate? Try reducing the size and see if that helps too :)
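For illustration, here is a minimal sketch of how that constraint object could be plugged into a capture call. The concrete width, height, and framerate values are assumptions picked for the example, not values from the question.

// A minimal sketch with assumed target values for a smaller, lower-frame-rate capture.
const width = 1280, height = 720, framerate = 15;

const constraints = {
  audio: false, // keep only the separately captured mic track, as suggested above
  video: {
    width: { min: 100, ideal: width, max: 1920 },
    height: { min: 100, ideal: height, max: 1080 },
    frameRate: { ideal: framerate }
  }
};

navigator.mediaDevices.getUserMedia(constraints)
  .then(stream => {
    // swap this smaller stream into the peer connections,
    // the same way captureStream is swapped in startCapture()
  })
  .catch(err => console.error("Error: " + err));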
When the user clicks the button, the app should start listening to the audio and stop the recording when the user goes silent (say, 20 seconds after silence starts), then store that audio as a WAV file (e.g. test.wav). How can I do this in React Native?
As you already know, there is no built-in feature to record audio in React Native. However, you can always use a third-party library/package; there are a number of them available on npm. Here is one of them: react-native-audio-record.
import AudioRecord from 'react-native-audio-record';

const options = {
  sampleRate: 16000,  // default 44100
  channels: 1,        // 1 or 2, default 1
  bitsPerSample: 16,  // 8 or 16, default 16
  audioSource: 6,     // android only
  wavFile: 'test.wav' // default 'audio.wav'
};

AudioRecord.init(options);

// Start recording
AudioRecord.start();

// Stop recording
AudioRecord.stop();
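If I remember the library's README correctly, stop() resolves with the path of the recorded file (named after the wavFile option), so you can pick the file up afterwards. A hedged sketch, where the surrounding handling is my own assumption:

// Hedged sketch: stop() should resolve with the path of the recorded WAV file.
async function finishRecording() {
  const audioFilePath = await AudioRecord.stop();
  console.log('Recorded WAV saved at:', audioFilePath); // e.g. .../test.wav
  return audioFilePath;
}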
Hope this helps.
If you are also interested in creating a file from this recording on the user's device, I recommend using react-native-audio.
I personally tried it on Android and it worked pretty smoothly. Example code:
/* on top of the component: */
import { AudioRecorder, AudioUtils } from 'react-native-audio';
/* all your other imports */

class Example extends Component {
  constructor(props) {
    super(props);
    this.AudioRecorder = AudioRecorder;
  }

  startRecord() {
    let folder = AudioUtils.DocumentDirectoryPath;
    let audioPath = folder + '/myFile.wav';
    let options = {
      SampleRate: 22050,
      Channels: 1,
      AudioQuality: "Low",
      AudioEncoding: "wav",
      MeteringEnabled: true,
    };
    this.AudioRecorder.prepareRecordingAtPath(audioPath, options)
      .then(() => {
        return this.AudioRecorder.startRecording();
      })
      .catch((err) => {});
  }

  stopRecord() {
    this.AudioRecorder.stopRecording();
  }
}
You will probably also need the user's permission to use the device microphone; for this I recommend react-native-permissions.
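A rough sketch of asking for microphone access with react-native-permissions is below; the exact permission constants depend on the library version you install, so treat the names as assumptions to verify against its docs.

// Hedged sketch: request microphone permission before starting a recording.
import { Platform } from 'react-native';
import { request, PERMISSIONS, RESULTS } from 'react-native-permissions';

async function requestMicPermission() {
  const permission = Platform.OS === 'ios'
    ? PERMISSIONS.IOS.MICROPHONE
    : PERMISSIONS.ANDROID.RECORD_AUDIO;
  const result = await request(permission);
  return result === RESULTS.GRANTED;
}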
I created a sample application using the Samsung TV SDK and added a video player with the code below, but it does not play my video.
SceneScene1.prototype.initialize = function () {
  alert("SceneScene1.initialize()");
  // this function will be called only once, when the scene manager shows this scene for the first time
  // initialize the scene controls and styles, and initialize your variables here
  // scene HTML and CSS will be loaded before this function is called
  sf.service.VideoPlayer.init({
    onend: function () {
      sf.service.VideoPlayer.setFullScreen(false);
    }
  });
  sf.service.VideoPlayer.setKeyHandler(sf.key.RETURN, function () {
    sf.service.VideoPlayer.setFullScreen(false);
  });
  var vLeft = parseInt($("#svecVideo_y5ww").css('left'));
  var vTop = parseInt($("#svecVideo_y5ww").css('top'));
  var vHeight = parseInt($("#svecVideo_y5ww").css('height'));
  var vWidth = parseInt($("#svecVideo_y5ww").css('width'));
  sf.service.VideoPlayer.setPosition({
    left: vLeft,
    top: vTop,
    width: vWidth,
    height: vHeight
  });
  sf.service.VideoPlayer.show();
  sf.service.VideoPlayer.play({
    url: 'http://media.w3.org/2010/05/sintel/trailer.mp4',
    fullScreen: false,
    title: 'Samsung movie',
    startTime: 5,
    liveStream: false,
    timeString: true,
    authHeader: 'none'
  });
};
If I try to inspect it using the web inspector, it shows the errors below:
Service is unavailable due to network or service interference.
The file can't be played because the format isn't supported.
Unable to play the file. Please check it and try again later.
I tried with different files, and it shows the same error.
I have recently been learning about Firefox OS/B2G. I am aware of the extensive set of APIs in place that are able to fetch images from the wallpaper gallery, change settings and set reminders (to name a few). However, I'm completely stumped as to how to change the wallpaper, or, indeed, whether this is even possible. Apologies if this is a silly question. Many thanks in advance.
You can do this by using a share activity:
// imgToShare is the image you want to set as wallpaper
var shareImage = document.querySelector("#share-image"),
    imgToShare = document.querySelector("#image-to-share");

if (shareImage && imgToShare) {
  shareImage.onclick = function () {
    if (imgToShare.naturalWidth > 0) {
      // Create dummy canvas
      var blobCanvas = document.createElement("canvas");
      blobCanvas.width = imgToShare.width;
      blobCanvas.height = imgToShare.height;

      // Get context and draw image
      var blobCanvasContext = blobCanvas.getContext("2d");
      blobCanvasContext.drawImage(imgToShare, 0, 0);

      // Export to blob and share through a Web Activity
      blobCanvas.toBlob(function (blob) {
        new MozActivity({
          name: "share",
          data: {
            type: "image/*",
            number: 1,
            blobs: [blob]
          }
        });
      });
    } else {
      alert("Image failed to load, can't be shared");
    }
  };
}
You can test a live example with the Firefox OS Boilerplate App: https://github.com/robnyman/Firefox-OS-Boilerplate-App/