The video stops and does not resume after track.stop() - javascript

I'm creating a simple one-to-one video chat using PeerJS and React. Everything works fine except muting the camera. When participant A mutes the camera, it mutes on both clients, but when participant A unmutes it, participant B sees only a static image instead of the opponent's video.
I have a global state webcamState which changes when the corresponding button is clicked:
const videoRef = useRef<any>(null);
const webcamState = useSelector(getWebcamState);
const [stream, setStream] = useState<MediaStream | null>(null);
const [isWebcamLoading, setIsWebcamLoading] = useState(true);

const loadUserMedia = () => {
  setIsWebcamLoading(true);
  getUserMedia(
    { video: webcamState, audio: micState },
    (newStream: MediaStream) => {
      videoRef.current.srcObject = newStream;
      setStream(newStream);
      setIsWebcamLoading(false);
    },
    (err: any) => {
      setIsWebcamLoading(false);
    },
  );
};

useEffect(() => {
  if (videoRef?.current?.srcObject) {
    videoRef.current.srcObject.getVideoTracks().forEach((track: any) => (track.enabled = webcamState));
    if (!webcamState) {
      videoRef.current.srcObject.getVideoTracks().forEach((track: any) => track.stop());
      videoRef.current.pause();
    } else {
      loadUserMedia();
    }
  }
}, [webcamState]);
This stream is then exported from the hook and passed into another one that initializes the peer call (and the peer answer as well):
export interface IPeerOptions {
  opponentVideoRef: React.MutableRefObject<null>;
  currentVideoRef: React.MutableRefObject<null>;
  currentStream: MediaStream;
  userId: string;
}

export const initializePeerCall = (options: IPeerOptions): Peer => {
  const call = options.peer.call(options.opponentId, options.currentStream);
  call.on('stream', stream => {
    options.opponentVideoRef = setVideo(options.opponentVideoRef, stream);
  });
  call.on('error', err => {
    console.log(err);
  });
  call.on('close', () => {
    console.log('Connection closed');
  });
  return options.peer;
};
No errors appear in the console. But if I remove the following line, everything works as expected:
videoRef.current.srcObject.getVideoTracks().forEach((track: any) => track.stop());
Has anyone faced something similar?
UPD: I tried this but the result was the same:
useEffect(() => {
  if (videoRef?.current?.srcObject) {
    videoRef.current.srcObject.getVideoTracks().forEach((track: any) => (track.enabled = webcamState));
    if (!webcamState) {
      videoRef.current.srcObject.getVideoTracks().forEach((track: any) => track.stop());
      loadUserMedia();
    } else {
      loadUserMedia();
    }
  }
}, [webcamState]);
Notification:
const onOpponentVideoStatusChanged = (newStatus: boolean) => {
  setOpponentState(prev => {
    return { microphoneState: !!prev?.microphoneState, webcamState: newStatus };
  });
  options.opponentVideoRef.current.srcObject.getVideoTracks().forEach((track: any) => (track.enabled = newStatus));
};
As far as I understand after a long investigation, user B is still receiving the old stream after user A creates a new one. How can I fix it?

If you stop a track using track.stop(), you cannot resume it. You have to get a new stream when you want to resume.
React.useEffect(() => {
  if (mediaStream) { // mediaStream could be state, local or global.
    const videoTracks = mediaStream.getVideoTracks();
    videoTracks.forEach((t) => {
      t.enabled = false;
      t.stop();
    });
  }
  // get the MediaStream again with constraints
}, [isAudioOnly]);
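Getting a new local stream fixes the local preview, but the remote peer is still wired to the old, stopped track, which is why user B keeps seeing a frozen frame. A minimal sketch of the usual fix, assuming the PeerJS MediaConnection exposes its underlying RTCPeerConnection as `call.peerConnection` (it does in current PeerJS): find the RTCRtpSender that carries video and swap in the new track with replaceTrack, so no re-call is needed.

```javascript
// Swap the outgoing video track on an existing connection.
// `senders` is the array returned by pc.getSenders().
function swapVideoTrack(senders, newTrack) {
  // Find the sender currently transmitting video.
  const sender = senders.find((s) => s.track && s.track.kind === 'video');
  if (sender) {
    sender.replaceTrack(newTrack); // async; the remote side keeps the same stream object
    return true;
  }
  return false;
}

// Intended usage in the browser (not run here):
// const newStream = await navigator.mediaDevices.getUserMedia({ video: true });
// swapVideoTrack(call.peerConnection.getSenders(), newStream.getVideoTracks()[0]);
```

With replaceTrack the SDP is not renegotiated, so participant B's `call.on('stream', ...)` handler does not need to fire again.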

Related

How to update RTK Query cache when Firebase RTDB change event fired (update, write, create, delete)

I am using redux-toolkit, rtk-query (for querying other APIs, not just Firebase) and Firebase (for authentication and the DB).
The code below works just fine for retrieving and caching the data, but I want to take advantage of both RTK Query caching and Firebase event subscriptions, so that whenever a change is made in the DB (from any source, even directly in the Firebase console) the cache is updated.
I have tried both updateQueryData and invalidateTags, but so far I have not been able to find an approach that works.
Any assistance in pointing me in the right direction would be greatly appreciated.
// firebase.ts
export const onRead = (
collection: string,
callback: (snapshort: DataSnapshot) => void,
options: ListenOptions = { onlyOnce: false }
) => onValue(ref(db, collection), callback, options);
export async function getCollection<T>(
collection: string,
onlyOnce: boolean = false
): Promise<T> {
let timeout: NodeJS.Timeout;
return new Promise<T>((resolve, reject) => {
timeout = setTimeout(() => reject('Request timed out!'), ASYNC_TIMEOUT);
onRead(collection, (snapshot) => resolve(snapshot.val()), { onlyOnce });
}).finally(() => clearTimeout(timeout));
}
// awards.ts
// awards.ts
const awards = dbApi
  .enhanceEndpoints({ addTagTypes: ['Themes'] })
  .injectEndpoints({
    endpoints: (builder) => ({
      getThemes: builder.query<ThemeData[], void>({
        async queryFn(arg, api) {
          try {
            const { auth } = api.getState() as RootState;
            const programme = auth.user?.unit.guidingProgramme!;
            const path = `/themes/${programme}`;
            const themes = await getCollection<ThemeData[]>(path, true);
            return { data: themes };
          } catch (error) {
            return { error: error as FirebaseError };
          }
        },
        providesTags: ['Themes'],
        keepUnusedDataFor: 1000 * 60
      }),
      getTheme: builder.query<ThemeData, string | undefined>({
        async queryFn(slug, api) {
          try {
            const initiate = awards.endpoints.getThemes.initiate;
            const getThemes = api.dispatch(initiate());
            const { data } = (await getThemes) as ApiResponse<ThemeData[]>;
            const name = slug
              ?.split('-')
              .map(
                (value) =>
                  value.substring(0, 1).toUpperCase() +
                  value.substring(1).toLowerCase()
              )
              .join(' ');
            return { data: data?.find((theme) => theme.name === name) };
          } catch (error) {
            return { error: error as FirebaseError };
          }
        },
        keepUnusedDataFor: 0
      })
    })
  });
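One pattern that reconciles RTK Query's cache with Firebase's realtime events is the `onCacheEntryAdded` lifecycle: subscribe to `onValue` once the first result is in the cache, push each snapshot into the cache with `updateCachedData`, and unsubscribe when the cache entry is removed. A minimal sketch of that lifecycle, with the Firebase listener abstracted into a generic `subscribe` function (a hypothetical stand-in for `onValue`), not the asker's actual code:

```javascript
// Shape of an onCacheEntryAdded handler body, with the lifecycle promises
// and the subscription source passed in explicitly so it can be exercised
// outside of RTK Query.
async function streamUpdatesIntoCache({
  cacheDataLoaded,   // resolves once the initial queryFn result is cached
  cacheEntryRemoved, // resolves when the cache entry is dropped
  updateCachedData,  // applies a recipe to the cached value
  subscribe,         // (callback) => unsubscribe; stands in for Firebase onValue
}) {
  let unsubscribe = () => {};
  try {
    await cacheDataLoaded;
    unsubscribe = subscribe((value) => {
      updateCachedData(() => value); // replace cached data with the latest snapshot
    });
  } catch {
    // cacheDataLoaded rejects if the entry was removed before data arrived
  }
  await cacheEntryRemoved;
  unsubscribe(); // detach the realtime listener
}
```

Inside the endpoint this body would live under `async onCacheEntryAdded(arg, { cacheDataLoaded, cacheEntryRemoved, updateCachedData })`, with `subscribe` wrapping `onValue(ref(db, path), cb)`; that way the cache updates on every DB change without refetching.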

Audio file doesn't work on iOS (iPhone), neither in Safari nor Chrome

I'm creating a chat with the ability to send voice notes.
Voice notes work perfectly on desktop and Android, but on iOS things start to crash.
Once the audio files load, the Chrome console on iOS shows an error:
MediaError {code: 4, message: "Unsupported source type", MEDIA_ERR_ABORTED: 1, MEDIA_ERR_NETWORK: 2, MEDIA_ERR_DECODE: 3}
And if I click the play button it throws a DOMException.
This is the function that records audio
const recordAudio = async (_) => {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
  const mediaRecorder = new MediaRecorder(stream, {
    mimeType: 'audio/mp4',
    audioBitrate: '128000',
  })
  mediaRecorder.start()
  const audioChunks = []
  mediaRecorder.addEventListener('dataavailable', (event) => {
    audioChunks.push(event.data)
  })
  mediaRecorder.addEventListener('stop', () => {
    const audioBlob = new Blob(audioChunks, { type: 'audio/mp3' })
    composeMessage('audio', audioBlob)
    setIsRecording(false)
  })
  setTimeout(() => {
    mediaRecorder.stop()
  }, 30000)
}
The function that creates the audio file:
const createAudioFile = () => {
  const audio = new Audio()
  audio.setAttribute('preload', 'metadata')
  const source = document.createElement('source')
  source.setAttribute('src', URL)
  source.setAttribute('type', 'audio/mp3')
  audio.appendChild(source)
  setAudioFile(audio)
}
And this is the function that plays the audio file:
const playAudioHandler = () => {
  const playPromise = audioFile.play()
  if (playPromise !== undefined) {
    playPromise
      .then((_) => {
        setIsPlaying(true)
      })
      .catch((error) => {
        pauseAudioHandler()
      })
  }
}
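The crash on iOS usually comes from the container/codec choice: the recording is never MP3 (no browser's MediaRecorder outputs 'audio/mp3'), and hard-coding 'audio/mp4' fails on browsers that only record WebM, while labeling the Blob 'audio/mp3' gives iOS exactly the "Unsupported source type" MediaError shown above. Note also that MediaRecorder has no `audioBitrate` option; the real one is `audioBitsPerSecond` (a number). A minimal sketch that probes for a supported type instead of hard-coding one (the candidate list is an assumption, tune as needed):

```javascript
// Return the first MIME type the current browser's MediaRecorder can record,
// or '' to let the browser pick its default. The `isSupported` predicate
// defaults to MediaRecorder.isTypeSupported but can be injected for testing.
function pickSupportedMimeType(
  candidates = ['audio/webm;codecs=opus', 'audio/webm', 'audio/mp4'],
  isSupported = (t) => typeof MediaRecorder !== 'undefined' && MediaRecorder.isTypeSupported(t)
) {
  return candidates.find(isSupported) || '';
}
```

Pass the result both as the MediaRecorder `mimeType` and as the Blob's `type`, so the `<source>` element's `type` attribute matches what was actually recorded; Safari records 'audio/mp4' while Chrome and Firefox generally record 'audio/webm'.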

How to fix React Native Agora asynchronous error

I was given this ESLint error:
Assignments to the '_engine' variable from inside React Hook useCallback will be lost after each render. To preserve the value over time, store it in a useRef Hook and keep the mutable value in the '.current' property. Otherwise, you can move this variable directly inside useCallback.eslint(react-hooks/exhaustive-deps)
from this code:
const RtcEngineInit = useCallback(async () => {
  const { appId } = appInit;
  _engine = await RtcEngine.create(appId);
  await _engine.enableAudio();

  _engine.addListener('UserOffline', (uid: any, reason: any) => {
    console.log('UserOffline', uid, reason);
    const { peerIds } = appInit;
    setAppInit((prevState) => ({
      ...prevState,
      peerIds: peerIds.filter((id) => id !== uid),
    }));
  });

  _engine.addListener(
    'JoinChannelSuccess',
    (channel: any, uid: any, elapsed: any) => {
      console.log('JoinChannelSuccess', channel, uid, elapsed);
      setAppInit((prevState) => ({
        ...prevState,
        joinSucceed: true,
      }));
    },
  );
}, []);

React.useEffect(() => {
  RtcEngineInit();
}, [RtcEngineInit]);
Could someone explain why this is happening and help me solve it? Thanks.
As the error suggests, you should not assign to the RTC engine variable inside the render loop, because all statements in the render loop get executed again on every render. To avoid this, you can keep the RTC engine inside a useRef hook.
const App: React.FC = () => {
  let engine = useRef<RtcEngine | null>(null);
  const appid: string = 'APPID';
  const channelName: string = 'channel-x';
  const [joinSucceed, setJoinSucceed] = useState<boolean>(false);
  const [peerIds, setPeerIds] = useState<Array<number>>([]);

  useEffect(() => {
    /**
     * @name init
     * @description Function to initialize the Rtc Engine, attach event listeners and actions
     */
    async function init() {
      if (Platform.OS === 'android') {
        // Request required permissions from Android
        await requestCameraAndAudioPermission();
      }
      engine.current = await RtcEngine.create(appid);
      engine.current.enableVideo();
      engine.current.addListener('UserJoined', (uid: number) => {
        // If a user joins the channel
        setPeerIds((pids) =>
          pids.indexOf(uid) === -1 ? [...pids, uid] : pids,
        ); // add peer ID to state array
      });
      engine.current.addListener('UserOffline', (uid: number) => {
        // If a user leaves
        setPeerIds((pids) => pids.filter((userId) => userId !== uid)); // remove peer ID from state array
      });
      engine.current.addListener('JoinChannelSuccess', () => {
        // If the local user joins the RTC channel
        setJoinSucceed(true); // Set state variable to true
      });
    }
    init();
  }, []);

  return <UI />;
};

export default App;
The full example at:
https://github.com/technophilic/Agora-RN-Quickstart/blob/sdk-v3-ts/src/App.tsx

Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT only emitted at the first initialization in Forge viewer

export const initForgeViewer = (urn: string, renderingHTMLElement: HTMLElement): Promise<any> => {
  const forgeOptions = getForgeOptions(urn)
  return new Promise((resolve, reject) => {
    Autodesk.Viewing.Initializer(forgeOptions, () => {
      const viewerConfig = {
        extensions: ["ToolbarExtension"],
        sharedPropertyDbPath: undefined,
        canvasConfig: undefined, // TODO: Needs documentation or something.
        startOnInitialize: true,
        experimental: []
      }
      const viewer = new Autodesk.Viewing.Private.GuiViewer3D(renderingHTMLElement, viewerConfig)
      const avd = Autodesk.Viewing.Document
      viewer.setTheme('light-theme')
      viewer.start()
      avd.load(forgeOptions.urn, (doc: any) => { // Autodesk.Viewing.Document
        const viewables = avd.getSubItemsWithProperties(doc.getRootItem(), { type: 'geometry', role: '3d' }, true)
        if (viewables.length === 0) {
          reject(viewer)
          return
        } else {
          const initialViewable = viewables[0]
          const svfUrl = doc.getViewablePath(initialViewable)
          const modelOptions = { sharedPropertyDbPath: doc.getPropertyDbPath() }
          viewer.loadModel(svfUrl, modelOptions, (model: any) => { // Autodesk.Viewing.Model
            this.loadedModel = model
            resolve(viewer)
          })
        }
      })
    })
  })
}
I am using the above code to initialise the Forge viewer, but I've realised that Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT is only emitted the first time I initialise the viewer. If I clean up the viewer in the following way and initialise it again, the OBJECT_TREE_CREATED_EVENT is not fired:
this.viewer.finish()
this.viewer.removeEventListener(Autodesk.Viewing.OBJECT_TREE_CREATED_EVENT,this.onObjectTreeReady)
this.viewer = null
So I can assume you're completely destroying the viewer and creating it again, including all events, right? Please use the following:
viewer.tearDown()
viewer.finish()
viewer = null
Tested using v6

PeerConnection create answer

I would like to create a WebRTC connection between 2 devices.
On my first device (Initiator), I create my offer with this method:
createOffer() {
  const { pc } = this;
  pc.onnegotiationneeded = () => {
    pc.createOffer()
      .then(offer => pc.setLocalDescription(offer))
      .then(() => {
        this.setState({
          offer: JSON.stringify(pc.localDescription)
        });
      });
  };
}
And on my second device (Receiver), I create the answer:
createAnswer() {
  const { pc } = this;
  const { dataOffer } = this.state;
  if (dataOffer) {
    const sd = new RTCSessionDescription(JSON.parse(dataOffer));
    pc.setRemoteDescription(sd)
      .then(() => pc.createAnswer())
      .then(answer => {
        this.setState({
          offer: JSON.stringify(answer)
        });
        return pc.setLocalDescription(answer);
      });
  }
}
But after sending the answer to the first device, I get this error: PeerConnection cannot create an answer in a state other than have-remote-offer or have-local-pranswer.
When the Initiator receives the answer, I run the createAnswer method with the answer data; could that be the problem?
I don't really understand which method/event I need to use after receiving the answer :/
EDIT with new method for Initiator device :
receiveAnswer() {
  const { pc } = this;
  const { dataOffer } = this.state;
  const sd = new RTCSessionDescription(JSON.parse(dataOffer));
  pc.setRemoteDescription(sd);
}
But the connection state stays on checking :(
You can see my componentDidMount :
componentDidMount() {
  const { pc } = this;
  pc.oniceconnectionstatechange = () => {
    this.setState({
      connectionState: pc.iceConnectionState
    });
  };
  pc.onaddstream = ({ stream }) => {
    if (stream) {
      this.setState({
        receiverVideoURL: stream.toURL()
      });
    }
  };
}
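The original error happened because createAnswer was also being run on the Initiator when the answer came back; only the Receiver ever calls createAnswer, and the Initiator just feeds the received answer to setRemoteDescription, as the receiveAnswer edit above does. A minimal sketch of the whole decision, assuming the signaling layer hands over the parsed remote session description:

```javascript
// Handle a session description received over the signaling channel.
// Works for both peers: an offer gets answered, an answer just completes the handshake.
async function handleRemoteDescription(pc, description) {
  await pc.setRemoteDescription(description);
  if (description.type === 'offer') {
    // Only the Receiver takes this branch: reply with an answer.
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    return answer; // send this back over the signaling channel
  }
  return null; // it was an answer: nothing more to create on this side
}
```

A connection that then stays in "checking" usually means ICE candidates were never relayed: each side must forward every candidate from `pc.onicecandidate` over the signaling channel and pass it to the other peer's `addIceCandidate`, or the ICE check phase can never complete.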
