Use separate channels or streams for emit in socket.io - javascript

I have a multi client chat application where clients can share both texts and images.
But I'm facing an issue: when a user sends an image that is quite large and sends a text just after it, the other users have to wait until the image is fully received.
Is there a way to separately emit and receive the text and image data, so that the text is received while the image is still being received?
Currently I'm using one emitter for both the text and image.
socket.emit('message', data, type, uId, msgId);
And if I have to use another protocol like UDP or WebRTC then which one would be the best? As far I know UDP cannot be used in the browser scripts.

So, what I've figured out is:
Split the large image data into many (e.g. 100) parts, then use socket.emit():
let image = new Image();
image.src = selectedImage;
image.onload = async function () {
    let resized = resizeImage(image, image.mimetype); // resize it before sending
    // store the image in 100 parts
    let partSize = Math.ceil(resized.length / 100);
    let partArray = [];
    socket.emit('fileUploadStart', 'image', resized.length, id);
    for (let i = 0; i < resized.length; i += partSize) {
        partArray.push(resized.substring(i, i + partSize));
        socket.emit('fileUploadStream', resized.substring(i, i + partSize), id);
        await sleep(100); // wait a bit so other events can be sent while the image is being sent
    }
    socket.emit('fileUploadEnd', id);
};
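The sleep() helper used above isn't defined in the snippet; a minimal version (an assumption, not part of the original code) would be:

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));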
Then finally gather the image parts back together.
I've used Map() on the server side and it's pretty fast.
Server:
const imageDataMap = new Map();
socket.on('fileUploadStart', (...) => {
imageDataMap.set(id, {metaData});
});
socket.on('fileUploadStream', (...) => {
imageDataMap.get(id).data += chunk;
});
socket.on('fileUploadEnd', (...) => {
//emit the data to other clients
});
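A fuller sketch of those handlers, assuming each chunk is a string and id identifies one upload (imageDataMap and the event names come from the snippets above; the final broadcast is an assumption about how the reassembled image gets forwarded):

socket.on('fileUploadStart', (type, totalLength, id) => {
    imageDataMap.set(id, { type, totalLength, data: '' });
});

socket.on('fileUploadStream', (chunk, id) => {
    const entry = imageDataMap.get(id);
    if (entry) entry.data += chunk;
});

socket.on('fileUploadEnd', (id) => {
    const entry = imageDataMap.get(id);
    if (!entry) return;
    imageDataMap.delete(id);
    // forward the reassembled image to the other clients
    socket.broadcast.emit('message', entry.data, entry.type, id);
});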

Related

How to write BLE write characteristic over 512B

I have a client attempting to send images to a server over BLE.
Client Code
//BoilerPlate to setup connection and whatnot
sendFile.onclick = async () => {
    var fileList = document.getElementById("myFile").files;
    var fileReader = new FileReader();
    if (fileReader && fileList && fileList.length) {
        fileReader.readAsArrayBuffer(fileList[0]);
        fileReader.onload = function () {
            var imageData = fileReader.result;
            // Server doesn't get data if I don't do this chunking
            imageData = imageData.slice(0, 512);
            const base64String = _arrayBufferToBase64(imageData);
            document.getElementById("ItemPreview").src = "data:image/jpeg;base64," + base64String;
            sendCharacteristic.writeValue(imageData);
        };
    }
};
Server Code
MyCharacteristic.prototype.onWriteRequest = function (data, offset, withoutResponse, callback) {
    // It seems this will not print out if the client sends over 512B.
    console.log(this._value);
};
My goal is to send small images (just ~6 kB). These are still small enough that I'd prefer to use BLE over a BT Serial connection. Is the only way to do this to perform some chunking and then stream the chunks over?
Current 'Chunking' Code
const MAX_LENGTH = 512;
for (let i=0; i<bytes.byteLength; i+= MAX_LENGTH) {
const end = (i+MAX_LENGTH > bytes.byteLength) ? bytes.byteLength : i+MAX_LENGTH;
const chunk = bytes.slice(i, end);
sendCharacteristic.writeValue(chunk);
await sleep(1000);
}
The above code works, however it sleeps in between sends. I'd rather not do this because there's no guarantee a previous packet will be finished sending and I could sleep longer than needed.
I'm also perplexed as to how the server code would know that the client has finished sending all the bytes so it can assemble them. Is there some kind of pattern for achieving this?
BLE characteristic values can be at most 512 bytes, so yes, the common way to send larger data is to split it into multiple chunks. Use "Write Without Response" for best performance (MTU - 3 must be at least as big as your chunk).
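As a rough sketch of that idea (the 4-byte length header and the chunk pacing are assumptions for illustration; writeValueWithoutResponse() is the Web Bluetooth call behind "Write Without Response"): the sender first tells the peripheral how many bytes to expect, then writes the chunks, and the peripheral's onWriteRequest handler appends incoming chunks until the advertised byte count has arrived, so no fixed sleep() is needed.

// chunkSize must be <= ATT_MTU - 3 for Write Without Response
async function sendImage(sendCharacteristic, bytes, chunkSize) {
    const header = new DataView(new ArrayBuffer(4));
    header.setUint32(0, bytes.byteLength); // total size, so the peripheral knows when the transfer is complete
    await sendCharacteristic.writeValueWithoutResponse(header.buffer);
    for (let i = 0; i < bytes.byteLength; i += chunkSize) {
        // awaiting each write lets the browser pace the writes instead of using a fixed sleep()
        await sendCharacteristic.writeValueWithoutResponse(bytes.slice(i, i + chunkSize));
    }
}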

File uploads through socket.io (JavaScript & FileReader)

I am creating a chat app (in React Native), but for now, I have made some tests in vanilla JavaScript. The server is a NodeJS-server.
It works with sending text messages, but now I have some questions about sending photos/videos/audio files. I'm doing a lot of research online on what's the best method to do this.
I came up with the idea to use the FileReader API and split up the file into chunks, and sending chunk by chunk via the socket.emit()-function.
This is my code so far (simplified):
Please note that I will create a React Native app, but for now (for testing), I've just created a HTML-file with an upload form.
// index.html
// the page where my upload form is
var reader = {};
var file = {};
var sliceSize = 1000 * 1024;
var socket = io('http://localhost:8080');
const startUpload = e => {
    e.preventDefault();
    reader = new FileReader();
    file = $('#file')[0].files[0];
    uploadFile(0);
}
$('#start-upload').on('click', startUpload)
const uploadFile = start => {
    var slice = start + sliceSize + 1;
    var blob = file.slice(start, slice);
    reader.onloadend = e => {
        if (slice < file.size) {
            socket.emit('message', JSON.stringify({
                fileName: file.name,
                fileType: file.type,
                fileChunk: e.target.result
            }));
            uploadFile(slice); // read and send the next chunk
        } else {
            console.log('Upload completed!');
        }
    };
    reader.readAsDataURL(blob);
}
// app.js
// my NodeJS server-file
var file;
var files = {};
io.on('connection', socket => {
console.log('User connected!');
// when a message is received
socket.on('message', data => {
file = JSON.parse(data)
if (!files[file.fileName]) {
// this is the first chunk received
// create a new string
files[file.fileName] = '';
}
// append the binary data
files[file.fileName] = files[file.fileName] + file.fileChunk;
})
// on disconnect
socket.on('disconnect', () => {
console.log('User disconnected!');
})
})
I did not include any checks for file type (I'm not at that point yet), I first want to make sure that this is the right thing to do.
Stuff I need to do:
Send a message (like socket.emit('uploaddone', ...)) from the client to the server to notify the server that the upload is done (and the server can emit the complete file to another user).
My questions are:
Is it okay to send chunks of binary data (base64) over a socket, or would it take up too much bandwidth?
Will I lose some quality (photos/videos/audio files) when splitting them up into chunks?
If there is a better way to do this, please let me know. I'm not asking for working code examples, just some guidance in the good direction.
You can send raw bytes over the WebSocket; base64 has ~33% size overhead.
Also, you won't have to JSON.stringify the whole (and possibly large) body and parse it on the receiving side.
Will I lose some quality
No, the underlying protocol (TCP) delivers data in order and without corruption.
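A minimal sketch of sending raw chunks with socket.io, which accepts ArrayBuffers/Buffers directly as emit arguments (the 'file-chunk' event name and the metadata shape are made up for this example):

// client: read each slice as an ArrayBuffer and emit the raw bytes
async function uploadFile(socket, file, sliceSize = 64 * 1024) {
    for (let start = 0, seq = 0; start < file.size; start += sliceSize, seq++) {
        const chunk = await file.slice(start, start + sliceSize).arrayBuffer();
        socket.emit('file-chunk', {
            fileName: file.name,
            fileType: file.type,
            seq: seq,
            last: start + sliceSize >= file.size
        }, chunk);
    }
}

// server: binary payloads arrive as Node Buffers
const uploads = {};
socket.on('file-chunk', (meta, chunk) => {
    if (!uploads[meta.fileName]) uploads[meta.fileName] = [];
    uploads[meta.fileName].push(chunk);
    if (meta.last) {
        const fileBuffer = Buffer.concat(uploads[meta.fileName]);
        delete uploads[meta.fileName];
        // forward fileBuffer to the other clients, or write it to disk
    }
});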
I realize this answer is a couple of months late, but just for future reference you should look into using the acknowledgment option with socket.io here
// with acknowledgement
let message = JSON.stringify({
fileName: file.name,
fileType: file.type,
fileChunk: e.target.result
})
socket.emit("message", message, (ack) => {
// send next chunk...
});
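On the server side, socket.io passes the acknowledgement callback as the last argument of the handler, so the flow would look roughly like this (a sketch, assuming the same 'message' event):

socket.on('message', (message, ack) => {
    const chunk = JSON.parse(message);
    // ... store or forward the chunk ...
    ack(); // the client's callback fires and it can emit the next chunk
});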

How to stream live video frames from client to flask server and back to the client?

I am trying to build a client server architecture where I am capturing the live video from user's webcam using getUserMedia(). Now instead of showing video directly in <video> tag, I want to send it to my flask server, do some processing on frames and throw it back to my web page.
I have used socketio for creating a client-server connection.
This is the script in my index.html. Please pardon my mistakes or any wrong code.
<div id="container">
<video autoplay="true" id="videoElement">
</video>
</div>
<script type="text/javascript" charset="utf-8">
var socket = io('http://127.0.0.1:5000');
// checking for connection
socket.on('connect', function(){
console.log("Connected... ", socket.connected)
});
var video = document.querySelector("#videoElement");
// asking permission to access the system camera of user, capturing live
// video on getting true.
if (navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ video: true })
.then(function (stream) {
// instead of showing it directly in <video>, I want to send these frame to server
//video_t.srcObject = stream
//this code might be wrong, but this is what I want to do.
socket.emit('catch-frame', { image: true, buffer: getFrame() });
})
.catch(function (err0r) {
console.log(err0r)
console.log("Something went wrong!");
});
}
// returns a frame encoded in base64
const getFrame = () => {
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    const data = canvas.toDataURL('image/png');
    return data;
}
// receive the frame from the server after processed and now I want display them in either
// <video> or <img>
socket.on('response_back', function(frame){
// this code here is wrong, but again this is what something I want to do.
video.srcObject = frame;
});
</script>
In my app.py -
from flask import Flask, render_template
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/', methods=['POST', 'GET'])
def index():
    return render_template('index.html')

@socketio.on('catch-frame')
def catch_frame(data):
    ## getting the data frames
    ## do some processing
    ## send it back to client
    emit('response_back', data)  ## ??

if __name__ == '__main__':
    socketio.run(app, host='127.0.0.1')
I have also thought about doing this with WebRTC, but I could only find code for peer-to-peer.
So, can anyone help me with this?
Thanks in advance for help.
So, what I was trying to do is take the real-time video stream captured by the client's webcam and process the frames at the backend.
My backend code is written in Python and I am using SocketIO to send the frames from the frontend to the backend. You can have a look at this design to get a better idea about what's happening -
[design diagram]
My server (app.py) will be running in the backend and the client will be accessing index.html.
A SocketIO connection will be established and the video stream captured using the webcam will be sent to the server frame by frame.
These frames will then be processed at the backend and emitted back to the client.
Processed frames coming from the server can be shown in an img tag.
Here is the working code -
app.py
# imports this handler relies on
import base64
import io
from io import StringIO

import cv2
import imutils
import numpy as np
from PIL import Image
from flask_socketio import emit

@socketio.on('image')
def image(data_image):
    sbuf = StringIO()
    sbuf.write(data_image)

    # decode and convert into image
    b = io.BytesIO(base64.b64decode(data_image))
    pimg = Image.open(b)

    ## converting RGB to BGR, as opencv standards
    frame = cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)

    # Process the image frame
    frame = imutils.resize(frame, width=700)
    frame = cv2.flip(frame, 1)
    imgencode = cv2.imencode('.jpg', frame)[1]

    # base64 encode
    stringData = base64.b64encode(imgencode).decode('utf-8')
    b64_src = 'data:image/jpg;base64,'
    stringData = b64_src + stringData

    # emit the frame back
    emit('response_back', stringData)
index.html
<div id="container">
<canvas id="canvasOutput"></canvas>
<video autoplay="true" id="videoElement"></video>
</div>
<div class = 'video'>
<img id="image">
</div>
<script>
var socket = io('http://localhost:5000');
socket.on('connect', function(){
console.log("Connected...!", socket.connected)
});
const video = document.querySelector("#videoElement");
video.width = 500;
video.height = 375;
if (navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ video: true })
.then(function (stream) {
video.srcObject = stream;
video.play();
})
.catch(function (err0r) {
console.log(err0r)
console.log("Something went wrong!");
});
}
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
let cap = new cv.VideoCapture(video);
const FPS = 22;
setInterval(() => {
    cap.read(src);
    var type = "image/png";
    var data = document.getElementById("canvasOutput").toDataURL(type);
    data = data.replace('data:' + type + ';base64,', ''); // split off junk at the beginning
    socket.emit('image', data);
}, 10000/FPS);
socket.on('response_back', function(image){
const image_id = document.getElementById('image');
image_id.src = image;
});
</script>
Also, websockets only work on a secure origin.
I had to tweak your solution a bit :-
I commented out the three cv variables and the cap.read(src) statement, and modified the following line
var data = document.getElementById("canvasOutput").toDataURL(type);
to
var video_element = document.getElementById("videoElement")
var frame = capture(video_element, 1)
var data = frame.toDataURL(type);
Using the capture function from here :- http://appcropolis.com/blog/web-technology/using-html5-canvas-to-capture-frames-from-a-video/
I'm not sure if this is the right way to do it but it happened to work for me.
Like I said I'm not super comfortable with javascript so instead of manipulating the base64 string in javascript, I'd much rather just send the whole data from javascript and parse it in python this way
# Important to only split once
headers, image = base64_image.split(',', 1)
My takeaway from this, at the risk of sounding circular, is that you can't directly pull an image string out of a canvas that contains a video element; you need to create a new canvas onto which you draw a 2D image of the frame you capture from the video element.
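For reference, a minimal capture() along the lines of the one linked above (a sketch, assuming an already-playing <video> element): it draws the current frame onto a fresh canvas and returns that canvas, so toDataURL() can be called on it.

function capture(videoElement, scaleFactor) {
    scaleFactor = scaleFactor || 1;
    const canvas = document.createElement('canvas');
    canvas.width = videoElement.videoWidth * scaleFactor;
    canvas.height = videoElement.videoHeight * scaleFactor;
    canvas.getContext('2d').drawImage(videoElement, 0, 0, canvas.width, canvas.height);
    return canvas;
}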

Stream audio over websocket with low latency and no interruption

I'm working on a project which requires the ability to stream audio from a webpage to other clients. I'm already using websocket and would like to channel the data there.
My current approach uses MediaRecorder, but there is a problem with sampling which causes interruptions. It records 1 s of audio and then sends it to the server, which relays it to the other clients. Is there a way to capture a continuous audio stream and transform it to base64?
Maybe if there is a way to create base64 audio from a MediaStream without delay it would solve the problem. What do you think?
I would like to keep using websockets; I know there is WebRTC.
Have you ever done something like this, is this doable?
MediaStream -> MediaRecorder -> base64 -> WebSocket -> Server --> Device 1
                                                              --> Device ..
                                                              --> Device 18
Here a demo of the current approach... you can try it here: https://jsfiddle.net/8qhvrcbz/
var sendAudio = function (b64) {
    var message = 'var audio = document.createElement(\'audio\');';
    message += 'audio.src = "' + b64 + '";';
    message += 'audio.play().catch(console.error);';
    eval(message);
    console.log(b64);
}

navigator.mediaDevices.getUserMedia({
    audio: true
}).then(function (stream) {
    setInterval(function () {
        var chunks = [];
        var recorder = new MediaRecorder(stream);
        recorder.ondataavailable = function (e) {
            chunks.push(e.data);
        };
        recorder.onstop = function (e) {
            var audioBlob = new Blob(chunks);
            var reader = new FileReader();
            reader.readAsDataURL(audioBlob);
            reader.onloadend = function () {
                var b64 = reader.result;
                b64 = b64.replace('application/octet-stream', 'audio/mpeg');
                sendAudio(b64);
            }
        }
        recorder.start();
        setTimeout(function () {
            recorder.stop();
        }, 1050);
    }, 1000);
});
Websocket is not the best fit for this. I solved it by using WebRTC instead of websockets.
The websocket solution was obtained by recording 1050 ms instead of 1000; it causes a bit of overlap but is still better than hearing blanks.
Although you have solved this through WebRTC, which is the industry recommended approach, I'd like to share my answer on this.
The problem here is not websockets in general but rather the MediaRecorder API. Instead of using it one can use PCM audio capture and then submit the captured array buffers into a web worker or WASM for encoding to MP3 chunks or similar.
const context = new AudioContext();
let leftChannel = [];
let rightChannel = [];
let recordingLength = null;
let bufferSize = 512;
let sampleRate = context.sampleRate;
// audioStream is the MediaStream obtained from getUserMedia
const audioSource = context.createMediaStreamSource(audioStream);
const scriptNode = context.createScriptProcessor(bufferSize, 1, 1);
audioSource.connect(scriptNode);
scriptNode.connect(context.destination);
scriptNode.onaudioprocess = function(e) {
// Do something with the data, e.g. convert it to WAV or MP3
};
Based on my experiments this would give you "real-time" audio. My theory with the MediaRecorder API is that it does some buffering first before emitting out anything that causes the observable delay.
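As one possible way to fill in the onaudioprocess body above (an assumption, not part of the original answer): convert the Float32 samples to 16-bit PCM and emit them over the existing socket, leaving WAV/MP3 encoding to a worker if needed.

scriptNode.onaudioprocess = function (e) {
    const samples = e.inputBuffer.getChannelData(0); // Float32Array of samples in [-1, 1]
    const pcm16 = new Int16Array(samples.length);
    for (let i = 0; i < samples.length; i++) {
        const s = Math.max(-1, Math.min(1, samples[i]));
        pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    socket.emit('audio-chunk', pcm16.buffer); // assumes a socket.io connection named `socket`
};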

Nativescript get base64 data from native image

I try to use https://github.com/bradmartin/nativescript-drawingpad for saving a signature to my backend. But I am simply unable to find a way to get some "useful" data out of getDrawing(), which returns a native image object, for example a UIImage on iOS.
I would love to "convert" the image data to some base64 (png, or whatever) string and send it to my server.
I tried something like:
var ImageModule = require("ui/image");
var ImageSourceModule = require("image-source");
elements.drawingpad.getDrawing().then(function(a){
var image = ImageSourceModule.fromNativeSource( a );
api.post("sign", image.toBase64String());
});
I also tried to simply post a (the raw result), like in the demo.
I would really love to see a demo of how to get my hands on the "image data" itself.
thanks!
Thanks to @bradmartin I found the solution:
var image = ImageSourceModule.fromNativeSource(a);
var base64 = image.toBase64String('png');
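Put together with the original snippet, the handler would look roughly like this (api.post is the question's own backend helper):

elements.drawingpad.getDrawing().then(function (a) {
    var image = ImageSourceModule.fromNativeSource(a);
    var base64 = image.toBase64String('png'); // the format argument is required
    api.post("sign", base64);
});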
Actually, after lots of digging and tons of errors, I finally figured it out.
First you have to require the nativescript-imagepicker source module and also the nativescript image source.
var imagepicker = require("nativescript-imagepicker");
var ImageSourceModule = require("tns-core-modules/image-source");
Here is a case where you want to update a user profile and also send a base64 string to your backend for processing:
function changeProfileImage(args) {
    var page = args.object;
    var profile = page.getViewById("profile-avatar");
    var context = imagepicker.create({ mode: "single" });
    context.authorize().then(function () {
        return context.present();
    }).then(function (selection) {
        profile.background = `url(${selection[0]._android})`;
        profile.backgroundRepeat = `no-repeat`;
        profile.backgroundSize = `cover`;
        ImageSourceModule.fromAsset(selection[0]).then(image => {
            var base64 = image.toBase64String('png');
            // console.log(base64);
            uploadMediaFile(base64);
        });
    }).catch(function (e) {
        // process error
        console.log(e);
    });
}
