What to do when Arduino sends empty ArrayBuffer to NW.js? - javascript

I've been playing around with the MPR121 Shield Capacitive Touch Sensor by Adafruit. In the Arduino IDE, there is example code you can simply download and run, and it works perfectly: when I touch pin 11, for example, it prints "11 touched", and when I release it, it prints "11 released". Great!
Now the problem comes when I try to transfer that data to NW.js. Using Chrome's serial API in NW.js, I can connect to the port my Arduino is on and try to read whatever data the Arduino is sending. However, the only thing I receive is an ArrayBuffer filled with zero bytes. I'm really not sure what is happening here: everything works perfectly in the Arduino IDE, but chrome.serial returns basically nothing.
Does anyone have a tip or an idea of what's going on here? If I do a console.log(info.data), I only get an ArrayBuffer full of zero bytes.
Thanks
Here is my code:
const ab2str = require('arraybuffer-to-string');

nw.Window.get().showDevTools();

let buffer = "";

chrome.serial.getDevices(devices => {
  devices.forEach(device => console.log(device));
});

// let port = "COM3";
let port = "/dev/cu.usbmodem142401";

chrome.serial.connect(port, {bitrate: 1000000}, () => {
  console.log("Serialport connected:" + port);
  chrome.serial.onReceive.addListener(onDataReceived);
});

function onDataReceived(info) {
  let lines = ab2str(info.data).split("\n");
  lines[0] = buffer + lines[0];
  buffer = lines.pop();
  lines.forEach(line => {
    const [type, value] = line.split("=");
    console.log(type, value);
  });
}

The Tx and Rx baud rates have to be the same for the information to be decoded properly. The Arduino IDE handles that for you in the first case, but you will need to handle it manually in the second. In serial port communication, a single bit is transferred at a time, unlike parallel ports where all bits are available at once for reading. So in serial ports, the rate at which information is transmitted (Tx) must be the same as the rate at which it is received (Rx); otherwise bits can be lost and you read corrupted data. The Arduino IDE handles most of these issues for you. If I'm not mistaken, the IDE lets you change the baud rate, but the default is 9600.
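To make the mismatch concrete: the connect call above passes {bitrate: 1000000}, while the stock Adafruit example sketch typically calls Serial.begin(9600), so the two ends never agree. A small sketch of the arithmetic, assuming the common 8N1 framing (the helper function is mine, for illustration only):

```javascript
// With 8N1 framing, each byte on the wire costs 10 bits
// (1 start bit + 8 data bits + 1 stop bit), so effective
// throughput in bytes per second is baud / 10.
function bytesPerSecond(baud) {
  return baud / 10;
}

// The bitrate passed to chrome.serial.connect must match the rate
// in the sketch's Serial.begin(...), e.g. 9600 for the stock example:
// chrome.serial.connect(port, {bitrate: 9600}, onConnect);

console.log(bytesPerSecond(9600)); // 960
```

If the sketch was changed to Serial.begin(115200), the connect options need {bitrate: 115200} to match.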

Related

Recording and uploading audio from javascript

I am trying to record and upload audio from javascript. I can successfully record audio Blobs from a MediaRecorder. My understanding is that after recording several chunks into blobs, I would concatenate them as a new Blob(audioBlobs) and upload that. Unfortunately, the result on the server side keeps being more or less gibberish. I'm currently running a localhost connection, so converting to uncompressed WAV isn't a problem (it might become one later, but that's a separate issue). Here is what I have so far:
navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start(1000);
    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });
    function sendData() {
      const audioBlob = new Blob(audioChunks);
      session.call('my.app.method', [XXXXXX see below XXXXXX]);
    }
  })
The session object here is an autobahn.js WebSocket connection to a Python server (using soundfile). I tried a number of arguments in the place labelled XXXXX in the code:
Just pass the audioBlob. In that case, the Python side just receives an empty dictionary.
Pass audioBlob.text(). In that case, I get something that looks somewhat binary (it starts with OggS), but it can't be decoded.
Pass audioBlob.arrayBuffer(). In that case, the Python side again receives an empty dictionary.
A possible solution could be to convert the data to WAV on the server side (just changing the mime-type on the blob doesn't work), or to find a way to interpret the .text() output on the server side.
The solution was to use recorder.js and then use the getBuffer method in there to get the wave data as a Float32Array.
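For anyone converting by hand instead: once you have a Float32Array of PCM samples (like getBuffer returns), producing an uploadable WAV is mostly writing the 44-byte RIFF header in front of 16-bit samples. A minimal sketch, assuming mono audio; the function name is mine and not part of recorder.js:

```javascript
// Pack a Float32Array of samples in [-1, 1] into a mono 16-bit PCM WAV buffer.
function encodeWav(samples, sampleRate) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeString = (offset, str) => {
    for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
  };
  writeString(0, "RIFF");
  view.setUint32(4, 36 + samples.length * 2, true); // file size minus 8
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);     // sample rate
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, "data");
  view.setUint32(40, samples.length * 2, true);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}
```

The resulting ArrayBuffer can be sent as binary and written to disk verbatim on the Python side.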

Playback when using Web Audio API skips at the beginning of every chunk

I've been building a music app and today I finally got around to the point where I started trying to work playing the music into it.
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS. I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end where they are processed by the Web Audio API and scheduled to play.
When they play, they are all in the correct order but there is this very tiny glitch or skip at the same spots every time (presumably between chunks) that I can't seem to get rid of. As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them. Any help would be appreciated. Here's the code:
Socket Route
socket.on('stream-audio', () => {
  db.client.db("dev").collection('music.files').findOne({"metadata.songId": "3"}).then((result) => {
    const bucket = new GridFSBucket(db.client.db("dev"), {
      bucketName: "music"
    });
    bucket.openDownloadStream(result._id).on('data', (chunk) => {
      socket.emit('audio-chunk', chunk);
    });
  });
});
Front end
//These variables are declared as object variables, hence all of the "this" keywords
context: new (window.AudioContext || window.webkitAudioContext)(),
freeTime: null,
numChunks: 0,
chunkTracker: [],
...
this.socket.on('audio-chunk', (chunk) => {
  //Keeping track of chunk decoding status so that they don't get scheduled out of order
  const chunkId = this.numChunks;
  this.chunkTracker.push({
    id: chunkId,
    complete: false,
  });
  this.numChunks += 1;
  //Callback to the decodeAudioData function
  const decodeCallback = (buffer) => {
    var shouldExecute = false;
    const trackIndex = this.chunkTracker.map((e) => e.id).indexOf(chunkId);
    //Checking if either it's the first chunk or the previous chunk has completed
    if (trackIndex !== 0) {
      const prevChunk = this.chunkTracker.filter((e) => e.id === (chunkId - 1));
      if (prevChunk[0].complete) {
        shouldExecute = true;
      }
    } else {
      shouldExecute = true;
    }
    //THIS IS THE ACTUAL WEB AUDIO API STUFF
    if (shouldExecute) {
      if (this.freeTime === null) {
        this.freeTime = this.context.currentTime;
      }
      const source = this.context.createBufferSource();
      source.buffer = buffer;
      source.connect(this.context.destination);
      if (this.context.currentTime >= this.freeTime) {
        source.start();
        this.freeTime = this.context.currentTime + buffer.duration;
      } else {
        source.start(this.freeTime);
        this.freeTime += buffer.duration;
      }
      //Update the tracker of the chunks that this one is complete
      this.chunkTracker[trackIndex] = {id: chunkId, complete: true};
    } else {
      //If the previous chunk hasn't processed yet, check again in 50ms
      setTimeout((passBuffer) => {
        decodeCallback(passBuffer);
      }, 50, buffer);
    }
  };
  //Note: bind returns a new function, so this line has no effect;
  //the arrow function above already captures `this`.
  decodeCallback.bind(this);
  this.context.decodeAudioData(chunk, decodeCallback);
});
Any help would be appreciated, thanks!
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS.
You can do this if you want, but these days we have tools like Minio, which can make this easier using more common APIs.
I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end
Don't go this route. There's no reason for the overhead of web sockets, or Socket.IO. A normal HTTP request would be fine.
where they are processed by the Web Audio API and scheduled to play.
You can't stream this way. The Web Audio API doesn't support useful streaming, unless you happened to have raw PCM chunks, which you don't.
As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them.
Lossy codecs aren't going to give you sample-accurate output. Especially with MP3, if you give it some arbitrary number of samples, you're going to end up with at least one full MP3 frame (1152 samples) of output. The reality is that you need data ahead of the first audio frame for it to work properly. If you want to decode a stream, you need a stream to start with. You can't independently decode MP3 chunks this way.
Fortunately, the solution also simplifies what you're doing. Simply return an HTTP stream from your server, and use an HTML audio element <audio> or new Audio(url). The browser will handle all the buffering. Just make sure your server handles range requests, and you're good to go.
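The range-request part is the only piece that usually needs attention on a hand-rolled server: answer a Range: bytes=start-end header with a 206 and a Content-Range. A framework-agnostic sketch of that calculation (the helper is illustrative, not a specific library API):

```javascript
// Given a Range header value and the file size, compute the status,
// byte span, and headers a streaming audio endpoint should respond with.
function rangeResponse(rangeHeader, fileSize) {
  if (!rangeHeader) {
    return { status: 200, start: 0, end: fileSize - 1 };
  }
  const match = rangeHeader.match(/bytes=(\d*)-(\d*)/);
  const start = match[1] ? parseInt(match[1], 10) : 0;
  const end = match[2] ? parseInt(match[2], 10) : fileSize - 1; // open-ended range
  return {
    status: 206,
    start: start,
    end: end,
    headers: {
      "Content-Range": "bytes " + start + "-" + end + "/" + fileSize,
      "Content-Length": end - start + 1,
      "Accept-Ranges": "bytes",
    },
  };
}
```

With that in place, the browser's `<audio>` element will seek and buffer on its own.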

How can I parse a UDP packet using a c header file in node.js

I'm trying to build a server that reads telemetry data from a racing simulator.
I have been researching how to listen for the UDP packets and have so far successfully been able to listen to the client, receive the packets, and log the ArrayBuffer.
I've come across this Stack Overflow answer that shows how to parse it:
javascript ArrayBuffer, what's it for?
I was wondering, however, if there is any way of parsing the C++ header file automatically, or if I have to sanitize the data myself.
Here's a sample of my express node.js server
server.on("message", (buffer, rinfo) => {
  console.log("received udp message");
  console.log(buffer);
});
Here's a sample of the c++ header file
enum EUDPStreamerPacketHandlerType
{
  eCarPhysics = 0,
  eRaceDefinition = 1,
  eParticipants = 2,
  eTimings = 3,
  eGameState = 4,
  eWeatherState = 5, // not sent at the moment, information can be found in the game state packet
  eVehicleNames = 6, // not sent at the moment
  eTimeStats = 7,
  eParticipantVehicleNames = 8
};
I was wondering if someone could point me in the right direction; my Google searches haven't turned up much. Any help would be appreciated.
Here is the python code
https://github.com/tyretrack/server/blob/91a0aba1ade8d3a45b53e5af432fb05a55703730/tyretrack/pcars/v2.py
It would be up to you to convert it to your desired javascript code.
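Short of writing a real C-header parser, a hand-maintained mapping in Node is usually enough to start dispatching on the packet type. A sketch, assuming the handler type is the first byte of the datagram; check the actual struct layout and offsets in the header before relying on that:

```javascript
// Names mirror the EUDPStreamerPacketHandlerType enum above (values 0..8).
const PACKET_TYPES = [
  "eCarPhysics", "eRaceDefinition", "eParticipants", "eTimings",
  "eGameState", "eWeatherState", "eVehicleNames", "eTimeStats",
  "eParticipantVehicleNames",
];

// Read the packet type from a UDP datagram Buffer.
// The offset of the type field is an assumption for illustration.
function packetType(buffer, offset = 0) {
  return PACKET_TYPES[buffer.readUInt8(offset)];
}
```

For multi-byte fields you would continue with buffer.readUInt16LE, buffer.readFloatLE, etc., matching the struct's field order and endianness.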

How to message child process in Firefox add-on like Chrome native messaging

I am trying to emulate Chrome's native messaging feature using Firefox's add-on SDK. Specifically, I'm using the child_process module along with the emit method to communicate with a python child process.
I am able to successfully send messages to the child process, but I am having trouble getting messages sent back to the add-on. Chrome's native messaging feature uses stdin/stdout. The first 4 bytes of every message in both directions represents the size in bytes of the following message so the receiver knows how much to read. Here's what I have so far:
Add-on to Child Process
var utf8 = new TextEncoder("utf-8").encode(message);
var latin = new TextDecoder("latin1").decode(utf8);
emit(childProcess.stdin, "data", new TextDecoder("latin1").decode(new Uint32Array([utf8.length])));
emit(childProcess.stdin, "data", latin);
emit(childProcess.stdin, "end");
Child Process (Python) from Add-on
text_length_bytes = sys.stdin.read(4)
text_length = struct.unpack('i', text_length_bytes)[0]
text = sys.stdin.read(text_length).decode('utf-8')
Child Process to Add-on
sys.stdout.write(struct.pack('I', len(message)))
sys.stdout.write(message)
sys.stdout.flush()
Add-on from Child Process
This is where I'm struggling. I have it working when the length is less than 255. For instance, if the length is 55, this works:
childProcess.stdout.on('data', (data) => { // data is '7' (55 UTF-8 encoded)
  var utf8Encoded = new TextEncoder("utf-8").encode(data);
  console.log(utf8Encoded[0]); // 55
});
But, like I said, it does not work for all numbers. I'm sure I have to do something with TypedArrays, but I'm struggling to put everything together.
The problem here is that Firefox tries to read stdout as a UTF-8 stream by default. Since UTF-8 doesn't use the full first byte, you get corrupted characters for values like 255. The solution is to tell Firefox to read in binary encoding, which means you'll have to manually parse the actual message content later on.
var childProcess = spawn("mybin", [ '-a' ], { encoding: null });
Your listener would then work like
var decoder = new TextDecoder("utf-8");
var readIncoming = (data) => {
  // read the first four bytes, which indicate the size of the following message
  // (a DataView respects the subarray's byteOffset, which matters when
  // several messages arrive in one chunk)
  var size = new DataView(data.buffer, data.byteOffset, 4).getUint32(0, true);
  //TODO: handle size > data.byteLength - 4
  // read the message
  var message = decoder.decode(data.subarray(4, 4 + size));
  //TODO: do stuff with message
  // Read the next message if there are more bytes.
  if (data.byteLength > 4 + size)
    readIncoming(data.subarray(4 + size));
};
childProcess.stdout.on('data', (data) => {
  // convert the data string to a byte array
  // The bytes got converted by char code, see https://dxr.mozilla.org/mozilla-central/source/addon-sdk/source/lib/sdk/system/child_process/subprocess.js#357
  var bytes = Uint8Array.from(data, (c) => c.charCodeAt(0));
  readIncoming(bytes);
});
Maybe this is similar to this problem:
Chrome native messaging doesn't accept messages of certain sizes (Windows)
Windows-only: Make sure that the program's I/O mode is set to O_BINARY. By default, the I/O mode is O_TEXT, which corrupts the message format as line breaks (\n = 0A) are replaced with Windows-style line endings (\r\n = 0D 0A). The I/O mode can be set using __setmode.
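The TextDecoder round-trips above are where the corruption for values of 255 and up crept in; building the 4-byte little-endian prefix explicitly with a DataView avoids the encoding step entirely. A minimal sketch of both directions of the wire format (the function names are mine):

```javascript
// Frame a string as: 4-byte little-endian length prefix + UTF-8 payload.
function frameMessage(text) {
  const payload = new TextEncoder().encode(text);
  const framed = new Uint8Array(4 + payload.length);
  new DataView(framed.buffer).setUint32(0, payload.length, true);
  framed.set(payload, 4);
  return framed;
}

// Inverse: read the length prefix, then decode that many payload bytes.
function unframeMessage(bytes) {
  const size = new DataView(bytes.buffer, bytes.byteOffset, 4).getUint32(0, true);
  return new TextDecoder("utf-8").decode(bytes.subarray(4, 4 + size));
}
```

This matches the Python side's struct.pack('I', len(message)) on little-endian machines; struct.pack('&lt;I', ...) would pin the byte order explicitly on both ends.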

Large file upload with WebSocket

I'm trying to upload large files (at least 500MB, preferably up to a few GB) using the WebSocket API. The problem is that I can't figure out how to write "send this slice of the file, release the resources used then repeat". I was hoping I could avoid using something like Flash/Silverlight for this.
Currently, I'm working with something along the lines of:
function FileSlicer(file) {
  // randomly picked 1MB slices,
  // I don't think this size is important for this experiment
  this.sliceSize = 1024 * 1024;
  this.slices = Math.ceil(file.size / this.sliceSize);
  this.currentSlice = 0;
  this.getNextSlice = function() {
    var start = this.currentSlice * this.sliceSize;
    var end = Math.min((this.currentSlice + 1) * this.sliceSize, file.size);
    ++this.currentSlice;
    return file.slice(start, end);
  }
}
Then, I would upload using:
function Uploader(url, file) {
  var fs = new FileSlicer(file);
  var socket = new WebSocket(url);
  socket.onopen = function() {
    for (var i = 0; i < fs.slices; ++i) {
      socket.send(fs.getNextSlice()); // see below
    }
  }
}
Basically this returns immediately, bufferedAmount is unchanged (0), and it keeps iterating, adding all the slices to the queue before attempting to send them; there's no socket.afterSend to let me queue them properly, which is where I'm stuck.
Use a web worker to process large files instead of doing it in the main thread, and upload chunks of file data using file.slice().
This article helps you handle large files in workers; change the XHR send to a WebSocket send in the main thread.
//Messages from worker
function onmessage(blobOrFile) {
  ws.send(blobOrFile);
}
//construct file on server side based on blob or chunk information.
I believe the send() method is asynchronous, which is why it returns immediately. To make it queue, you'd need the server to send a message back to the client after each slice is uploaded; the client can then decide whether to send the next slice or an "upload complete" message back to the server.
This sort of thing would probably be easier using XMLHttpRequest(2); it has callback support built-in and is also more widely supported than the WebSocket API.
In order to serialize this operation you need the server to send you a signal every time a slice is received & written (or an error occurs), this way you could send the next slice in response to the onmessage event, pretty much like this:
function Uploader(url, file) {
  var fs = new FileSlicer(file);
  var socket = new WebSocket(url);
  socket.onopen = function() {
    socket.send(fs.getNextSlice());
  }
  socket.onmessage = function(ms) {
    if (ms.data == "ok") {
      fs.slices--;
      if (fs.slices > 0) socket.send(fs.getNextSlice());
    } else {
      // handle the error code here.
    }
  }
}
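Browsers also expose socket.bufferedAmount, so another option is to poll it and only enqueue more slices once the buffer has drained, without a server-side ack. A sketch (the 4 MB high-water mark and 100 ms poll interval are arbitrary choices of mine):

```javascript
// Push slices while the socket's internal buffer is below highWater;
// otherwise wait and poll again until every slice has been enqueued.
function uploadWithBackpressure(socket, fs, highWater = 4 * 1024 * 1024) {
  let sent = 0;
  const pump = () => {
    while (sent < fs.slices && socket.bufferedAmount < highWater) {
      socket.send(fs.getNextSlice());
      sent++;
    }
    if (sent < fs.slices) setTimeout(pump, 100); // poll until drained
  };
  pump();
}
```

This bounds memory use on the client but, unlike the ack scheme above, gives no confirmation that the server actually wrote each slice.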
You could use https://github.com/binaryjs/binaryjs or https://github.com/liamks/Delivery.js if you can run node.js on the server.
EDIT: The web world (browsers, firewalls, proxies) has changed a lot since this answer was written. Right now, sending files using websockets can be done efficiently, especially on local area networks.
Websockets are very efficient for bidirectional communication, especially when you're interested in pushing information (preferably small) from the server. They act as bidirectional sockets (hence their name).
Websockets don't look like the right technology to use in this situation. Especially given that using them adds incompatibilities with some proxies, browsers (IE) or even firewalls.
On the other hand, uploading a file is simply sending a POST request to a server with the file in the body. Browsers are very good at that, and the overhead even for a big file is close to nothing. Don't use websockets for that task.
I think this socket.io project has a lot of potential:
https://github.com/sffc/socketio-file-upload
It supports chunked upload, progress tracking and seems fairly easy to use.
