I'm trying to communicate with a Bluetooth LE device, but have been told I need to "authenticate" before being able to read/write data. The hardware dev has told me that the device sends a key to the recipient, and I need to reply with 12000000000000000000000000. He has tested this successfully with the nRF Connect desktop app (but I need to replicate it in React Native).
I've tried sending 12000000000000000000000000 (converted to base64) to the device's notify characteristic as soon as I connect to it using the code below:
const Buffer = require("buffer").Buffer;
const loginString = "12000000000000000000000000";
const hexToBase64 = Buffer.from(loginString).toString("base64");
characteristics[0].writeWithResponse(hexToBase64).then(()=>...)
However, I keep getting "GATT exception from MAC address C7:7A:16:6B:1F:56, with type BleGattOperation{description='CHARACTERISTIC_WRITE'}" even though the code executes properly (no catch error).
I've looked through the react-native-ble-plx docs and still haven't found a solution to my problem; any help would be appreciated!
If the BLE device runs the Authorization Control Service (ACS, UUID 0x183D), your application plays the client role. In ACS there are two characteristics a client can write to: "ACS Data In" (UUID 0x2B30) and "ACS Control Point" (UUID 0x2B3D), while only the "ACS Data Out Notify" characteristic (UUID 0x2B31) has a Notify property, which is initiated by the server but must be enabled by the client. The data in these characteristics is little-endian in the payload, so converting the key to little-endian before the write operation may work. This is what I know from recently studying the BLE documents; I hope it helps.
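If the device does expect the multi-byte key little-endian, a minimal sketch of reversing the byte order (assuming the key string is hex-encoded) could look like this:

```javascript
// Reverse the byte order of a hex-encoded key (e.g. big-endian -> little-endian)
// before writing it to the characteristic.
function reverseHexBytes(hexKey) {
  return Buffer.from(hexKey, 'hex').reverse().toString('hex');
}

const key = "12000000000000000000000000";
const littleEndianKey = reverseHexBytes(key);
// littleEndianKey === "00000000000000000000000012"
```

Whether the reversal is needed depends on how the firmware interprets the key; it costs nothing to try both orders.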
The data type for writing to a characteristic is typically an array of bytes.
Try converting the string to a byte array, something like this:
const loginString = "12000000000000000000000000";
const byteArray = Array.from(loginString, c => c.charCodeAt(0));
characteristics[0].writeWithResponse(byteArray).then(() => ...)
There is a very good utility called ttyd, which allows you to run a console application on your computer and display this console in the browser.
After startup, the utility runs an HTTP server on the specified port. When you open localhost in a browser, it serves a web application that connects over WebSockets to localhost:<port>/ws, and this socket carries the communication between the web application and the ttyd agent running on the computer.
I want to implement a client for ttyd in C#. Using Chrome's developer tools, I inspected what data the web application sends before it starts receiving the console output. It is just a string: {"authToken":"","columns":211,"rows":46}, and I tried to repeat the same actions in my C# client. But for some reason, no data is returned to me from ttyd.
Comparing with what ttyd prints to its own console in the OS, it appears that it does not even create a process when my client connects.
Here is the code I use with the Websocket.Client package
var exitEvent = new ManualResetEvent(false);
var url = new Uri("ws://localhost:7681/ws");

using (var client = new WebsocketClient(url))
{
    client.ReconnectTimeout = TimeSpan.FromSeconds(30);
    client.ReconnectionHappened.Subscribe(info =>
        Console.WriteLine($"Reconnection happened, type: {info.Type}"));
    client.MessageReceived.Subscribe(msg => Console.WriteLine($"Message received: {msg}"));

    client.Start();
    Task.Run(() => client.Send("{\"AuthToken\":\"\",\"columns\":211,\"rows\":46}"));

    exitEvent.WaitOne();
}
I have absolutely no idea how to get ttyd to send data to my client. Do you have any idea what action the browser is doing I'm missing in my c# client?
I tried different C# WebSocket libraries, and also used Postman, copying all the headers the original web application sends to the ttyd agent, but nothing changes. It is as if my client is missing something fundamental that the browser does.
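One thing the ttyd web app does that the snippet above does not, as far as I can tell, is request the "tty" WebSocket subprotocol when opening the connection; a client that omits it may be ignored by the server. Also note the casing difference: the browser was observed sending "authToken" while the C# code sends "AuthToken", which could matter to a case-sensitive parser. A browser-side sketch of the handshake (dimensions taken from the question):

```javascript
// Init message as observed from the browser client in devtools.
function buildInitMessage(columns, rows, authToken = '') {
  return JSON.stringify({ authToken: authToken, columns: columns, rows: rows });
}

// The browser opens the socket requesting the "tty" subprotocol:
//   const socket = new WebSocket('ws://localhost:7681/ws', ['tty']);
//   socket.onopen = () => socket.send(buildInitMessage(211, 46));
const initMessage = buildInitMessage(211, 46);
// initMessage === '{"authToken":"","columns":211,"rows":46}'
```

In C#, the equivalent of the subprotocol request would be ClientWebSocket.Options.AddSubProtocol("tty") before connecting.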
I have a LM068 BLE serial adapter which I'm trying to communicate with from a web app. I have tested it from the nRF Connect app on Android, where the communication works fine.
The characteristic has the properties "notify" and "writeWithoutResponse".
When calling characteristic.startNotifications() I get an error "GATT Error: invalid attribute length.".
Calling characteristic.writeValue() successfully sends the data, and I can see the incoming data in my serial monitor. When sending data from the serial terminal, the characteristicvaluechanged event never fires. Notifications work from the nRF Connect app.
This is part of my current code:
const characteristic = await service.getCharacteristic(characteristicName)

try {
  await characteristic.startNotifications()
} catch (e) {
  console.log(e.message)
  // GATT Error: invalid attribute length.
}

const encoder = new TextEncoder('utf-8')
characteristic.writeValue(encoder.encode('test')) // Works

characteristic.addEventListener('characteristicvaluechanged', handleValueChanged) // Never gets called
So it turns out that the way I was testing the web app was the issue. I didn't have a BLE dongle for my workstation, so I was using my phone to access my development server. Web Bluetooth needs to run either on localhost or over https, so I simply ran the development server on https and accessed it over the network from my phone (like https://192.168.0.x). I proceeded even though Chrome deemed it unsafe, but apparently only part of Web Bluetooth works this way.
Pairing and writeWithoutResponse work with an unsigned certificate. Notifications do not.
I'm leaving this here in case anyone else makes the same mistake.
I am working on a project where we plan on controlling a rover through a web-based application. I am using UV4L and its modules on the Raspberry Pi. I have the streaming side set up well, but now I am trying to send data back to the Pi.
I have taken this joystick and put it into the demo webpage.
What I want to do is take the X and Y value that this joystick produces and send it back to the Pi and have it print the values. The way I have been attempting to do this is to turn the X and Y values into a JSON and read the JSON with Python. I am relatively new to programming, and have been thrown into the proverbial deep end.
I was trying to use an example I found in another Stack Overflow question; this is what I produced by butchering the code:
var xhr = new XMLHttpRequest();
var url = "webappurl";
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-Type", "json");
xhr.onload = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var json = JSON.parse(xhr.responseText);
    console.log(json.x + ", " + json.y);
  }
};
var data = JSON.stringify({x, y});
xhr.send(data);
Then I did this on the Python Side:
import requests
import simplejson
r = requests.get('webappurl')
c = r.content
j = simplejson.loads(c)
print(j)
The problem I have been having is that everything I find online recommends a different way to do this, and I haven't been able to find anything in other people's projects that I could utilise or adapt for our purposes. I need to keep it as direct and simple as possible.
I am under the impression that the joystick may already be built with functions/variables that can be used to trigger or post.
Any recommendations on the best way to go about this, or the correct code to do it, would be appreciated. I also have WebRTC data channels available, but I don't know if I need to use them for this.
I also wondered if there was a way to send the variable values over the websocket and parse the websocket with Python.
Thank you for your time,
Since you are developing a web application, it seems natural to stay with WebRTC. UV4L supports two-way audio, video and data channels. Here is how data channels work on the UV4L side.
Furthermore, the built-in WebRTC demo page, which you can fetch for example from the /stream/webrtc URL on the uv4l server, embeds some JavaScript code using data channels on the client side. You can find some code in this other demo web app here as well.
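As a rough sketch of the client side (the channel name and negotiation details are assumptions; UV4L's demo pages show the actual setup), the joystick sample can be serialized as JSON and sent over the data channel, and the Python side can json.loads() it:

```javascript
// Serialize a joystick sample so the Python side can json.loads() it.
function joystickPayload(x, y) {
  return JSON.stringify({ x: x, y: y });
}

// With a WebRTC data channel negotiated with the uv4l server, e.g.:
//   const channel = peerConnection.createDataChannel('joystick'); // name assumed
//   channel.onopen = () => channel.send(joystickPayload(0.5, -0.25));
const sample = joystickPayload(0.5, -0.25);
// sample === '{"x":0.5,"y":-0.25}'
```

This avoids the separate XMLHttpRequest/requests round trip entirely, since the data channel already connects the browser to the Pi.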
I have a Java Spring application with a Tomcat server that listens on a Kafka topic. I want to display all messages in real time on a web page: when a Kafka message arrives in the backend, I want to see it on the page. I don't know a good approach for pushing Kafka messages directly to the front end and displaying them on the web page. Could someone help me with a solution and some examples? Thanks!
I have implemented a system like this in Java for my last employer, albeit not with Spring/Tomcat. It was consuming messages from Kafka and serving them on a web socket to be displayed in the browser. The approach I followed was to use akka-stream-kafka and akka-http for web-socket support. The benefit of that is both are based on akka-streams which makes it an easy fit for streaming data.
While you can embed akka-http in your Spring app running inside Tomcat, it may not feel like the most natural choice any more, as the Spring framework already has its own support for both Kafka and WebSockets. However, if you're not familiar with either, the akka approach may be easiest, and the core logic goes along these lines (I can't share the code from work, so I've put this together from the examples in the docs; not tested):
public Route createRoute(ActorSystem system) {
  return path("ws", () -> {
    ConsumerSettings<byte[], String> consumerSettings =
        ConsumerSettings.create(system, new ByteArrayDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092")
            // A random group id so that each client gets all messages. To be able to resume
            // from where a client left off after a disconnect, you can generate it on the
            // client side instead and pass it in the request.
            .withGroupId(UUID.randomUUID().toString())
            .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return handleWebSocketMessages(
        Flow.fromSinkAndSourceCoupled(
            Sink.ignore(),
            Consumer.committableSource(consumerSettings, Subscriptions.topics("topic1"))
                .map(msg -> TextMessage.create(msg.record().value()))
        )
    );
  });
}
To expose this route you can follow the minimalistic example, the only difference being the route you define needs the ActorSystem:
final Http http = Http.get(system);
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Flow<HttpRequest, HttpResponse, NotUsed> routeFlow = createRoute(system).flow(system, materializer);
final CompletionStage<ServerBinding> binding = http.bindAndHandle(routeFlow,
ConnectHttp.toHost("localhost", 8080), materializer);
Once you have your messages published to the websocket, the front-end code will of course depend on your UI framework of choice; the simplest code to consume ws messages from JavaScript is:
this.connection = new WebSocket('ws://url-to-your-ws-endpoint');
this.connection.onmessage = evt => {
  // display the message
};
To easily display the message in the UI, you want the format to be something convenient, like JSON. If your Kafka messages are not JSON already, that's where the Deserializers in the first snippet come in, you can convert it to a convenient JSON string in the Deserializer or do it later on in the .map() called on the Source object.
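On the browser side, a small helper (the names are my own) keeps the UI code simple whether or not the payload is already JSON:

```javascript
// Parse an incoming ws frame as JSON; wrap raw text if it isn't JSON,
// so the UI always receives an object.
function parseFrame(data) {
  try {
    return JSON.parse(data);
  } catch (e) {
    return { raw: data };
  }
}

// Usage inside onmessage:
//   this.connection.onmessage = evt => render(parseFrame(evt.data));
```

This is just a convenience on top of whichever conversion you do server-side in the Deserializer or the .map().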
Alternatively, if polling is an option you can also consider using the off-the-shelf Kafka Rest Proxy, then you only need to build the front-end.
I am using MongoDB 3.6.2's change streams (with the MongoDB Node.js driver 3.0.1) to try to implement resumable streams of data to the browser. At some point in my code I do JSON.stringify on the resume token I get back during an update (i.e. the _id of the update from the change stream). I send this across the wire to the front-end app, and after a disconnect and subsequent reconnect it is sent back to the server to let it know where to resume from. However, I cannot simply supply this JSON object back to the driver to resume from, as I get an "invalid type for the resume token" runtime error.
An example of what the stringify is resulting in:
{"_data":"glpeTK8AAAABRmRfaWQAZFoygBEXtikxY6F/zgBaEAQkFlJHID5PgaLDUFQD2jMyBA=="}
The actual resume token appears to be a specialized buffer object in the form:
{
  _data: {
    buffer: Buffer(49),
    position: 49,
    sub_type: 0,
    _bsontype: "Binary"
  }
}
My problem is, of course, getting the string back into an actual resume token. The Buffer(49) itself seems to be converted into a base64 string which is then assigned to _data; I am uncertain what the other fields are. I have not been able to find much documentation on this sort of marshalling/unmarshalling of the tokens for resuming streamed data to the client. (Given multiple Node servers for scaling, simply keeping the token on the server is not really a good option: that server may go down, and when the client reconnects, it should hold the token for where it left off so that the next server it connects to can pick up from there.)
In general it seems the resume tokens have been locked down hard from the developer. A token contains valuable information that I could use (what collection we are on, the timestamp of the update, etc.), but none of this is made available to me (although it is apparently a feature they will be adding for 3.7). Likewise, I can't even get a resume token for the current moment in time for a given collection (very useful if I've read a collection in and haven't had any updates, but don't want to read it in fully again after a disconnect/reconnect just because no updates have occurred). Hopefully some of these facilities will be added as Mongo realizes their usefulness.
I have successfully tested using the resume tokens to resume a stream when there is no marshalling/unmarshalling involved (i.e. the token sits as an object on the server and is never converted to a wire-acceptable form). But this is not very useful in a scaled environment.
Just in case anyone else has this problem I thought I would post my current solution, though I still invite better solutions.
Through the magic of BSON, I simply serialize the resume token, convert that buffer to base64, and send that to the browser. Then when the browser sends it back after a disconnect/reconnect, I simply make a buffer from the base64, and use bson to deserialize that buffer. The resulting token works like a charm.
I.e., my marshalling of the update token looks like this:
b64String = bson.serialize(resumeToken).toString('base64');
And my unmarshalling of the base64 token sent after a disconnect/reconnect looks like this:
token = bson.deserialize(Buffer.from(b64String, 'base64'));
Alternatively, you can utilise the MongoDB Extended JSON library (npm module mongodb-extjson) to stringify and parse the token.
For example:
const EJSON = require("mongodb-extjson");
resumeToken = EJSON.stringify(changeStreamDoc._id);
and to resume:
changeStream = collection.watch([], { resumeAfter: EJSON.parse(resumeToken) });
Tested on mongodb-extjson version 2.1.0 and MongoDB v3.6.3.