I used Protostuff to transform a JSON input I have into a byte array. The Java code is:
LinkedBuffer buffer = LinkedBuffer.allocate(1024);
Schema<String> orderSchema = RuntimeSchema.getSchema(String.class);
List<byte[]> byteslist = new ArrayList<>();
// poligonsStr holds the JSON polygon strings
for (String p : poligonsStr) {
    buffer.clear();
    byteslist.add(ProtostuffIOUtil.toByteArray(p, orderSchema, buffer));
}
The problem is that I don't know which algorithm is used, so I don't know how to decode the result in the JavaScript client (Node.js). I also saw that there is a very good algorithm called Smile implemented for protostuff in the com.dyuproject.protostuff project, but I haven't managed to figure out how to get a schema with that library yet.
I would like to know which is best to use: ProtostuffIOUtil or SmileIOUtil?
And how do I use it, and how do I decode the result with JavaScript?
Protostuff's binary encoding is different from protobuf's, and as far as I know there is no JavaScript library that can decode protostuff-encoded data at the moment.
Smile is not supported by web browsers out of the box, but there are libraries that can decode it.
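Since the question also asks how to use SmileIOUtil: the sketch below assumes that SmileIOUtil in the com.dyuproject.protostuff line mirrors JsonIOUtil's toByteArray(message, schema, numeric) signature, and the Polygon POJO is hypothetical (RuntimeSchema needs a POJO rather than a bare String).

import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.SmileIOUtil;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

public class SmileExample {

    // Hypothetical POJO standing in for your data.
    public static class Polygon {
        public String wkt;
    }

    public static byte[] serialize(Polygon p) {
        Schema<Polygon> schema = RuntimeSchema.getSchema(Polygon.class);
        // false = write field names rather than field numbers,
        // mirroring the JsonIOUtil flag
        return SmileIOUtil.toByteArray(p, schema, false);
    }
}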
That said, in my view there are two good options for encoding data on the server with the Protostuff library and decoding it with JavaScript on the client side:
Use protobuf encoding. It is a good choice if the size of the encoded data matters. On the server side, use ProtobufIOUtil to serialize your data to the protobuf binary format. On the client side, you can use https://github.com/dcodeIO/ProtoBuf.js/ to decode the binary data from the server.
Use JSON encoding. It is the native format for JavaScript and will usually be parsed faster than binary protobuf-encoded data. On the server side, use JsonIOUtil (from the protostuff-json module) to serialize your data to JSON text. On the client side, it is supported out of the box.
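For the JSON option, here is a minimal sketch (the Greeting POJO is hypothetical):

import com.dyuproject.protostuff.JsonIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

public class JsonExample {

    // Hypothetical POJO standing in for your data.
    public static class Greeting {
        public String message;
    }

    public static byte[] serialize(Greeting g) {
        Schema<Greeting> schema = RuntimeSchema.getSchema(Greeting.class);
        // false = write field names (readable JSON) rather than field
        // numbers; the client can simply JSON.parse the resulting text.
        return JsonIOUtil.toByteArray(g, schema, false);
    }
}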
Here is an example of how to serialize your POJO into protobuf binary using Protostuff: HelloService.java
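The linked file is not reproduced here; below is a minimal sketch of the same approach, assuming a simple hypothetical Hello POJO:

import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtobufIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

public class ProtobufExample {

    // Hypothetical POJO standing in for the one in HelloService.java.
    public static class Hello {
        public String name;
    }

    public static byte[] serialize(Hello hello) {
        Schema<Hello> schema = RuntimeSchema.getSchema(Hello.class);
        LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        try {
            // Standard protobuf wire format, so ProtoBuf.js can decode it
            // on the client given a matching .proto definition.
            return ProtobufIOUtil.toByteArray(hello, schema, buffer);
        } finally {
            buffer.clear();
        }
    }
}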
Related
What is the correct way to insert a Buffer as a BLOB 1-to-1, without serializing it to a string/hex? Is that possible at all from Node.js?
On my backend I already have Buffers with binary data and want to store them with minimal overhead using prepared statements. I'm getting an ER_TRUNCATED_WRONG_VALUE_FOR_FIELD (errno 1366: Incorrect string value) error. It looks like the library just assumes a Buffer contains a valid UTF-8 string.
I'm writing a browser game and am looking for a way to send raw array buffers to and from the Node.js server.
I don't like the idea of sending JSON strings over WebSockets because:
I have to specify keys when the receiver already knows the format
There is no validation or checking if you send malformed structure (not required though)
Parsing JSON strings into objects is inherently slower than reading binary
Wasted bandwidth from the entire payload being a string (instead of packed ints, for example)
Ideally, I would be able to have a schema for every message type, construct that message, and send its raw array buffer to the server. Something like this:
schema PlayerAttack {
required int32 damage;
required int16[] coords;
required string name;
}
var message = new PlayerAttack(5, [24, 32], "John");
websockets.send(message.arrayBuffer());
And it would then arrive at the Node.js server as a Buffer, with the option to decode it into an object.
Google's Protocol Buffers almost fit this use case, but they are too slow and carry too much overhead (7x slower than JSON.parse in my benchmark, and they include features like tagging that I have no use for).
I have a system with two processes, one in Java and one in Node.js. The Node.js process is a web front end: it ingests data and sends it to a queue, where it is consumed by the Java process. The data is a string of user data collected from browser code; I create the string in Node.js using JSON.stringify(data) and push it to Kinesis, the AWS counterpart of Kafka. The Java process receives the raw bytes, creates a String object, and then parses the JSON.
My question is this: what decoding should I instruct Java to use on the raw bytes? Right now it "just works" with the default decoding, but that feels like a bad idea, since the default charset is platform-dependent. Should I use a Buffer on the Node.js side and encode the string as UTF-8 before pushing it to the queue, so that I can explicitly set UTF-8 as the decoding on the Java side? Is this a best practice? Any advice much appreciated.
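For concreteness, here is a minimal sketch of the Java side, assuming the Node.js side sends UTF-8 bytes (Buffer.from(string) defaults to UTF-8, so a Buffer built from the JSON.stringify output already is):

import java.nio.charset.StandardCharsets;

public class RecordDecoder {

    // 'payload' stands in for the raw byte[] read from the Kinesis record.
    // Decoding explicitly as UTF-8 removes any dependence on the JVM's
    // platform-default charset.
    public static String decode(byte[] payload) {
        return new String(payload, StandardCharsets.UTF_8);
    }
}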
I have a very long list of points (lat/long) in my C# program and I need to pass it to my WebGL code. I have done this before using JSON, but I thought I could reduce the bandwidth if I sent the data in binary format. Based on my research, on the client side I should use XMLHttpRequest with arraybuffer as the response type. I just do not know what to do on the server side, i.e. how to prepare the binary data in C# so that it can be interpreted as an ArrayBuffer in JavaScript.
I am new to web programming, so if I am not clear on any part, please let me know.
I am not an expert, but I found this solution to work:
On the server side, convert the array of numbers to a byte array, then send the byte array to JavaScript as a binary file (set the MIME type to "application/octet-stream").
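The server in this thread is C#, but the technique is language-agnostic; here is a sketch of the same packing in Java (the PointsPayload helper is hypothetical). The byte order matters, because JavaScript typed arrays read an ArrayBuffer in the platform's byte order, which is little-endian on virtually all clients:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PointsPayload {

    // Packs lat/long doubles little-endian so the browser can view the
    // response ArrayBuffer directly through a Float64Array.
    public static byte[] toBytes(double[] points) {
        ByteBuffer buf = ByteBuffer.allocate(points.length * Double.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        for (double v : points) {
            buf.putDouble(v);
        }
        return buf.array(); // serve with Content-Type: application/octet-stream
    }
}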
I currently have Python and C versions of wsproxy (a WebSockets-to-plain-TCP-socket proxy) in noVNC. I would like to create a version of wsproxy using Node.js. A key factor (and the reason I'm not just using existing Node WebSocket code) is that until the WebSocket standard gains binary encoding, all traffic between wsproxy and the browser/client must be encoded (and base64 encode/decode is fast and easy in the browser).
Buffer types have base64 encoding support, but only from a Buffer to a string and vice versa. How can I base64 encode/decode between two Buffers without converting to a string first?
Constraints:
Direct Buffer-to-Buffer (unless you can show that Buffer -> string -> Buffer is just as fast).
Since Node has built-in base64 support, I would like to use that rather than external modules.
In-place encode/decode within a single Buffer is acceptable.
Here is a discussion of base64 support in Node, but from what I can see it doesn't answer my question.
You should be able to do this using streams, but first read through this blog post about UTF-8 decoding, because you will likely hit similar issues. I'm not suggesting you do UTF-8 encoding/decoding if you don't need it, but look at how that code handles a single character spread across multiple bytes that were separated by a chunk boundary.