Wrapping a bidirectional stream in NodeJS - javascript

I would like to wrap a socket into another object which:
transforms output - e.g. turning strings into Base64
transforms input - e.g. turning Base64 into strings
(Note: my use case is not Base64, but it is isomorphic to it; the real transformation would significantly complicate the question.)
It is trivial to do this in the two directions separately - e.g. pipe the socket into a Base64 decoder, and write into a Base64 encoder which pipes into the socket.
I would like to generate a single new object from a socket, which could be written to and read from (via data events), yet perform the required transformations for both directions.
The solution needs to support Node 0.8.X and 0.10.X.

This seems to be one approach:
https://github.com/ajlopez/ObjectStream/blob/master/lib/objectstream.js
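Along those lines, here is a minimal sketch of a bidirectional wrapper using stream.Duplex from Node 0.10 (the readable-stream module backports the same interface to 0.8). Base64 stands in for the real transformation, and the chunk-boundary alignment a real Base64 decoder needs is ignored here:

var Duplex = require('stream').Duplex;
var util = require('util');

function Base64Wrapper(socket) {
  Duplex.call(this);
  this._socket = socket;
  var self = this;
  // Incoming socket data is decoded before the wrapper emits it
  // (backpressure handling omitted for brevity).
  socket.on('data', function (chunk) {
    self.push(new Buffer(chunk.toString(), 'base64'));
  });
  socket.on('end', function () {
    self.push(null);
  });
}
util.inherits(Base64Wrapper, Duplex);

// Outgoing writes are encoded before being written to the socket.
Base64Wrapper.prototype._write = function (chunk, encoding, callback) {
  this._socket.write(chunk.toString('base64'), callback);
};

Base64Wrapper.prototype._read = function () {
  // Data is pushed from the socket's 'data' handler above.
};

A consumer can then write to the wrapper and listen for its data events, while the peer on the socket only ever sees Base64.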

Related

How to hash a Buffer with Expo.Crypto.digestStringAsync()

https://docs.expo.io/versions/latest/sdk/crypto/
Info: the app is built with React Native & Expo.
Expo now provides a hashing method, but it only accepts String input.
Node.js can hash buffers/strings/arrays and can also return the hashed value as a buffer.
The issue I am having is that I need to hash a buffer (Uint8Array) in my app with SHA-256, so I must find a way to convert it to a string. This causes problems because if I convert the buffer to anything else, e.g. a hex string or Base64, my hash result is not equivalent to the value that Node.js crypto would produce.
I have also tried converting the buffer to UTF-8/UTF-16 strings, but characters in those encodings can span multiple bytes (problems with byte values above 127), so this path was incorrect as well.
My hashes must be equivalent to the traditional nodejs createHash() results.
I am hitting a wall on this issue and cannot find a way around it.
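For reference, what has to be reproduced is Node.js hashing the raw bytes themselves, not any string rendering of them - a quick sketch of that baseline:

var crypto = require('crypto');

// Hash the raw bytes; the digest differs from hashing any
// hex/base64/UTF-8 string rendering of the same buffer.
var bytes = Buffer.from([0xde, 0xad, 0xbe, 0xef]);
var hex = crypto.createHash('sha256').update(bytes).digest('hex');
console.log(hex);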

Share Array Reference between JavaScript and ActionScript

I have been working with the WebcamJS library to stream video from the camera in the browser, but I have run into a major performance bottleneck. Since I am using Internet Explorer 11 (and cannot switch to a different browser), this library reverts to a Flash fallback for accessing the camera.
The ActionScript callback that returns the image is prohibitively slow, due to its many steps. When it returns the image, it first encodes its byte array as a PNG or JPG, and then encodes that as a base64 string. This string is then passed via ExternalInterface to JavaScript, which decodes the image through a data URI. Given that all I need is the byte array in JavaScript, these extra steps seem wasteful.
I have had to tackle a similar problem before, in C++/Python. Rather than repeatedly pass the array data back and forth between the two languages, I used Python to pass a NumPy array reference at the start of the program. Then, they could both access the same data from then on without any extra communication.
Now that you understand my situation, here is the question: is it possible to pass a JavaScript Array or ArrayBuffer by reference to ActionScript? In that case, I could have ActionScript modify the JavaScript array directly, rather than waste time converting, encoding, and decoding the image for each frame.
(WebcamJS: https://github.com/jhuckaby/webcamjs)
Just for completeness: SharedObjects in Flash store data, serialised with the AMF protocol, on the file system (in a very specific, sandboxed and locked place) where JavaScript has no way to read it.
Have you tried simply calling the ExternalInterface method and passing an array of bytes as an argument? It would be passed by value, automatically converted from the ActionScript data structure to the JavaScript one, but you'd skip all the encoding steps and it should be fast enough.
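On the JavaScript side, that suggestion would look roughly like the sketch below; the callback name onFrameBytes is hypothetical:

// Hypothetical receiver that Flash would invoke, e.g. with
// ExternalInterface.call("onFrameBytes", bytes) in ActionScript.
function onFrameBytes(bytes) {
  // `bytes` arrives as a plain JavaScript Array of numbers (0-255),
  // copied by value during the Flash-to-JS marshalling.
  var frame = new Uint8Array(bytes);
  // ...process the raw frame, no PNG/base64 round trip needed...
}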

Efficient way to create a buffer with a schema and send it over WebSockets in Javascript?

I'm writing a browser game and am looking for a way to send raw array buffers to and from the node.js server.
I don't like the idea of sending JSON strings over WebSockets because:
I have to specify keys when the receiver already knows the format
There is no validation or checking if you send malformed structure (not required though)
Parsing JSON strings into objects is inherently slower than reading binary
Wasted bandwidth from the entire payload being a string (instead of packed ints, for example)
Ideally, I would be able to have a schema for every message type, construct that message, and send its raw array buffer to the server. Something like this:
schema PlayerAttack {
    required int32 damage;
    required int16[] coords;
    required string name;
}
var message = new PlayerAttack(5, [24, 32], "John");
websockets.send(message.arrayBuffer());
And it would then arrive at the Node.js server as a Buffer, with the option to decode it into an object.
Google's Protocol Buffers almost fit this use case, but they are too slow and carry too much overhead (7x slower than JSON.parse in my benchmark, and they include features like tagging which I have no use for).
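Absent a library that fits, one lightweight fallback is packing the fields by hand. Here is a sketch mirroring the pseudo-schema above; the layout (big-endian fields, a one-byte length prefix for the name) and the availability of TextEncoder in the browser are assumptions:

// Hand-packed PlayerAttack: int32 damage, int16 coords, then a UTF-8
// name prefixed with its byte length (so names are capped at 255 bytes).
function encodePlayerAttack(damage, coords, name) {
  var nameBytes = new TextEncoder().encode(name);
  var buf = new ArrayBuffer(4 + 2 * coords.length + 1 + nameBytes.length);
  var view = new DataView(buf); // DataView writes big-endian by default
  view.setInt32(0, damage);
  for (var i = 0; i < coords.length; i++) {
    view.setInt16(4 + 2 * i, coords[i]);
  }
  var nameOffset = 4 + 2 * coords.length;
  view.setUint8(nameOffset, nameBytes.length);
  new Uint8Array(buf, nameOffset + 1).set(nameBytes);
  return buf;
}

// websocket.binaryType = 'arraybuffer';
// websocket.send(encodePlayerAttack(5, [24, 32], "John"));

On the Node.js side, the received Buffer can be read back with readInt32BE/readInt16BE at the same offsets.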

deserialize protostuff byte array with javascript

I used protostuff to transform a JSON input I have into a byte array. The Java code is:
LinkedBuffer buffer = LinkedBuffer.allocate(1024);
Schema<String> orderSchema = RuntimeSchema.getSchema(String.class);
for (String p : poligonsStr) {
    buffer.clear();
    byteslist.add(ProtostuffIOUtil.toByteArray(p, orderSchema, buffer));
}
The problem is that I don't know which algorithm is used, or how I can decode the result with a JavaScript client (Node.js). I also saw there is a very good algorithm called Smile implemented for protostuff in the com.dyuproject.protostuff project, but I would like to know how to get a schema with that library; I haven't managed that yet.
I would like to know which is best to use: ProtostuffIOUtil or SmileIOUtil?
And how to use it? And how to decode with JavaScript?
Protostuff's binary encoding is different from protobuf's, and as far as I know there is no JavaScript library that can decode protostuff-encoded data at the moment.
Smile is not supported by web browsers out of the box, but there are libraries that can decode it.
In my opinion, there are two good ways to encode data on the server with the Protostuff library and decode it using JavaScript on the client side:
Use protobuf encoding; it is good if the size of the encoded data matters. On the server side, use ProtobufIOUtil to serialize your data to the protobuf binary format. On the client side, you can use https://github.com/dcodeIO/ProtoBuf.js/ to decode the binary data from the server.
Use JSON encoding; it is the native format for JavaScript and will usually be parsed faster than binary protobuf-encoded data. On the server side, use JsonIOUtil (from the protostuff-json module) to serialize your data to JSON text. On the client side, it is supported out of the box.
Here is an example of how to serialize your POJO into protobuf binary using Protostuff: HelloService.java
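For the first option, client-side decoding could look roughly like this sketch with protobuf.js; order.proto and the Order message are placeholders for whatever schema the server actually uses:

var protobuf = require('protobufjs');

// `bytes` is the protobuf-encoded payload produced on the server
// (e.g. by ProtobufIOUtil); file and message names are placeholders.
function decodeOrder(bytes, callback) {
  protobuf.load('order.proto', function (err, root) {
    if (err) return callback(err);
    var Order = root.lookupType('Order');
    callback(null, Order.toObject(Order.decode(bytes)));
  });
}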

Base64 decode from Buffer to Buffer efficiently in node (node.js)

I currently have a python and C version of wsproxy (WebSockets to plain TCP socket proxy) in noVNC. I would like to create a version of wsproxy using node.js. A key factor (and the reason I'm not just using existing node WebSocket code) is that until the WebSocket standard has binary encoding, all traffic between wsproxy and the browser/client must be encoded (and base64 decode/encode is fast and easy in the browser).
Buffer types have base64 encoding support, but this converts from a Buffer to a string and vice versa. How can I base64 encode/decode between two buffers without converting to a string first?
Constraints:
Direct Buffer to Buffer (unless you can show Buffer->string->Buffer is just as fast).
Since node has built-in base64 support I would like to use that and not external modules.
In place encode/decode within a single Buffer is acceptable.
Here is a discussion of base64 support in node, but from what I can see it doesn't answer my question.
You should be able to do this using streams, but first read through this blog post about UTF-8 decoding, because you will likely encounter similar issues. I'm not suggesting that you do UTF-8 encode/decode if you don't need it, but that you look at how that code handles a single character spread across multiple bytes separated by a chunk boundary.
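For comparison, the built-in route through an intermediate string looks like the sketch below; it is worth benchmarking before ruling it out. Note that, as with the streaming issue above, chunks must align on 3-byte/4-character base64 group boundaries if buffers are processed piecewise:

// Built-in baseline: Buffer -> base64 string -> Buffer.
// This is the intermediate string conversion the question hopes to
// avoid, but it may already be fast enough in practice.
function base64Encode(raw) {
  return new Buffer(raw.toString('base64'));       // raw bytes -> base64 bytes
}

function base64Decode(encoded) {
  return new Buffer(encoded.toString(), 'base64'); // base64 bytes -> raw bytes
}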
