WebSocket: Browser doesn't seem to receive data from Python server - javascript

I'm working on creating a WebSocket server in Python (I'm kinda new to Python) and I've made significant progress, but I am unable to send data to the web browser. I can establish a connection and receive data from the browser, but I cannot send anything back; the browser just ignores it. I would assume that if the browser received a frame that didn't follow the specification it would terminate the connection, but the connection stays active.
Here is the method I am using to encode the data into the frame:
def encode_message(data):
    frame = "\x81"
    size = len(data)
    if size * 8 <= 125:
        frame += chr(size)
    else:
        raise Exception("Uh, oh. Strings larger than 125 bits are not supported")
    return frame + data
I am sending the data using sock.sendall(framed_data). What could be the problem? The data for a message like "yo" ends up being 10000001 00000010 01111001 01101111 (spaces added for readability). Why doesn't the browser accept a message like this? Doesn't it follow the framing rules laid out in the specification? I am trying to support the most recent WebSocket version, which I believe is version 13. I am using Python 2.7.3.
I have tried looking at the source code of Python WebSocket libraries, but all of them seem to implement a deprecated version of the WebSocket protocol that has been shown to have vulnerabilities.
Here is the code that calls the function above:
def send(data):
    frame = encode_message(data)
    print "Sending all..."
    sock.sendall(frame)  # socket that handles all communication with the client
    print "Frame sent :)"
    return
I also downloaded Wireshark to sniff the packets exchanged between the server and the browser. The packets sent by my server look identical to those sent by a server that the browser does accept; I couldn't see any difference at all (I looked directly at the hex dump).

The second byte of your transmitted message (and the length check in your code) looks wrong. The length of a message is in bytes, not bits.
From RFC 6455 §5.2 (my emphasis):
Payload length: 7 bits, 7+16 bits, or 7+64 bits
The length of the "Payload data", in bytes: if 0-125, that is the
payload length.
The reason that nothing is received in the browser is that your message claims to have a 16 byte body. The browser will read the 2 additional bytes you send then block waiting for another 14 bytes that it expects but you don't send.
If you change the second byte to the number of bytes in the message - 0x2 or 00000010 binary - then things should work.
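For illustration, here is a minimal sketch of single-frame, unmasked server-to-client text framing with the length written in bytes rather than bits (my own sketch, not the asker's final code); like the original, it only handles payloads of up to 125 bytes. Server-to-client frames are sent unmasked per the RFC, so no masking key is added:
def encode_message(data):
    # 0x81 = FIN bit set + opcode 0x1 (text frame)
    frame = "\x81"
    size = len(data)  # payload length in bytes, not bits
    if size <= 125:
        # lengths 0-125 fit directly into the 7 payload-length bits of byte 2
        frame += chr(size)
    else:
        raise Exception("Payloads over 125 bytes need the extended length fields")
    return frame + data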

I finally figured out the problem! It took hours of unfun debugging and messing with my code. After closely examining the packets sent back and forth between the server and the client, I realized there was a problem with my server's connection-upgrade response: whenever it computed the handshake hash, it also appended a \n to it. That resulted in a \n\r\n at the end of one of the header lines. The client interpreted that as the end of the handshake, and everything that followed was parsed as WebSocket frames. I had another header line after that, so it completely messed up my communication with the client: I could still read from the client, but if I tried to write to the client, the data would get messed up.
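The asker's handshake code isn't shown, but as a point of reference, here is a hedged Python 2 sketch of computing the Sec-WebSocket-Accept value without picking up a stray trailing newline (base64.b64encode, unlike base64.encodestring, does not append one); the variable names are illustrative:
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID defined by RFC 6455

def make_accept_key(sec_websocket_key):
    # SHA-1 of the client's key plus the GUID, then base64 (no trailing newline)
    digest = hashlib.sha1(sec_websocket_key + WS_GUID).digest()
    return base64.b64encode(digest)

# Each response header line should end with \r\n, and the header block with a single blank line:
# "Sec-WebSocket-Accept: %s\r\n" % make_accept_key(client_key)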

Related

Sending Raw data to embedded System with websocket in node.js

I am trying to connect my PC, using Node.js and WebSocket, to an embedded system that communicates via JSON strings. I was previously convinced that an XHR request would be sufficient for this, but I learned using Wireshark that XHR adds too much overhead, and I am not able to include an end-of-line or carriage return in the JSON string, which the embedded system requires: I get an "invalid or unexpected token" error every time I add the \r to the string.
('{"id":7, "Client_ID":"webinterface", "method":"OutBit", "param":
[10,1], "jsonrpc":"2.0", "protocol":"2X"}\r');
I have searched for examples on Stack Overflow, and it seems that WebSocket can be used to send and receive raw data, but I'm not sure whether the code below is set up correctly or whether it is meant to be used from a client PC.
The embedded system only needs a JSON string terminated with an end-of-line character. After connecting, it sends back a JSON string with updated values. I cannot change the behaviour of the embedded system's communication, as it is custom-made hardware.
var WebSocket = require('ws');
var ws = new WebSocket("ws://1.100.0.280:9398");
ws.send('{"id":7, "Client_ID":"webinterface", "method":"OutBit", "param":[10,1], "jsonrpc":"2.0", "protocol":"2X"\r}');
With the code above I receive a "WebSocket is not open: readyState 0 (CONNECTING)" error.
Is it even possible to use WebSocket with a non-WebSocket / embedded-system server?

Messages being delayed when using websockets

I have a program that uses WebSocket over TCP: the client is a Chrome extension and the server is an application written in C++.
When I send small amounts of data from the client to the server, it works fine. But when I send a large amount of data (e.g. the source of an HTML page), it arrives slightly delayed.
For Example:
Client sends: 1,2,3
Server receives: 1,2
Client sends: 4
Server receives: 3
Client sends: 5
Server receives: 4
It seems like there is a delay.
This is my client code:
var m_cWebsocket = new WebSocket("Servername");
if (m_cWebsocket == null) { return false; }
m_cWebsocket.onopen = onWebsocketOpen(m_cWebsocket);
m_cWebsocket.onmessage = onWebsocketMessage;
m_cWebsocket.onerror = onWebsocketError;
m_cWebsocket.onclose = onWebsocketError;
I use m_cWebsocket.send(strMsg) to send data.
Server code:
while (true) {
    recv(sSocket, szBufferTmp, 99990, 0);
    // recv(sSocket, szBufferTmp, 99990, MSG_PEEK);
    // some processing
}
Since you haven't posted any code to show your implementation of the TCP server or client I can only speculate and try to explain what might be going on here.
That means the potential problems and solutions I outline below may or may not apply to you, but regardless this information should still be helpful to others who might find this question in the future.
TL;DR: most likely, either the server is too slow, the server is not properly waiting for complete messages to be buffered, or the server doesn't know where a message starts and stops and is de-syncing while it waits for what it thinks is a 'full packet' as defined by something like a buffer size.
It sounds to me like you are pushing data from the client either faster than the server can read it or, more likely, the server is buffering a set number of bytes from the current TCP stream and waiting for the buffer to fill before outputting additional data.
If you are sending this over localhost, you are unlikely to be anywhere near the limits of the stream, and I would expect a server written in C++ to be able to keep up with the JavaScript client.
So this leads me to believe that the issue is in fact the stream buffer on the C++ side.
Now, since the server has no way to know what data you are sending or how much of it you are sending, it is common for a TCP stream to use a stream buffer that continuously reads data from the socket until either the buffer has filled to a known size, or until it sees a predefined 'stop character'. This is usually something like a "line end" \n character, or sometimes \r\n (carriage return plus line feed), depending on your operating system.
Since you haven't specified how you are receiving your data, I'm going to assume you created either a char or byte buffer of a certain size. I'm pretty rusty on my C++ socket knowledge so I might be wrong, but I do believe there is a default 'read timeout' on C++ TCP streams as well.
This means you are possibly running into one of two issues.
Situation 1) You are waiting until that byte/char buffer is filled before outputting its data. The issue is that this acts like a bus that only leaves the station when all seats are filled: if you don't fill all the seats, your server just sits and waits until it gets enough data to fill up fully and output your data.
Situation 2) You are running up against the socket read timeout, and therefore the function is not getting all the data before outputting it. This is like a bus that runs by the clock: every 10 minutes the bus leaves the station, full or empty, and the next bus picks up anyone who shows up late. In your case, the TCP stream isn't able to load 1, 2 and 3 onto a bus fast enough, so the bus leaves with just 1 and 2 on it because, after 20 ms of not receiving data, the server exits the function and outputs the data. On the next loop, however, 3 is waiting at the top of the stream buffer ready to get on the next bus out. The stream will load 3, wait until those 20 ms are up, and then exit before repeating this loop.
I think it's more likely the first situation is occurring, though, as I would expect the server to either start catching up or fall further behind as the two ends either begin to sync up, or the internal TCP stream buffer fills up as the server falls further and further behind.
The main point here: you need some way to synchronize the client and the server connections. I would recommend sending a "start byte" and an "end byte" to signal when a message has begun and finished, so you don't exit the function too early.
Or send a start byte, followed by the packet size in bytes, then fill up the buffer until it holds the correct number of bytes. Additionally, you could include an end byte as well for some basic error checking.
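To make that concrete, here is a rough sketch of the idea in Python (illustrative only; the asker's server is C++, but the framing logic is language-agnostic). The 0x02/0x03 start and end bytes are arbitrary choices for the example:
import struct

START_BYTE = b"\x02"  # arbitrary sentinel marking the start of a message
END_BYTE = b"\x03"    # arbitrary sentinel used as a basic sanity check

def frame_message(payload):
    # start byte + 4-byte big-endian length + payload + end byte
    return START_BYTE + struct.pack(">I", len(payload)) + payload + END_BYTE

def read_message(sock):
    # read exactly one framed message, however the bytes happen to arrive
    def read_exact(n):
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise IOError("connection closed mid-message")
            data += chunk
        return data

    if read_exact(1) != START_BYTE:
        raise ValueError("stream out of sync: missing start byte")
    (length,) = struct.unpack(">I", read_exact(4))
    payload = read_exact(length)
    if read_exact(1) != END_BYTE:
        raise ValueError("missing end byte")
    return payload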
This is a pretty involved topic and hard to really give you a good answer without any code from you, but this should also help anyone in the future who might be having a similar issue.
EDIT: I went back and re-read your question and noticed you said it only happens with large amounts of data, so I think my original assumption was wrong and it's more likely situation 2: the client is sending data to your server faster than the server can read it, and is thus bottlenecking the connection, so the client is only able to send additional data once the server has emptied part of its TCP stream buffer.
Think of it like a tube of water. The socket (tube) can only accept (fill up with) so much data (water) before it's full. Once you let some water out the bottom, though, you can fill it up a little bit more. The only reason it works for small files is that the file is too small to fill the entire tube.
Additional thoughts: You can see how I was approaching this problem in C# in this question: Continuously reading from serial port asynchronously properly
And another similar question I had previously (again in C#): How to use Task.WhenAny with ReadLineAsync to get data from any TcpClient
It's been a while since I've played with TCP streams, though, so my apologies that I don't remember all the niche details and caveats of the protocol, but hopefully this information is enough to get you in the ballpark for solving your problem.
Full disclaimer, it's been over 2 years since I last touched C++ TCP sockets, and have since worked with sockets/websockets in other languages (such as C# and JavaScript), so I may have some facts wrong about the behavior of C++ TCP sockets specifically, but the core information should still apply. If I got anything wrong, someone in the comments will most likely have the correct information.
Lastly, welcome to stack overflow!

How to recover from webusb response status: "babble"

I was testing the new webusb api (https://wicg.github.io/webusb/) on Chrome and was testing sending (transferOut) and receiving(transferIn) from a USB device.
It worked fine, but then I tried reading less data than expected (2 bytes instead of 3, where the length of the message is actually encoded in the first two bytes).
The problem is that when I read fewer bytes than expected, the WebUSB API returns the status "babble". How do I ensure normal communication after that? I can still send data, but receiving data always returns the error "DOMException: A transfer error has occured."
I tried running device.clearHalt("in", 1) (direction "in" and endpoint 1) but it also doesn't work (throws "DOMException: Unable to clear endpoint.").
Has anyone had this problem yet?
(I'm using Chrome 65.0.3325.181 on OSX)
As mentioned above, I'm still investigating the best ways to recover from the babble error, but regardless, the easiest way to solve this problem is to avoid calling transferIn() with a length that isn't a multiple of the endpoint's maximum packet size. It's much easier to handle the extra data in software than to try to recover from a hardware protocol error.
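That rule of thumb can be expressed as a small helper; this is only a sketch, and the 64-byte packet size is an assumed example (real endpoints commonly use 64 or 512 bytes):
def padded_transfer_length(requested, max_packet_size=64):
    # Round the requested read length up to a whole number of USB packets,
    # then discard the surplus bytes in software after the transfer.
    packets = (requested + max_packet_size - 1) // max_packet_size
    return packets * max_packet_size

# e.g. asking for 2 or 3 bytes with 64-byte packets -> request 64 bytes,
# then keep only the first 3 bytes of the result.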

Tomcat, service unavailable 503

My web app uses JSP, JavaScript, and Google Visualization, and runs on Tomcat 7 on a 64-bit Windows server with enough resources dedicated to this app. It is still under testing, so I have control over the load.
The problem: when I work from a device on the same network as the server, everything works fine. But when I work from a device on a different network and a request takes a long time (more than 6 minutes), I get a Service Unavailable [503] message after 6 minutes of waiting, even though processing on the server carries on and completes successfully. I checked the Tomcat logs but couldn't find anything; everything seems to work fine. I tried different solutions, but none of them worked for me:
Increase Tomcat's connector timeout.
Increase the Tomcat RAM.
Disable the server firewall
Try different browsers
Adjust the request timeout from the browser.
I experimented by setting Tomcat's Connector properties in conf/server.xml. I played around with all combinations and ranges of connectionTimeout and keepAliveTimeout.
The final configuration is:
<Connector port="80" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
I'm wondering if anybody else has run into such a problem, and how they solved it.
I think your server.xml has wrong data. Change the connector port from 80 to 8080; Tomcat conventionally uses a four-digit port starting from 8080 (not sure). Please update it as below:
<Connector port="8080" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
503 Service Unavailable
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a
server must use it when becoming overloaded. Some servers may wish
to simply refuse the connection.
Let me know if you face any issues.

Websocket frame size limitation

I'm sending huge chunks of JSON data through WebSockets. The JSON may have over 1000 entries. Due to the frame size limitation, the WebSocket protocol automatically splits the JSON into frames, which cannot be helped, as we cannot change the WebSocket frame size.
The problem:
When I try to evaluate my JSON using JSON.parse, it gives me a parsing error, which is obvious because the individual frames are not complete JSON objects. All this happens in the WebSocket onmessage event callback. How can I receive the huge JSON in different frames and still be able to parse it?
I have tried to concatenate the frames in onmessage, but the error persists.
Side question:
How do I concatenate a broken-up JSON string properly?
A single WebSocket frame, per the RFC 6455 base framing, has a maximum payload size limit of 2^63 - 1 bytes (9,223,372,036,854,775,807 bytes, roughly 9.22 exabytes) (correction by @Sebastian).
However, a WebSocket message, made up of 1 or more frames, has no limit imposed on it from the protocol level.
Each WebSocket implementation will handle message and frame limits differently. Such as setting maximum messages sizes for whole message (usually for memory consumption reasons), or offering streaming options for large messages to better utilize memory.
But in your case, it is likely that your chosen WebSocket implementation has a bug and is improperly splitting up the JSON message into multiple messages, instead of multiple frames. You can use the network inspection tooling in Chrome or an external tool like Wireshark to confirm this behavior.
Change the default maxReceivedFrameSize and maxReceivedMessageSize:
var wsServer = new websocket.server({
    httpServer: server,
    maxReceivedFrameSize: 131072,
    maxReceivedMessageSize: 10 * 1024 * 1024,
    autoAcceptConnections: false
});
Since you are dealing with WS, which is low-level, you need to create an application protocol that deals with data that is sent over multiple WS frames. It is up to you to concatenate the data that is in each WS frame (btw, don't concat the frames... concat the data in each frame).
Basically you are reinventing a file transfer protocol.
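As a hedged illustration of that kind of application protocol, here is a sketch in Python; it assumes the sending side appends a '\n' after each complete JSON document (that delimiter is my choice for the example, not something the WebSocket protocol provides):
import json

buffer = ""

def on_message_data(chunk):
    # Append the data carried by each WebSocket message/frame and emit
    # every complete, newline-terminated JSON document seen so far.
    global buffer
    buffer += chunk
    documents = []
    while "\n" in buffer:
        doc, buffer = buffer.split("\n", 1)
        if doc.strip():
            documents.append(json.loads(doc))
    return documents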
