So I am building a Chrome DevTools extension that inspects binary traffic on a particular website.
Requests on the site are made with responseType = "blob".
Now, when I capture a request with chrome.devtools.network.onRequestFinished and then access its content with request.getContent(), I get a string as the response, not a blob.
The string looks like some kind of binary data, but I am not sure how it is encoded. I tried transforming it into a base64 string with many different conversions (UTF-8 to Latin-1, UTF-16 to Latin-1, ...), but nothing gave the correct result.
Any idea how to get the correct result?
Update:
This is a comparison of the results (as Uint8Array) from the client and from the extension.
Client:
[170, 69, 224, 171, 51, 233, 216, 82, 197, 35, 170, 213, 145, 197, 218, 82, 72, 85, 33, 77, 81, 88, 93, 16, 97, 234, 253, 208, 203, 221, 44, 44]
Extension:
[65533, 69, 65533, 65533, 51, 65533, 65533, 82, 65533, 35, 65533, 81, 65533, 65533, 82, 72, 85, 33, 77, 81, 88, 93, 16, 97, 65533, 65533, 65533, 65533, 65533, 44, 44]
Notice how every byte above 128 has been converted to 65533 (the Unicode replacement character) in the extension?
Now how can I access the pure binary data?
const data = Uint8Array.from(atob(content), c => c.charCodeAt(0));
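For context, a minimal sketch of how that decode fits into the getContent callback: the callback's second argument reports the encoding, and Chrome uses "base64" for binary bodies, so the atob step only applies in that branch. The toBytes helper name is my own:

```javascript
// Decode the content string handed to request.getContent's callback.
// Chrome reports encoding === "base64" for binary bodies; decoding the
// base64 restores the original bytes, with values above 127 intact.
function toBytes(content, encoding) {
  if (encoding === "base64") {
    return Uint8Array.from(atob(content), (c) => c.charCodeAt(0));
  }
  return new TextEncoder().encode(content); // plain-text body
}

// Usage inside the DevTools listener:
// chrome.devtools.network.onRequestFinished.addListener((request) => {
//   request.getContent((content, encoding) => {
//     const data = toBytes(content, encoding);
//   });
// });
```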
Related
I am unable to decrypt this Uint8Array
Uint8Array(32) [
174, 157, 255, 238, 54, 143, 97, 132,
70, 243, 7, 249, 98, 188, 68, 170,
53, 82, 78, 9, 96, 226, 182, 160,
131, 79, 3, 147, 153, 34, 205, 162
]
using this code
const string = new TextDecoder().decode(hash);
console.log(string);
I get this: ����Kc2Jk�&an↕�9���&��h-Lﺠ�Ԋ
I've also tried many online converters, but they report a problem with the byte array. This byte array is a response from an API.
Where am I wrong? How can I convert it properly?
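The 32 bytes here look like a hash or ciphertext, not encoded text, so TextDecoder has nothing valid to decode and substitutes U+FFFD for the invalid sequences. If the goal is just to display or transmit the bytes, a hex representation is a safer target; a minimal sketch (the toHex helper is my own name):

```javascript
// Render raw bytes as a hex string instead of decoding them as UTF-8 text.
function toHex(bytes) {
  return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
}

const head = new Uint8Array([174, 157, 255, 238]); // first bytes of the array above
console.log(toHex(head)); // "ae9dffee"
```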
I'm working on a codebase and I have to extend a controller to allow uploading PDF files. I'm submitting the file through jQuery/AJAX with FormData.
The backend consists of a pretty large framework which has its own type of request, so using something like formidable for handling the file upload server-side is out of the question.
My problem: the POST request arrives on the server, and when I log the body parameter that contains the file I get the following:
file: [
37, 80, 68, 70, 45, 49, 46, 51, 10, 37, 255, 255,
255, 255, 10, 56, 32, 48, 32, 111, 98, 106, 10, 60,
60, 10, 47, 84, 121, 112, 101, 32, 47, 69, 120, 116,
71, 83, 116, 97, 116, 101, 10, 47, 67, 65, 32, 49,
10, 62, 62, 10, 101, 110, 100, 111, 98, 106, 10, 55,
32, 48, 32, 111, 98, 106, 10, 60, 60, 10, 47, 84,
121, 112, 101, 32, 47, 80, 97, 103, 101, 10, 47, 80,
97, 114, 101, 110, 116, 32, 49, 32, 48, 32, 82, 10,
47, 77, 101, 100,
... 23741 more items
]
Which, from my understanding, is an array of the bytes of the uploaded file. Now I've read that I can just write this to the file system with the fs library like this:
fs.writeFile(filename, data);
This does create the file on the server, but it is always corrupted and not an actual PDF.
What am I missing? I'm assuming it has something to do with the encoding of the FormData that I'm not aware of.
I am working with a WebSocket API to which I send protobuf objects.
The documentation says:
Server uses Big Endian format for binary data.
Messages sent back and forth require a signed 4 byte int of the message size, prefixed to the message
So the payload should be a 4 byte int which contains the message size, followed by the message itself.
I set the message like this:
const message = req.serializeBinary();
How would I prefix a signed 4 byte int that contains the message size to this?
Note: console.log(message) prints the following to the console:
jspb.BinaryReader {decoder_: j…b.BinaryDecoder, fieldCursor_: 0, nextField_: -1, nextWireType_: -1, error_: false, …}
decoder_: jspb.BinaryDecoder
bytes_: Uint8Array(78) [0, 0, 0, 74, 152, 182, 75, 75, 242, 233, 64, 4, 49, 48, 53, 57, 242, 233, 64, 35, 77, 101, 115, 115, 97, 103, 101, 32, 108, 101, 110, 103, 116, 104, 32, 114, 101, 99, 101, 105, 118, 101, 100, 32, 105, 115, 32, 105, 110, 118, 97, 108, 105, 100, 46, 194, 233, 64, 19, 82, 105, 116, 104, 109, 105, 99, 32, 83, 121, 115, 116, 101, 109, 32, 73, 110, 102, 111]
cursor_: 78
end_: 78
error_: false
start_: 0
__proto__: Object
error_: false
fieldCursor_: 55
nextField_: 132760
nextWireType_: 2
readCallbacks_: null
I have never used Google's protocol buffers library, only protobuf.js (https://github.com/protobufjs), but I assume we can work from your object, since everything we need is in message.bytes_:
const bl = message.bytes_.length;
const msg = new Uint8Array(bl + 4);
// Write the length into the first 4 bytes as a big-endian 32-bit int.
msg.set([(bl >>> 24) & 0xff, (bl >>> 16) & 0xff, (bl >>> 8) & 0xff, bl & 0xff]);
msg.set(message.bytes_, 4);
yourwebsocketobject.send(msg); // or maybe msg.buffer?
You will probably get better answers, but this may eventually work.
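An equivalent way to build the frame, sketched with DataView so the big-endian signed 32-bit write is explicit rather than done with manual bit masking (framePayload is my own name):

```javascript
// Prepend a big-endian signed 32-bit length prefix to a payload.
function framePayload(bytes) {
  const msg = new Uint8Array(bytes.length + 4);
  new DataView(msg.buffer).setInt32(0, bytes.length, false); // false = big-endian
  msg.set(bytes, 4);
  return msg;
}
```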
I am building a blockchain on Hyperledger Fabric (Node.js SDK).
https://hyperledger.github.io/fabric-sdk-node/release-1.4/global.html#BlockchainInfo__anchor
I call the BlockchainInfo API and get a JSON-format response which includes currentBlockHash.
The official documentation says the data type of currentBlockHash in the response is Array<byte>,
such as the following data.
I would like to convert this buffer to a string but have no idea what to do.
Thanks for reading my question.
{ buffer:
   { type: 'Buffer',
     data: [ 8, 207, 230, 17, 18, 32, 124, 143, 73, 40, 171, 42, 251, 237,
             193, 138, 36, 92, 58, 57, 254, 56, 144, 96, 54, 201, 242, 64,
             10, 111, 150, 28, 198, 187, 196, 118, 97, 160, 26, 32, 16, 160,
             154, 19, 11, 179, 147, 11, 38, 16, 150, 190, 126, 17, 121, 123,
             200, 7, 71, 27, 241, 103, 54, 188, 196, 248, 178, 88, 48, 115,
             186, 133 ] },
  offset: 6,
  markedOffset: -1,
  limit: 38,
  littleEndian: true,
  noAssert: false }
Here is the original response data:
{"height":{"low":291663,"high":0,"unsigned":true},"currentBlockHash":{"buffer":{"type":"Buffer","data":[8,207,230,17,18,32,124,143,73,40,171,42,251,237,193,138,36,92,58,57,254,56,144,96,54,201,242,64,10,111,150,28,198,187,196,118,97,160,26,32,16,160,154,19,11,179,147,11,38,16,150,190,126,17,121,123,200,7,71,27,241,103,54,188,196,248,178,88,48,115,186,133]},"offset":6,"markedOffset":-1,"limit":38,"littleEndian":true,"noAssert":false},"previousBlockHash":{"buffer":{"type":"Buffer","data":[8,207,230,17,18,32,124,143,73,40,171,42,251,237,193,138,36,92,58,57,254,56,144,96,54,201,242,64,10,111,150,28,198,187,196,118,97,160,26,32,16,160,154,19,11,179,147,11,38,16,150,190,126,17,121,123,200,7,71,27,241,103,54,188,196,248,178,88,48,115,186,133]},"offset":40,"markedOffset":-1,"limit":72,"littleEndian":true,"noAssert":false}}
I think the JSDoc is misleading in this case: currentBlockHash will actually be a Node.js Buffer (or possibly a protobuf implementation that emulates the behaviour of a Buffer). You can convert a Buffer into a string by calling currentBlockHash.toString().
I have observed some cases where calling toString() on a protobuf Buffer implementation gives a debug string rather than a string representation of the buffer's content, so I tend to specify an encoding argument just to be safe.
In the case of a hash, it may also be more convenient to see it as a hex string rather than utf8 (which would be the default), so I would try currentBlockHash.toString('hex').
See here for more information on Buffers: https://nodejs.org/docs/latest-v10.x/api/buffer.html
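As a concrete sketch of the hex conversion, assuming the data array from the JSON response can be handed straight to Buffer.from (only the first six bytes are shown):

```javascript
// Build a Buffer from the raw byte values and render it as hex.
const data = [8, 207, 230, 17, 18, 32]; // first bytes of currentBlockHash above
const hash = Buffer.from(data);
console.log(hash.toString("hex")); // "08cfe6111220"
```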
I have encountered an interesting issue.
I'm using Node v8.1.4.
I have the following buffer:
[ 191, 164, 235, 131, 30, 28, 164, 179, 101, 138, 94, 36, 115, 176, 83, 193, 9, 177, 85, 228, 189, 193, 127, 71, 165, 16, 211, 132, 228, 241, 57, 207, 254, 152, 122, 98, 100, 71, 67, 100, 29, 218, 165, 101, 25, 17, 177, 173, 92, 173, 162, 186, 198, 1, 80, 94, 228, 165, 124, 171, 78, 49, 145, 158 ]
When I try to convert it to UTF-8 in Node.js and in the browser, I get different results; even the length of the string is not the same.
Is there a way to convert the buffer to a UTF-8 string in the browser the same way Node.js does?
It seems that some byte sequences which Node.js replaces with U+FFFD are replaced with a longer sequence in the browser, so the resulting UTF-8 string differs.
The code I use in the browser and in Node.js is the same.
I have a Buffer object tmpString:
tmpString.toString('utf-8')
tmpString.toString('utf-8').length differs in the browser and in Node.js for the same source bytes.
In Node.js I use the native Buffer implementation; in the browser, webpack loads a polyfill (feross/buffer, I think).
More accurately, I should say that I am trying to interpret the buffer's bytes as a UTF-8 string.
Have you tried the TextEncoder/TextDecoder APIs? I've used them for converting strings in both Node.js and the browser and haven't seen any differences.
E.g.:
const encoder = new TextEncoder(); // TextEncoder always uses UTF-8; the constructor takes no encoding argument
const decoder = new TextDecoder('utf-8');
const foo = 'Hello world!';
const encoded = encoder.encode(foo);
console.log(encoded);
const decoded = decoder.decode(encoded);
console.log(decoded);
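Applied to the bytes from the question, TextDecoder should behave identically in Node.js and the browser, since both implement the WHATWG Encoding Standard, which specifies exactly how invalid sequences map to U+FFFD:

```javascript
// Decode the first few bytes of the buffer from the question; invalid
// UTF-8 sequences come back as U+FFFD consistently across runtimes.
const bytes = new Uint8Array([191, 164, 235, 131, 30]);
const text = new TextDecoder("utf-8").decode(bytes);
console.log(text.includes("\uFFFD")); // true: 0xBF is not a valid UTF-8 lead byte
```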