I'm coding a webhook for GitHub and implemented signature verification in Koa.js as:
function sign(tok, blob) {
  var hmac;
  hmac = crypto
    .createHmac('sha1', tok)
    .update(blob)
    .digest('hex');
  return 'sha1=' + hmac;
}
...
key = this.request.headers['x-hub-signature'];
blob = JSON.stringify(this.request.body);
if (!key || !blob) {
  this.status = 400;
  this.body = 'Bad Request';
  return;
}
lock = sign(settings.api_secret, blob);
if (lock !== key) {
  console.log(symbols.warning, 'Unauthorized');
  this.status = 403;
  this.body = 'Unauthorized';
  return;
}
...
For pull_request and create events this works fine; even pushing new branches works. But for push commit events the x-hub-signature header and the hash computed from the payload don't match, so it always gets a 403 Unauthorized.
Update
I've noticed that for this kind of push payload the commits and head_commit fields are added to the payload. I've tried removing commits and head_commit from the body, but it didn't work.
Update
For more information, please review these example payloads. I've also included the URL for the test repo and token info: https://gist.github.com/marcoslhc/ec581f1a5ccdd80f8b33
The default encoding of crypto hash.update() is binary, as detailed in the answer to "Node JS crypto, cannot create hmac on chars with accents". This causes a problem with your push-event payload, which contains the character U+00E1 LATIN SMALL LETTER A WITH ACUTE in "Hernández" four times, while GitHub is hashing the payload as UTF-8. Note that your Gist shows these incorrectly encoded in ISO-8859-1, so also make sure that you are handling the incoming request's character encoding properly (but this should happen by default).
To fix this you need to either use a Buffer:
hmac = crypto.createHmac('sha1', tok).update(new Buffer(blob, 'utf-8')).digest('hex');
... or pass the encoding directly to update:
hmac = crypto.createHmac('sha1', tok).update(blob, 'utf-8').digest('hex');
The correct hash of 7f9e6014b7bddf5533494eff6a2c71c4ec7c042d will then be calculated.
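For reference, a minimal sketch of the sign() helper from the question with that change applied (same tok and blob as above):
function sign(tok, blob) {
  // hash the UTF-8 bytes of the payload, matching what GitHub signs
  return 'sha1=' + crypto
    .createHmac('sha1', tok)
    .update(blob, 'utf-8')
    .digest('hex');
}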
Related
I'm working on file encryption for my messenger and I'm struggling with uploading the file after encryption is done.
The encryption itself seems fine in terms of performance, but when I try to make the upload, the browser hangs completely. The profiler logs "small GC" events endlessly, and the yellow "unresponsive script" bar appears every 10 seconds.
What I already tried:
Read the file with FileReader into an ArrayBuffer, turn it into a plain Array, encrypt it, then create a FormData object, create a File from the data, append it to the FormData and send it. This worked fast with the original, untouched file (around 1.3 MB) when I did not do the encryption, but with the encrypted "fake" File object the uploaded file came out at 4.7 MB and was not usable.
Send it as a plain POST field (multipart form-data encoding). The data is corrupted after PHP saves it this way.
Send it as a Base64-encoded POST field. It finally started working this way after I found a fast function for converting a binary array to a Base64 string (btoa() gave wrong results after encode/decode). But when I tried a file of 8.5 MB, it hung again.
I tried moving the extra data into the URL string and sending the file as a Blob, as described here. No success; the browser still hangs.
I tried passing the Blob constructor a plain Array, a Uint8Array made from it, and finally I tried sending the File as suggested in the docs, but still the same result, even with a small file.
What is wrong with the code? HDD load is 0% when this hang happens. Also, the files in question are really quite small.
Here is what I get as output from my server script when I force-terminate the JS script by pressing the button:
Warning: Unknown: POST Content-Length of 22146226 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
Warning: Cannot modify header information - headers already sent in Unknown on line 0
Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent in D:\xmessenger\upload.php on line 2
Array ( )
Here is my JavaScript:
function uploadEncryptedFile(nonce) {
if (typeof window['FormData'] !== 'function' || typeof window['File'] !== 'function') return
var file_input = document.getElementById('attachment')
if (!file_input.files.length) return
var file = file_input.files[0]
var reader = new FileReader();
reader.addEventListener('load', function() {
var data = Array.from(new Uint8Array(reader.result))
var encrypted = encryptFile(data, nonce)
//return //Here it never hangs
var form_data = new FormData()
form_data.append('name', file.name)
form_data.append('type', file.type)
form_data.append('attachment', arrayBufferToBase64(encrypted))
/* form_data.append('attachment', btoa(encrypted)) // Does not help */
form_data.append('nonce', nonce)
var req = getXmlHttp()
req.open('POST', 'upload.php?attachencryptedfile', true)
req.onload = function() {
var data = req.responseText.split(':')
document.getElementById('filelist').lastChild.realName = data[2]
document.getElementById('progress2').style.display = 'none'
document.getElementById('attachment').onclick = null
encryptFilename(data[0], data[1], data[2])
}
req.send(form_data)
/* These lines also fail when the file is larger */
/* req.send(new Blob(encrypted)) */
/* req.send(new Blob(new Uint8Array(encrypted))) */
})
reader.readAsArrayBuffer(file)
}
function arrayBufferToBase64(buffer) {
var binary = '';
var bytes = new Uint8Array(buffer);
var len = bytes.byteLength;
for (var i = 0; i < len; i++) {
binary += String.fromCharCode(bytes[i]);
}
return window.btoa(binary);
}
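For reference, building the binary string one character at a time as arrayBufferToBase64() does above creates a lot of intermediate strings; a chunked variant (a sketch; the 0x8000 chunk size is an arbitrary choice) is usually much lighter on the garbage collector:
function arrayBufferToBase64Chunked(buffer) {
  var bytes = new Uint8Array(buffer);
  var chunks = [];
  var CHUNK = 0x8000; // 32 KB per String.fromCharCode call, stays under argument-count limits
  for (var i = 0; i < bytes.length; i += CHUNK) {
    chunks.push(String.fromCharCode.apply(null, bytes.subarray(i, i + CHUNK)));
  }
  return window.btoa(chunks.join(''));
}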
Here is my PHP handler code:
if (isset($_GET['attachencryptedfile'])) {
$entityBody = file_get_contents('php://input');
if ($entityBody == '') exit(print_r($_POST, true));
else exit($entityBody);
if (!isset($_POST["name"])) exit("Error");
$name = @preg_replace("/[^0-9A-Za-z._-]/", "", $_POST["name"]);
$nonce = @preg_replace("/[^0-9A-Za-z+\\/]/", "", $_POST["nonce"]);
if ($name == ".htaccess") exit();
$data = base64_decode($_POST["attachment"]);
//print_r($_POST);
//exit();
if (strlen($data)>1024*15*1024) exit('<script type="text/javascript">parent.showInfo("File is too large"); parent.document.getElementById(\'filelist\').removeChild(parent.document.getElementById(\'filelist\').lastChild); parent.document.getElementById(\'progress2\').style.display = \'none\'; parent.document.getElementById(\'attachment\').onclick = null</script>');
$uname = uniqid()."_".str_pad($_SESSION['xm_user_id'], 6, "0", STR_PAD_LEFT).substr($name, strrpos($name, "."));
file_put_contents("upload/".$uname, $data);
mysql_query("ALTER TABLE `attachments` AUTO_INCREMENT=0");
mysql_query("INSERT INTO `attachments` VALUES('0', '".$uname."', '".$name."', '0', '".$nonce."')");
exit(mysql_insert_id().":".$uname.":".$name);
}
HTML form:
<form name="fileForm" id="fileForm" method="post" enctype="multipart/form-data" action="upload.php?attachfile" target="ifr">
<div id="fileButton" title="Прикрепить файл" onclick="document.getElementById('attachment').click()"></div>
<input type="file" name="attachment" id="attachment" title="Прикрепить файл" onchange="addFile()" />
</form>
UPD: the issue is not solved, unfortunately. My answer is only partially correct. I made a silly mistake in the code (I forgot to update the server side), and I found another possible cause of the hang. If I submit a basic POST form (x-www-urlencoded) and the PHP script tries to execute this line ($uname is defined, $_FILES is an empty array):
if (!copy($_FILES['attachment']['tmp_name'], "upload/".$uname)) exit("Error");
then the whole thing hangs again. If I terminate the script, the server response is code 200 and the body contents are just fine (I have error output turned on on my dev machine). I know it is bad practice to call copy() with a first argument that is undefined, but even a server error 500 should not hang the browser like this (by the way, the latest version of Firefox is also affected).
I have Apache 2.4 on Windows 7 x64 and PHP 5.3. Can someone please verify this? Maybe a bug should be filed with the Apache/Firefox teams?
Oh my God. This terrible behavior was caused by... post_max_size = 8M set in php.ini. As it turned out, files smaller than 8 MB actually did not hang the browser.
The last question is: why? Why can't Apache/PHP (I have Apache 2.4, by the way, which is not old) gracefully abort the connection and tell the browser that the limit is exceeded? Or maybe it is a bug in the XHR implementation and does not apply to a basic form submit. Anyway, this will be useful for people who stumble upon it.
By the way, I tried it in Chrome with the same POST size limit, and it does not hang there completely like it does in Firefox (the request still sits in a hung-up state with "no response available", but the JS engine and the UI are not blocked).
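For anyone hitting the same wall, a minimal client-side guard is to estimate the request size before sending (a sketch only; POST_MAX_BYTES mirrors the post_max_size = 8M above, and encrypted is the array produced by encryptFile() in the upload code earlier, so this check would go right after that call in the load handler):
var POST_MAX_BYTES = 8 * 1024 * 1024; // keep in sync with post_max_size in php.ini
// Base64 inflates the payload by roughly 4/3; leave some slack for the other form fields.
var estimatedBodySize = Math.ceil(encrypted.length / 3) * 4 + 1024;
if (estimatedBodySize > POST_MAX_BYTES) {
  alert('File is too large to upload');
  return;
}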
I wrote a Jetty WebSocket client which connects to a remote server and tries to send JSON requests. The server rejects the requests as invalid and closes my connection. However, when I send the exact same JSON using a JavaScript client, it works just fine. So now I'm left scratching my head: is it Jackson 2's encoding of the JSON vs. JSON.stringify()? (A diff shows the two JSON outputs as exactly the same.) Is it a different default configuration between JavaScript WebSockets and Jetty? I'm definitely missing something and am bashing my head against the wall.
Java Side Snippet:
final String wsAddress = "ws://" + wssUrl + "/ws";
final WebSocketClient client = new WebSocketClient();
final WSEventSocket socket = new WSEventSocket(loginRequest);
try {
client.start();
final ClientUpgradeRequest request = new ClientUpgradeRequest();
final URI wsUri = new URI(wsAddress);
logger.info("Connecting to {}", wsUri);
final Future<Session> session = client.connect(socket, wsUri, request);
final String requestjson = mapper.writeValueAsString(wsRequestPojo);
final Future<Void> fut = session.get().getRemote().sendStringByFuture(requestjson);
...
Javascript side snippet:
var wsRequestJson = {...Use output from Java Side...}
var mySock = new WebSocket("ws://" + wsocketUrl + "/ws");
mySock.onmessage = function(evt) { console.log(evt.data); };
mySock.onclose = function() { console.log("CLOSED"); };
mySock.send(JSON.stringify(wsRequestJson));
The JavaScript side works perfectly; the Java side is not encoding the data properly. I've tried a bunch of things, like JSON to a byte array and such, with no luck. I Wiresharked both transactions and I can see the WebSocket pushes and the responses. In the Java capture I see a payload that looks exactly like JSON, and Wireshark deciphers it. In the JavaScript packets I see some type of encoded data: \214P\311n\3020\024... I read a little bit about masked payloads, and both clients are sending masked payloads with masking keys; maybe it has something to do with that?
Any idea how to get the Java Jetty side to encode the data in a similar fashion? Or even what type of encoding the JavaScript side is using? I'm probably overthinking it at this point...
Thanks!
As stated by @Joakim Erdfelt in a comment above, the JavaScript side is using the Sec-WebSocket-Extensions: permessage-deflate extension by default. It might not be the best way to set it, but I updated my Java code by adding:
request.addExtensions(ExtensionConfig.parse("permessage-deflate"));
Updated Snippet:
final String wsAddress = "ws://" + wssUrl + "/ws";
final WebSocketClient client = new WebSocketClient();
final WSEventSocket socket = new WSEventSocket(loginRequest);
try {
client.start();
final ClientUpgradeRequest request = new ClientUpgradeRequest();
request.addExtensions(ExtensionConfig.parse("permessage-deflate"));
final URI wsUri = new URI(wsAddress);
logger.info("Connecting to {}", wsUri);
final Future<Session> session = client.connect(socket, wsUri, request);
final String requestjson = mapper.writeValueAsString(wsRequestPojo);
final Future<Void> fut = session.get().getRemote().sendStringByFuture(requestjson);
...
Works like a charm! Thanks!
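If you want to confirm what the browser actually negotiated, the standard WebSocket API exposes it (a small sketch using mySock from the JavaScript snippet above):
mySock.onopen = function() {
    // Typically logs "permessage-deflate" when the server accepted the extension.
    console.log('Negotiated extensions: ' + mySock.extensions);
};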
I am trying to emulate Chrome's native messaging feature using Firefox's add-on SDK. Specifically, I'm using the child_process module along with the emit method to communicate with a python child process.
I am able to successfully send messages to the child process, but I am having trouble getting messages sent back to the add-on. Chrome's native messaging feature uses stdin/stdout. The first 4 bytes of every message in each direction represent the size in bytes of the message that follows, so the receiver knows how much to read. Here's what I have so far:
Add-on to Child Process
var utf8 = new TextEncoder("utf-8").encode(message);
var latin = new TextDecoder("latin1").decode(utf8);
emit(childProcess.stdin, "data", new TextDecoder("latin1").decode(new Uint32Array([utf8.length])));
emit(childProcess.stdin, "data", latin);
emit(childProcess.stdin, "end");
Child Process (Python) from Add-on
text_length_bytes = sys.stdin.read(4)
text_length = struct.unpack('i', text_length_bytes)[0]
text = sys.stdin.read(text_length).decode('utf-8')
Child Process to Add-on
sys.stdout.write(struct.pack('I', len(message)))
sys.stdout.write(message)
sys.stdout.flush()
Add-on from Child Process
This is where I'm struggling. I have it working when the length is less than 255. For instance, if the length is 55, this works:
childProcess.stdout.on('data', (data) => { // data is '7' (55 UTF-8 encoded)
  var utf8Encoded = new TextEncoder("utf-8").encode(data);
  console.log(utf8Encoded[0]); // 55
});
But, like I said, it does not work for all numbers. I'm sure I have to do something with TypedArrays, but I'm struggling to put everything together.
The problem here is that Firefox tries to read stdout as a UTF-8 stream by default. Since UTF-8 does not map every single-byte value to a character, you get corrupted characters for values such as 255. The solution is to tell Firefox to read in binary encoding, which means you'll have to manually parse the actual message content later on.
var childProcess = spawn("mybin", [ '-a' ], { encoding: null });
Your listener would then work like this:
var decoder = new TextDecoder("utf-8");
var readIncoming = (data) => {
  // read the first four bytes (little-endian), which indicate the size of the following message;
  // use the subarray's byteOffset so that recursive calls read the right bytes
  var size = new DataView(data.buffer, data.byteOffset, 4).getUint32(0, true);
  //TODO: handle size > data.byteLength - 4
  // read the message: the message bytes end at 4 + size, not at size
  var message = decoder.decode(data.subarray(4, 4 + size));
  //TODO: do stuff with message
  // Read the next message if there are more bytes.
  if (data.byteLength > 4 + size)
    readIncoming(data.subarray(4 + size));
};
childProcess.stdout.on('data', (data) => {
  // convert the data string to a byte array
  // The bytes got converted by char code, see https://dxr.mozilla.org/mozilla-central/source/addon-sdk/source/lib/sdk/system/child_process/subprocess.js#357
  var bytes = Uint8Array.from(data, (c) => c.charCodeAt(0));
  readIncoming(bytes);
});
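Note that a single "data" event is not guaranteed to carry a complete framed message (the first TODO above). A rough sketch of buffering chunks until a full length-prefixed message has arrived, reusing readIncoming from above (pending is an assumed module-level variable):
var pending = new Uint8Array(0); // leftover bytes from previous "data" events
childProcess.stdout.on('data', (data) => {
  var bytes = Uint8Array.from(data, (c) => c.charCodeAt(0));
  // append the new bytes to whatever was left over
  var merged = new Uint8Array(pending.length + bytes.length);
  merged.set(pending);
  merged.set(bytes, pending.length);
  pending = merged;
  // hand complete length-prefixed messages to readIncoming, keep the rest for later
  while (pending.byteLength >= 4) {
    var size = new DataView(pending.buffer, pending.byteOffset, 4).getUint32(0, true);
    if (pending.byteLength < 4 + size) break;
    readIncoming(pending.subarray(0, 4 + size));
    pending = pending.subarray(4 + size);
  }
});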
Maybe this is similar to this problem:
Chrome native messaging doesn't accept messages of certain sizes (Windows)
Windows-only: Make sure that the program's I/O mode is set to O_BINARY. By default, the I/O mode is O_TEXT, which corrupts the message format as line breaks (\n = 0A) are replaced with Windows-style line endings (\r\n = 0D 0A). The I/O mode can be set using __setmode.
I am having serious problems decoding the message body of the emails I get using the Gmail API. I want to grab the message content and put it in a div. I am using a Base64 decoder, which I know won't decode emails encoded differently, but I am not sure how to check an email to decide which decoder to use: emails that say they are UTF-8 encoded are successfully decoded by the Base64 decoder, but not by a UTF-8 decoder.
I've been researching email decoding for several days now, and I've learned that I am a little out of my league here. I haven't done much work with coding around email before. Here is the code I am using to get the emails:
gapi.client.load('gmail', 'v1', function() {
var request = gapi.client.gmail.users.messages.list({
labelIds: ['INBOX']
});
request.execute(function(resp) {
document.getElementById('email-announcement').innerHTML = '<i>Hello! I am reading your <b>inbox</b> emails.</i><br><br>------<br>';
var content = document.getElementById("message-list");
if (resp.messages == null) {
content.innerHTML = "<b>Your inbox is empty.</b>";
} else {
var encodings = 0;
content.innerHTML = "";
angular.forEach(resp.messages, function(message) {
var email = gapi.client.gmail.users.messages.get({
'id': message.id
});
email.execute(function(stuff) {
if (stuff.payload == null) {
console.log("Payload null: " + message.id);
}
var header = "";
var sender = "";
angular.forEach(stuff.payload.headers, function(item) {
if (item.name == "Subject") {
header = item.value;
}
if (item.name == "From") {
sender = item.value;
}
})
try {
var contents = "";
if (stuff.payload.parts == null) {
contents = base64.decode(stuff.payload.body.data);
} else {
contents = base64.decode(stuff.payload.parts[0].body.data);
}
content.innerHTML += '<b>Subject: ' + header + '</b><br>';
content.innerHTML += '<b>From: ' + sender + '</b><br>';
content.innerHTML += contents + "<br><br>";
} catch (err) {
console.log("Encoding error: " + encodings++);
}
})
})
}
});
});
I was performing some checks and debugging, so there's leftover console.log's and some other things that are only there for testing. Still, you can see here what I am trying to do.
What is the best way to decode the emails I pull from the Gmail API? Should I try to put the emails into <script>'s with charset and type attributes matching the encoding of the email's content? I believe I remember that charset only works with a src attribute, which I wouldn't have here. Any suggestions?
For a prototype app I'm writing, the following code is working for me:
var base64 = require('js-base64').Base64;
// js-base64 is working fine for me.
var bodyData = message.payload.body.data;
// Simplified code: you'd need to check for multipart.
base64.decode(bodyData.replace(/-/g, '+').replace(/_/g, '/'));
// If you're going to use a different library other than js-base64,
// you may need to replace some characters before passing it to the decoder.
Caution: these points are not explicitly documented and could be wrong:
The users.messages: get API returns "parsed body content" by default. This data seems to be always encoded in UTF-8 and Base64, regardless of the Content-Type and Content-Transfer-Encoding header.
For example, my code had no problem parsing an email with these headers: Content-Type: text/plain; charset=ISO-2022-JP, Content-Transfer-Encoding: 7bit.
The mapping table of the Base64 encoding varies among implementations. The Gmail API uses - and _ as the last two characters of the table, as defined by RFC 4648's "URL and Filename safe Alphabet" [1].
Check if your Base64 library is using a different mapping table. If so, replace those characters with the ones your library accepts before passing the body to the decoder.
[1] There is one supportive line in the documentation: the "raw" format returns "body content as a base64url encoded string". (Thanks Eric!)
Use atob to decode the messages in JavaScript (see ref). For accessing your message payload, you can write a function:
var extractField = function(json, fieldName) {
return json.payload.headers.filter(function(header) {
return header.name === fieldName;
})[0].value;
};
var date = extractField(response, "Date");
var subject = extractField(response, "Subject");
referenced from my previous SO Question and
var part = message.parts.filter(function(part) {
  return part.mimeType == 'text/html';
})[0];
var html = atob(part.body.data);
If the above does not decode 100% properly, the comments by @cgenco on this answer may apply to you. In that case, do:
var html = atob(part.body.data.replace(/-/g, '+').replace(/_/g, '/'));
Here is the solution:
The Gmail API "Users.messages: get" method returns message.payload.body.data as base64 data split into parts separated by the "-" symbol. It is not one whole base64-encoded text, but parts of base64 text. You have to either try to decode every single part, or build one single string by joining them and replacing the "-" symbol. After this you can easily decode it to human-readable text.
You can manually check every part here: https://www.base64decode.org
I was also annoyed by this. I discovered a solution by looking at a VS Code extension. The solution is really simple:
const body = response.data.payload.body; // the base64 encoded body of a message
const decoded = Buffer.alloc(
  body.data.length,
  body.data,
  "base64"
).toString(); // the decoded message
It worked for me, as I was using the gmail.users.messages.get() call of the Gmail API.
Please use a web-safe decoder for decoding Gmail emails and attachments. I got blank pages when I used just a base64 decoder; I had to use this: https://www.npmjs.com/package/urlsafe-base64
I can easily decode using another tool at https://simplycalc.com/base64-decode.php
In JS: https://www.npmjs.com/package/base64url
In Python 3:
import base64
base64.urlsafe_b64decode(coded_string)
Thanks to @ento's answer. Let me explain further why you need to replace the '-' and '_' characters with '+' and '/' before decoding.
The Wikipedia Base64 variants summary table shows:
RFC 4648 section 4: base64 (standard): use '+' and '/'
RFC 4648 section 5: base64url (URL-safe and filename-safe standard): use '-' and '_'
In short, the Gmail API uses the base64url (URL-safe) format ('-' and '_'), but the JavaScript atob function and other JavaScript libraries use the base64 (standard) format ('+' and '/').
For the Gmail API, the documentation says the body uses the base64url format; see the links below:
string/bytes type
MessagePartBody
RAW
For the Web atob/btoa standards, see the links below:
The algorithm used by atob() and btoa() is specified in RFC 4648, section 4
8.3 Base64 utility methods
Forgiving base64
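Putting the answers above together, here is a minimal browser-side sketch (assuming atob and TextDecoder are available) for decoding message.payload.body.data:
function decodeBody(data) {
  // Gmail returns base64url ('-' and '_'); atob expects standard base64 ('+' and '/')
  var b64 = data.replace(/-/g, '+').replace(/_/g, '/');
  // atob yields a binary string; decode its bytes as UTF-8 to get readable text
  var bytes = Uint8Array.from(atob(b64), function(c) { return c.charCodeAt(0); });
  return new TextDecoder('utf-8').decode(bytes);
}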
I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services (the web app will generate content for mobile devices) in Node.js.
My client can successfully send chunks of data to the backend, and everything looks fine, but at the end the uploaded file is empty. The data upload was built by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request. The process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here the code:
Back-end : createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName+"."+blobExt;
console.log("BlobFull: "+blobFull+"; id: "+blockId+"; len: "+streamLen+"; "+stream);
var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
function(error, response){
if(error){
request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
} else {
request.respond(statusCodes.OK, {message : "block created"});
}
});
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName+"."+blobExt;
var blockIdList = request.body.blockList;
console.log("blobFull: "+blobFull+"; blockIdList: "+JSON.stringify(blockIdList));
var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function(error, result){
if(error){
request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
} else {
request.respond(statusCodes.OK, result);
blobService.listBlobBlocks(container, blobFull)
}
});
[...]
The second method returns the correct list of blockIds, so I think the second part of the process works fine. I think it is the first method that fails to write the data inside the block, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
//data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
binary += String.fromCharCode( requestData[ i ] );
}
[...]
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not too sure if you're still suffering from this problem, but I was experiencing the exact same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I push the block but with how I committed it. My little proxy server had no knowledge of prior commits; it just pushes the data it's sent and commits it. The trouble is I wasn't providing the commit message with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
UncommittedBlocks: [IdOfJustCommittedBlock],
CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
}
blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r){});
For me, the bit that broke everything was the format of the opts object. I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, as:
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block, with the name being base64-encoded.
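For illustration, a rough sketch of building the opts object above from the listed blocks (a sketch only: the CommittedBlocks/Name property names on the result are assumptions that may differ between SDK versions, and IdOfJustCommittedBlock is the same placeholder as above):
blobService.listBlobBlocks('containerName', 'blobName', 'committed', function(err, blockList) {
  if (err) { return; } // handle the error appropriately
  var committedIds = blockList.CommittedBlocks.map(function(block) {
    // block names come back base64-encoded, so decode them before reusing
    return new Buffer(block.Name, 'base64').toString();
  });
  var opts = {
    UncommittedBlocks: [IdOfJustCommittedBlock],
    CommittedBlocks: committedIds
  };
  blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r) {});
});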
Just for completeness, here's how I push my blocks; req is from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
Hope this helps and is detailed enough.