createBlockBlob and commitBlobBlocks create empty files in BlobStorage - javascript

I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services (the web app will generate content for mobile devices) in Node.js.
My client can successfully send chunks of data to the backend, and everything looks fine, but at the end the uploaded file is empty. The data upload was prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request, but the process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here is the code:
Back-end: createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName + "." + blobExt;

console.log("BlobFull: " + blobFull + "; id: " + blockId + "; len: " + streamLen + "; " + stream);

var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);

blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
    function(error, response) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
        } else {
            request.respond(statusCodes.OK, { message: "block created" });
        }
    });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName + "." + blobExt;
var blockIdList = request.body.blockList;

console.log("blobFull: " + blobFull + "; blockIdList: " + JSON.stringify(blockIdList));

var blobService = azure.createBlobService(accountName, accountKey, host);

blobService.commitBlobBlocks(container, blobFull, blockIdList, function(error, result) {
    if (error) {
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
        request.respond(statusCodes.OK, result);
        blobService.listBlobBlocks(container, blobFull);
    }
});
[...]
The second method returns the correct list of block IDs, so I think the second part of the process works fine. I suspect it is the first method that fails to write the data into the blocks, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string with the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
//data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
    binary += String.fromCharCode(requestData[i]);
}
[...]
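The elided code then Base64-encodes that binary string before sending it; a one-line sketch of that step (the requestBase64 name is hypothetical):

var requestBase64 = btoa(binary); // hypothetical: produces the Base64 payload decoded server-side via new Buffer(data, "base64")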
Any idea?
Thank you,
Ric

Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
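For reference, here is a minimal sketch of a block blob upload with the newer azure-storage module; the container, blob, and file names are placeholders, and the credentials are assumed to be the same ones used in the question:

var azure = require('azure-storage'); // the npm package linked above

var blobService = azure.createBlobService(accountName, accountKey);
// The newer library can split a local file into blocks and commit them in one call
blobService.createBlockBlobFromLocalFile('zips', 'archive.zip', '/path/to/archive.zip', function (error, result, response) {
    if (error) {
        console.error('upload failed:', error);
    } else {
        console.log('upload complete');
    }
});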

Not sure if you're still suffering from this problem, but I was experiencing exactly the same thing and came across this question while looking for a solution. Well, I found one, and thought I'd share it.
My problem was not with how I pushed the block but with how I committed it. My little proxy server had no knowledge of prior commits; it just pushed the data it was sent and committed it. The trouble is I wasn't providing the commit call with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
    UncommittedBlocks: [IdOfJustCommitedBlock],
    CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
}
blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r){});
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to Base64-decode the existing block names, because:
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block with the name Base64-encoded.
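A minimal sketch of that decoding step against the list call above (the CommittedBlocks property and its shape are what I observed, so treat them as an assumption):

blobService.listBlobBlocks('containerName', 'fileName', 'committed', function (error, list) {
    if (error) { throw error; }
    // Assumption: each entry in list.CommittedBlocks carries a Base64-encoded Name
    var committedIds = list.CommittedBlocks.map(function (block) {
        return new Buffer(block.Name, 'base64').toString();
    });
    // committedIds can then be passed as opts.CommittedBlocks in the next commit
});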
Just for completeness, here's how I push my blocks; req comes from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
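Putting the last two snippets together, a sketch of the block push with the length captured before the header is removed (committedCount is a hypothetical counter of previously committed blocks):

// Capture the length first, then drop the header that broke the upload for me
var length = parseInt(req.headers['content-length'], 10);
delete req.headers['content-length'];

var blobId = blobService.getBlockId('blobName', committedCount + 1); // committedCount: hypothetical
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);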
Hope this helps and is detailed enough.

Related

Using CSV node module in chrome extension

I am trying to make an extension for Chrome, and one piece of needed functionality is to let users save their data locally (but I do plan to integrate Gdrive saving). I wanted to save the user's data as CSV, assuming that it would be easier, but it turned out to be a disaster.
I have tried many different methods to make it work, but it seems like it's no use.
Here's the part of the code that doesn't work:
var { parse } = require('csv-parse');
const fs = require("chrome-fs");
const records = [];
const csvFile = chrome.runtime.getURL("save_file.csv");

var parser = parse({ columns: true }, function(err, records) {
    console.log(records);
});
fs.createReadStream(chrome.runtime.getURL("save_file.csv")).pipe(parser);
and here's the error the browser throws at me when I try to run this code:
Uncaught Error
    at chrome.js:511:1

The code around chrome.js line 511 is:

if (err.name === 'NotFoundError') {
    var enoent = new Error()
    enoent.code = 'ENOENT'   // chrome.js line 511
    callback(enoent)
}
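In case it helps, here is a minimal, untested sketch that sidesteps the fs shim entirely by fetching the packaged file and handing the text to csv-parse:

var { parse } = require('csv-parse');

// Extension resources are served at a chrome-extension:// URL, so fetch can read them
fetch(chrome.runtime.getURL('save_file.csv'))
    .then(function (response) { return response.text(); })
    .then(function (text) {
        parse(text, { columns: true }, function (err, records) {
            if (err) { throw err; }
            console.log(records);
        });
    });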

Running into a 1.7 MB limit with CSOM-based upload functionality

Running into the following error when I try to upload files larger than 1.7 MB:
"Request failed with error message - The request message is too big. The server does not allow messages larger than 2097152 bytes. . Stack Trace - undefined"
function uploadFile(arrayBuffer, fileName) {
    //Get Client Context, Web and List object.
    var clientContext = new SP.ClientContext();
    var oWeb = clientContext.get_web();
    var oList = oWeb.get_lists().getByTitle('CoReTranslationDocuments');

    var bytes = new Uint8Array(arrayBuffer);
    var i, length, out = '';
    for (i = 0, length = bytes.length; i < length; i += 1) {
        out += String.fromCharCode(bytes[i]);
    }
    var base64 = btoa(out);

    var createInfo = new SP.FileCreationInformation();
    createInfo.set_content(base64);
    createInfo.set_url(fileName);

    var uploadedDocument = oList.get_rootFolder().get_files().add(createInfo);
    clientContext.load(uploadedDocument);
    clientContext.executeQueryAsync(QuerySuccess, QueryFailure);
}
We just switched from SP2013 to SharePoint Online. This code worked well, even with larger files, previously. Does the 2 MB limit refer to the file being uploaded or to the size of the request?
I also read about a possible solution using filestream - is that something I can use in this scenario?
Any suggestions or modifications to the code would be much appreciated.
SharePoint has its own limits for CSOM. Unfortunately, these limits cannot be configured in Central Administration and also cannot be set using CSOM for obvious reasons.
When googling the issue, the solution usually given is to set the ClientRequestServiceSettings.MaxReceivedMessageSize property to the desired size.
Call the following PowerShell script from the SharePoint Management Shell:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.Update()
This will set the limit to 200 MB.
However, in SharePoint 2013 Microsoft apparently added another configuration setting to also limit the amount of data which the server shall process from a CSOM request (Why anyone would configure this one differently is beyond me...). After reading a very, very long SharePoint Log file and crawling through some disassembled SharePoint server code, I found that this parameter can be set via the property ClientRequestServiceSettings.MaxParseMessageSize.
We are now using the following script with SharePoint 2013 and it works great:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.ClientRequestServiceSettings.MaxParseMessageSize = 209715200
$ws.Update()
Hope that saves some people a headache!
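Note that the scripts above apply to on-premises farms; in SharePoint Online (as in the question) these properties are not configurable. A common workaround there is to bypass CSOM and send the raw bytes through the REST file-add endpoint, which avoids the Base64 inflation and the CSOM message-size check. A hedged sketch, assuming a classic page where _spPageContextInfo and the __REQUESTDIGEST control are available:

function uploadFileRest(arrayBuffer, fileName) {
    // Raw binary body - no SP.FileCreationInformation and no Base64 blow-up
    var url = _spPageContextInfo.webAbsoluteUrl +
        "/_api/web/lists/getbytitle('CoReTranslationDocuments')/RootFolder/Files/add(url='" +
        fileName + "',overwrite=true)";
    return fetch(url, {
        method: 'POST',
        headers: {
            'Accept': 'application/json;odata=verbose',
            'X-RequestDigest': document.getElementById('__REQUESTDIGEST').value
        },
        body: arrayBuffer
    });
}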

MS Graph API file replace SharePoint ReactJS 404 item not found or stream issue

I am trying to use the MS Graph API and ReactJS to download a file from SharePoint and then replace the file. I have managed the download part after using the #microsoft.graph.downloadUrl value. Here is the code that gets me the XML document from SharePoint.
export async function getDriveFileList(accessToken, siteId, driveId, fileName) {
    const client = getAuthenticatedClient(accessToken);
    //https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/{item-path}
    const files = await client
        .api('/sites/' + siteId + '/drives/' + driveId + '/root:/' + fileName)
        .select('id,name,webUrl,content.downloadUrl')
        .orderby('name')
        .get();
    //console.log(files['#microsoft.graph.downloadUrl']);
    return files;
}
When attempting to upload the same file back up, I get a 404 itemNotFound error in return. Because this user was able to get it to work, I think I have the MS Graph API call correct, although I am not sure I'm translating it correctly to ReactJS syntax. Even though the error message says the item was not found, I think MS Graph might actually be upset with how I'm sending the XML file back. The Microsoft documentation for updating an existing file states that the contents of the file should be sent in a stream. Since I've loaded the XML file into the state, I'm not entirely sure how to send it back. The closest match I found involved converting a PDF to a blob, so I tried that.
export async function putDriveFile(accessToken, siteId, itemId, xmldoc) {
    const client = getAuthenticatedClient(accessToken);
    // /sites/{site-id}/drive/items/{item-id}/content
    let url = '/sites/' + siteId + '/drive/items/' + itemId + '/content';

    var convertedFile = null;
    try {
        convertedFile = new Blob([xmldoc], { type: 'text/xml' });
    } catch (err) {
        console.log(err);
    }

    const file = await client
        .api(url)
        .put(convertedFile);
    console.log(file);
    return file;
}
I'm pretty sure it's the way I'm sending the file back, but the Graph API has some bugs, so I can't be entirely sure. I was convinced I was getting the correct ID of the drive item, but I've seen that the site ID syntax can differ with the Graph API, so maybe it is the item ID.
The correct syntax for putting an (existing) file into a document library in SharePoint is actually PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content. I also found that the code below worked for taking the XML document and converting it into a blob that could be uploaded:
var xmlText = new XMLSerializer().serializeToString(this.state.xmlDoc);
var blob = new Blob([xmlText], { type: "text/xml" });
var file = new File([blob], this.props.location.state.fileName, { type: "text/xml" });
var graphReturn = await putDriveFile(accessToken, this.props.location.state.driveId, this.state.fileId, file);
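Applied to the question's putDriveFile, the path-based form would look roughly like this (the parentId and fileName parameter names are mine):

export async function putDriveFileByPath(accessToken, siteId, parentId, fileName, file) {
    const client = getAuthenticatedClient(accessToken);
    // PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content
    const url = '/sites/' + siteId + '/drive/items/' + parentId + ':/' + fileName + ':/content';
    return client.api(url).put(file);
}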

How to open and write data into a file from an API call using Node.js

I wrote an API, and the response from that API is an array of data. Whenever a response comes from the API, I want to store it in a file in .txt format. I tried, but I get an error like "No such directory or no such path". How do I create a file and write data into it from an API using Node.js? This is the code I wrote:
exports.Entry = functions.https.onRequest((req, res) => {
    var fs = require('fs');
    var a = ['6', '7', '8'];
    var b = ['22', '27', '20'];
    var eachrecord = [];
    for (var i = 0; i < a.length; i++) {
        eachrecord += a + b;
    }
    console.log("eachrecord is", eachrecord);

    //Writing each record value into file
    fileWriteSync('./filewriting1.txt');

    function fileWriteSync(filePath) {
        var fd = fs.openSync(filePath, 'w');
        var length = eachrecord.length;
        for (i = 0; i < length; i++) {
            var eachrecordwrite = fs.writeSync(fd, eachrecord[i] + '\n', null, null);
            console.log("hii", eachrecord[i]);
        }
        fs.closeSync(fd);
    }
});
How can I write data into a file from an API using Node.js?
You can only write files to os.tmpdir(), which is /tmp on Cloud Functions. /tmp is a memory-based filesystem; everything else is read-only. If you don't do anything with a written file, it will consume memory indefinitely, so you should always delete files written to /tmp before the function terminates. Writing a file to memory like this is almost certainly not the best solution to a problem, unless there is a consumer for that content that can only read it off the local filesystem.
Since you haven't really said what problem you're trying to solve, it's not possible to say what you could be doing instead (that's something for a different question). But anyway, you can only write to /tmp.
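A minimal sketch of that pattern, reusing the handler shape from the question (the firebase-functions import is implied by exports.Entry; the file name is arbitrary):

const functions = require('firebase-functions');
const fs = require('fs');
const os = require('os');
const path = require('path');

exports.Entry = functions.https.onRequest((req, res) => {
    const filePath = path.join(os.tmpdir(), 'filewriting1.txt'); // resolves to /tmp on Cloud Functions
    fs.writeFileSync(filePath, ['6', '7', '8'].join('\n'));
    // ... hand the file to whatever consumer needs it here ...
    fs.unlinkSync(filePath); // clean up the memory-backed filesystem before returning
    res.status(200).send('done');
});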

Write a variable into a buffer

I am very new to Node.js, and I think I understand the basics of how it functions, but I feel like I am missing something vital about how fs.write and buffers work.
I am trying to send a user-defined variable over socket.io and write it into an HTML file. I have a main site with a button; when clicked, it sends the information to the socket in a variable.
The thing I can't figure out is how to insert the variable into the HTML file.
I can save strings that I type into a file:
(e.g.) var writeBuffer = new Buffer('13');
But not variables that I pass in:
(e.g.) var writeBuffer = new Buffer($(newval));
I even tried different encoding methods. I think I am missing something.
Server.js
var newval = "User String";

var fd = fsC.open(fileName, 'rs+', function (error, fd) {
    if (error) { throw error }

    var writeBuffer = new Buffer($(newval));
    var bufferLength = writeBuffer.length;

    fsC.write(fd, writeBuffer, 0, bufferLength, 937,
        function (error, written) {
            if (error) { throw error }
            fsC.close(fd, function() {
                console.log('File Closed');
            });
        }
    );
});
If you are using jsdom version 4.0.0 or later, it will not work with Node.js. Per the jsdom GitHub readme:
Note that as of our 4.0.0 release, jsdom no longer works with
Node.js™, and instead requires io.js. You are still welcome to install
a release in the 3.x series if you use Node.js™.
