Saving a byteArray to a PDF file in Titanium - JavaScript

I have a SOAP API that returns a file split into chunks, each encoded as a base64 string,
and I'm not able to save it to the file system without corrupting it.
Here is the pastebin of the whole encoded file, as is, once I download and chain the responses.
What is the correct way to save it?
I have tried several approaches:
var f = Ti.Filesystem.getFile(Ti.Filesystem.tempDirectory, 'test.pdf');
...
var blobStream = Ti.Stream.createStream({ source: fileString, mode: Ti.Stream.MODE_READ });
var newBuffer = Ti.createBuffer({ length: fileString.length });
f.write(fileString);
or
var data = Ti.Utils.base64decode(fileString);
var blobStream = Ti.Stream.createStream({ source: data, mode: Ti.Stream.MODE_READ });
var newBuffer = Ti.createBuffer({ length: data.length });
var bytes = blobStream.read(newBuffer);
f.write(fileString);
or
var data = Ti.Utils.base64decode(fileString);
var blobStream = Ti.Stream.createStream({ source: data, mode: Ti.Stream.MODE_READ });
var newBuffer = Ti.createBuffer({ length: data.length });
var bytes = blobStream.read(newBuffer);
f.write(bytes);
but I don't understand which of these is the right path.
Do I have to convert the string back to a byte array on my own?
What is the right way to save it?
Do I have to create a buffer from the string, or something else?

I think the base64 encoding of the file is invalid or incomplete; I've tested it using bash and the base64 utility. You can reproduce the check with these steps.
Copy and paste the base64 string into a file called pdf.base64, then run this command:
cat pdf.base64 | base64 --decode >> out.pdf
The output file is not a valid PDF.
You can encode and then decode a known-good PDF file to see what the generated binary should look like:
cat validfile.pdf | base64 | base64 --decode >> anothervalidfile.pdf
Check whether you are chaining the chunks correctly (one common pitfall is sketched below), or simply get on a phone call with whoever built the SOAP API.
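A minimal Node sketch of that pitfall (the chunks array is hypothetical): if the server base64-encodes each chunk separately, decode per chunk and join the raw bytes; concatenating the base64 strings first breaks wherever a chunk ends with '=' padding.
// chunks: hypothetical array of base64 strings, one per SOAP response
const whole = Buffer.concat(chunks.map((c) => Buffer.from(c, 'base64')));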

Before you start downloading your file you need to create the file stream to write to; writing to a blob is not the way to go:
// Step 1
var outFileStream = Ti.Filesystem.getFile('outfile.bin').open(Ti.Filesystem.MODE_WRITE);
After creating your HTTPClient or socket stream, when you receive a chunk of Base64 data from the server, you need to put the decoded data into a Titanium.Buffer. This would typically go in your onload or onstream handler in an HTTPClient:
// Step 2
var rawDecodedFileChunk = Ti.Utils.base64decode(fileString);
var outBuffer = Ti.createBuffer({
    // byteOrder and type may need to be set here to match the data,
    // e.g. Ti.Codec.LITTLE_ENDIAN and Ti.Codec.TYPE_BYTE
    value: rawDecodedFileChunk
});
Finally you can write the data out to the file stream:
// Step 3
var bytesWritten = outFileStream.write(outBuffer); // writes entire buffer to stream
Ti.API.info("Bytes written:" + bytesWritten); // should match data length
if (outBuffer.length !== bytesWritten) {
    Ti.API.error("Not all bytes written!");
}
Generally errors come from having the wrong byte order or type of data, or writing in the wrong order. Obviously, this all depends on the server sending the data in the correct order and it being valid!
You may also want to consider the pump version of this, which lets you transfer from an input stream to the output file stream in chunks, minimizing the memory load.
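A minimal sketch of that pump approach, assuming the decoded data is available as a blob (decodedBlob is hypothetical):
var inStream = Ti.Stream.createStream({ source: decodedBlob, mode: Ti.Stream.MODE_READ });
Ti.Stream.pump(inStream, function (e) {
    if (e.bytesProcessed > -1) {
        // write only the bytes actually read in this chunk
        outFileStream.write(e.buffer, 0, e.bytesProcessed);
    } else {
        outFileStream.close(); // bytesProcessed === -1 signals end of input
    }
}, 1024);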

Related

How do I store and fetch bytea data through Hasura?

I've got a blob of audio data that is confirmed to play in the browser, but that fails to play after storing, retrieving, and converting that same data. I've tried a few methods without success, each time getting the error:
Uncaught (in promise) DOMException: Failed to load because no supported source was found
Hasura notes that bytea data must be passed in as a String, so I tried a couple of things.
Converting the blob to base64 stores fine, but retrieving and playing the data doesn't work. I've tried doing conversions within the browser to base64 and then back into a blob. I suspect the data just doesn't store properly as bytea if I convert it to base64 first:
// Storing bytea data as a base64 string
const arrayBuffer = await blob.arrayBuffer();
const byteArray = new Uint8Array(arrayBuffer);
const charArray = Array.from(byteArray, (x: number) => String.fromCharCode(x));
const encodedString = window.btoa(charArray.join(''));
hasuraRequest....
`
  mutation SaveAudioBlob($input: String) {
    insert_testerooey_one(object: { blubberz: $input }) {
      id
      blubberz
    }
  }
`,
{ input: encodedString }
);
// Decoding bytea data
const decodedString = window.atob(encodedString);
const decodedByteArray = new Uint8Array(decodedString.length).map((_, i) =>
  decodedString.charCodeAt(i)
);
const decodedBlob = new Blob([decodedByteArray.buffer], { type: 'audio/mpeg' });
const audio4 = new Audio();
audio4.src = URL.createObjectURL(decodedBlob);
audio4.play();
Then I came across a GitHub issue (https://github.com/hasura/graphql-engine/issues/3336) suggesting the use of a computed field to convert the bytea data to base64, so I tried that instead of my own decoding attempt, only to be met with the same error:
CREATE OR REPLACE FUNCTION public.content_base64(mm testerooey)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT encode(mm.blobberz, 'base64')
$function$;
It seemed like a base64 string was not the way to store bytea data, so I tried converting the data to a hex string prior to storing. It stores okay, I think, but upon retrieval the data doesn't play, and I suspect it's a similar problem to storing as base64:
// Encoding to hex string
const arrayBuffer = await blob.arrayBuffer();
const byteArray = new Uint8Array(arrayBuffer);
const hexString = Array.from(byteArray, (byte) =>
  byte.toString(16).padStart(2, '0')
).join('');
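(For reference, Postgres's hex input format for bytea expects a leading \x prefix, so, assuming Hasura passes the string straight through to Postgres, the literal the column would typically need is sketched below; hexString is from the snippet above.)
// prefix the hex payload so Postgres parses it as bytea hex format
const byteaLiteral = '\\x' + hexString;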
But using the decoded data didn't work, regardless of whether I tried the computed-field method or my own conversion methods. So, am I just not converting it right? Is my line of thinking incorrect? What am I doing wrong?
I've got it working if I just convert to base64 and store it in a text field, but I'd prefer to store it as bytea because that takes up less space. I think something is wrong with how the data is stored, retrieved, or converted, but I don't know what. I know the blob itself is fine, because I can play audio from it when it's generated; it only breaks after fetching and attempting to convert its stored value. Any ideas?
Also, I'd really rather not store the file in another service like S3, even if that's drastically simpler.

Random characters appearing at the beginning of a .txt file after converting from base64 in Node.js

I am dropping a .txt file into an uploader, which converts it into base64 data like this:
const { getRootProps, getInputProps } = useDropzone({
  onDrop: async acceptedFiles => {
    let font = ''; // not actually a font, just reused code; this is a .txt file
    let reader = new FileReader();
    let filename = acceptedFiles[0].name.split(".")[0];
    console.log(filename);
    reader.readAsDataURL(acceptedFiles[0]);
    reader.onload = await function () {
      font = reader.result;
      console.log(font);
      dispatch({ type: 'SET_FILES', payload: font });
    };
    setFontSet(true);
  }
});
Then a POST request is made to the Node.js server, and I do indeed receive the base64 value. I then convert it back into a .txt file by writing it into a file called signals.txt, like this:
server.post('/putInDB', (req, res) => {
  console.log(req.body);
  // note: new Buffer() is deprecated; Buffer.from(data, 'base64') is the modern form
  var bitmap = new Buffer(req.body.data, 'base64');
  let dirpath = `${process.cwd()}/signals.txt`;
  let signalPath = path.normalize(dirpath);
  connection.connect();
  fs.writeFile(signalPath, bitmap, async (err) => {
    if (err) throw err;
    console.log('Successfully updated the file data');
    // ...all the ending brackets and stuff
Now the thing is, the original file looks like this:
Time,1,2,3,4,5,6,7,8,9,10,11,12
0.000000,7.250553,14.951141,5.550423,2.850217,-1.050080,-3.050233,1.850141,2.850217,-3.150240,1.350103,-2.950225,1.150088
But the file, when written back from base64, looks like this:
u«Zµìmþ™ZŠvÚ±î¸Time,1,2,3,4,5,6,7,8,9,10,11,12
0.000000,1.250095,0.250019,-4.150317,-0.350027,3.650278,1.950149,0.950072,-1.250095,-1.150088,-7.750591,-1.850141,-0.050004
See the weird characters at the beginning? Why is this happening?
Remember to read up on what the functions you use actually do: you're using readAsDataURL, which does not give you the base64-encoded version of your data. It gives you a Data-URL, and Data-URLs have a header prefix that tells URL parsers what kind of data follows and how to decode the data directly following the header.
To quote the MDN article:
Note: The blob's result cannot be directly decoded as Base64 without first removing the Data-URL declaration preceding the Base64-encoded data. To retrieve only the Base64 encoded string, first remove data:*/*;base64, from the result.
If you don't, blindly converting the Data-URL from base64 to plain text will give you nonsense data at the start:
> Buffer.from('data:*/*;base64', 'base64').toString('utf-8')
'u�Z���{�'
Which raises another point: you would have caught this with POST data validation, because the Data-URL that you sent contains characters that are not allowed in base64. POST validation is always a good idea.
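For completeness, a minimal sketch of the fix on the receiving end (dataUrl standing in for the string the server received):
// strip the "data:...;base64," header, then decode the actual payload
const base64Payload = dataUrl.split(',')[1];
const bitmap = Buffer.from(base64Payload, 'base64');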
I know this isn't the exact code, but it is difficult to reproduce your problem with the code you provided. The data you are sending needs to be in URL/URI-encoded form.
So essentially:
encodeURI(base64data);
encodeURI is built into JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURI
EDIT:
I saw you used readAsDataURL(); try applying the encodeURI function to the result of readAsDataURL().

Mechanisms for hashing a file in JavaScript

I am relatively new to JavaScript, and I want to get the hash of a file while better understanding the mechanism and code behind the process.
So, what I need: An MD5 or SHA-256 hash of an uploaded file to my website.
My understanding of how this works: A file is uploaded via an HTML input tag of type 'file', after which it is converted to a binary string, which is consequently hashed.
What I have so far: I have managed to get the hash of an input of type 'text', and also, somehow, the hash of an uploaded file, although that hash did not match the ones computed by websites I checked online, so I'm guessing it hashed some other details of the file instead of the binary string.
Question 1: Am I correct in my understanding of how a file is hashed? Meaning, is it the binary string that gets hashed?
Question 2: What should my code look like to upload a file, hash it, and display the output?
Thank you in advance.
Basically yes, that's how it works.
But to generate such a hash you don't need to do the conversion to a string yourself. Instead, let the SubtleCrypto API handle it, and just pass an ArrayBuffer of your file. (Note that SubtleCrypto does not support MD5, so SHA-256 is the one to use here.)
async function getHash(blob, algo = "SHA-256") {
  // convert the Blob to an ArrayBuffer
  // (could also use a FileReader for browsers that don't support the Response API)
  const buf = await new Response(blob).arrayBuffer();
  const hash = await crypto.subtle.digest(algo, buf);
  let result = '';
  const view = new DataView(hash);
  for (let i = 0; i < hash.byteLength; i += 4) {
    // each Uint32 is 8 hex digits; pad to 8 so leading zeros aren't lost
    result += view.getUint32(i).toString(16).padStart(8, '0');
  }
  return result;
}
inp.onchange = e => {
  getHash(inp.files[0]).then(console.log);
};
<input id="inp" type="file">

Node.js: decode base64 and save it into a file using streams

In my Node.js application I decode base64-encoded images using the following line of code:
const fileDataDecoded = Buffer.from(base64EncodedfileData,'base64');
So far I can write a file with the following piece of code:
const fs = require('fs');
....
const fileDataDecoded = Buffer.from(base64EncodedfileData,'base64');
fs.writeFile("/tmp/test.png", fileDataDecoded, function(err) {
//Handle Error
});
Now what I want to achieve is for the decoded data to be written to a file via streams, for better efficiency in the running application.
In other words, I want the file data to be written at the same time the data is being base64-decoded, so that large files can be written efficiently. If it is not possible to stream base64-decoded data via a Buffer, I would like to know how else it is possible to decode base64 data.
You can create a readable stream, push the image buffer into it, and then pipe it to a writable stream, like this.
var base64 = 'iVBORw0KGgoAAAANSUhEUgAAAJAAAACQCAYAAADnRuK4AAAMcklEQVR42u2deVBUVxbGGxHQBOPEZdxr3KJJcO+INGsDDXGJC8RWBKMGFIhRGVxYLDCgINAoq9J2I2jiGCPDKGClZvKXmaWmZq1U4lRiJslkU2dMTWapmiQVWzlzz+vXLCrQ0G/vc6q+egVF1Tvvfr8+5977mvd0OgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKisEFgE9RUdEwc0uLL0k+oQfohWqgMV4tGs4lTaGoQE/QG8XCxCXX6+czI+LbmqfFdzTOWH7ZOl1+nZFB8l4zjj16gF7055XsgaXSlVhch32bqaPxTaZPmP7HdIfpezkUd+W0YiTTGNzhPUAv3kRvXPC4PFMMPCy5KJbkB0zQpXa75Ipj51W65BiXHr58YLpsNSoCIlcCse32lO4E7Y7YdpvD1Ga/x37fKY0aO9m5VSfMW4rxcXphc/rCPGK/g8g36rbICpGrFCLNLnhYYncwOanUq9qpXJKNW5sd2xrEtDZAZFNZvDxzIn4mr7el+7FP0XWp4TFpDJ6eLV8qiPAYcfbYR1PNhpE9PZWy+vhEt1pTu9sWVRw1VSR2DofxQi2E1BTuQS+xGEi6p8AOvrFttp87L9bmELfiNDrlRQC5rlk8gOyOmEtWMDQcvsq8HG42m32lbF8+c3NSRzGAPuUu1jlhpqqjomqEnmHLDLWVfjlxRcR49FSSNuaqPsssBU+wBL7h+3YnwaM6iHAlCOHNFd8GZWxeiJ5KUoWMRdz8Z/jS8twgLINiAOR17WoQbU3AiTQHUMRZi+Pp9KRl6Kk+XYJ5EA+Qn75w13xceQkLEEHiXjVqFAyg8KYKx5NbEgycp1IAxJ8kYNGBHYuEBMhELUvaJX8PgGalrAlDT4PMZn/JAFq4N22xUAAREDLMjXoANHPT6nDVAkQQyASRzACNEAIgMl9GiB4EaISqACLTZYZIzQCR2QqASK0AkckKgUiNAJG5CoJIbQDRPo/C9onUBRAZqrgdazUBRGYqsJWpBSC6MarQG7CqAUjsT1iLFaKKiiBiXz5E7FWRWL6YN+YvSxVSA0BSfMKMtRUQan4JQterUCzvqMoSZqZNeoiUDpBUJTrKcpQzI2rTbtj5ShVkl55QvDBPzBfzjiguhKjXa6SHSMkASTnvcQG0IjUH/njtOnx+67bihXlivph3eFEhRJ6t5CGySzcfIoB6A/Tcjjx4/5Mv4Pa//qt4YZ6Yb0+AOIjO14i7X6YGgKReZbgAWptxEG599U/4/o5D8cI8Md/7AXJCVC36pisB9BCA1mUehK++/jfcu3dP8cI8Md+HASQFRIoFyEQACQKQq52JNidSLEAybJRpFaDuibU4qzPFASTXTquWARIVIucTTQggrQMkGkTKAqiRABIRIFEg4jw7rQyA+AcmEUAiAiQ4RAyguCsEkFcB1LU6E2LBohSA4pwJEUASASTYPhEPUESzhQDyNoAEgYgA8m6AXBANuZ0pASB8DjEBJB9A3TvWNg8BSiCAvBWgIa/OCCACyCOI5AcokwBSEECDhkjzALU3wsoLVkg8VQfrT/StxOIyWJuRDynZh+HSW2/DL375O8UL88R8Me91BUcg4ZilT62psoDplAgQaR2gVecaYOuhSkjLq/B6bS2ogPiGwU6s7d4N0Maqaq+AI/UgO+Z3/7ydKT23/IG/W19mGfTqrN99Iq0DlFxepXl4tpQeh9U/tcG6nzTAi4UWBk85NGZlw28ytoJlb36vv006UiHsZiMBpH4lnj0Jsb86C6a3z0ByVQ3szD0KH6eZ4cvURPh1xjZ4KbfMI4D6hYgAUr+Sj1fDs281wSq2YNhafAzSWQXq2JkBH27fAE17smFHXrnHAPU5JyKANCA293mRLRSwfbl+l8mqTlZOCQeTpy2s39UZAeRd8hSgByAigAggjyAigAggjyC6bCOACCCP7uJ34uqMACKAhqSIM5Wdxtdr+a+0EkAE0BAAinz1GITUF8v1hTICSAsALasudMw0LyeACCAPAEoggAggAogAIoAIIAKIACKACCACiAB6/nA5bMoqJvFKPFRCAA3qvy1+nK/Oh4eLpMhd+wkgAmjoiniZACKACCACiAAigAggAogAIoAEAsjwfCaEJOyAZeu2q06YN+bvpQClyQ6QITED9Ku2wKLlm2Dhs0mqE+aN+eN1EEAyAPTM6m2qBOd+4XUQQDIAtHjl5i4T9CtTYHVaNiSk71e8ME/M15U7XoesAJmXa+8Zie4AhOXfZUJ0Ugb8/t334ePPbypemCfm2wU/uw5ZAdLiQzbdASgkMR0Wr3B+kmOSMuFP167Dpzf+rnhhnpgvV31Y/ngd8gF0yHsBcq7CMiB4bSrEv5AF7374N/jH1/9RvDBPzBfzxvzlXIXJ+KV6cZ9UP9h9oJVpOfDeXz9TxSsvMU/MVwn7QKGnSgkgAmhoAEWdq4LwpnJtvq2HABIfoOiL9dp9XxgBJC5ArH11xl4+pWGA9h4kgHoCtPuAwP8bX6vtd6Yaj5ZAqHknAYRi4xBZWCBk9YGY1gaNv7W5xQpReQUQlrIbQpNeHlAr0/PgvY9UAhDLE/N157rCUnZBxL5ciLQL/HgXr3hvfJsNos5VQ2STZUCtOX8Crt24oQqAME/M153rimy2CNu6sPr8zOolAPHvOMdPzEADs+aNk+oCiOUr6KTYXYAu1HI+KRYgZyuTHiICyA14XjsOMWzl5X0AuQERATSwjC31XR4pGiBRWtkAEBFAbrzdkM0pVQOQWBDhA7MfBhEBNNDEuaG3P2oASDyIbA9AtPx8Hbz6599C21/eUbwwT8xXstZ1sf5Bb1QDUHujZBCR3GhdagNItPkQQTSwXjvWvepSM0CxIlWh/uZEXi8274luPdm3J+oCSODbHEPcbPQmRbfU9+uH6gASs5URRH3sNmsNIIJIAnj4l6loFiCCSMwVF/cSFfd8UDNAokPkhRNrXK73ueLSIkBiQ+RNqzOsPIOCRysAiQ+R9veJ+Hd/DX7stQIQQeThaqvNNrRx1xJAYu8TaQ4i3CS8WO/ReMsO0IID6UtMHXaHUADRjrW7tyeO97/DPEiAIporHDM2r42QDCCj0TicO9nu1IUMnm+EBUjcG7BqX+IPeqXlBkBh9rJvf7R+VSh6qtfr/UQHSGc2++LJxoUtnhx7yfoF13ra7PeEBEiuL6UpFhzWsrivZAx1vvOwsWWe4VgY6g7deuzpWbPRU52zOIgew6ZOnTqSHccaL9Rf5QG6KwZABBFfde7/MpgwVf5uTGsDLCnd/wfm5Tje02GSADQhfsGjCFBw5cFcXEKa2hpFA6irpbV7F0T4BXhjywlBq053++LeG383vLkCgvZsKUMvxxuDAqUCyEc3e3YAOz4eMP6xJ8KaLJ9xEHXYHGJCJEY1UuLEGtsVLs9jLp0SZxwRHrb4iblkBX1Zzk3/sYFPoZe66dNHcN5KApBO5zdqzuRx7DhtbkZSJlsK8okxiMT4xIi45FfMEh/BYXlw/
/Qn1tjx8OB1h9QcgplJq7PRQ95LPykBwslW4MgxY6ay45x5WamlodYj+P/WXGnkhBNrNtMXUyZ8rIwAYhW0M+p8TSc+8kRq4VMy8NzRrQ3ijRM3YXb6goAuqy6EuTuSKtE75uE09JL3VBKAdHyvxJI3NmD06FnsGDT7hbXZz1Tk3gxrLINo1rtNYt8w7aXTEHfFM2G+RtY6sBJIIXy4E7cZyJblUowRfrjDTpeDvmTfrVkbnstBz3jvxvJeDtNJGD58yUNyJ/qPGvUkO84fOWaUYW56cu3iV7LeWWrJux1youi7MFvpXYO1xCGV8JFtQ5WhocQRXFXgWFqRJ44s+Y7g4wWOkLoidq4joo9FmK3kruFk8XfBlfm3FzFP5mzfWMc8wj2f+bxnE3kP/aSsPj2rEE6mf8A0hZ+MzfPz81vEtCRwyoSYiSFLN0yKNCRPCtPLpOAhSJ88wcAULKAMvKS+/sjg5AkhSzeiF8wTPXqDHvFeTeG9C5C6+vSsQr58+cNEJusCAmb7Px44z9/ffwH7eQEPE0lmoRfoCXqDHnFeOT0bwXvoo5MpekL0GNN4JpxYz2CJzmVJP+UfyIAiySfmAfNiDueJ05vxvFeyw3M/RHgj7hE+OZyY/ZDvsZN4TSZJKte4T+S9QE9G8x75KwWenhC5QPLj++pIPtlHeQWSJJVr3B/hvQjgvfHt4ZfiwpXYMF6+JEXI5YdiwRkIKJK8oqCgoKCgoKCgoKCgoKCgoKCg8Cj+Dyqrhq0vA9X1AAAAAElFTkSuQmCC';
var fs = require('fs')
var Readable = require('stream').Readable
const imgBuffer = Buffer.from(base64, 'base64')
var s = new Readable()
s.push(imgBuffer)
s.push(null)
s.pipe(fs.createWriteStream("test.png"));
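As a side note, on modern Node (v12+), Readable.from can replace the manual push dance; a minimal sketch of the same idea:
const fs = require('fs');
const { Readable } = require('stream');
// a Buffer passed to Readable.from is emitted as a single chunk
Readable.from(Buffer.from(base64, 'base64')).pipe(fs.createWriteStream('test.png'));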
Just another solution, using the base-64 and utf8 npm packages, in case you're using React Native, which doesn't have a global Buffer object:
var base64 = require('base-64');
var utf8 = require('utf8');

function decodeBase64(base64Str) {
    return utf8.decode(base64.decode(base64Str));
}

Piping to same Writable stream twice via different Readable stream

I am trying to concatenate a string and a Readable stream (the readable stream points to a file which may hold its data in multiple chunks, i.e. the file may be large) into one writable stream, so that the writable stream can finally be written to a destination.
I am encrypting the string and the contents of the file and then applying zlib compression on them; finally, I want to pipe them to the writable stream.
To achieve this, I can:
a) Convert the file content into a string, concatenate both strings, then encrypt, compress, and finally pipe the result into the writable stream. But this is not possible because the file may be big, so I can't convert its content to a string.
b) First encrypt and compress the string, convert the string into a stream, and pipe it into the writable stream; after that is completely done, pipe the file contents into the same writable stream.
To do so, I have written this:
var crypto = require('crypto'),
    algorithm = 'aes-256-ctr',
    password = 'd6FAjkdlfAjk'; // note: aes-256 expects a 32-byte key
var stream = require('stream');
var fs = require('fs');
var zlib = require('zlib');

// input file
var r = fs.createReadStream('file.txt');
// zip content
var zip = zlib.createGzip();
// encrypt content (iv and hexiv are assumed to be defined elsewhere)
var encrypt = crypto.createCipheriv(algorithm, password, iv);
var w = fs.createWriteStream('file.out');

// the string contents are converted into a stream so that they can be piped
var Readable = stream.Readable;
var s = new Readable();
s._read = function noop() {};
s.push(hexiv + ':');
s.push(null);
s.pipe(zlib.createGzip()).pipe(w);

// start the second pipe when the 'end' event is encountered
s.on('end', function () {
    r.pipe(zip).pipe(encrypt).pipe(w);
});
What I observe is:
Only the first pipe completes successfully, i.e. the string is written to file.out; the second pipe makes no difference to the output destination. At first, I thought the reason might be the asynchronous behavior of pipe, so I piped the file content only after the first pipe had finished. But I still didn't get the desired output.
I want to know why this is happening and the appropriate way of doing it.
Found the reason why the second pipe to the writable stream w wasn't working.
When the first .pipe(w) finishes, the writable stream w is automatically closed.
So instead of using s.pipe(zlib.createGzip()).pipe(w); I should have used:
s.pipe(zlib.createGzip()).pipe(w, {end: false});
Here {end: false} is passed to pipe() as the second argument, stating that the writable should not be closed when the piping is done.
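Putting it together, a minimal sketch of the corrected sequencing (s, r, zip, encrypt, and w as defined in the question):
s.pipe(zlib.createGzip()).pipe(w, { end: false });
s.on('end', function () {
    // the final pipe keeps the default { end: true }, so w is closed when done
    r.pipe(zip).pipe(encrypt).pipe(w);
});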
