Issue while printing PDF files from Blob in Angular - javascript

I have an API that returns data in the format
{ fileName: string, blob: Blob }[]
I want to print all these files, so I am using
_files.forEach((_fileInfo) => {
  const blobUrl = URL.createObjectURL(_fileInfo.blob);
  const oWindow = window.open(blobUrl, "print");
  oWindow.print();
  oWindow.close();
});
This opens multiple print windows, but the preview shows blank documents.
However, when I download all these files as a zip, it downloads the correct PDF files.
// add files to zip (zip is presumably a JSZip instance; saveAs from file-saver)
files.forEach((_fileInfo) => {
  zip.file(_fileInfo.fileName, _fileInfo.blob);
});
// download and save
return zip.generateAsync({ type: "blob" }).then((content) => {
  if (content) {
    return saveAs(content, name);
  }
});
What could be the issue?
Is there any way to print all documents in sequence without opening multiple windows?

The PDF file takes time to load, which is why it was showing a blank document, so I used print-js to solve this issue.
How to import:
import * as printJS from "print-js";
How to use:
const blobUrl = URL.createObjectURL(_fileInfo.blob);
printJS(blobUrl);
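To cover all the files from the question, a minimal sketch under the same assumptions (the _files array of { fileName, blob } objects) could be:

// print each returned file in sequence; printJS loads the PDF
// before opening the print dialog, which avoids the blank preview
_files.forEach((_fileInfo) => {
  const blobUrl = URL.createObjectURL(_fileInfo.blob);
  printJS({ printable: blobUrl, type: "pdf" });
});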

In case you don't want to use printJS:
I was facing the same issue, but in my case I only had to deal with one file at a time. I solved it by putting the print call in a timeout, so in your case this would be
setTimeout(function () {
  oWindow.print();
}, 1000);
If this works, great; if not, you might also need to set the source of the window to the blobUrl you created. This should be done after you come out of the loop, as in the sketch below.
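For reference, a minimal sketch of the full timeout approach for one file (assuming blobUrl was created with URL.createObjectURL as in the question):

const oWindow = window.open(blobUrl, "print");
setTimeout(function () {
  oWindow.print();
  oWindow.close();
}, 1000); // give the PDF viewer time to render before calling print()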

Related

How to make a PDF from MongoDB Binary with the File API?

I would like some help making a File object from a PDF stored in MongoDB.
I am not using GridFS; the file is stored like this:
(screenshot: file structure in MongoDB)
I am using this function to make the File:
const handlegetfile = () => {
  API.getFile(2).then((result) => {
    console.log(result.data);
    const file = new File(Uint8Array.from(result.data.File.data), result.data.File.name);
    console.log(file);
    API.writeFile({
      CodeTiers: "2525",
      Type: { value: "non", label: "testfile" },
      Format: "pdf",
      File: file,
    });
  });
};
The .pdf file created by the writeFile() function can't be opened, and when opened with an editor it looks like this:
(screenshot: PDF data after retrieval)
Important: I do not want to write the file to the disk, writeFile() is just here to be sure that the pdf can be opened.
The thing is, the data goes from this in the original file:
(screenshot: original PDF data)
to this in MongoDB:
(screenshot: data in MongoDB)
and ends up as what is shown in the second screenshot. I am pretty sure the problem comes from the cast to a Uint8Array, but I can't find what else to use there for it to work. I tried spreading the response with {...result.data.File} and using an ArrayBuffer, and also simply wrapping it in an array with new File([result.data.File.data], result.data.File.name), but I couldn't get it to work either way.
Could you help me, please?
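One hedged observation: the File constructor's first argument must be an array (or other iterable) of BlobParts, not a bare Uint8Array, so a sketch of the cast under that assumption would be:

// File(fileBits, fileName, options) expects fileBits to be an array
const bytes = Uint8Array.from(result.data.File.data);
const file = new File([bytes], result.data.File.name, { type: "application/pdf" });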

Get file extension using only the file name in javascript

I am creating a discord bot (irrelevant) that sends images into the chat. The user can type out the name of the image without needing to type the file extension. The problem is that the bot doesn't know what the file extension is, so it will crash if the picture is a .jpg and the program was expecting a .png. Is there a way to make the program not require a file extension to open the file?
let image = imageName;
message.channel.send({ files: [`media/stickers/${imageName}.png`] });
Unfortunately, the extension of the filename is required. You know file.mp4 and file.mp3 are entirely different.
However, you can use a try/catch and a for loop to find the correct file!
I would suggest:
// inside an async message handler
let image = imageName;
let extensions = [".png", ".jpg", ".gif"]; // all the extensions you can think of
for (const extension of extensions) {
  try {
    // send() returns a promise, so await it for the catch to work
    await message.channel.send({ files: [`media/stickers/${imageName}${extension}`] }); // successfully get file and send
    break;
  } catch (error) {
    // do nothing; go back to the loop and try the next extension
  }
}
I haven't tried that before, and I am a Python programmer. But I hope you get the idea.
Using fs, specifically the promise-based version of fs, makes this quite simple:
import { readdir } from 'fs/promises';

const getFullname = async (path, target) =>
  (await readdir(path))
    .find(file =>
      file === target || file.split('.').slice(0, -1).join('.') === target
    );

try {
  const actualName = await getFullname('media/stickers', imageName);
  if (!actualName) {
    throw `File ${imageName} not found`;
  }
  message.channel.send({ files: [`media/stickers/${actualName}`] });
} catch (error) {
  // handle your errors here
}
You can pass in the name with or without the extension and it will be found. Note that this is case sensitive, so XYZ won't match xyz.jpg; that's easily changed if you need case insensitivity, as in the sketch below.
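For reference, a hedged sketch of a case-insensitive variant (same readdir approach, just lowercasing both sides before comparing):

const getFullnameCI = async (path, target) =>
  (await readdir(path)).find(file => {
    const base = file.split('.').slice(0, -1).join('.');
    const t = target.toLowerCase();
    return file.toLowerCase() === t || base.toLowerCase() === t;
  });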
There are only a few common image extensions, like jpg, png, gif, and jpeg. Maybe try to fetch the file with the best-guess extension and, if it throws an exception, try the next format.

Download a 'data:' image/file using puppeteer and node.js

I'm trying to download an image using node.js and puppeteer, but I'm running into some issues. I'm using a web scraper to gather the links of the images from the site and then using the https/http package to download each image.
This works for the images with http and https sources, but some images have links that look like this (the whole link is very long, so I cut the rest):
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP........
I'm not sure how to handle these links or how to download the image. Any help would be appreciated.
You need to decode the base64 payload of the URL using a node.js Buffer.
// the "data:image/png;base64," content-type prefix has to be removed first
const data = 'iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP';
const buffer = Buffer.from(data, 'base64'); // decode base64 into raw image bytes
// buffer now holds the PNG itself; no further fetch is needed
require('fs').writeFileSync('image.png', buffer);
These are base64-encoded images (mostly used for icons and small images).
You can simply ignore them:
if (url.startsWith('data:')) {
  // base64 image
} else {
  // an image url
}
If you really want to mess with base64, I can give you a workaround.
import fs from 'fs';
import { parseDataURI } from 'dauria';
import mimeTypes from 'mime-types';

const fileContent = parseDataURI(file);
// you probably need an extension for that image
let ext = mimeTypes.extension(fileContent.MIME) || 'bin';
fs.writeFile('a random file' + '.' + ext, fileContent.buffer, function (err) {
  if (err) console.log(err);
});
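If you'd rather skip the extra dependency, a minimal sketch that parses a base64 data URL by hand (assuming fs is imported as above) could be:

const match = url.match(/^data:([^;]+);base64,(.+)$/);
if (match) {
  const [, mime, b64] = match;
  // naive extension from the MIME type, e.g. image/png -> png
  fs.writeFileSync(`image.${mime.split('/')[1]}`, Buffer.from(b64, 'base64'));
}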

What is the best way to keep a file open to read/write?

I have a local JSON file which I intend to read/write from a NodeJS electron app. I am not sure, but I believe that instead of using readFile() and writeFile(), I should get a FileHandle to avoid multiple open and close actions.
So I've tried to grab a FileHandle from fs.promises.open(), but the problem seems to be that I am unable to get a FileHandle for an existing file without truncating it to 0 bytes.
const { resolve } = require('path');
const fsPromises = require('fs').promises;

function init() {
  // Save table name
  this.path = resolve(__dirname, '..', 'data', `test.json`);
  // Create/Open the json file
  fsPromises
    .open(this.path, 'wx+')
    .then(fileHandle => {
      // Grab the file handle if the file doesn't exist yet,
      // because of the 'wx+' flag
      this.fh = fileHandle;
    })
    .catch(err => {
      if (err.code === 'EEXIST') {
        // File exists
      }
    });
}
Am I doing something wrong? Are there better ways to do it?
Links:
https://nodejs.org/api/fs.html#fs_fspromises_open_path_flags_mode
https://nodejs.org/api/fs.html#fs_file_system_flags
Because JSON is a text format that has to be read or written all at once and can't easily be modified or appended to in place, you're going to have to read the whole file or write the whole file at once.
So your simplest option is to just use fs.promises.readFile() and fs.promises.writeFile() and let the library open the file, read/write it, and close it, as in the sketch below. Opening and closing a file in a modern OS takes advantage of disk caching, so if you're reopening a file you opened not long ago, it's not going to be a slow operation. Further, since nodejs performs these operations in secondary threads in libuv, it doesn't block the main thread of nodejs either, so it's generally not a performance issue for your server.
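A minimal sketch of that simple approach (inside an async function; the updatedAt field is just a hypothetical change for illustration):

const data = JSON.parse(await fsPromises.readFile(this.path, 'utf8'));
data.updatedAt = Date.now(); // hypothetical mutation
await fsPromises.writeFile(this.path, JSON.stringify(data, null, 2));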
If you really wanted to open the file once and hold it open, you would open it for reading and writing using the r+ flag as in:
const fileHandle = await fsPromises.open(this.path, 'r+');
Reading the whole file would be simple as the new fileHandle object has a .readFile() method.
const text = await fileHandle.readFile({ encoding: 'utf8' });
For writing the whole file from an open filehandle, you would have to truncate the file, then write your bytes, then flush the write buffer to ensure the last bit of the data gets to the disk and isn't sitting in a buffer.
await fileHandle.truncate(0); // clear previous contents
let {bytesWritten} = await fileHandle.write(mybuffer, 0, someLength, 0); // write new data
assert(bytesWritten === someLength);
await fileHandle.sync(); // flush buffering to disk
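Putting those pieces together, one read-modify-write cycle over a held-open handle might look like this sketch (inside an async function; mutate is a hypothetical callback):

const text = await fileHandle.readFile({ encoding: 'utf8' });
const data = JSON.parse(text || '{}'); // tolerate an empty file
mutate(data); // hypothetical callback that changes the object
const out = Buffer.from(JSON.stringify(data));
await fileHandle.truncate(0); // drop the previous contents
await fileHandle.write(out, 0, out.length, 0); // write from position 0
await fileHandle.sync(); // flush buffering to disk
// the handle stays open for the next cycle; call fileHandle.close() when done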

extension crash on exporting stringified/encodeURIComponent data to file

This is about exporting extension data from the options page.
I have an array of objects with stored page screenshots encoded in base64, plus some other minor object properties. I'm trying to export them with this code:
exp.onclick = expData;

function expData() {
  chrome.storage.local.get('extData', function (result) {
    var dataToSave = result.extData;
    var strSt = JSON.stringify(dataToSave);
    downloadFn('extData.txt', strSt);
  });
}

function downloadFn(filename, text) {
  var fLink = document.createElement('a');
  fLink.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
  fLink.setAttribute('download', filename);
  fLink.click();
}
On button click, it gets the data from storage, stringifies it, creates a fake link, sets the attributes, and clicks it.
The code works fine if the resulting file is under ~1.7 MB, but anything above that makes the options page crash and the extension gets disabled.
I can console.log(strSt) after JSON.stringify and everything works fine no matter the size, as long as I don't pass it to the download function.
Is there anything I can do to fix the code and avoid the crash? Or is there any size limitation when using these methods?
I solved this, as Xan suggested, by switching to chrome.downloads (it needs the extra "downloads" permission in the manifest, but works fine).
What I did was just replace the code in the downloadFn function; it's cleaner that way.
function downloadFn(filename, text) {
  var eucTxt = encodeURIComponent(text);
  chrome.downloads.download({ url: 'data:text/plain;charset=utf-8,' + eucTxt, saveAs: false, filename: filename });
}
Note that using URL.createObjectURL(new Blob([text])) also produced the same extension crash.
EDIT:
As #dandavis pointed out (and RobW confirmed), converting to a Blob also works
(I had messed-up code that was producing the crash).
This is a better way of saving data locally, because on the browser's internal downloads page dataURL downloads can clutter the page, and if the file is too big (a very long URL) it crashes the browser. dataURL downloads are presented as the actual URL (which is the raw saved data), while blob downloads are listed only by an id.
function downloadFn(filename, text) {
  var vLink = document.createElement('a'),
      vBlob = new Blob([text], { type: 'application/octet-stream' }),
      vUrl = window.URL.createObjectURL(vBlob);
  vLink.setAttribute('href', vUrl);
  vLink.setAttribute('download', filename);
  vLink.click();
}
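A small follow-up: object URLs keep the Blob alive until revoked, so a cleanup sketch after the click would be:

setTimeout(function () {
  window.URL.revokeObjectURL(vUrl); // free the blob once the download has started
}, 0);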
