Emoji to PNG or JPG in Node.js - how to?

For the project I'm working on I need to generate an image file from emoji (ideally Apple emoji). I thought it should be a fairly simple thing, but with each tool I use, I eventually run into a wall.
I've also considered working with an emoji set, like this one that I could query when needed. Unfortunately, the one I've linked to doesn't have Unicode 9.0 emoji such as avocado (🥑), shrimp (🦐), or harambe (🦍). Do you know of such an up-to-date set?
Code-wise, I've tried opentype.js, but it doesn't support .ttc fonts, which is the extension of the emoji font on my Mac (Apple Color Emoji.ttc). I converted the font to .ttf, but that didn't work either:
var opentype = require('opentype.js');

opentype.load('./build_scripts/apple_color_emoji.ttf', function (err, font) {
  if (err) {
    console.error('Could not load font: ' + err);
  } else {
    console.log('loaded the font', font);
    var Canvas = require('canvas'),
        Image = Canvas.Image,
        canvas = new Canvas(200, 200),
        ctx = canvas.getContext('2d');
    var path = font.getPath('🐐🦃', 0, 150, 72);
    path.draw(ctx);
    console.log('<img src="' + canvas.toDataURL() + '" />');
  }
});
The result looks like this (screenshot omitted; the output is not the emoji I expected):
I've tried fontkit, which is supposed to be able to read .ttc files, but it throws an error if I try to use the Apple Color Emoji font.
var fontkit = require('fontkit');
var font = fontkit.openSync('./build_scripts/Apple Color Emoji.ttc');
// TypeError: font.layout is not a function
If I try the same with my converted .ttf file, I end up with some incorrect SVG:
var fontkit = require('fontkit');
var font = fontkit.openSync('./build_scripts/apple_color_emoji.ttf');
var run = font.layout('🐐🦃');
var svg = run.glyphs[0].path.toSVG();
console.log(svg);
// M-1 0ZM799 800Z
My question is: am I even approaching this the right way? Converting emoji that I already have on my machine to a .png or another format seems like it should be fairly straightforward, but I just can't get it to work.
EDIT:
I've found a list of emoji names by their hex codes in this repo (big shoutouts to rodrigopolo). Now I can simply use this:
var fs = require('fs');

var emoji = "😊".codePointAt(0).toString(16); // "1f60a"
var emojiFile = fs.readFileSync('./my-emoji-folder/' + emoji + '.png');
Still, it would be great to know if somebody has a coding solution to this problem!
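One wrinkle with the codePointAt lookup above: it only handles single-code-point emoji. Flags, skin-tone variants, and ZWJ sequences span several code points, and image sets often (though not universally; this naming scheme is an assumption) join the hex values with dashes. A sketch that builds such a filename:

// Hypothetical helper: joins all code points of an emoji into a
// "1f1fa-1f1f8.png"-style filename. Check how your particular
// image set actually names its files before relying on this.
function emojiToFilename(str) {
  var codePoints = [];
  for (var i = 0; i < str.length; ) {
    var cp = str.codePointAt(i);
    codePoints.push(cp.toString(16));
    i += cp > 0xffff ? 2 : 1; // skip the low surrogate of astral characters
  }
  return codePoints.join('-') + '.png';
}

console.log(emojiToFilename('🇺🇸')); // "1f1fa-1f1f8.png"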
FURTHER EDIT:
Turns out the first solution I found only included emoji up to Unicode 8.0, not Unicode 9.0. I've found a Ruby gem, gemoji, that does emoji extraction. If you're not a Ruby developer (I'm not), you can simply run the following commands in your shell:
git clone https://github.com/github/gemoji.git
cd gemoji
bundle install
bundle exec gemoji extract some-directory/emoji
You now have Unicode 9.0 emoji in your some-directory/emoji folder!

I was able to get this to work with fontkit by selecting a font from the font collection. I haven't found a case yet where using either of the TTFs included in the "Apple Color Emoji.ttc" gives different results.
const fs = require('fs');
const fontkit = require('fontkit');
const emoji = require('node-emoji');

const font = fontkit.openSync('./Apple Color Emoji.ttc').fonts[0];

let emo = emoji.get('100');
let run = font.layout(emo);
let glyph = run.glyphs[0].getImageForSize(128);
fs.writeFileSync('100.png', glyph.data);
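Building on that, here is a sketch that exports every glyph of a longer run using the same fontkit calls; the emoji string and output filenames are just illustrative:

const fs = require('fs');
const fontkit = require('fontkit');

const font = fontkit.openSync('./Apple Color Emoji.ttc').fonts[0];
const run = font.layout('🐐🦃💯');

// One PNG per glyph in the shaped run
run.glyphs.forEach((glyph, i) => {
  const image = glyph.getImageForSize(128); // same call as above, 128px strike
  fs.writeFileSync('emoji-' + i + '.png', image.data);
});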

Related

Color-Thief Node plugin error: "Image given has not completed loading"

Working in Node/Express, I was trying to get the npm package color-thief to grab the dominant color from an image, and it failed with "image given has not completed loading".
The image was local, so it shouldn't have had this particular problem. Besides that, color-thief returns a promise, and I was using async/await, so it should have waited however long the image took to load instead of throwing an error.
Below is my SSCCE code:
const ColorThief = require('color-thief');
let colorThief = new ColorThief();

async function getDominantColor() {
  const img = 'public/img/seed/big-waves-2193828__340.webp';
  const dominantColor = await colorThief.getColor(img);
  console.log(dominantColor);
}

getDominantColor();
The issue turned out to be that the plugin apparently does not support .webp files.
It works fine with .jpg and .png, though the documentation (which isn't easy to get to) doesn't explicitly state which file types it does and doesn't support.
I've submitted a feature request on GitHub to either add support for webp or update the documentation with an explicit list of supported file types, but the author states at the very bottom of his blog post about the project:
"In the short term I'm not planning on doing any more work on the script."
Just figured I would try to save someone else using this in the future some headache and time.
As per R Greenstreet's answer above, color-thief does not support formats other than .jpg and .png.
To work around this, you need to convert the image on the fly.
The fastest and most convenient way I could find is to use the Node sharp module. The code itself is just a few lines:
const sharp = require('sharp');

// inside an async function; `input` is the image path or buffer
let image = sharp(input);
let metadata = await image.metadata();
if (metadata.format === 'webp') {
  image = await image.toFormat('png').toBuffer();
} else {
  image = await image.toBuffer();
}
I know, this is not the optimal solution, but if you want a stable fix, this should be fine.
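To package this up, here is a minimal helper built from the same sharp calls; the function name and the fallback idea are mine, not from the library docs:

const sharp = require('sharp');

// Normalize an image (path or Buffer) to a buffer color-thief can read:
// webp gets transcoded to PNG, everything else passes through unchanged.
async function toCompatibleBuffer(input) {
  const image = sharp(input);
  const metadata = await image.metadata();
  if (metadata.format === 'webp') {
    return image.toFormat('png').toBuffer();
  }
  return image.toBuffer();
}

If color-thief insists on a file path rather than a buffer, write the result to a temporary file first and pass that path instead.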

Is it possible to use custom Google web fonts with jsPDF

I'm using jsPDF (https://parall.ax/products/jspdf, https://github.com/MrRio/jsPDF) to produce dynamic PDFs in a web application.
It works well, but I'd like to figure out whether it's possible to use Google web fonts in the resulting PDF.
I've found a variety of links related to this question (including other questions on SO), but most are out of date, and nothing looks definitive, so I'm hoping someone can clarify whether/how this would work.
Here's what I've tried so far, with no success:
First, load the font, and cache it as a base64-encoded string:
var arimoBase64;

var request = new XMLHttpRequest();
request.open('GET', './fonts/Arimo-Regular.ttf');
request.responseType = 'blob';
request.onload = function() {
  var reader = new FileReader();
  reader.onloadend = function() {
    arimoBase64 = this.result.split(',')[1];
  };
  reader.readAsDataURL(this.response);
};
request.send();
Next, create the pdf doc:
doc = new jsPDF({
  orientation: "landscape",
  unit: "pt",
  format: "letter"
});

doc.addFileToVFS("Arimo-Regular.ttf", arimoBase64);
doc.addFont("Arimo-Regular.ttf", "Arimo Regular", "normal");
doc.setFont("Arimo Regular", "normal");
doc.text("Hello, World!", 100, 100);
doc.save("customFontTest");
When the PDF is saved, I can see the custom font if I view the file in my browser. However, if I view it using Adobe Reader or the Mac Preview app, the fonts are not visible.
I assume that's because the font is rendered in the browser using the browser's font cache, but the font is not actually embedded in the PDF, which is why it isn't visible in Adobe Reader.
So - is there a way to accomplish what I'm trying to do?
OK, I finally figured it out and got it to work. In case this is useful for anyone else, here is the solution I'm using...
First - you need two libraries:
jsPDF: https://github.com/MrRio/jsPDF
jsPDF-CustomFonts-support: https://github.com/sphilee/jsPDF-CustomFonts-support
Next - the second library requires that you provide it with at least one custom font in a file named default_vfs.js.
That file should look like this:
(function (jsPDFAPI) {
  "use strict";
  jsPDFAPI.addFileToVFS("[Your font's name]", "[Base64-encoded string of your font]");
})(jsPDF.API);
I'm using two custom fonts - Arimo-Regular.ttf and Arimo-Bold.ttf - both from Google Fonts. So, my default_vfs.js file looks like this:
(function (jsPDFAPI) {
  "use strict";
  jsPDFAPI.addFileToVFS("Arimo-Regular.ttf", "[Base64-encoded string of your font]");
  jsPDFAPI.addFileToVFS("Arimo-Bold.ttf", "[Base64-encoded string of your font]");
})(jsPDF.API);
There's a bunch of ways to get the Base64-encoded string for your font, but I used this: https://www.giftofspeed.com/base64-encoder/.
It lets you upload a font .ttf file, and it'll give you the Base64 string that you can paste into default_vfs.js.
You can see what the actual file looks like, with my fonts, here: https://cdn.rawgit.com/stuehler/jsPDF-CustomFonts-support/master/dist/default_vfs.js
So, once your fonts are stored in that file, your HTML should look like this:
<script src="js/jspdf.min.js"></script>
<script src="js/jspdf.customfonts.min.js"></script>
<script src="js/default_vfs.js"></script>
Finally, your JavaScript code looks something like this:
const doc = new jsPDF({
  unit: 'pt'
});

doc.addFont("Arimo-Regular.ttf", "Arimo", "normal");
doc.addFont("Arimo-Bold.ttf", "Arimo", "bold");
doc.setFont("Arimo");
doc.setFontType("normal");
doc.setFontSize(28);
doc.text("Hello, World!", 100, 100);
doc.setFontType("bold");
doc.text("Hello, BOLD World!", 100, 150);
doc.save("customFonts.pdf");
This is probably obvious to most, but in the addFont() method, the three parameters are (see the annotated call after this list):
The font's name you used in the addFileToVFS() function in the default_vfs.js file
The font's name you use in the setFont() function in your JavaScript
The font's style you use in the setFontType() function in your JavaScript
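To make the mapping concrete, here is the same call annotated with those three roles:

doc.addFont(
  "Arimo-Regular.ttf", // 1: the name given to addFileToVFS() in default_vfs.js
  "Arimo",             // 2: the name you later pass to setFont()
  "normal"             // 3: the style you later pass to setFontType()
);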
You can see this working here: https://codepen.io/stuehler/pen/pZMdKo
Hope this works as well for you as it did for me.
I recently ran into this same issue, but it looks like the jsPDF-CustomFonts-support repo was rolled into MrRio's jsPDF repository, so you no longer need it to get this working.
I happen to be using it in a React App and did the following:
npm install jspdf
Create a new file fonts/index.js (Note: You can download the Google Font as a .ttf and turn it into the Base64 encoded string using the tool in mattstuehler's answer)
export const PlexFont = "[BASE64 Encoded String here]";
Import that file where you need it:
import jsPDF from 'jspdf';
import { PlexFont } from '../fonts';

// Other Reacty things...

exportPDF = () => {
  const doc = new jsPDF();
  doc.addFileToVFS('IBMPlexSans-Bold.ttf', PlexFont);
  doc.addFont('IBMPlexSans-Bold.ttf', 'PlexBold', 'normal');
  doc.setFont('PlexBold');
  doc.text("Some Text with Google Fonts", 0, 0);
  // Save PDF...
};

// ...
Just wanted to add an updated answer for version 1.5.3:
Convert the font file to base64 (e.g. https://www.giftofspeed.com/base64-encoder/), then:
const doc = new jsPDF();

const yanone = "AAWW...DSES"; // base64 string of the font file
doc.addFileToVFS('YanoneKaffeesatz-Medium.ttf', yanone);
doc.addFont('YanoneKaffeesatz-Medium.ttf', 'YanoneKaffeesatz', 'normal');
doc.setFont('YanoneKaffeesatz');

JavaScript pdf generation library with Unicode support

I want to generate a PDF file using JavaScript on the client side. First I tried the jsPDF API, but it does not work with Unicode characters such as Chinese.
Is there any option to enhance jsPDF to support Unicode, or any other library that supports it?
The pdfmake API says it supports Unicode, but when I tried it, it didn't work out either; I checked by placing Unicode characters in their demo example.
I also tried PDFKit in Node.js, but the PDF is not created properly:
var PDFDocument = require("pdfkit");
var fs = require('fs');

var doc = new PDFDocument();
doc.pipe(fs.createWriteStream('output.pdf'));
doc.fontSize(15);
doc.text('Generate PDF! 漢字漢字漢字漢字');
doc.end();
In the PDF, it comes out as: Generate PDF! o"[Wo"[Wo"[Wo"[W
Also, can I add multiple fonts in pdfmake?
var fontDescriptors = {
  Roboto: {
    normal: 'examples/fonts/Roboto-Regular.ttf',
    bold: 'examples/fonts/Roboto-Medium.ttf',
    italics: 'examples/fonts/Roboto-Italic.ttf',
    bolditalics: 'examples/fonts/Roboto-Italic.ttf'
  }
};

var printer = new pdfMakePrinter(fontDescriptors);
I'll describe a solution using Node.js and PDFKit, since you mentioned it, but this also applies to pdfmake, which uses PDFKit internally.
Most of the time, the default fonts used in these libraries do not cover Chinese characters. You have to add and use fonts that cover the languages you need to support. Take pdfmake, for example: it uses Roboto as its default font, and Roboto indeed has no Chinese glyphs.
Using your code example, we can add support for Chinese using, for instance, the Microsoft YaHei font (msyh.ttf), with only one additional line of code:
var PDFDocument = require("pdfkit");
var fs = require('fs');

var doc = new PDFDocument();
doc.pipe(fs.createWriteStream('output.pdf'));
doc.font('fonts/msyh.ttf'); // the one additional line
doc.fontSize(15);
doc.text('Generate PDF! 漢字漢字漢字漢字');
doc.end();
The output would look like this: [screenshot omitted; the Chinese characters render correctly]
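As for the second part of the question, multiple fonts in pdfmake: as far as I can tell, you can register several families in the font descriptor and pick one per text node. A sketch (the YaHei paths are assumed to exist; reusing the same .ttf for all four styles is just a shortcut):

var fs = require('fs');
var pdfMakePrinter = require('pdfmake/src/printer');

var fontDescriptors = {
  Roboto: {
    normal: 'examples/fonts/Roboto-Regular.ttf',
    bold: 'examples/fonts/Roboto-Medium.ttf',
    italics: 'examples/fonts/Roboto-Italic.ttf',
    bolditalics: 'examples/fonts/Roboto-Italic.ttf'
  },
  YaHei: {
    normal: 'fonts/msyh.ttf',
    bold: 'fonts/msyh.ttf',
    italics: 'fonts/msyh.ttf',
    bolditalics: 'fonts/msyh.ttf'
  }
};

var printer = new pdfMakePrinter(fontDescriptors);

var docDefinition = {
  defaultStyle: { font: 'Roboto' },
  content: [
    { text: 'Latin text in Roboto' },
    { text: '漢字漢字漢字漢字', font: 'YaHei' } // switch family per node
  ]
};

var pdfDoc = printer.createPdfKitDocument(docDefinition);
pdfDoc.pipe(fs.createWriteStream('multi-font.pdf'));
pdfDoc.end();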

Getting image dimensions with Angular vs Node.js

I am confused about the best way to discover the dimensions, or the naturalWidth, of images, given the URL to the image, most often found in the src attribute of an <img> tag.
My goal is to take as input a URL to a news article and use machine learning to find the five biggest picture files (.jpg, .png, etc.) in the document. The problem with doing this on the front-end is that I don't know of a way to use AJAX to GET HTML from some random page on some random server, because of CORS-related issues.
However, using Node.js, or some other server technology, I can make requests to get the HTML from other servers (as one would expect), but I don't know a way of getting the image sizes without downloading the images first. The problem is that I want the downloaded images on the front-end, not the back-end, so downloading images with Node.js is wasted effort if it's just to check the image dimensions.
Has anyone run into this exact problem before? Not sure how to proceed. As I said, my goal is to download images on the front-end and keep the ones that are bigger than, say, 300px in width.
Both ways are OK; it depends greatly on exactly what you need to achieve in terms of performance.
To me it seems the simplest way for you would be on the client side; you only need a few lines of JavaScript to do it:
var img = new Image();
img.onload = function() {
  console.log(this.width + 'x' + this.height);
};
img.src = 'http://www.google.com/intl/en_ALL/images/logo.gif';
On the server side it is also possible, but you will need to install GraphicsMagick or ImageMagick. I'd go with GraphicsMagick, as it is faster.
Once you have installed both the program and its Node module (npm install gm), you would do something like this to get the width and height:
var gm = require('gm');

// obtain the size of an image
gm('test.jpg').size(function (err, size) {
  if (!err) {
    console.log(size.width + 'x' + size.height);
  }
});
Also, this other module looks good; I haven't used it, but it looks promising: https://github.com/netroy/image-size
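For completeness, a quick sketch of image-size as I understand its API (synchronous usage here; the file path is illustrative):

var sizeOf = require('image-size');

// Reads just enough of the file to parse the header, no full decode
var dimensions = sizeOf('test.jpg');
console.log(dimensions.width + 'x' + dimensions.height);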
To get the image URLs from the HTML string:
You can load your HTML string with a simple HTTP request; then you need a regexp capture group to extract the URLs. If you want to match globally (the g flag, i.e. more than once) while using capture groups, you need to call exec in a loop, because match ignores capture groups when matching globally.
This way you'll have all the sources in an array.
For example:
var m;
var urls = [];
var rex = /<img[^>]+src="?([^"\s]+)"?\s*\/>/g;

// this is your HTML string
var str = '<img src="http://example.com/one.jpg" />\n<img src="http://example.com/two.jpg" />';

while ((m = rex.exec(str)) !== null) {
  urls.push(m[1]);
}

console.log(urls);
// [ "http://example.com/one.jpg", "http://example.com/two.jpg" ]
Hope it helps.

Saving text from website using Firefox extension, wrong characters saved

Sorry about the vague title, but I'm a bit lost, so it's hard to be specific. I've started playing around with Firefox extensions using the Add-on SDK. What I'm trying to do is watch a page for changes, a Twitch.tv chat window in this case, and save those changes to a file.
I've gotten this to work: every time something changes on the page, it gets saved. But "unusual" characters, for example Korean, don't get saved properly. I think this has to do with the encoding of the file/string? I tried saving the same characters by copy-pasting them into Notepad; it asked me to save in Unicode, and when I did, everything worked fine. So I figured, OK, I'll change the encoding of the log file to Unicode as well before writing to it. That didn't exactly work... now all the characters were in some kind of foreign script.
The code I'm using to write to the file is this:
var {Cc, Ci, Cu} = require("chrome");
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm");
var file = FileUtils.getFile("Desk", ["mylogfile.txt"]);
var stream = FileUtils.openFileOutputStream(file, FileUtils.MODE_WRONLY | FileUtils.MODE_CREATE | FileUtils.MODE_APPEND);
stream.write(data, data.length);
stream.close();
I looked at the description of FileUtils.jsm over at MDN, and as far as I can tell, there's no way to tell it which encoding to use.
If you don't know of a fix, could you give me some good search terms? I seem to be coming up short on that front. Since I know basically nothing about the subject, I'm flailing around in the dark a bit at the moment.
EDIT:
This is what I ended up with (for now) to get this thing working:
var {Cc, Ci, Cu} = require("chrome");
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm");

var file = Cc['@mozilla.org/file/local;1'].createInstance(Ci.nsILocalFile);
file.initWithPath('C:\\temp\\temp.txt');
if (!file.exists()) {
  file.create(file.NORMAL_FILE_TYPE, 0666);
}

var charset = 'UTF-8';
var fileStream = Cc['@mozilla.org/network/file-output-stream;1']
    .createInstance(Ci.nsIFileOutputStream);
fileStream.init(file,
    FileUtils.MODE_WRONLY | FileUtils.MODE_CREATE | FileUtils.MODE_APPEND,
    0x200, false);

var converterStream = Cc['@mozilla.org/intl/converter-output-stream;1']
    .createInstance(Ci.nsIConverterOutputStream);
converterStream.init(fileStream, charset, data.length,
    Ci.nsIConverterInputStream.DEFAULT_REPLACEMENT_CHARACTER);
converterStream.writeString(data);
converterStream.close();
fileStream.close();
Dumping just the raw bytes (well, raw jschars actually) won't work. You need to first convert the data into some sensible encoding.
See e.g. the File I/O Snippets. Here are the crucial bits of creating a converter output stream wrapper:
var converter = Components.classes["@mozilla.org/intl/converter-output-stream;1"]
    .createInstance(Components.interfaces.nsIConverterOutputStream);
converter.init(foStream, "UTF-8", 0, 0);
converter.writeString(data);
converter.close(); // this closes foStream
Another way is to use OS.File + TextEncoder:
let encoder = new TextEncoder();                 // this encoder can be reused for several writes
let array = encoder.encode("This is some text"); // convert the text to an array
let promise = OS.File.writeAtomic("file.txt", array, // write the array atomically to "file.txt",
    {tmpPath: "file.txt.tmp"});                      // using "file.txt.tmp" as a temporary buffer
It might even be possible to mix both. OS.File has the benefit that it writes data and accesses files off the main thread (so it won't block the UI while the file is being written).
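Mixing them could look something like the sketch below; note that the { write: true, append: true } flags for OS.File.open are my reading of the OS.File docs, so verify them before relying on this:

const { Cu } = require("chrome");
Cu.import("resource://gre/modules/osfile.jsm"); // provides OS.File

function appendToLog(path, text) {
  let encoder = new TextEncoder();
  let array = encoder.encode(text); // UTF-8 bytes, as in the snippet above
  // Open for appending (flags assumed), write off the main thread, close.
  return OS.File.open(path, { write: true, append: true }).then(function (file) {
    return file.write(array).then(function () {
      return file.close();
    });
  });
}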
