Mustache.js external template (without jQuery) - javascript

I'm writing a component without jQuery as a dependency, and I'm hoping to find a way to load an external Mustache.js template without jQuery. Using jQuery's $.get method appears to work, but I'm trying to do this in vanilla JS.
I tried using an XMLHttpRequest and appending the template to the body, and then hydrating it with my JSON, but when my JS tries to put the JSON in the template, the template isn't there to be hydrated (cannot read property innerHTML of null). Here was my code (in CoffeeScript, test.js is the mustache template):
req2 = new XMLHttpRequest()
req2.onload = ->
  foo = document.getElementById('thatsmyjam')
  templ = document.createTextNode(this.responseText)
  foo.insertAdjacentHTML('beforeEnd', templ)
req2.open('GET', 'test.js', { async: false })
req2.responseType = 'document'
req2.send()
This adds the literal text [object Text] in the DOM instead of treating it as HTML, so it seems to be evaluating the string of HTML rather than rendering it as such.
There's probably a better way. I'm basically trying to combine my App (getting the JSON), mustache.js, and the template into one concatenated, minified file for distribution as a UI widget.
I also looked into something like Hogan.js to precompile the template, but it felt complicated, and I'm not able to use Node in this project.
Update
If I update the above CoffeeScript to this:
req2 = new XMLHttpRequest()
req2.onload = ->
  foo = document.getElementById('thatsmyjam')
  window.templ = document.createTextNode(this.responseText)
  foo.insertAdjacentHTML('beforeEnd', templ)
req2.open('GET', 'test.js', { async: false })
req2.send()
then it's treated as a string in the relevant part of my app that tries to render the template:
populateDom: =>
  self = @
  @request.addEventListener 'loadend', ->
    if @status is 200 && @response
      resp = self.responseAsJSON(@response)
      # here, window.templ is a string returned from the XMLHttpRequest above,
      # as opposed to an actual "template", so Mustache can't call render with it.
      rendered = Mustache.render(window.templ, resp)
      document.getElementById('thatsmyjam').innerHTML = rendered
      self.reformatDate(resp)
So Mustache treats the string differently than a template inside of a script tag. Is there a way to get Mustache to recognize that string as a legitimate template?

I figured out how to retrieve an external template with core JavaScript, using an implementation inspired by this SO answer. The process is, essentially: create a new div, retrieve the template with an XMLHttpRequest, and fill the created div's innerHTML with the template string. Here's the implementation in CoffeeScript:
class TemplateManager
  templateUrl: '/path/to/template.mustache'

  retrieveTemplate: ->
    req = new XMLHttpRequest()
    req.onload = ->
      div = document.createElement('div')
      div.innerHTML = this.responseText
      window.mustacheTemplate = div
    req.open('GET', @templateUrl, { async: false })
    req.send()
You can then call
rendered = Mustache.render(window.mustacheTemplate.innerHTML, resp)
document.getElementById('YOURDIV').innerHTML = rendered
to render the template with resp, your JSON data.
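For anyone not using CoffeeScript, here is a rough plain JavaScript sketch of the same approach; the element id and template path are placeholders, and resp stands in for the JSON data you already have:
// Plain JS sketch of the same approach; the id and path are placeholders.
var req = new XMLHttpRequest()
req.onload = function () {
  var div = document.createElement('div')
  div.innerHTML = this.responseText        // hold the raw template text in a detached div
  window.mustacheTemplate = div
  var rendered = Mustache.render(window.mustacheTemplate.innerHTML, resp) // resp = your JSON data
  document.getElementById('YOURDIV').innerHTML = rendered
}
req.open('GET', '/path/to/template.mustache')
req.send()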

Here's a 2018 alternative using fetch to get both the data and the template in parallel:
// Get external data with fetch
const data = fetch('data.json').then(response => response.json());
// Get external template with fetch
const template = fetch('template.mst').then(response => response.text());
// Wait for both requests to resolve
Promise.all([data, template])
  .then(response => {
    const resolvedData = response[0];
    const resolvedTemplate = response[1];
    // Cache the template for future uses
    Mustache.parse(resolvedTemplate);
    const output = Mustache.render(resolvedTemplate, resolvedData);
    // Write out the rendered template
    return document.getElementById('target').innerHTML = output;
  })
  .catch(error => console.log('Unable to get all template data: ', error.message));

Related

Import HTML to dynamically add elements using JS

I will be dynamically adding elements to my main index.html using .innerHTML += "<p>example</p>";. However, I do not want to store the large HTML-like string in this .js file, but instead import it from another file, example.html.
example.html:
<p>example</p>
(It is just a snippet of code and not a standalone html file, but I want to keep the .html extension for linting, autocompletion and general readability)
My attempts:
$(...).load('example.html'): did not work, as it replaces the contents of ... with this example instead of appending to it
import * from example.html: this and other attempts at importing the file failed because of a MIME error saying that text/html cannot be imported
I would be perfectly satisfied with a method that reads the .html file as text and returns it as a string (preferably not using AJAX or ES6, as I do not feel confident with them). I would then just use the string in .innerHTML += imported_string; and call it a day.
If I correctly understand what you want to do, you can use FileReader to import the content of a file and convert it to text, for example:
function readFile(event) {
  var file = event.target.files[0];
  var stream = new FileReader();
  stream.onload = function(e) {
    var fileContent = e.target.result;
    alert(fileContent);
  }
  stream.readAsText(file);
}
document.getElementById('myFile').addEventListener('change', readFile, false);
<input type="file" accept="html" id="myFile">
The file input is for presentation purposes; you can easily adapt this to your needs.
You should also perform the customary checks, which I omitted for brevity.
Create a fetch request to the file that you want to retrieve. This is, in a basic sense, the same way a browser would request an HTML file from the server.
The function below sends a request for whatever file you pass in, for example 'example.html'. It then checks whether the request succeeded and returns the response as a string. The string can then be appended to your innerHTML.
const getFileAsText = async file => {
  const response = await fetch(file);
  if (!response.ok) {
    throw new Error(`Fetching the HTML file went wrong - ${response.statusText}`);
  }
  return response.text();
};
You can use it like the example below.
getFileAsText('example.html').then(html => {
  document.body.innerHTML += html;
});

How to get all links from a website with puppeteer

Well, I would like a way to use Puppeteer and a for loop to get all the links on the site and add them to an array. In this case, the links I want are not links that sit in the HTML tags; they are links that appear directly in the source code, JavaScript file links, etc. I want something like this:
array = [ ]
for(L in links){
  array.push(L)
  //The code should take all the links and add these links to the array
}
But how can I get all references to JavaScript and style files, and all the URLs that are in the source code of a website?
I have only found a post and a question that show how to get the links from the HTML tags, not all the links from the source code.
Supposing you want to get all the tags on this page for example:
view-source:https://www.nike.com/
How can I get all the script tags and print them to the console? I put view-source:https://nike.com because that is where you can see the script tags. I don't know if you can do it without displaying the source code, but displaying the source and grabbing the script tags was the idea I had; I just don't know how to do it.
It is possible to get all links from a URL using only node.js, without puppeteer:
There are two main steps:
Get the source code for the URL.
Parse the source code for links.
Simple implementation in node.js:
// get-links.js

///
/// Step 1: Request the URL's HTML source.
///
const axios = require('axios');
const promise = axios.get('https://www.nike.com');

// Extract the HTML source from the response, then process it:
promise.then(function(response) {
  const htmlSource = response.data;
  getLinksFromHtml(htmlSource);
});

///
/// Step 2: Find links in the HTML source.
///

// This function takes HTML (as a string) and outputs all the links within it.
function getLinksFromHtml(htmlString) {
  // Regular expression that matches the syntax of a link (https://stackoverflow.com/a/3809435/117030):
  const LINK_REGEX = /https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)/gi;

  // Use the regular expression above to find all the links:
  const matches = htmlString.match(LINK_REGEX);

  // Output to console:
  console.log(matches);

  // Alternatively, return the array of links for further processing:
  return matches;
}
Sample usage:
$ node get-links.js
[
  'http://www.w3.org/2000/svg',
  ...
  'https://s3.nikecdn.com/unite/scripts/unite.min.js',
  'https://www.nike.com/android-icon-192x192.png',
  ...
  'https://connect.facebook.net/',
  ... 658 more items
]
Notes:
I used the axios library for simplicity and to avoid "access denied" errors from nike.com. It is possible to use any other method to get the HTML source, like:
Native node.js http/https libraries (a minimal sketch using the built-in https module follows after this list)
Puppeteer (Get complete web page source html with puppeteer - but some part always missing)
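For the first option, a rough sketch using only Node's built-in https module (no third-party dependency) might look like the following; it reuses the getLinksFromHtml function from Step 2 above, and note that, as mentioned, nike.com may return "access denied" for requests that don't look like they come from a browser:
// Minimal sketch using Node's built-in https module; reuses getLinksFromHtml from Step 2.
const https = require('https');

https.get('https://www.nike.com', (res) => {
  let htmlSource = '';
  res.on('data', (chunk) => { htmlSource += chunk; }); // accumulate the response body
  res.on('end', () => getLinksFromHtml(htmlSource));   // then extract the links
}).on('error', (err) => console.error(err));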
Although the other answers are applicable in many situations, they will not work for client-side rendered sites. For instance, if you just make an Axios request to Reddit, all you'll get is a couple of divs with some metadata. Because Puppeteer actually loads the page and runs all its JavaScript in a real browser, the website's choice of rendering approach becomes irrelevant for extracting page data.
Puppeteer has an evaluate method on the page object which allows you to run JavaScript directly on the page. Using that, you can easily extract all links as follows:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const pageUrls = await page.evaluate(() => {
    const urlArray = Array.from(document.links).map((link) => link.href);
    const uniqueUrlArray = [...new Set(urlArray)];
    return uniqueUrlArray;
  });

  console.log(pageUrls);
  await browser.close();
})();
Yes, you can get all the script tags and their links without opening the view source.
You need to add a dependency on the jsdom library in your project and then pass the HTML response to a jsdom instance, as shown below.
Here is the code:
const axios = require('axios');
const jsdom = require("jsdom");

// await needs an async context, hence the wrapper function
(async () => {
  // hit a simple HTTP request using axios or node-fetch as you wish
  const nikePageResponse = await axios.get('https://www.nike.com');

  // now parse this response into an HTML document using the jsdom library
  const dom = new jsdom.JSDOM(nikePageResponse.data);
  const nikePage = dom.window.document;

  // now get all the script tags by querying this page
  let scriptLinks = [];
  nikePage.querySelectorAll('script[src]').forEach(script => scriptLinks.push(script.src.trim()));
  console.debug('%o', scriptLinks);
})();
Here I have written a CSS selector for <script> tags that have a src attribute.
You can write the same code using Puppeteer, but it will take extra time to open the browser, load everything, and then get the page source.
You can use this to find the links and then do whatever you want with them, using Puppeteer or anything else.

How to parse HTML data from JSON and render it in UIWebView (using Swift 3.0)

I'd like to use JSON2HTML to parse the HTML data from JSON and render it in an UIWebView (using Swift 3.0). Please let me know how to achieve it. Thanks in advance!
Here's what I've tried:
let jsfile1 = try! String(contentsOfFile: Bundle.main.path(forResource: "json2html", ofType: "js")!)

func loadJS()
{
    var getData = {}
    var context = JSContext()
    var valSwiftyJson: JSON = [:]
    var test = context?.evaluateScript(jsfile1)
    let testFunction = test?.objectForKeyedSubscript("json2html")
    let urlString = //Have removed the URL string due to restrictions
    Alamofire.request(urlString, encoding: JSONEncoding.default).responseJSON
    { response in
        if let alamoJson = response.result.value
        {
            let swiftyJson = JSON(data: response.data!)
            valSwiftyJson = swiftyJson["FormInfo"]["Form"]
            print(valSwiftyJson)
        }
    }
    let result = testFunction?.call(withArguments: [getData, valSwiftyJson])
    webView.loadHTMLString((result?.toString())!, baseURL: nil)
}
Finally, I managed to solve the issue by creating a locally stored index.html file and referencing the JSON2HTML library inside it. I then added the JSON (with the HTML inside) content to it dynamically whenever I needed to convert JSON to HTML. At last, I load the resulting index.html in the UIWebView (it worked like a charm).
Are you talking about this library as JSON2HTML? If so, I don't think there is a library for translating JSON elements to HTML in Swift.
Do you plan to download the JSON elements from a back-end? In that case, as there is a Node.js wrapper for JSON2HTML, I would recommend doing the translation from JSON to HTML on that same server. You would then just download the compiled HTML data, and rendering it in the UIWebView would be as easy as this line of code (in Swift 3):
// html is the HTML data downloaded from your back-end
webView.loadHTMLString(html, baseURL: nil)

Node JS, Ideas on Template For PDF Creation

How do you create PDF documents in Node.js? Is there a good solution for managing templates for different types of PDF creation?
I am using PDFKit to create PDF documents, and this will be server side using JavaScript. I cannot use HTML to create the PDF. It will be a blob of paragraphs and sections with replaceable tags within.
Does anyone know whether Node.js has an npm package that can handle templates with paragraphs, sections, and headers?
Something like
getTemplateByID(), which returns a template that contains sections, headers, and paragraphs, and which I then use, replacing the appropriate tags within the template.
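To make the idea concrete, here is a rough, hypothetical sketch of that flow using PDFKit; getTemplateByID(), the tag names, and the data are all made up for illustration and are not a real package's API:
// Hypothetical sketch: getTemplateByID() and the {{tags}} are illustrative only.
const PDFDocument = require('pdfkit');
const fs = require('fs');

// Imagine the template store returns plain text sections with {{tags}} in them.
function getTemplateByID(id) {
  return {
    header: 'Invoice {{invoiceNumber}}',
    paragraphs: ['Dear {{customerName}},', 'Your total is {{total}}.'],
  };
}

// Naive tag replacement; a real implementation could use Mustache instead.
const fill = (text, data) => text.replace(/{{(\w+)}}/g, (_, key) => data[key] || '');

const data = { invoiceNumber: '42', customerName: 'Jane', total: '$10' };
const template = getTemplateByID('invoice');

const doc = new PDFDocument();
doc.pipe(fs.createWriteStream('out.pdf'));
doc.fontSize(18).text(fill(template.header, data));
template.paragraphs.forEach(p => doc.fontSize(12).text(fill(p, data)));
doc.end();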
In my case, I have to get my HTML template from my database (PostgreSQL), where it is stored as a stream. I query the db to get my template and I create a tmp file.
Inside my template I have AngularJS tags, so I compile this template with data thanks to the 'ng-node-compile' module:
var ngCompile = require('ng-node-compile');
var ngEnvironment = new ngCompile();
var templateHTML = getTemplateById(id);
templateHTML = ngEnvironment.$compile(templateHTML)(datas);
Now I have my compiled template (where you can set your paragraphs, etc.) and I convert it into a PDF thanks to a PhantomJS module, 'phantom-html-to-pdf':
var phantomHTML2PDF = require('phantom-html-to-pdf')(options);
phantomHTML2PDF(convertOptions, function (error, pdf) {
  if (error) console.log(error);
  // Here you have 'pdf.stream.path' which is your tmp PDF file
  callback(pdf);
});
Now that you have your compiled and converted template (PDF), you can do whatever you want with it! :)
Useful links:
https://github.com/MoLow/ng-node-compile
https://github.com/pofider/phantom-html-to-pdf
I hope this helps!

Converting multiple files into HTML (from Markdown)?

I'm currently working on a small project in which I want to convert a couple (or more) Markdown files into HTML and then append them to the main document. I want all this to take place client-side. I have chosen a couple of plugins: Showdown (Markdown-to-HTML converter), jQuery (overall DOM manipulation), and Underscore (for simple templating if necessary). I'm stuck at the point where I can't seem to convert a file into HTML (into a string containing the HTML).
Converting Markdown into HTML is simple enough:
var converter = new Showdown.converter();
converter.makeHtml('#hello markdown!');
I'm not sure how to fetch (download) a file into the code (string?).
How do I fetch a file from a URL (that URL is a Markdown file), pass it through Showdown and then get a HTML string? I'm only using JavaScript by the way.
You can get an external file and parse it to a string with ajax. The jQuery way is cleaner, but a vanilla JS version might look something like this:
var mdFile = new XMLHttpRequest();
mdFile.open("GET", "http://mypath/myFile.md", true);
mdFile.onreadystatechange = function(){
  // Makes sure the document exists and is ready to parse.
  if (mdFile.readyState === 4 && mdFile.status === 200)
  {
    var mdText = mdFile.responseText;
    var converter = new showdown.Converter();
    converter.makeHtml(mdText);
    //Do whatever you want to do with the HTML text
  }
}
mdFile.send();
jQuery Method:
$.ajax({
  url: "info.md",
  context: document.body,
  success: function(mdText){
    // where mdText is the text returned by the ajax call
    var converter = new showdown.Converter();
    var htmlText = converter.makeHtml(mdText);
    $(".outputDiv").append(htmlText); // append this to a div with class outputDiv
  }
});
Note: This assumes the files you want to parse are on your own server. If the files are on the client (i.e. user files) you'll need to take a different approach (a rough sketch follows after the update below).
Update
The above methods will work if the files you want are on the same server as you. If they are NOT then you will have to look into CORS if you control the remote server, and a server side solution if you do not. This question provides some relevant background on cross-domain requests.
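For the client-side (user file) case mentioned in the note above, a rough sketch might combine a file input, FileReader, and Showdown; the element ids here are placeholders:
// Rough sketch for user-supplied Markdown files; element ids are placeholders.
document.getElementById('mdInput').addEventListener('change', function (event) {
  var file = event.target.files[0];
  var reader = new FileReader();
  reader.onload = function (e) {
    var converter = new showdown.Converter();
    var htmlText = converter.makeHtml(e.target.result); // Markdown text -> HTML string
    document.getElementById('outputDiv').innerHTML += htmlText;
  };
  reader.readAsText(file);
});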
Once you have the HTML string, you can append it to whatever DOM element you wish by simply calling:
var myElement = document.getElementById('myElement');
myElement.innerHTML += markdownHTML;
...where markdownHTML is the HTML string you got back from makeHtml.
