I have a JSON file with info from multiple individuals. The JS file (imported into my HTML file) first reads the JSON file and stores the info in an array of people objects. I want to iterate through this array, updating the HTML for one person at a time (essentially creating a unique form for each person). At the end of each iteration, I want to generate a PDF of the current HTML using wkhtmltopdf. Then the info in the HTML will be cleared and replaced with info from the next person, at which point a new PDF will be generated.
Please point me in the right direction.
Actually, wkhtmltopdf is able to run JavaScript pretty well (though not always perfectly). The key might be to use something like --javascript-delay 3000, which makes wkhtmltopdf wait 3000 ms (3 seconds) for JavaScript execution to finish. For example, I have this file:
<html>
<body>
  <h1>Users</h1>
  <div id="users"></div>
  <script>
    function pushMessage(msg) {
      const x = document.createElement("div");
      x.innerText = msg;
      document.getElementById("users").appendChild(x);
    }

    function handleUsers(users) {
      users.forEach(user => pushMessage(user.name));
    }

    try {
      const xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.status >= 200 && xhr.status < 300) {
          const users = JSON.parse(xhr.responseText);
          handleUsers(users);
        }
      };
      xhr.open('GET', 'https://jsonplaceholder.typicode.com/users');
      xhr.send();
    } catch (e) {
      document.write(e.message);
    }
  </script>
</body>
</html>
Then, when I convert it with wkhtmltopdf --javascript-delay 3000 index.html index.pdf, I get the expected result in the PDF; it looks the same as when the page runs normally in a browser.
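For the original question (one PDF per person), one possible approach is to render a single person per page load, selected by a query parameter, and call wkhtmltopdf once per person from an outer loop, e.g. wkhtmltopdf --javascript-delay 3000 "http://localhost:8000/index.html?person=0" person0.pdf (serving the page over a local HTTP server, since query strings appended to plain file paths may not be passed through). A minimal sketch; people.json and renderForm() are assumed names, not from the original post:
// Read ?person=N from the URL and render only that one person.
var match = window.location.search.match(/person=(\d+)/);
var index = match ? parseInt(match[1], 10) : 0;

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  if (xhr.readyState !== 4) return;
  if (xhr.status === 200 || xhr.status === 0) { // status 0 covers file:// URLs
    var people = JSON.parse(xhr.responseText);
    renderForm(people[index]); // assumed helper: fills the form markup for one person
  }
};
xhr.open('GET', 'people.json');
xhr.send();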
Currently I am building a local (non-internet) application that launches a Chromium browser in Visual Basic .NET.
It uses CefSharp to achieve this.
When the HTML launches I need to read multiple files in order to plot graphs using Plotly.
The problem: I can't read binary files.
I have succeeded in reading ASCII and other non-binary files by disabling security on CefSharp. I tried using the FolderSchemeHandlerFactory class, but that didn't work.
To read ASCII files I have resorted to using XMLHttpRequest, which works for ASCII but not for binary. I have tried changing the response type to arraybuffer, but that doesn't work either.
function readTextFile(file) {
    var array = [];
    var rawFile = new XMLHttpRequest(); // renamed so it no longer shadows the file parameter
    rawFile.open("GET", file, false);   // synchronous request
    rawFile.onreadystatechange = function () {
        if (rawFile.readyState === 4) {
            if (rawFile.status === 200 || rawFile.status === 0) {
                var text = rawFile.responseText;
                array = text.split("\n");
            }
        }
    };
    rawFile.send(null);
    return array;
}
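One likely reason the arraybuffer attempt fails: responseType can only be set on an asynchronous request (setting it on a synchronous XHR throws an error). A sketch of the asynchronous variant; readBinaryFile and the handleBytes callback are assumed names:
function readBinaryFile(file, handleBytes) {
    var rawFile = new XMLHttpRequest();
    rawFile.open("GET", file, true); // must be asynchronous for responseType to work
    rawFile.responseType = "arraybuffer";
    rawFile.onreadystatechange = function () {
        if (rawFile.readyState === 4 && (rawFile.status === 200 || rawFile.status === 0)) {
            var bytes = new Uint8Array(rawFile.response); // raw bytes of the file
            handleBytes(bytes);
        }
    };
    rawFile.send(null);
}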
I am making my first blog, and I want to be able to write the posts as text files in a word processor like Microsoft Word so that I can spellcheck and check for mistakes, but then include the contents of those files in my website with custom formatting using CSS (e.g. add a style attribute to the HTML like this: style='font-family: sans-serif;').
I have already tried searching around the web, and I found blog.teamtreehouse.com, but it didn't suit my needs because it requires the user to click a button to include the file. I also came up with some test code that relies on the FileReader API, but since I don't understand the bits parameter of the File object (I left it blank), the test page just shows undefined. Here's the code:
<!DOCTYPE html>
<html>
<head>
  <title>Test Webpage</title>
</head>
<body>
  <p id='output'></p><!--p tag will have styles applied to it-->
</body>
<script>
  var reader = new FileReader();
  reader.onload = function(e) {
    var text = reader.result;
  }
  //What does the first parameter do? What am I supposed to put here?
  //                  |
  //                  v
  var file = new File([ ], 'test.txt');
  var txt = reader.readAsText(file);
  var out = document.getElementById('output');
  out.innerHTML = txt + '';
</script>
</html>
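(For context: the bits parameter is the file's content, given as an array of strings, ArrayBuffers, or Blobs; a File constructed in the page only wraps data you already have in memory, so it cannot read test.txt from disk. A minimal example of the constructor:)
// "bits" is the content; this wraps an in-memory string, it does not read from disk.
var file = new File(['Hello, world!'], 'test.txt', { type: 'text/plain' });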
Just don't read files with JS in the web browser.
You can create an API with Node.js and then make an HTTP request to get this data.
Once you've created the server, do something like this:
const fs = require('fs');

var content;

// First I want to read the file
fs.readFile('./Index.html', function read(err, data) {
    if (err) {
        throw err;
    }
    content = data;
    // Invoke the next step here however you like
    console.log(content); // Put all of the code here (not the best solution)
    processFile();        // Or put the next step in a function and invoke it
});

function processFile() {
    console.log(content);
}
If you want to know how to build an API, here is a guide: https://dev.to/tailomateus/creating-an-express-api--7hc
Hope it helps.
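To make the suggestion concrete, here is a minimal sketch of such an Express endpoint; the route, port, and file name are assumptions for illustration:
const express = require('express');
const fs = require('fs');

const app = express();

// Serve the contents of a text file over HTTP.
app.get('/api/post', function (req, res) {
    fs.readFile('./posts/first-post.txt', 'utf8', function (err, text) {
        if (err) return res.status(500).send(err.message);
        res.type('text/plain').send(text);
    });
});

app.listen(3000);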
In case you have *.txt files on your server, you can use JavaScript to display their content in the browser like so:
fetch('/path/to/file.txt')
.then(r => r.text())
.then(txt => document.querySelector('<a selector>').innerHTML = txt);
That approach has these drawbacks:
The URLs/filenames need to be known to the JavaScript.
Plain txt files do not contain any formatting, so the text block won't have a headline or the like.
But all in all: without any server-side processing this is a repetitive task, since client-side JS cannot even trigger a directory listing to discover the files that should be loaded, so for each new file you create, you have to add an entry in the JavaScript as well. This is a very common problem and is already solved by the various content management systems out there (Wordpress, Joomla, Drupal,…), so I would recommend just using one of those. Btw. Grav is a purely file-based CMS that works without a backend interface as well, so it is a very simple solution for your problem.
In the end, I used an HTTP request to retrieve the text from a file.
function includeText() {
    var xmlhttp, allElements, element, file;
    allElements = document.getElementsByTagName("*");
    for (let i = 0; i < allElements.length; i++) {
        element = allElements[i];
        file = element.getAttribute("insert-text-from-file");
        if (file) {
            xmlhttp = new XMLHttpRequest();
            xmlhttp.onreadystatechange = function() {
                if (this.readyState == 4) {
                    if (this.status == 200) {element.innerText = this.responseText;}
                    if (this.status == 404) {element.innerText = "Contents not found";}
                    element.removeAttribute("insert-text-from-file");
                    includeText(); // process the next element with the attribute
                }
            };
            xmlhttp.open("GET", file, true);
            xmlhttp.send();
            return; // exit; the recursive call above handles the rest
        }
    }
}
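Usage looks something like this; the file name is an assumption:
<!-- Each element naming a file gets its contents injected, one at a time. -->
<p insert-text-from-file="posts/first-post.txt" style="font-family: sans-serif;"></p>
<script>includeText();</script>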
I'm trying to make a website that shows the current temperature at home and lets me set the temperature I want. To read the temperature I use a Raspberry Pi Zero and Python code that saves the temperature to a .txt file every 5 minutes. The problem is that I need my website to read the current temperature from that file, say every 5 minutes. I can use:
<head>
<meta http-equiv="Refresh" content="s" />
</head>
But that doesn't look like a good choice, so I thought I could use JavaScript to read the data from the file. Unfortunately this function works only once: no matter what I change in the .txt file, after refreshing the page the output is still the same (it looks like the previous data is saved somewhere).
function readTextFile()
{
    var rawFile = new XMLHttpRequest();
    rawFile.open("GET", 'text.txt', true);
    rawFile.onreadystatechange = function ()
    {
        if (rawFile.readyState === 4)
        {
            if (rawFile.status === 200 || rawFile.status == 0)
            {
                var allText = rawFile.responseText;
                alert(allText);
            }
        }
    };
    rawFile.send(null);
}
On the left side of the picture is the data from the txt file (read using PHP), and in the alert is the data fetched using JavaScript after pressing a submit button. These should be the same.
So the question is: can I read from a .txt file when it changes dynamically? And if so, how can I do it, or what function should I use? I don't know JavaScript very well.
I will be very grateful for help.
Using XHR to fetch records at a fixed interval is not a good solution. I would recommend using the JavaScript EventSource API instead. You can use it to receive a text/event-stream from the server at a defined interval. You can learn more about it here -
https://developer.mozilla.org/en-US/docs/Web/API/EventSource
For your application, you can do this -
JavaScript -
var evtSource = new EventSource('PATH_TO_PHP_FILE');
evtSource.onmessage = function(e) {
    // e.data contains the fetched data
}
PHP Code -
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
$myfile = fopen("FILE_NAME.txt", "r");
echo "data:". fread($myfile,filesize("FILE_NAME.txt"))."\n\n";
fclose($myfile);
flush();
?>
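Note that this PHP script ends after sending one message, so the browser's EventSource will reconnect automatically every few seconds by default. To match a 5-minute update cycle, the server can suggest a reconnection delay with the retry field; a sketch under that assumption:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

echo "retry: 300000\n"; // ask the client to wait 5 minutes before reconnecting
echo "data: " . file_get_contents("FILE_NAME.txt") . "\n\n";
flush();
?>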
You need to re-read your .txt file at some interval to pick up changes. First set the attribute id="textarea" on the element that should display the content.
// uses jQuery
var interval = setInterval(function(){ myFunction() }, 5*60*1000); // 5*60*1000 ms = 5 minutes
function myFunction(){
    $.get('server_address', {param1: 1, param2: 2}, function(response){
        // if it is a textarea or input
        $('#textarea').val(response);
        // if it is a block element (div, span or something else)
        //$('#textarea').text(response);
    });
}
server_address is the page where you read the file content and print it. If it is PHP, then a file "write.php" with code like:
<?php
echo(file_get_contents('some.txt'));
{param1:1, param2:2} is an object with the parameters you send to "server_address", like {action:'read', file:'some.txt'} or something similar.
response is the text that the page at "server_address" prints.
I have a large file in JSON format. I need to use this information when the page is opened in a browser. The only solution I have found is to place the data into a variable in a ".js" file, but that turns out to be 5000 lines. Maybe there is another option to read the data? I open the page from a folder (not from a server).
The JSON.parse() method parses a JSON string, constructing the
JavaScript value or object described by the string.
-MDN
If you need those objects to render your webpage / web app, you're going to have to get them to the browser. Break up the JSON. Don't forget to minify.
I think the desired architecture would be to use XHR (or the filesystem, if that's really your use case and it is local-only) to grab whatever JSON you need on demand.
If you want to have the data directly, you have to put it in a .js file like you did. You could write a build rule to create this .js file from the .json file.
Another solution is using Ajax, which allows the JS to fetch the content of the .json file and store it in a variable.
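A minimal sketch of the build-rule idea: wrap the JSON in a global variable so the page can load it with a plain <script> tag, which also works when the page is opened straight from a folder. The file and variable names are assumptions:
// build.js - run with Node to regenerate data.js from data.json
const fs = require('fs');

const json = fs.readFileSync('data.json', 'utf8');
fs.writeFileSync('data.js', 'var DATA = ' + json + ';\n');
// In the HTML: <script src="data.js"></script>, then read the DATA variable.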
You can use a <link> element with its rel attribute set to "import", then pass link.import.body.textContent to JSON.parse() to create a JavaScript object from the JSON. (Note that HTML Imports have since been removed from browsers, so this approach only works in older Chrome versions or with a polyfill.)
<script>
  let json;
  function handleJSON(link) {
    json = JSON.parse(link.import.body.textContent);
  }
  window.onload = function() {
    console.log(json);
  }
</script>
<link id="json" rel="import" type="application/json" href="https://gist.githubusercontent.com/guest271314/ffac94353ab16f42160e/raw/aaee70a3e351f6c7bc00178eabb5970a02df87e9/states.json" onload="handleJSON(this)"/>
// Basic method!! Think about handling the exceptions...
var xhr;
if (window.XMLHttpRequest) { // standard object
    xhr = new XMLHttpRequest(); // Firefox, Safari, ...
} else if (window.ActiveXObject) { // Internet Explorer
    xhr = new ActiveXObject("Microsoft.XMLHTTP");
}

// Ajax request
xhr.onreadystatechange = function () {
    // if OK (200)
    if (this.readyState == 4 && this.status == 200) {
        var data = this.response; // already parsed, because responseType is "json"
        // your data
        console.log(data[0].title);
    }
};
xhr.open("GET", "resc/placeholder.json", true);
xhr.responseType = "json";
xhr.send(/* params if needed */);
I have a PHP script that deletes an old instance of a CSV file and uploads a new one, and a JavaScript function to read the file. It was working fine until I added the PHP to delete the old file; now for some reason the JavaScript function always fetches the same file even when it has changed.
I've gone in and checked the data.csv file and it is the new file, but the function still fetches the old one. And if I delete the file manually, the function still mysteriously accesses data.csv... even though it's deleted.
This is the PHP:
<?php
if(file_exists('upload/data.csv'))
{
unlink('upload/data.csv'); // deletes file
}
$tmp_file_name = $_FILES['Filedata']['tmp_name'];
$ok = move_uploaded_file($tmp_file_name, 'upload/data.csv');
?>
This is the JavaScript. Note: the variable "allText" always contains the old CSV file's contents, even if data.csv has changed or been deleted.
function LoadCSV() {
    var txtFile = new XMLHttpRequest();
    txtFile.open("GET", "http://****.com/mailer/upload/data.csv", true);
    txtFile.onreadystatechange = function() {
        if (txtFile.readyState === 4) { // Makes sure the document is ready to parse.
            if (txtFile.status === 200) { // Makes sure it's found the file.
                var allText = txtFile.responseText;
                ProcessCSV(allText);
            }
        }
    };
    txtFile.send(null);
}
I'm not sure why this is happening or how to fix it.
It's probably browser caching.
I like to use a random value in the URL to trick the browser into thinking it's a different page. Try this:
function LoadCSV() {
    var txtFile = new XMLHttpRequest();
    txtFile.open("GET", "http://****.com/mailer/upload/data.csv?nocache=" + (Math.random() + '').replace('.', ''), true);
    txtFile.onreadystatechange = function () {
        if (txtFile.readyState === 4) { // Makes sure the document is ready to parse.
            if (txtFile.status === 200) { // Makes sure it's found the file.
                var allText = txtFile.responseText;
                ProcessCSV(allText);
            }
        }
    };
    txtFile.send(null);
}
The GET parameter nocache doesn't mean anything to the server or browser, but it fools the browser into fetching a fresh resource every time, at the cost of losing browser caching altogether. Technically it's possible (although spectacularly unlikely) to get the same value twice, so you can add the time in milliseconds or something if you want to make it totally foolproof.
Note that this will also bypass almost all other types of caches as well.
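A sketch of that "totally foolproof" variant, combining the timestamp with the random value:
// Cache-busting query value: current time plus a random suffix.
var cacheBuster = Date.now() + '-' + Math.random().toString(36).slice(2);
txtFile.open("GET", "http://****.com/mailer/upload/data.csv?nocache=" + cacheBuster, true);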