I don't even know if my project is possible. After looking around for a few hours and reading up on other Stack Overflow questions, my hopes are slowly diminishing, but it will not stop me from asking!
My Project: To create a simple HTML table categorizing our Sales Team phone activity for my superior. Currently I need something to pull data values from a file and use those values inside the table.
My Problem: Can Javascript even do this? I know it reads cookies on the client side computer, but can it read a file in the same directory as the webpage? (If the webpage is on the company server?)
My Progress: I will update as I find more information.
Update: Many of you are curious about how the file is stored. It is a static webpage (table.html) on our fileserver. The text file (data.txt) will be in the same directory.
I've recently completed a project with almost exactly the same conditions as yours (the only difference being that users exclusively use IE).
I ended up using JQuery's $.ajax() function, and pulled the data from an XML file.
This solution does require the use of either Microsoft Access or Excel. I used as early as the 2003 version, but later releases work just fine.
My data is held in a table in Access (in Excel I used a list). Once you've created your table in Access, it's honestly as simple as hitting 'Export', saving as XML and then playing around with the $.ajax() function (http://api.jquery.com/jQuery.ajax/) to manipulate the data you want output, plus CSS/HTML for the layout of your page.
I'd recommend Access as there's less hassle in getting it to export XML in the right manner, though Excel does it just fine with a little more tinkering.
Here's the steps with ms-access:
Create table in access & export as XML
The XML generated will look like:
<?xml version="1.0" encoding="UTF-8"?>
<dataroot xmlns:od="urn:schemas-microsoft-com:officedata" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="Calls.xsd" generated="2013-08-12T19:35:13">
    <Calls>
        <CallID>1</CallID>
        <Advisor>Jenna</Advisor>
        <AHT>125</AHT>
        <Wrap>13</Wrap>
        <Idle>6</Idle>
    </Calls>
    <Calls>
        <CallID>3</CallID>
        <Advisor>Edward</Advisor>
        <AHT>90</AHT>
        <Wrap>2</Wrap>
        <Idle>4</Idle>
    </Calls>
    <Calls>
        <CallID>2</CallID>
        <Advisor>Matt</Advisor>
        <AHT>246</AHT>
        <Wrap>11</Wrap>
        <Idle>5</Idle>
    </Calls>
</dataroot>
Example HTML
<table id="doclib">
<tr><th>Name</th><th>AHT</th><th>Wrap</th><th>Idle</th></tr>
</table>
jQuery:
$(document).ready(function() {
    $.ajax({
        type: "GET",
        url: "Calls.xml",
        dataType: "xml",
        success: function(xml) {
            $(xml).find('Calls').each(function() {
                var advisor = $(this).find('Advisor').text(),
                    aht = $(this).find('AHT').text(),
                    wrap = $(this).find('Wrap').text(),
                    idle = $(this).find('Idle').text(),
                    td = "<td>",
                    tdc = "</td>";

                $('#doclib').append("<tr>" +
                    td + advisor + tdc + td + aht + tdc + td + wrap + tdc + td + idle + tdc + "</tr>");
            });
        }
    });
});
Client-side JavaScript cannot just read arbitrary files on its own, for security reasons.
You have two options:
If you can rely on IE being used, you could use some fancy ActiveX stuff.
Use a backend which either constantly pushes data to the JS client or serves the data when the client requests it.
This works if you have a server built with Node.js, PHP, etc. (a minimal Node.js sketch follows below).
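To make the second option concrete, here is a minimal pull-style sketch in Node.js using only built-in modules; it serves the data.txt mentioned in the question so the page can fetch it with Ajax. The port and the exact URL path are assumptions, not something specified in the question:
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/data.txt') {
        // Read the text file sitting next to this script and return it as plain text
        fs.readFile(__dirname + '/data.txt', 'utf8', function (err, text) {
            if (err) {
                res.writeHead(500, { 'Content-Type': 'text/plain' });
                return res.end('Could not read data.txt');
            }
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end(text);
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080); // the page can now GET http://localhost:8080/data.txt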
JavaScript can read files via Ajax, but that means you need a server.
Otherwise your requests go through the file:// protocol, which doesn't support Ajax.
You can try looking into FileReader:
https://developer.mozilla.org/en-US/docs/Web/API/FileReader
The FileReader object lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user's computer
I've never personally gotten it to work properly, but it's supposed to be able to allow this sort of thing.
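For reference, FileReader works on files the user has explicitly handed to the page (for example through an <input type="file">), not on arbitrary files in the page's directory. A minimal sketch of how it is typically wired up:
<input type="file" id="filePicker">
<script>
document.getElementById('filePicker').addEventListener('change', function () {
    var file = this.files[0];          // the file the user picked
    var reader = new FileReader();
    reader.onload = function (e) {
        console.log(e.target.result);  // the file contents as text
    };
    reader.readAsText(file);
});
</script>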
Try XMLHttpRequest, or ActiveXObject if you need to support IE 5 or IE 6.
Here you can find an explanation:
http://www.w3schools.com/xml/xml_http.asp
Or try this example:
http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first
It sounds like you just want to get the contents of a static file from your server; is that right? If that's what you need to do, you're in luck. That's very easy.
load('textTable.txt', function(err, text) {
    buildTable(text);
});

function load(url, callback) {
    var xhr = new XMLHttpRequest();

    xhr.onreadystatechange = function() {
        if (xhr.readyState < 4) return;
        if (xhr.status !== 200) {
            return callback('HTTP Status ' + xhr.status);
        }
        if (xhr.readyState === 4) {
            callback(null, xhr.responseText);
        }
    };

    xhr.open('GET', url, true);
    xhr.send('');
}
If you go with qwest, it'll look something like this:
qwest.get('textTable.txt').success(function(text) {
    buildTable(text);
});
With jQuery:
jQuery.get('textTable.txt', function(text) {
    buildTable(text);
});
Related
I've seen some answers to this that refer the asker to other libraries (like phantom.js), but I'm here wondering if it is at all possible to do this in just node.js.
Consider my code below. It requests a webpage using request, then uses cheerio to explore the DOM and scrape the page for data. It works flawlessly, and if everything had gone as planned, I believe it would have output a file as I imagined it.
The problem is that the page I am requesting builds the table I'm looking at asynchronously, using either Ajax or JSONP; I'm not entirely sure how .jsp pages work.
So here I am trying to find a way to "wait" for this data to load before I scrape the data for my new file.
var cheerio = require('cheerio'),
    request = require('request'),
    fs = require('fs');

// Go to the page in question
request({
    method: 'GET',
    url: 'http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp'
}, function(err, response, body) {
    if (err) return console.error(err);
    // Tell Cheerio to load the HTML
    var $ = cheerio.load(body);
    // Create an empty object to write to the file later
    var toSort = {};
    // Iterate over the DOM and fill the toSort object
    $('#emb table td.list_right').each(function() {
        var row = $(this).parent();
        toSort[$(this).text()] = {
            [$("#lastdate").text()]: $(row).find(".idx1").html(),
            [$("#currdate").text()]: $(row).find(".idx2").html()
        };
    });
    // Write/overwrite a new file
    var stream = fs.createWriteStream("/tmp/shipping.txt");
    var toWrite = "";
    stream.once('open', function(fd) {
        toWrite += "{\r\n";
        for (var i in toSort) {
            toWrite += "\t" + i + ": { \r\n";
            for (var j in toSort[i]) {
                toWrite += "\t\t" + j + ":" + toSort[i][j] + ",\r\n";
            }
            toWrite += "\t" + "}, \r\n";
        }
        toWrite += "}";
        stream.write(toWrite);
        stream.end();
    });
});
The expected result is a text file with information formatted like a JSON object.
It should contain multiple entries that look something like this:
"QINHUANGDAO - GUANGZHOU (50,000-60,000DWT)": {
"2016-09-29": 26.7,
"2016-09-30": 26.8,
},
But since the name is the only thing that doesn't load async, (the dates and values are async) I get a messed up object.
I actually tried just setting a setTimeout in various places in the code. The script will only be touched by developers, who can afford to run it again if it fails a few times. So while not ideal, even a setTimeout (up to maybe 5 seconds) would be good enough.
It turns out the setTimeouts don't work. I suspect that once I request the page, I'm stuck with a snapshot of the page "as is" when I receive it, and I'm in fact not looking at a live thing I can wait for to load its dynamic content.
I've considered investigating how to intercept the packets as they come in, but I don't understand HTTP well enough to know where to start.
The setTimeout will not make any difference even if you increase it to an hour. The problem here is that you are making a request against this url:
http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp
and their server returns the HTML, and in that HTML there are the JS and CSS imports. That is as far as your script gets: you just have the HTML and that's it. A browser, on the other hand, knows how to parse the HTML document and to execute the JavaScript it references, and that is exactly your problem: your program has no way to run the scripts that fill in the table. You need to find or write a scraper that is able to run JavaScript. I just found this similar issue on Stack Overflow:
Web-scraping JavaScript page with Python
The answer there suggests https://github.com/niklasb/dryscrape, and it seems that this tool is able to run JavaScript. It is written in Python, though.
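If you would rather stay in Node, a headless browser such as Puppeteer (not mentioned in that answer) can also execute the page's JavaScript before you scrape it. A rough sketch, assuming `npm install puppeteer` and that the table is filled in once the page's own scripts have finished:
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // Wait until the page's own JS has finished loading its data
    await page.goto('http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp',
                    { waitUntil: 'networkidle0' });
    const html = await page.content(); // fully rendered HTML
    await browser.close();
    // ... hand `html` to cheerio.load(html) exactly as before
})();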
You are trying to scrape the original page that doesn't include the data you need.
When the page is loaded, browser evaluates JS code it includes, and this code knows where and how to get the data.
The first option is to evaluate that same code, like PhantomJS does.
The other (and you seem to be interested in it) is to investigate the page's network activity and to understand what additional requests you should perform to get the data you need.
In your case, these are:
http://index.chineseshipping.com.cn/servlet/cbfiDailyGetContrast?SpecifiedDate=&jc=jsonp1475577615267&_=1475577619626
and
http://index.chineseshipping.com.cn/servlet/allGetCurrentComposites?date=Tue%20Oct%2004%202016%2013:40:20%20GMT+0300%20(MSK)&jc=jsonp1475577615268&_=1475577620325
In both requests:
_ is a decache parameter to prevent caching.
jc is a name of a JS wrapper function which should be invoked with the result (https://en.wikipedia.org/wiki/JSONP)
So, by scraping the table template at http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp and performing the two additional requests, you will be able to combine the results into the same data structure you see in the browser.
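As a rough sketch of what one of those extra requests could look like with the same request module used above (the query parameter values and the exact shape of the JSONP payload are assumptions and should be checked against the browser's network tab):
var request = require('request');

var url = 'http://index.chineseshipping.com.cn/servlet/cbfiDailyGetContrast' +
          '?SpecifiedDate=&jc=jsonp1&_=' + Date.now();

request(url, function (err, response, body) {
    if (err) return console.error(err);
    // The response is JSONP, e.g. jsonp1({...}); strip the wrapper to get plain JSON
    var json = body.substring(body.indexOf('(') + 1, body.lastIndexOf(')'));
    var data = JSON.parse(json);
    console.log(data);
});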
I would like to add a function in my javascript to write to a text file in the local directory where the javascript file is located. This means I'm not looking for some insecure way of accessing the user's file system in any way. All I care about is extracting the user's input into an html page that is accessed by my javascript then using that input as data externally. I just need a simple text file. This user input isn't actually text by the way, but rather a bunch of actions using my online game's components that the underlying javascript turns into a text string (so this particular string is what I want to save, not really even anything direct from the user).
I don't want to write to a user's file system, but rather, the file where the javascript (and html) code is located (a folder hosted on a server). Is there any simple way to get some file I/O going?
I know JavaScript has a FileReader; is there any way to get it to do this in reverse, like a FileWriter? Google Closure looks like it has a FileWriter, but it doesn't seem to quite work and I can't find any decent examples of how to get it to do this.
If this requires a different language, is there any way I can just get the relevant snippet and insert this into my Javascript file?
(the folder is hosted on a Linux system if that helps)
ADDENDUM: Elias Van Ootegem's solution below is excellent and I would highly recommend looking into it as it's a great example of client-server interaction and getting your system to provide you the data you're looking to extract. Workers are pretty interesting.
But for those of you looking at this post with the similar question that I initially had about JavaScript I/O, I found one other workaround, depending on your case. My team's project site made use of a database, MongoDB, that stored some of the user's interaction data if the user had hit a "Save" button. MongoDB, and other online database systems, provide a "dumping" function/script that you can call from your local machine/server and put that data into an output file (I was able to put the JSON data into a text file). From that output, you can write a parser to extract and format the data you desire, since databases like MongoDB can be pretty clear as to what format the text will be in (very structured, organized). I wrote a parser in C (with a few libraries I had written to extend the language) to do what I needed, but the idea is pretty generalizable to other programming/scripting languages.
I did look at leaving cookies as an option as well, and made use of a test program to try it out (it works too!). However, one tradeoff of leaving cookies on a user's local system is that those cookies are generally meant to hold small amounts of data (usually things like username, date created, and expiration date of the cookie) and are dependent upon the user's local machine. Further, while you can extract the data in those cookies from JavaScript, you are back to the initial problem: the data still exists on the web, not in an output file on your server's file system. If you need to extract data and want some guarantee that it will exist on your machine, use Elias Van Ootegem's solution.
JavaScript code running client-side cannot access the server's filesystem directly, let alone write a file there. People often point out that if client-side JS had general file I/O capabilities, it would be rather insecure... just imagine how dangerous that would be.
What you could do, is simply build your string, using a Worker that, on closing, returns the full data-string, which is then sent to the server (AJAX call).
The server-side script (Perl, PHP, .NET, Ruby...) can receive this data, parse it and then write the file to disk as you want it to.
All in all, not very hard, but quite an interesting project anyway. Oh, and when using a worker, seeing as it's an online game and everything, perhaps a setInterval to send (a part of) the data every 5000ms might not be a bad idea, either.
As requested - some basic code snippets.
A simple AJAX-setup function:
function getAjax(url, method, callback)
{
    var ret;
    method = method || 'POST';
    url = url || 'default.php';
    callback = callback || success;//assuming you have a default function called "success"
    try
    {
        ret = new XMLHttpRequest();
    }
    catch (error)
    {
        try
        {
            ret = new ActiveXObject('Msxml2.XMLHTTP');
        }
        catch(error)
        {
            try
            {
                ret = new ActiveXObject('Microsoft.XMLHTTP');
            }
            catch(error)
            {
                throw new Error('no Ajax support?');
            }
        }
    }
    ret.open(method, url, true);
    ret.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    ret.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    ret.onreadystatechange = callback;
    return ret;
}
var getRequest = getAjax('script.php?some=Get&params=inURL', 'GET');
getRequest.send(null);
var postRequest = getAjax('script.php', 'POST', function()
{//passing an anonymous function here, but this could just as well have been a named function reference, obviously...
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('Post request complete, answer was: ' + this.response);
    }
});
postRequest.send('foo=bar');//set different headers to post JSON.stringified data
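On that last comment: posting JSON.stringified data just means swapping the Content-type header and the request body. A minimal standalone sketch (not reusing the getAjax helper above, since that one already fixes the header to form-encoding):
var xhr = new XMLHttpRequest();
xhr.open('POST', 'script.php', true);
xhr.setRequestHeader('Content-type', 'application/json');
xhr.onreadystatechange = function()
{
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('JSON post complete, answer was: ' + this.responseText);
    }
};
xhr.send(JSON.stringify({foo: 'bar', baz: [1, 2, 3]}));//the server-side script then json_decodes the raw request body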
MDN is a good place to read up on whatever you don't get from the code above. This is pretty much a copy-paste bit of code, but if you find yourself wanting to learn just a bit more, it's also a great place to do just that.
WebWorkers
Now these are pretty new, so using them does mean not being able to support older browsers (you could support them by using event listeners to send each morsel of data to the server, but a worker allows you to bundle, pre-process and structure the data without blocking the "normal" flow of your script). Workers are often presented as a means to sort-of multi-thread JavaScript code. Here's a good intro to them.
Basically, you'll need to add something like this to your script:
var worker = new Worker('preprocess.js');//or whatever you've called the worker
worker.addEventListener('message', function(e)
{
    var xhr = getAjax('script.php', 'post');//using default callback
    xhr.send('data=' + e.data);
    //worker.postMessage(null);//clear state
}, false);
Your worker, then, could start off like so:
var time, txt = '';
//entry point:
onmessage = function(e)
{
    if (e.data === null)
    {
        clearInterval(time);
        txt = '';
        return;
    }
    if (txt === '' && !time)
    {
        time = setInterval(function()
        {
            postMessage(txt);
        }, 5000);//set postMessage to be called every 5 seconds
    }
    txt += e.data;//add new text to current string...
};
Server-side, things couldn't be easier:
if ($_POST && $_POST['data'])
{
    $file = $_SESSION['filename'] ? $_SESSION['filename'] : 'File'.session_id();
    $fh = fopen($file, 'a+');
    fwrite($fh, $_POST['data']);
    fclose($fh);
}
echo 'ok';
Now all of this code is a bit crude, and most of it cannot be used in its current form, but it should be enough to get you started. If you don't know what something is, google it.
But do keep in mind that, when it comes to JS, MDN is easily the best reference out there, and as far as PHP goes, their own site (php.net/{functionName}) is pretty ugly, but does contain a lot of info, too...
I have a html page using javascript that gives the user the option to read and use his own text files from his PC. But I want to have an example file on the server that the user can open via a click on a button.
I have no idea what is the best way to open a server file. I googled a bit. (I'm new to html and javascript, so maybe my understanding of the following is incorrect!). I found that javascript is client based and it is not very straightforward to open a server file. It looks like it is easiest to use an iframe (?).
So I'm trying the following (the first test is simply to open it onload of the webpage), with kgr.bss in the same directory on the server as my html page:
<IFRAME SRC="kgr.bss" ID="myframe" onLoad="readFile();"> </IFRAME>
and (with file_inhoud, lines defined elsewhere)
function readFile() {
    func = "readFile=";
    debug2("0");
    var x = document.getElementById("myframe");
    debug2("1");
    var doc = x.contentDocument ? x.contentDocument : (x.contentWindow.document || x.document);
    debug2("1a" + doc);
    var file_inhoud = doc.document.body;
    debug2("2:");
    lines = file_inhoud.split("\n");
    debug2("3");
    fileloaded();
    debug2("4");
}
Debug function shows:
readFile=0//readFile=1//readFile=1a[object HTMLDocument]//
So the statement that stops the program is:
var file_inhoud=doc.document.body;
What is wrong? What is the correct (or best) way to read this file?
Note: I see that the file is read and displayed in the frame.
Thanks!
Your best bet, since the file is on your server, is to retrieve it via "ajax". This stands for Asynchronous JavaScript And XML, but the XML part is completely optional; it can be used with all sorts of content types (including plain text). (For that matter, the asynchronous part is optional as well, but it's best to stick with that.)
Here's a basic example of requesting text file data using ajax:
function getFileFromServer(url, doneCallback) {
    var xhr;

    xhr = new XMLHttpRequest();
    xhr.onreadystatechange = handleStateChange;
    xhr.open("GET", url, true);
    xhr.send();

    function handleStateChange() {
        if (xhr.readyState === 4) {
            doneCallback(xhr.status == 200 ? xhr.responseText : null);
        }
    }
}
You'd call that like this:
getFileFromServer("path/to/file", function(text) {
if (text === null) {
// An error occurred
}
else {
// `text` is the file text
}
});
However, the above is somewhat simplified. It would work with modern browsers, but not some older ones, where you have to work around some issues.
Update: You said in a comment below that you're using jQuery. If so, you can use its ajax function and get the benefit of jQuery's workarounds for some browser inconsistencies:
$.ajax({
    type: "GET",
    url: "path/to/file",
    success: function(text) {
        // `text` is the file text
    },
    error: function() {
        // An error occurred
    }
});
Side note:
I found that javascript is client based...
No. This is a myth. JavaScript is just a programming language. It can be used in browsers, on servers, on your workstation, etc. In fact, JavaScript has been used server-side (Netscape's LiveWire) almost from the very beginning.
These days, the most common use (and your use-case) is indeed in web browsers, client-side, but JavaScript is not limited to the client in the general case. And it's having a major resurgence on the server and elsewhere, in fact.
The usual way to retrieve a text file (or any other server side resource) is to use AJAX. Here is an example of how you could alert the contents of a text file:
var xhr;
if (window.XMLHttpRequest) {
    xhr = new XMLHttpRequest();
} else if (window.ActiveXObject) {
    xhr = new ActiveXObject("Microsoft.XMLHTTP");
}
xhr.onreadystatechange = function() {
    if (xhr.readyState === 4) {       // only act once the response is complete
        alert(xhr.responseText);
    }
};
xhr.open("GET", "kgr.bss"); //assuming kgr.bss is plaintext
xhr.send();
The problem with your ultimate goal, however, is that it has traditionally not been possible to use JavaScript to access the client's file system. However, the new HTML5 File API is changing this. You can read up on it here.
I am looking for an equivalent to jquery's load() method that will work offline. I know from jquery's documentation that it only works on a server. I have some files from which I need to call the html found inside a particular <div> in those files. I simply want to take the entire site and put it on a computer without an internet connection, and have that portion of the site (the load() portion) function just as if it was connected to the internet. Thanks.
Edit: BTW, it doesn't have to be js; it can be any language that will work.
Edit2:
My sample code (just in case there are syntax errors I am missing; this is for the files in the same directory):
function clickMe() {
    var book = document.getElementById("book").value;
    var chapter = document.getElementById("chapter").value;
    var myFile = "'" + book + chapter + ".html'";
    $('#text').load(myFile + '#source');
}
You can't achieve load() over the file:// protocol; no Ajax request for HTML files is going to work. I have tried, even with the crossDomain and isLocal options on, without any success, even when specifying the protocol.
The problem is that even if jQuery tries, the browser will block the request for security reasons (well, most browsers, as the snippet below works in FF), because letting pages load local files would give them access to a lot of things.
The one thing you can load locally is JavaScript files, but that probably means changing a lot of the application/website architecture.
Only works in FF
$.ajax({
    url: 'test.html',
    type: 'GET',
    dataType: 'text',
    isLocal: true,
    success: function(data) {
        document.body.innerHTML = data;
    }
});
What FF does well is that it detects that the file requesting local files is itself on the file:// protocol, where other browsers don't. I am not sure whether it has restrictions on the types of files you can request.
You can still use the JQuery load function in this context:
You could add an OfflineContent div to your page:
<div id="OfflineContent">
</div>
And then click a button which calls:
$('#OfflineContent').load('OfflinePage.html #contentToLoad');
Button code:
$("#btnLoadContent").click(function() {
$('#OfflineContent').load('OfflinePage.html #contentToLoad');
});
In OfflinePage.html you would then have another element with the id contentToLoad, which is what gets displayed on the initial page; see the example below.
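For example, OfflinePage.html could be as simple as this (the surrounding markup and the placeholder text are assumptions, not part of the original answer):
<html>
<body>
    <div id="contentToLoad">
        <p>This fragment is pulled into #OfflineContent on the main page.</p>
    </div>
    <div id="somethingElse">
        <p>Anything outside #contentToLoad is ignored by the load() call.</p>
    </div>
</body>
</html>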
On the server, there is a text file. Using JavaScript on the client, I want to be able to read this file and process it. The format of the file on the server cannot be changed.
How can I get the contents of the file into JavaScript variables, so I can do this processing? The size of the file can be up to 3.5 MB, but it could easily be processed in chunks of, say, 100 lines (1 line is 50-100 chars).
None of the contents of the file should be visible to the user; he will see the results of the processing of the data in the file.
You can use a hidden frame, load the file in there and parse its contents.
HTML:
<iframe id="frmFile" src="test.txt" onload="LoadFile();" style="display: none;"></iframe>
JavaScript:
<script type="text/javascript">
function LoadFile() {
    var oFrame = document.getElementById("frmFile");
    var strRawContents = oFrame.contentWindow.document.body.childNodes[0].innerHTML;
    while (strRawContents.indexOf("\r") >= 0)
        strRawContents = strRawContents.replace("\r", "");
    var arrLines = strRawContents.split("\n");
    alert("File " + oFrame.src + " has " + arrLines.length + " lines");
    for (var i = 0; i < arrLines.length; i++) {
        var curLine = arrLines[i];
        alert("Line #" + (i + 1) + " is: '" + curLine + "'");
    }
}
</script>
Note: in order for this to work in Chrome browser, you should start it with the --allow-file-access-from-files flag. credit.
Loading that giant blob of data is not a great plan, but if you must, here's the outline of how you might do it using jQuery's $.ajax() function.
<html><head>
<script src="jquery.js"></script>
<script>
var getTxt = function () {
    $.ajax({
        url: 'text.txt',
        success: function (data) {
            // parse your data here
            // you can split it into lines using data.split('\n')
            // and use regex functions to effectively parse it
        }
    });
};
</script>
</head><body>
<button type="button" id="btnGetTxt" onclick="getTxt()">Get Text</button>
</body></html>
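To connect this back to the original table question, the success callback could split the text into rows and cells. A rough sketch, assuming (these details are not in the original post) a comma-separated text.txt with one line per advisor, e.g. "Jenna,125,13", and an existing <table id="salesTable"> in the page:
function buildTable(data) {
    var lines = data.split('\n');
    for (var i = 0; i < lines.length; i++) {
        if (lines[i].trim() === '') continue;      // skip blank lines
        var cells = lines[i].split(',');           // e.g. ["Jenna", "125", "13"]
        var row = '<tr>';
        for (var j = 0; j < cells.length; j++) {
            row += '<td>' + cells[j] + '</td>';
        }
        row += '</tr>';
        $('#salesTable').append(row);
    }
}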
You need to use Ajax, which basically means sending a request to the server and getting back a response (for example JSON text), which you then parse into a JavaScript object.
Check this:
http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first
If you are using jQuery library, it can be even easier:
http://api.jquery.com/jQuery.ajax/
Having said this, I highly recommend you don't download a 3.5 MB file into JS! It is not a good idea. Do the processing on your server, then return the data after processing. Then if you want new data, send a new Ajax request, process the request on the server, then return the new data.
Hope that helps.
I used Rafid's suggestion of using AJAX.
This worked for me:
var url = "http://www.example.com/file.json";

var jsonFile = new XMLHttpRequest();
jsonFile.open("GET", url, true);
jsonFile.onreadystatechange = function() {    // attach the handler before sending
    if (jsonFile.readyState === 4 && jsonFile.status === 200) {
        document.getElementById("id-of-element").innerHTML = jsonFile.responseText;
    }
};
jsonFile.send();
I basically (almost literally) copied this code from http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_get2 so credit to them for everything.
I don't have much knowledge of how this works, but you don't have to know how your brakes work to use them ;)
Hope this helps!
It looks like XMLHttpRequest has been replaced by the Fetch API. Google published a good introduction that includes this example doing what you want:
fetch('./api/some.json')
    .then(
        function(response) {
            if (response.status !== 200) {
                console.log('Looks like there was a problem. Status Code: ' +
                    response.status);
                return;
            }
            // Examine the text in the response
            response.json().then(function(data) {
                console.log(data);
            });
        }
    )
    .catch(function(err) {
        console.log('Fetch Error :-S', err);
    });
However, you probably want to call response.text() instead of response.json().
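A minimal sketch of that plain-text variant (the file name data.txt is just a placeholder for whatever text file sits on your server):
fetch('data.txt')
    .then(function(response) {
        if (!response.ok) {
            throw new Error('HTTP ' + response.status);
        }
        return response.text();               // read the body as plain text
    })
    .then(function(text) {
        console.log(text);                    // the raw file contents, ready to parse
    })
    .catch(function(err) {
        console.log('Fetch Error :-S', err);
    });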
Just a small point: I see some of the answers using innerHTML. I toyed with a similar idea but decided not to. In recent React versions the equivalent prop is even called dangerouslySetInnerHTML, because injecting raw HTML into the page can open the door to cross-site scripting (XSS) and similar injection attacks if the content isn't trusted.
You need to check for status 0 as well (when loading files locally with XMLHttpRequest you don't get an HTTP status, whereas a web server does return one).
function readTextFile(file) {
    var rawFile = new XMLHttpRequest();
    rawFile.open("GET", file, false);
    rawFile.onreadystatechange = function () {
        if (rawFile.readyState === 4) {
            if (rawFile.status === 200 || rawFile.status == 0) {
                var allText = rawFile.responseText;
                alert(allText);
            }
        }
    };
    rawFile.send(null);
}
For reading a file from the local device, use:
readTextFile("file:///C:/your/path/to/file.txt");
For reading a file from a server, use:
readTextFile("http://test/file.txt");
I really think you're going about this in the wrong manner. Trying to download and parse a 3.5+ MB text file is complete insanity. Why not parse the file on the server side, store the results via an ORM in a database (your choice; SQL is good, but it depends on the content: key-value data works better with something like CouchDB), then use Ajax to fetch the parsed data on the client end?
Plus, an even better idea would be to skip the text file entirely, for even better performance, if at all possible.