I have a server using node and express. I'm building the client app. It's supposed to get data from the server like so:
let req = new XMLHttpRequest();
req.open('GET', 'localhost:5000/newarticle');
req.onload = function() {
    let data = JSON.parse(req.responseText);
    renderHTML(data);
};
req.send();
This code lives in a main.js file that is linked from my index.html file, which Express serves to the client when they access the website. index.html contains a button whose onclick handler is defined in main.js. So far, so good. But when the button is clicked, the browser console shows "XMLHttpRequest is not defined."
My understanding is that the browser (Chrome/V8) and Node are separate JavaScript environments, so things like XMLHttpRequest that exist in one are not necessarily defined in the other.
This is confusing to me: no tutorial I can find on using AJAX mentions this as an issue, and I haven't found a source that explains how to serve an HTML + JavaScript page that lets the user send AJAX requests back to the server. Honestly, I don't even know quite what to search for to find an explanation for this set of issues. Any advice or links would be very appreciated!
Related
I'm a bit new to JavaScript/Node.js and its packages. Is it possible to download a file using my local browser or network? Whenever I look up scraping or downloading HTML files, it is always done through a separate package, with their server making a request to a given URL. How do I make my own computer download an HTML file, as if I did right-click → Save As on a Google Chrome webpage, without running into server/security issues and JavaScript errors?
Fetching a document over HTTP(S) in Node is definitely possible, although not as simple as in some other languages. Here's the basic structure:
const https = require('https'); // use the http module if it's an http URL

https.get(URLString, res => {
    const buffers = [];
    res.on('data', data => buffers.push(data));
    res.on('end', () => {
        const data = Buffer.concat(buffers);
        /*
        From here you can do what you want with the data: write it to a
        file with fs, console.log it using data.toString(), etc.
        */
    });
});
Edit: to address your main question — if you're comfortable with the above, the way to access a website the same way your browser does is to open the developer tools (F12 in Chrome), go to the Network tab, find the request the browser made, and then, using http(s).get(url, options, callback), set the same headers in the options that you see in your browser. Most of the time you won't need all of them; usually the authentication/session cookie is enough.
I have tried numerous snippets that I have found here, along with the code below from learn.microsoft.com. They all give me the same error: "ActiveXObject is not defined". Can someone please tell me how to fix this issue? How do I define the object, or is this yet another problem related to my host, GoDaddy?
This is for a Plesk/Windows server hosted by GoDaddy.
This link is to just one of the snippets from Stack Overflow that I have tried: Use JavaScript to write to text file?
Microsoft Code
<script>
var fso, tf;
fso = new ActiveXObject("Scripting.FileSystemObject");
tf = fso.CreateTextFile("G:\\mysite\\file.txt", true);
// Write a line with a newline character.
tf.WriteLine("Testing 1, 2, 3.");
// Write three newline characters to the file.
tf.WriteBlankLines(3);
// Write a line.
tf.Write("This is a test.");
tf.Close();
</script>
You can't write to a file on the server with client-side JavaScript (if clients could write arbitrary files on servers then Google's homepage would be vandalised every second).
The code you've found could write to the hard disk of the computer the "page" was loaded on, but only if the "page" was an HTA application and not a web page.
The standard way to send data to an HTTP server from JavaScript is to make an HTTP request. You can do this with an Ajax API like fetch.
You then need a server-side program (written in the language of your choice) that will process the request and write to the file (although due to race conditions, you are normally better off using a database than a flat file).
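As a rough sketch of that flow, assuming a hypothetical /save endpoint (the endpoint name and payload shape are illustrative, not from the original question):

```javascript
// Browser-side: send the text to the server instead of writing a file.
// The /save endpoint is hypothetical; a server-side handler must exist
// there to receive the request and do the actual writing.
fetch('/save', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: 'Testing 1, 2, 3.' }),
})
    .then(res => {
        if (!res.ok) throw new Error('Save failed: ' + res.status);
        return res.text();
    })
    .then(msg => console.log('Server said:', msg))
    .catch(err => console.error(err));
```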
I am using angular and ASP.NET Web API to allow users to download files that are generated on the server.
HTML Markup for download link:
<img src="/content/images/table_excel.png">
<a ng-click="exportToExcel(report.Id)">Excel Model</a>
<a id="report_{{report.Id}}" target="_self"></a>
The last anchor tag serves as a placeholder for an automatic click event. The visible anchor calls the exportToExcel method to initiate the call to the server and begin creating the file.
$scope.exportToExcel = function(reportId) {
    reportService.excelExport(reportId, function (result) {
        var url = "/files/report_" + reportId + "/" + result.data.Model.fileName;
        var dLink = document.getElementById("report_" + reportId);
        dLink.href = url;
        dLink.setAttribute('download', result.data.Model.fileName);
        dLink.click();
    });
};
The Web API code creates an Excel file. The file on the server is about 279k, but when it is downloaded on the client it is only 7k. My first thought was that the automatic click might be happening before the file is completely written, so I added a 10-second $timeout around the click event as a test. It failed with the same result.
This seems to only be happening on our remote QA server. On my local development server I always get the entire file back. I am at a loss as to why this might be happening. We have similar functionality where files are constructed from a database blob and saved to the local disk for download. The same method is employed for the client side download and that seems to work fine. I am wondering if anyone else has run into a similar issue.
Update
After the comment by SilentTremmor, we think it may actually be IIS or some sort of server issue. Originally we didn't think it could be, but after some digging it may be: the client code is only ever allowed to download 7k of data. It doesn't matter what we try to download; the result is always the same.
It turns out the API application was writing the file to a different instance of our application. The client code had no idea and was trying to download a file that did not exist, so the file created at the download link's location was empty, which explains the small file size.
I am running Mopidy on a Raspberry Pi with the latest Raspbian Wheezy.
I am trying to call a server side Perl script from Javascript like this:
var addToPlaylist = function() {
    var xmlHttpRequest = new XMLHttpRequest();
    xmlHttpRequest.open("POST", "addToPlaylist.pl?uri=" + encodeURI("testuri") + "&&name=" + encodeURI("testname"), true);
    xmlHttpRequest.send();
};
But I get the error:
POST http://192.168.0.10:6680/addToPlaylist.pl?uri=testuri&&name=testname 404 (Not Found)
However, if I navigate my browser to:
http://192.168.0.10:6680/addToPlaylist.pl
I can see the script in plain text.
I have tried moving the file to where Mopidy gets its JavaScript files from, and to various other places; the file has a full set of permissions.
Is this likely to be something Mopidy specific or is this a general web server thing? Obviously I don't want to be able to access the R-Pi's whole file system, so is there somewhere where I need to whitelist what can be seen from the client? I am new to Javascript and Web Servers so I do not know the terminology to search for. Could you point me in the right direction?
Thanks
Mopidy's web server only serves files statically, which is why you see the script's source in plain text instead of its output. You need to run the script under something that can actually execute Perl, e.g. the Dancer web framework, listening on another port.
I'm having difficulty with something probably remedial. I'm developing a website without a server. In doing so, I run into problems when trying to access files via XMLHttpRequest.
As you can see in the example code snippet, I create the variable, open it with a relative path to the desired file, and use the send function.
When I use a relative path that traverses up to a parent directory, the send() function fails. However, if I provide a path that is either in the same directory as the webpage or forward in a subfolder of the current webpage directory, I see that the XMLHttpRequest returns successfully. In these successful cases, I can see the test file's data in request.responseText.
Any help with this would be greatly appreciated.
The only lead I have right now is that there may be a security restriction that prevents GET requests from traversing up to the parent directory.
Thank you.
Code Snippet:
function test() {
    var request = new XMLHttpRequest();
    request.open('GET', "../test.txt", true);
    request.send(); // FAILS HERE
    // Get response
    var response = request.responseText;
}

function test2() {
    var request = new XMLHttpRequest();
    request.open('GET', "test.txt", true);
    request.send();
    // Get response
    var response = request.responseText; // SUCCESSFUL
}
From the MDN website, about the Gecko engine and therefore the Firefox browser:
a file can read another file only if the parent directory of the originating file is an ancestor directory of the target file.
Similar rules exist in other browsers. This is a limitation of the file:/// protocol. It's there for a very good reason. There's no point trying to break it. The solution is to run a local server on your computer, which isn't even slightly difficult.
To clarify: let's presume that you have a file structure like this:
- file1.html
- dir1/
    - file2.html
- dir2/
    - index.html
    - file3.html
    - dir3/
        - file4.html
From within index.html, JavaScript can access file3.html and file4.html, but it cannot access file1.html or file2.html.
Open the file directly in your browser (File → Open), then copy the URL from the address bar.