Catch JavaScript syntax errors while using YUI3

I'm using slightly modified sample code provided by the YUI team. When my source responds with something other than JSON (or just has a JSON syntax error) my browser (Safari) aborts script processing, preventing me from notifying the user there was a problem.
I'm definitely no JS guru, so this code may be a lot uglier than it has to be. The code is, roughly:
YUI().use("dump", "node", "datasource-get", "datasource-jsonschema", function(Y) {
var myDataSource = new Y.DataSource.Get({
source:"/some/json/source/?"}),
myCallback = {
success: function(e){
myResponse = e.response;
doSomething(myDataSource);
},
failure: function(e){
Y.get("#errors").setContent("<li>Could not retrieve data: " + e.error.message + "</li>");
}
};
myDataSource.plug(Y.Plugin.DataSourceJSONSchema, {
schema: {
resultListLocator: "blah.list",
resultFields: ["user", "nickname"]
}
});
myDataSource.sendRequest("foo=bar", myCallback);
}
I've tried wrapping the "var myDataSource" block in a try/catch, and I've also tried wrapping the whole YUI().use() block.
Is it possible to catch syntax errors? Do I have to replace the all-in-one DataSource.Get call with separate IO and parse calls?

Since you are requesting a local script, you can use Y.io + Y.JSON.parse inside a try/catch or Y.DataSource.IO + Y.DataSchema.JSON (+ Y.JSON).
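For instance, here is a minimal sketch of the Y.io + Y.JSON.parse route (doSomething and the #errors node are carried over from your code above; the module names are standard YUI 3):

YUI().use("io-base", "json-parse", "node", function (Y) {
    Y.io("/some/json/source/?foo=bar", {
        on: {
            success: function (id, res) {
                var data;
                try {
                    data = Y.JSON.parse(res.responseText);
                } catch (e) {
                    // A JSON syntax error lands here instead of aborting script processing.
                    Y.one("#errors").setContent("<li>Could not parse data: " + e.message + "</li>");
                    return;
                }
                doSomething(data);
            },
            failure: function (id, res) {
                Y.one("#errors").setContent("<li>Could not retrieve data: " + res.statusText + "</li>");
            }
        }
    });
});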
The benefit of DataSource.Get is that it avoids the Same Origin Policy. However, it is less secure and less flexible. If it is not necessary, you should avoid using it.
The contract of DataSource.Get is that the server supports JSONP. The way this works is that Get adds a script node to the page with a src=(the url you provided)&callback=someDataSourceFunction.
The browser will request the resource at that url and one of two things will happen:
the server will respond with a JavaScript string in the form of someDataSourceFunction({"all":"your data"}); or
the server will return some text that can't be parsed as JavaScript.
In either event, that string is treated as the contents of a script node: it is parsed and executed. If it cannot be parsed, the browser will throw an error, and there's no stopping this. While JSONP is technically not under the spec constraints of true JSON (even invalid JSON should parse and execute), you should always use pure JSON, and always use a server-side lib to generate the JSON output (look on http://json.org for a list of libs in every conceivable language). Don't hand-roll JSON; it only leads to hours of debugging.

The problem is probably that the error happens at some level in the browser (JavaScript parsing) before YUI even gets the occasion to report a failure.
It is notoriously hard to catch this kind of error in Safari, which does not implement window.onerror. In order to catch more errors with my JavaScript library, bezen.org, I added try/catch in places where asynchronous code is triggered:
dynamic script loading (equivalent to your JSON download)
setTimeout/setInterval: I wrapped and replaced these browser functions to insert a try/catch which logs errors (see the sketch below)
You may be interested in having a look at the source code of the corresponding modules, which may be useful to you as is or as hints for the resolution of your problem:
bezen.dom.js Look for safelistener in appendScript method
bezen.error.js Check safeSetTimeout/safeSetInterval and catchError
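To illustrate the idea, here is a minimal sketch of the setTimeout wrapping technique (this is not the actual bezen.error.js code, and logError is a stand-in for whatever reporting you use):

// Replace setTimeout so errors thrown in async callbacks are caught and logged.
var originalSetTimeout = window.setTimeout;
window.setTimeout = function (callback, delay) {
    return originalSetTimeout(function () {
        try {
            callback();
        } catch (e) {
            logError(e); // stand-in: report the error to the user or a log
        }
    }, delay);
};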

Maybe try this before you "doSomething":
try
{
    var test = YAHOO.lang.JSON.parse(jsonString);
    ...
}
catch (e)
{
    alert('invalid json');
}

Related

How to call a fetch request and wait for its answer inside an onBeforeRequest in a web extension

I'm trying to write a web extension that stops the requests from a url list provided locally, fetches the URL's response, analyzes it in a certain way and based on the analysis results, blocks or doesn't block the request.
Is that even possible?
The browser doesn't matter.
If it's possible, could you provide some examples?
I tried doing it with Chrome extensions, but it seems like it's not possible.
I heard it's possible in Firefox, though.
I think this is only possible using the old webRequestBlocking API, which Chrome is removing as part of Manifest v3. Fortunately, Firefox is planning to continue supporting blocking web requests even as they transition to Manifest v3 (read more here).
In terms of implementation, I would highly recommend referring to the MDN documentation for webRequest, in particular their section on modifying responses and their documentation for the filterResponseData method.
Mozilla have also provided a great example project that demonstrates how to achieve something very close to what I think you want to do.
Below I've modified their background.js code slightly so it is a little closer to what you want to do:
function listener(details) {
    if (mySpecialUrls.indexOf(details.url) === -1) {
        // Ignore this url, it's not on our list.
        return {};
    }
    let filter = browser.webRequest.filterResponseData(details.requestId);
    let decoder = new TextDecoder("utf-8");
    let encoder = new TextEncoder();
    filter.ondata = event => {
        let str = decoder.decode(event.data, {stream: true});
        // Just change any instance of Example in the HTTP response
        // to WebExtension Example.
        str = str.replace(/Example/g, 'WebExtension Example');
        filter.write(encoder.encode(str));
        filter.disconnect();
    }
    // This is a BlockingResponse object; you can set parameters here to e.g. cancel the request if you want to.
    // See: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/BlockingResponse#type
    return {};
}

browser.webRequest.onBeforeRequest.addListener(
    listener,
    // 'main_frame' means this will only affect requests for the main frame of the browser
    // (e.g. the HTML for a page rather than the images, CSS, etc. that are loaded afterwards).
    // You might want to look into whether you want to expand this.
    {urls: ["*://*/*"], types: ["main_frame"]},
    ["blocking"]
);
Correction:
The above example only works properly if the response data fits in one chunk. If it is larger (and you still want to inspect the entirety of the response data), you would need to put all of the data into a buffer, and then work on it once all data has been received. See the document here for more information: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/StreamFilter/ondata#webextension_examples (the code section titled "This example combines all buffers into a single buffer" would be of most interest to you I think).
In terms of using this API to block responses: data is only passed along to the page if you call filter.write(), so if you don't like the response, you can simply not call it (just call filter.close()) and an empty response will be returned. You can also return only part of the full response body by calling filter.write() with only the bits that you want to return.
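A minimal sketch combining both points: buffer the whole response, then decide whether to pass it through or block it. Here isAllowed() is a hypothetical analysis function, and the buffer-combining part follows the MDN example linked above:

let filter = browser.webRequest.filterResponseData(details.requestId);
let buffers = [];
filter.ondata = event => {
    buffers.push(event.data); // accumulate chunks; write nothing yet
};
filter.onstop = () => {
    // Combine all chunks into a single buffer so the full body can be inspected.
    let total = buffers.reduce((n, b) => n + b.byteLength, 0);
    let combined = new Uint8Array(total);
    let offset = 0;
    for (let buffer of buffers) {
        combined.set(new Uint8Array(buffer), offset);
        offset += buffer.byteLength;
    }
    let body = new TextDecoder("utf-8").decode(combined);
    if (isAllowed(body)) {      // hypothetical: analyze the full response body
        filter.write(combined); // pass the original bytes through
    }
    // If filter.write() is never called, the page receives an empty response,
    // which effectively blocks the content.
    filter.close();
};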

Ajax Communication: Malformed JSON string in Perl

I want to send by XMLHttpRequest a JSON object to a Perl Script (*.cgi)
But I can't decode the JSON object in the cgi file.
I always receive the error message:
malformed JSON string, neither array, object, number, string or atom, at character offset 0 (before
"(end of string)")
This is my javascript code:
//ajax communication for receiver/transceiver
function doAjaxRequest(query)
{
    if (whatReq == "")
    {
        alert('ERROR: Request-Type undefined');
        return;
    }
    if (window.XMLHttpRequest)
    {
        arequest = new top.XMLHttpRequest(); // Mozilla, Safari, Opera
    }
    else if (window.ActiveXObject)
    {
        try
        {
            arequest = new ActiveXObject('Msxml2.XMLHTTP'); // IE 5
        }
        catch (e)
        {
            try
            {
                arequest = new ActiveXObject('Microsoft.XMLHTTP'); // IE 6
            }
            catch (e)
            {
                alert('ERROR: Request not possible');
                return;
            }
        }
    }
    if (!arequest)
    {
        alert("Cannot create an XMLHTTP instance");
        return false;
    }
    else
    {
        var url = "****.cgi";
        var dp = document.location.pathname;
        arequest.open('post', url, true);
        arequest.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
        //receiver function
        arequest.onreadystatechange = function()
        {
            switch (arequest.readyState)
            {
                case 4:
                    if (arequest.status != 200)
                    {
                        alert("The request completed, but is not OK\nError:" + arequest.status);
                    }
                    else
                    {
                        var content = arequest.responseText;
                        analyseResponse(content);
                    }
                    break;
                default:
                    //alert("DEFAULT:" + arequest.readyState);
                    break;
            }
        };
        //transceiver function
        query = "jsonObj=" + JSON.stringify({name: "John Rambo", time: "2pm"});
        alert(query);
        arequest.send(query);
    }
}
}
And here the cgi file:
#!/usr/bin/perl
use CGI qw/:standard/;
use CGI::Carp qw(fatalsToBrowser);
use strict;
use warnings;
use JSON;
use Data::Dumper;
my $jsonObj = param('jsonObj');
my $json = JSON->new->utf8;
my $input = $json->decode( $jsonObj );
print Dumper(\$input);
Can you help me? I don't know how to access the JSON object.
Thank you very much.
This message says you've got a non-JSON string in $jsonObj. One particularly common case is an empty string. Try printing out the raw content of $jsonObj, make sure you set up everything correctly for CGI::param to work, and also check with your browser's built-in tools that it actually sends the data.
Also, I highly suggest throwing away that 10-year-old ActiveX shit. You're using JSON.stringify, and all the browsers that support it also support native XMLHttpRequest.
I was about to vote to put your question on hold because of insufficient information to reproduce and diagnose the problem, but then I realized that your question does contain enough clues to figure out what's wrong — they're just really well hidden.
Clue #1: Your error message says (emphasis mine):
malformed JSON string, neither array, object, number, string or atom, at character offset 0 (before "(end of string)")
This implies that your $jsonObj variable has length 0, i.e. it is empty.
So, what's causing it to be empty? Well, the Perl code looks like perfectly standard CGI stuff, so the problem must be either in your JS code, or in something that your haven't showed us (such as your web server config). Since we can't debug what we can't see, let's focus on your JS code, where we find...
Clue #2: There's something wrong with this line:
arequest.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
Of course, you can set any content type you want for a POST request, but CGI.pm expects to receive the content in one of the standard formats for HTML form submissions, i.e. either application/x-www-form-urlencoded or multipart/form-data. When it receives something labeled as application/json instead, it doesn't know how to parse it, and so won't. Thus, the param() method in your Perl script will return nothing, since, as far as it's concerned, the client sent no parameters that it could understand.
There should have been a warning about this somewhere in your web server error logs, but you presumably didn't think to check those. (Hint: you really should!)
(You could've also used the warningsToBrowser option of CGI::Carp to get those warnings sent as HTML comments, but I guess you weren't aware of that option, either. Also, to make that work reliably, you'd really need to use CGI::Carp before the CGI module, so that it can catch any early errors.)
Anyway, the fix is simple: just replace application/json in your JS code with application/x-www-form-urlencoded, since that's what you're actually trying to send to the server. You should also make sure that your JSON data actually is properly URL-encoded before embedding it in the request, by passing the output of JSON.stringify() through encodeURIComponent(), like this:
var data = {name:"John Rambo", time:"2pm"};
var query = "jsonObj=" + encodeURIComponent(JSON.stringify(data));
(I'll also second Oleg V. Volkov's suggestion to get rid of all the obsolete ActiveX stuff in your JS code. In fact, you could do even better by using a modern JS utility library like, say, jQuery, which provides convenient wrapper functions so that you don't even have to mess with XMLHttpRequest directly.)
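For comparison, the same request as a one-liner with jQuery, which URL-encodes the parameters for you (a sketch, reusing the names above):

$.post("****.cgi", {jsonObj: JSON.stringify(data)}, analyseResponse);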

JavaScript: Writing an output file within a limited & secure scenario

I would like to add a function in my JavaScript to write to a text file in the local directory where the JavaScript file is located. This means I'm not looking for some insecure way of accessing the user's file system in any way. All I care about is extracting the user's input into an HTML page that is accessed by my JavaScript, then using that input as data externally. I just need a simple text file. This user input isn't actually text, by the way, but rather a bunch of actions using my online game's components that the underlying JavaScript turns into a text string (so this particular string is what I want to save, not really even anything direct from the user).
I don't want to write to the user's file system, but rather to the folder where the JavaScript (and HTML) code is located (a folder hosted on a server). Is there any simple way to get some file I/O going?
I know JavaScript has a FileReader; is there any way to get it to do this in reverse, like a FileWriter? Google Closure looks like it has a FileWriter, but it doesn't seem to quite work and I can't find any decent examples of how to get it to do this.
If this requires a different language, is there any way I can just get the relevant snippet and insert this into my Javascript file?
(the folder is hosted on a Linux system if that helps)
ADDENDUM: Elias Van Ootegem's solution below is excellent and I would highly recommend looking into it as it's a great example of client-server interaction and getting your system to provide you the data you're looking to extract. Workers are pretty interesting.
But for those of you looking at this post with the similar question I initially had about JavaScript I/O, I found one other workaround, depending on your case. My team's project site made use of a database, MongoDB, that stored some of the user's interaction data if the user had hit a "Save" button. MongoDB, and other online database systems, provide a "dumping" function/script that you can call from your local machine/server to put that data into an output file (I was able to put the JSON data into a text file). From that output, you can write a parser to extract and format the data you desire, since databases like MongoDB can be pretty clear as to what format the text will be in (very structured, organized). I wrote a parser in C (with a few libraries I had written to extend the language) to do what I needed, but the idea is pretty generalizable to other programming/scripting languages.
I did look at leaving cookies as an option as well, and made use of a test program to try it out (it works too!). However, one tradeoff of leaving cookies on a user's local system is that those cookies are generally meant to hold small amounts of data (usually things like username, date created, and expiration date of the cookie) and are dependent upon the user's local machine. Further, while you can extract the data in those cookies from JavaScript, you are back to the initial problem: the data still exists on the web, not in an output file on your server's file system. In the case you need to extract data and want some guarantee this data will exist on your machine, use Elias Van Ootegem's solution.
JavaScript code that is running client-side cannot access the server's file system, let alone write a file there. People often say that if JS were to have I/O capabilities, that would be rather insecure... just imagine how dangerous that would be.
What you could do, is simply build your string, using a Worker that, on closing, returns the full data-string, which is then sent to the server (AJAX call).
The server-side script (Perl, PHP, .NET, Ruby...) can receive this data, parse it and then write the file to disk as you want it to.
All in all, not very hard, but quite an interesting project anyway. Oh, and when using a worker, seeing as it's an online game and everything, perhaps a setInterval to send (a part of) the data every 5000ms might not be a bad idea, either.
As requested - some basic code snippets.
A simple AJAX-setup function:
function getAjax(url, method, callback)
{
    var ret;
    method = method || 'POST';
    url = url || 'default.php';
    callback = callback || success;//assuming you have a default function called "success"
    try
    {
        ret = new XMLHttpRequest();
    }
    catch (error)
    {
        try
        {
            ret = new ActiveXObject('Msxml2.XMLHTTP');
        }
        catch (error)
        {
            try
            {
                ret = new ActiveXObject('Microsoft.XMLHTTP');
            }
            catch (error)
            {
                throw new Error('no Ajax support?');
            }
        }
    }
    ret.open(method, url, true);
    ret.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    ret.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    ret.onreadystatechange = callback;
    return ret;
}
var getRequest = getAjax('script.php?some=Get&params=inURL', 'GET');
getRequest.send(null);

var postRequest = getAjax('script.php', 'POST', function()
{//passing anonymous function here, but this could just as well have been a named function reference, obviously...
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('Post request complete, answer was: ' + this.response);
    }
});
postRequest.send('foo=bar');//set different headers to post JSON.stringified data
Here's a good place to read up on whatever you don't get from the code above. This is pretty much copy-paste code, but if you find yourself wanting to learn just a bit more, here's a great place to do just that.
WebWorkers
Now these are pretty new, so using them does mean not being able to support older browsers (you could support them by using the event listeners to send each morsel of data to the server, but a worker allows you to bundle, pre-process and structure the data without blocking the "normal" flow of your script). Workers are often presented as a means to sort-of multi-thread JavaScript code. Here's a good intro to them.
Basically, you'll need to add something like this to your script:
var worker = new Worker('preprocess.js');//or whatever you've called the worker
worker.addEventListener('message', function(e)
{
    var xhr = getAjax('script.php', 'post');//using default callback
    xhr.send('data=' + e.data);
    //worker.postMessage(null);//clear state
}, false);
Your worker, then, could start off like so:
var time, txt = '';
//entry point:
onmessage = function(e)
{
    if (e.data === null)
    {
        clearInterval(time);
        txt = '';
        return;
    }
    if (txt === '' && !time)
    {
        time = setInterval(function()
        {
            postMessage(txt);
        }, 5000);//set postMessage to be called every 5 seconds
    }
    txt += e.data;//add new text to current string...
}
Server-side, things couldn't be easier:
if ($_POST && $_POST['data'])
{
    $file = $_SESSION['filename'] ? $_SESSION['filename'] : 'File'.session_id();
    $fh = fopen($file, 'a+');
    fwrite($fh, $_POST['data']);
    fclose($fh);
}
echo 'ok';
Now all of this code is a bit crude, and most of it cannot be used in its current form, but it should be enough to get you started. If you don't know what something is, google it.
But do keep in mind that, when it comes to JS, MDN is easily the best reference out there, and as far as PHP goes, their own site (php.net/{functionName}) is pretty ugly, but does contain a lot of info, too...

Parsing XML / RSS from URL using Java Script

Hi, I want to parse XML/RSS from a live URL like http://rss.news.yahoo.com/rss/entertainment using pure JavaScript (not jQuery). I have googled a lot; nothing worked for me. Can anyone help with a working piece of code?
(You cannot have googled a lot.) Once you have worked around the Same Origin Policy, and if the resource is served with an XML MIME type (which it is in this case, text/xml), you can do the following:
var x = new XMLHttpRequest();
x.open("GET", "http://feed.example/", true);
x.onreadystatechange = function () {
    if (x.readyState == 4 && x.status == 200)
    {
        var doc = x.responseXML;
        // …
    }
};
x.send(null);
(See also AJAX, and the XMLHttpRequest Level 2 specification [Working Draft] for other event-handler properties.)
In essence: No parsing necessary. If you then want to access the XML data, use the standard DOM Level 2+ Core or DOM Level 3 XPath methods, e.g.
/* DOM Level 2 Core */
var title = doc.getElementsByTagName("channel")[0].getElementsByTagName("title")[0].firstChild.nodeValue;
/* DOM Level 3 Core */
var title = doc.getElementsByTagName("channel")[0].getElementsByTagName("title")[0].textContent;
/* DOM Level 3 XPath (not using namespaces) */
var title = doc.evaluate('//channel/title/text()', doc, null, 0, null).iterateNext();
/* DOM Level 3 XPath (using namespaces) */
var namespaceResolver = (function () {
    var prefixMap = {
        media: "http://search.yahoo.com/mrss/",
        ynews: "http://news.yahoo.com/rss/"
    };
    return function (prefix) {
        return prefixMap[prefix] || null;
    };
}());
var url = doc.evaluate('//media:content/@url', doc, namespaceResolver, 0, null).iterateNext();
(See also JSX:xpath.js for a convenient, namespace-aware DOM 3 XPath wrapper that does not use jQuery.)
However, if for some (wrong) reason the MIME type is not an XML MIME type, or if it is not recognized by the DOM implementation as such, you can use one of the parsers built into recent browsers to parse the responseText property value. See pradeek's answer for a solution that works in IE/MSXML. The following should work everywhere else:
var parser = new DOMParser();
var doc = parser.parseFromString(x.responseText, "text/xml");
Proceed as described above.
Use feature tests at runtime to determine the correct code branch for a given implementation. The simplest way is:
if (typeof DOMParser != "undefined")
{
    var parser = new DOMParser();
    // …
}
else if (typeof ActiveXObject != "undefined")
{
    var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
    // …
}
See also DOMParser and HTML5: DOM Parsing and Serialization (Working Draft).
One big problem you might run into is that, generally, you cannot get data cross-domain. This is a big issue with most RSS feeds.
The common way to deal with loading data cross-domain in JavaScript is called JSONP. Basically, this means that the data you are retrieving is wrapped in a JavaScript callback function. You load the URL with a script tag, and you define the function in your code. So when the script loads, it executes the function and passes the data to it as an argument.
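A minimal sketch of the JSONP pattern (the feed URL and the callback parameter name here are placeholders; the service has to actually support this):

// Define the callback that the remote script will invoke:
function handleFeed(data) {
    console.log(data); // do something with the data object
}
// Load the feed via a script tag; the server wraps its JSON in handleFeed(...):
var script = document.createElement('script');
script.src = 'http://feed.example/items?format=json&callback=handleFeed';
document.getElementsByTagName('head')[0].appendChild(script);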
The problem with most XML/RSS feeds is that services that only provide XML tend not to provide JSONP wrapping capability.
Before you go any farther, check to see if your data source provides a JSON format and JSONP functionality. That will make this a lot easier.
Now, if your data source doesn't provide JSON and JSONP functionality, you have to get creative.
One relatively easy way to handle this is to use a proxy server. Your proxy runs somewhere under your control and acts as a middleman to get your data. The proxy loads the XML, and your JavaScript makes its requests to it instead. If the proxy server runs on the same domain name, you can just use standard XHR (Ajax) requests and you don't have to worry about cross-domain stuff.
Alternatively, your proxy server can wrap the data in a JSONP callback and you can use the method mentioned above.
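The client side of the proxy approach can then be as simple as this (proxy.php is a hypothetical same-origin endpoint that fetches and echoes the remote feed):

var x = new XMLHttpRequest();
// Same-origin request: the proxy fetches the remote feed on our behalf.
x.open("GET", "/proxy.php?url=" + encodeURIComponent("http://rss.news.yahoo.com/rss/entertainment"), true);
x.onreadystatechange = function () {
    if (x.readyState == 4 && x.status == 200) {
        var doc = x.responseXML; // proceed as in the answer above
    }
};
x.send(null);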
If you are using jQuery, then XHR and JSONP requests are built-in methods, which makes the coding very easy. Other common JS libraries should also support these. If you are coding all of this from scratch, it's a little more work, but not terribly difficult.
Now, once you get your data, hopefully it's just JSON and no parsing is needed.
However, if you end up having to stick with an XML/RSS version, and you're using jQuery, you can simply use jQuery.parseXML: http://api.jquery.com/jQuery.parseXML/
Better to convert the XML to JSON: http://jsontoxml.utilities-online.info/
After converting, if you need to print the JSON object, check this tutorial:
http://www.w3schools.com/json/json_eval.asp

jQuery .getJSON Firefox 3 Syntax Error Undefined

I'm getting a syntax error (undefined line 1 test.js) in Firefox 3 when I run this code. The alert works properly (it displays 'work') but I have no idea why I am receiving the syntax error.
jQuery code:
$.getJSON("json/test.js", function(data) {
alert(data[0].test);
});
test.js:
[{"test": "work"}]
Any ideas? I'm working on this for a larger .js file but I've narrowed it down to this code. What's crazy is if I replace the local file with a remote path there is no syntax error (here's an example):
http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?
I found a solution to kick that error
$.ajaxSetup({
    'beforeSend': function(xhr) {
        if (xhr.overrideMimeType)
            xhr.overrideMimeType("text/plain");
    }
});
Now the explanation:
In Firefox 3 (and I assume only Firefox THREE), every file that has the MIME type "text/xml" is parsed and syntax-checked. If your JSON starts with a "[", it will raise a syntax error; if it starts with "{", it's a "malformed" error (my translation of "nicht wohlgeformt").
If I access my JSON file from a local script (no server is involved in this process), I have to override the MIME type... Maybe you set the MIME type for that very file wrong...
However, adding this little piece of code will save you from an error message.
Edit: In jQuery 1.5.1 or higher, you can use the mimeType option to achieve the same effect. To set it as a default for all requests, use
$.ajaxSetup({ mimeType: "text/plain" });
You can also use it with $.ajax directly, i.e., your call translates to
$.ajax({
    url: "json/test.js",
    dataType: "json",
    mimeType: "text/plain",
    success: function(data) {
        alert(data[0].test);
    }
});
getJSON may be insisting on at least one name:value pair.
A straight array ["item0","item1","Item2"] is valid JSON, but there's nothing to reference it with in the callback function for getJSON.
In this little array of Zip codes:
{"result":[["43001","ALEXANDRIA"],["43002","AMLIN"],["43003","ASHLEY"],["43004","BLACKLICK"],["43005","BLADENSBURG"],["43006","BRINKHAVEN"]]}
... I was stuck until I added the {"result": tag. Afterward I could reference it:
<script>
$.getJSON("temp_test_json.php", "",
    function(data) {
        $.each(data.result, function(i, item) {
            alert(item[0] + " " + i);
            if (i > 4) return false;
        });
    });
</script>
... I also found it was just easier to use $.each().
This may sound really really dumb, but change the file extension for test.js from .js to .txt. I had the same thing happen with perfectly valid JSON data files with pretty well any extension except .txt (example: .json, .i18n). Since I've changed the extension, I get the data and use it just fine.
Like I said, it may sound dumb but it worked for me.
Hi,
I have this same error when testing the web page on my local PC, but once it is up on the hosting server the error no longer happens. Sorry - I have no idea of the reason, but thought it may help someone else track down the cause.
Try renaming "test.js" to "test.json", which is what Wikipedia says is the official extension for JSON files. Maybe it's being processed as Javascript at some point.
Have you tried disabling all the Firefox extensions?
I usually get some errors in the Firebug console that are caused by the extensions, not by the webs being visited.
Check if there's a ; at the end of test.js. jQuery executes eval("(" + data + ")"), and a semicolon would prevent Firefox from finding the closing parenthesis. There might also be some other unseen characters preventing it from doing so.
I can tell you why the remote location works, though: it's executed in a completely different manner. Since it has jsoncallback=? as part of the query parameters, jQuery treats it as JSONP and actually inserts it into the DOM inside <script> tags. Try using "json/test.js?callback=?" as the target; it might help too.
What kind of webserver are you running that on? I once had an issue reading a JSON file on IIS because it wasn't defined as a valid MIME type.
Try configuring the content type of the .js file. Firefox expects it to be text/plain, apparently. You can do that as Peter Hoffmann does above, or you can set the content type header server side.
This might mean a server-side configuration change (like apache's mime.types file), or if the json is served from a script, setting the content-type header in the script.
Or at least that seems to have made the error go away for me.
I had a similar problem but was looping through a for loop. I think the problem might be that the index is out of bounds.
Kien
For the people who don't use jQuery, you need to call the overrideMimeType method before sending the request:
var r = new XMLHttpRequest();
r.open("GET", filepath, true);
r.overrideMimeType("text/plain");
r.send(null);
