Specific example: the haystack.js script from How Big is Your Haystack?
I've searched for an answer and everything seems to point to using the //# sourceURL=name.js comment. However, the mechanics of how to accomplish this elude me (maybe I'm just dense).
Everyone always points to Debugging JavaScript, but the demo is broken (same-origin error). Other common examples do not provide insight for working with an external script such as this.
I've tried using Live Edit to insert the sourceURL comment, but so far the eval script never shows up in the Sources tab.
Would someone please walk me through the steps to complete this task?
UPDATE: This has proved to be an interesting and annoying endeavour. This particular site makes the task needlessly difficult with the following complications:
The haystack.js script includes document.write() statements (which load the other scripts used). These must be removed before the script is reloaded, otherwise the DOM is cleared.
The author uses a queer, backtick cipher form of obfuscation on the code. Therefore, code modifications (including the sourceURL) have to be made after obfuscation is removed, but before the eval takes place.
I kludged a partial solution. After loading jQuery into the page, I run this script:
$.ajax({
    url: '/js/haystack.js',
    dataType: 'text'
}).done(function(data) {
    // Remove the document.write() statements and append the sourceURL comment after obfuscation is removed
    var refactored = data.replace(/return d/, "return d.replace(/document\.write[^;]+;/g, '') + '\\n//# sourceURL=haystack.js\\n';");
    $('head').append('<script type="text/javascript">' + refactored + '</script>');
});
Now haystack.js appears in the (no domain) tree of the Sources tab. Breakpoints can be set, but there is odd behavior: it seems the DOM event handlers are still bound to the original script (breakpoints in the reloaded script's handlers are never reached). Executing pageInit() again rebinds the handlers to the modified script, but page updates are still erratic, and I'm not sure why the behavior persists. I can step through the code and everything appears normal there, but page updates seem to lag behind the code. The fact that the code violates almost every JavaScript best practice is no doubt a factor.
This question really intrigued me. I hope my answer helps. I started with Set breakpoints and debug eval'd JavaScript and then expanded it a bit.
Here is the plunker.
Better than using eval, you can insert a script element into a document.
var js = "console.log('this is line 1');"
addCode(js); // Right now! Debuggable!
// Dynamically evaluate JavaScript-as-string in the browser
function addCode(js){
var e = document.createElement('script');
e.type = 'text/javascript';
e.src = 'data:text/javascript;charset=utf-8,'+escape(js);
document.head.appendChild(e);
}
It will then show up in the sources tab:
Using eval also works if you add the line //# sourceURL=dynamicScript.js at the end.
See this plunker
var js = "console.log('this is line 1');\n" +
"//# sourceURL=dynamicScript.js;"
addCode(js); // Right now! Debuggable!
// Dynamically evaluate JavaScript-as-string in the browser
function addCode(js){
eval(js);
}
Notice that the script is listed under the (no domain) source folder.
I came across some interesting behaviour of the JavaScript code on my XPages:
//'rdoGeschlecht1' is present on page Basis (no problems there),
//but not on page 'Stufe1'.
var level = "Stufe1";
if (level == "Basis")
{
    alert("1");
    // var rdoGeschlecht1 = '#{javascript:getClientId("rdoGeschlecht1")}';
}
else if (level == "Stufe1")
{
    alert("2");
}
The code above always ends in an error when executed on a page where the element is not present - "Ungültiger Komponentenname rdoGeschlecht1 kann in getClientId nicht aufgelöst werden." (roughly: "Invalid component name rdoGeschlecht1 cannot be resolved in getClientId.") - it seems to me that Notes tries to resolve the component ID even if the line is never reached during execution, and even though it is commented out.
I have found a quick&dirty workaround of course, but I am surely not the first one to stumble upon this behaviour and I would really be interested in how experienced XPages programmers would be going about this?
The problem is you're commenting out the client-side JavaScript which is going to be run on the browser. But the server-side JavaScript code within #{javascript: (which needs to run on the server) is not commented out.
Perhaps it will help to explain what happens when you put SSJS or EL in a string property or, in this case, a script block. The key is that the CSJS is not parsed on the server; it's just passed as a string to the browser.
The parser reads the string and looks for #{javascript: which tells it that the following code up to the closing } needs to be passed to the SSJS parser and the result added to the string that gets written to the browser. Any lines within that SSJS block that begin "//" will get omitted. But the parser will not take into account anything outside the #{javascript: because that is just text being passed to the browser. It is the browser that interprets the whole thing as client-side JavaScript.
Hopefully that clarified why it's working the way it is.
If you want to comment out a line in a script block that includes SSJS and you want to prevent the SSJS from running, you'll need to comment out the CSJS (so the browser doesn't run it) and the SSJS (so the server doesn't run it).
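To make that concrete, here is a minimal sketch based on the rdoGeschlecht1 line from the question (the placement of the inner comment is an assumption about how the SSJS parser treats it, not tested code):
// Commenting out only the client-side line still lets the server evaluate the SSJS:
// var rdoGeschlecht1 = '#{javascript:getClientId("rdoGeschlecht1")}';
// Commenting out both layers silences the browser (the leading //) and the server (the // inside the SSJS block):
// var rdoGeschlecht1 = '#{javascript:// getClientId("rdoGeschlecht1")}';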
I am forced to use a web application written (over a decade ago, I'm guessing) for IE6 that only works with IE (newer versions in quirks mode). I have been able to repair some of the more egregious JavaScript with a Safari extension that injects scripts to detach event handlers and replace them with DOM-compliant versions.
I am now turning my attention to annoyances rather than the downright broken. The heavy-handed use of alerts to inform the user of progress is painful. I thought it would be a fairly nice addition to my extension to override the window.alert function with some CSS popovers, but the challenge I am having is with pages that are sent back after an HTTP POST, where the first thing they do is display a success (or failure) alert.
According to this Apple documentation "a Start Script executes when the document has been created but before the webpage has been parsed". I would have thought that if the page hadn't been parsed, the scripts in the page's body's script tags wouldn't run, but this is not the behaviour I am seeing. Instead, it appears that any scripts in the page returned from the post response execute before my start script even loads.
To test this I have a very simple start script that logs to the console location.href and tries to replace window.alert with console.log.
The injected start script:
console.log(window.location.href + " loaded killAlert.js");
window.alert = function(str) { console.log(str); };
The test web page:
<html><head></head>
<body>
<script>alert("this is an alert message") ;</script>
nothing to see here... move along.
</body>
</html>
What happens is that when loading a test page with a script embedded, the alert executes before anything is written to console.log.
My questions—
When do start scripts actually get called?
Is there any way I can get them to execute before any scripts on the page?
This seems like it should be fairly straightforward, but so far I haven't been able to find a way around the problem through reading documentation, searching, or experimenting. I'm hoping someone else has solved something similar.
<head>
    <script>
        (function() {
            console.log(window.location.href + " loaded killAlert.js");
            window.alert = function(str) { console.log(str); };
        })();
    </script>
</head>
Try calling it anonymously (wrap it in an immediately invoked function, as above); the script will then execute as soon as it is parsed. Hope it helps.
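For what it's worth, once the override does run early enough, the CSS-popover replacement described in the question could look something like this sketch (the class name and auto-dismiss delay are placeholders, not taken from the original extension):
window.alert = function(message) {
    var box = document.createElement('div');
    box.className = 'killalert-popover';                    // style this via injected CSS
    box.textContent = String(message);
    (document.body || document.documentElement).appendChild(box);
    setTimeout(function() { box.parentNode.removeChild(box); }, 4000);   // auto-dismiss
};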
I'm trying to create a Firefox extension that fires my Javascript code before any of the current page's Javascript is fired. My Javascript code will basically control whether or not the page's Javascript code can be executed or denied.
I first started out by trying to follow this answer, but I couldn't really figure out how to get it to work, and I realized I was relying on onDOMContentLoaded, which fires after the page's JavaScript has already executed.
I then turned my attention toward XPCOM, but once again didn't really understand what the Firefox tutorials were telling me.
I've recently been trying to make an extension through Firebug, but I seem to hit the same problem: only having access to the JavaScript after it's been parsed/executed. Here's the relevant part of the code I wrote; I think if I could access the file's objects in the onExamineResponse event, my problem could be solved, but I don't know how to do that:
BeepbopListener.prototype = {
    onRequest: function(context, file) {
        ...
    },
    onExamineResponse: function(context, file) {
        FBTrace.sysout("onexamineresponse " + file); // this returns something like
        // '[xpconnect wrapped (nsISupports, nsIHttpChannel, nsIRequest, nsIUploadChannel, nsITraceableChannel, nsIHttpChannelInternal)]'
        // but I don't know how to access those elements...
        var pattern = /\.js$/;
        if (pattern.test(file.href) && FBTrace.DBG_BEEPBOP) {
            FBTrace.sysout("ONEXAMINE DOESN'T EVEN GET IN THIS IF SO YOU WON'T SEE THIS");
        }
    },
    ...
};
So my question is... is there a tutorial out there that shows me how I can get access to all Javascript code on a page before it's executed? Also, if anyone has any helpful insight, I'd love to hear it. Oh, and if y'all need more code from me, just let me know, and I'll post it.
You can access a new document before any JavaScript code runs by listening to the content-document-global-created observer notification. However, the document will be empty at this point and JavaScript code will run as soon as the parser adds a <script> tag - you cannot really prevent it. Here are the options to control script execution that I am aware of.
1) Disable all JavaScript for a window using nsIDocShell.allowJavascript:
wnd.QueryInterface(Components.interfaces.nsIInterfaceRequestor)
.getInterface(Components.interfaces.nsIWebNavigation)
.QueryInterface(Components.interfaces.nsIDocShell)
.allowJavascript = false;
This is an all or nothing approach. Note that JavaScript stays disabled even when a new document loads into the same frame.
2) Implement the nsIContentPolicy interface in an XPCOM component and register it in the content-policy category (via nsICategoryManager). Your shouldLoad() function will be able to block scripts selectively - but it will only be called for external scripts (meaning <script src="...">), not for inline scripts on the page.
3) Use JavaScript debugger service to intercept script execution. You could use jsdIDebuggerService.interruptHook to step through JavaScript execution and abort the script whenever you like. But that would slow down JavaScript execution very significantly of course. At the very least you should use jsdIDebuggerService.addFilter() to restrict it to a particular document, otherwise you will slow down the entire browser (including browser UI).
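For completeness, hooking the content-document-global-created notification mentioned above looks roughly like this (a sketch of legacy XPCOM add-on code, assumed to run with chrome privileges):
var observerService = Components.classes["@mozilla.org/observer-service;1"]
                                .getService(Components.interfaces.nsIObserverService);
var observer = {
    observe: function(subject, topic, data) {
        if (topic === "content-document-global-created") {
            var wnd = subject;   // the new content window; its document is still empty at this point
            // decide here whether to flip allowJavascript, register a policy, attach debugger hooks, etc.
        }
    }
};
observerService.addObserver(observer, "content-document-global-created", false);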
Start by completely preventing the document from getting parsed altogether; then, on the side, fetch the same document, do any processing on it, and inject the resulting document into the page. Here is how I currently do just that: https://stackoverflow.com/a/36097573/6085033
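The rough shape of that idea, as I understand it (a sketch only, not the code from the linked answer), is to stop the original parse, refetch the markup, rewrite it, and write it back:
window.stop();                               // halt the original load before any <script> is parsed
var xhr = new XMLHttpRequest();
xhr.open('GET', location.href, false);       // synchronous on purpose: nothing else may run first
xhr.send();
var html = xhr.responseText;
// ... process `html` here, e.g. strip or rewrite the <script> blocks you want to control ...
document.open();
document.write(html);                        // inject the processed document
document.close();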
I have a script that I want to be injected into the document BEFORE load, i.e. it should function as if
<script..>MYSCRIPT HERE</script>
<html>
.
.
I have made a script in Chrome that executes correctly (I can alert(0), etc.), but it runs in a different environment, which is not what I want. I want it to run in the SAME environment as the page.
Earlier, I had used a trick where in the startup script I had used
window.location="javascript:<MY SCRIPT HERE>"
which effectively changes the execution environment, but for the past few days it hasn't been working. I think it's a Chrome bug fix. Is there any other workaround for this? Note: I can't add script tags dynamically to the page in the startup script because document.body etc. are unavailable.
I would like to provide you more details of why I need this and my previous solution.
HTML Page which I don't have control over:
.
.
<script>
function a(){//DOSOMETHING}
</script>
.
.
<script>
a(); <<------ I DONT WANT TO CALL THIS
</script>
Solution: startup JavaScript contains
window.location='javascript:const a=function(){};';<<-CONST used!!
That forces a re-declaration error for 'a' when the page actually loads; hence, when a() is called, nothing happens. (clever, right? -_-)
But now I realize that when I do window.location="javascript:...", even that runs in a separate environment!
You should be able to inject script into the page's JavaScript context by adding a script tag via your content script. In other words, your script could be injected at document_idle, and execute something like:
var s = document.createElement('script');
s.textContent = 'const a = function () {};';
document.documentElement.appendChild(s);
That script tag would be executed in the context of the page, not in the context of your script, and should allow you to achieve the result you're looking for.
Documentation for content scripts in general is available at https://developer.chrome.com/extensions/content_scripts
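If the override has to win the race against the page's own inline scripts (the situation in the question), the content script itself also needs to be registered to run early. A minimal manifest sketch (the extension name, match pattern, and file name are placeholders; run_at: document_start is my assumption about what this use case needs):
{
    "name": "inject-before-load",
    "version": "1.0",
    "manifest_version": 2,
    "content_scripts": [{
        "matches": ["<all_urls>"],
        "js": ["inject.js"],
        "run_at": "document_start"
    }]
}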
Chrome probably has a bug in the current build, as PAEz pointed out. But yes, this gives a lot of insight into how things can be injected into the document using a Chrome extension. Thanks Mike for the post. Will keep that in mind the next time I'm injecting using Greasemonkey or something :)
I have an ASP.NET MVC project that uses some simple AJAX functionality through jQuery's $.get method like so:
$.get(myUrl, null, function(result) {
    $('#myselector').html(result);
});
The amount of content is relatively low here -- usually a single div with a short blurb of text. Sometimes, however, I am also injecting some javascript into the page. At some point when I dynamically include script into content that was itself dynamically added to the page, the script still runs, but it ceases to be available to the debugger. In VS2008, any breakpoints are ignored, and when I use the "debugger" statement, I get a messagebox saying that "no source code is available at this location." This fails both for the VS2008 debugger and the Firebug debugger in Firefox. I have tried both including the script inline in my dynamic content and also referencing a separate js file from this dynamic content -- both ways seemed to result in script that's unavailable to the debugger.
So, my question is twofold:
Is there any way to help the debugger recognize the existence of this script?
If not, what's the best way to include scripts that are used infrequently and in dynamically generated content in a way that is accessible to the debuggers?
I can't comment yet, but I can maybe help answer. As qwerty said, the Firefox console can be the way to go. I'd recommend going all the way and getting Firebug. It hasn't ever missed code in my three years of using it.
You could also change the way the injected JavaScript is added and see if that affects the debugger you're using. (I take it you're using Microsoft's IDE?)
In any case, I find the best way to inject JavaScript for IE is to appendChild it into the head. Where that isn't viable, the eval function (I hate using it as much as you do) can be used. Here is the AJAX IE fixer code I use. I use it for Safari too, since it has similar behavior; if you need that as well, just change the browser condition check (document.all for IE; for Safari, test navigator.userAgent.toLowerCase() for 'safari').
function execajaxscripts(obj) {
    if (document.all) {
        var scripts = obj.getElementsByTagName('script');
        for (var i = 0; i < scripts.length; i++) {
            eval(scripts[i].innerHTML);
        }
    }
}
I've never used jQuery (I preferred Prototype, then Dojo), but I take it that it would look something like this:
$.get(myUrl, null, function(result) {
    $('#myselector').html(result);
    execajaxscripts($('#myselector')[0]);   // pass the element the content was injected into, not the raw string
});
The one problem is that eval debug errors may not be caught, since it creates another instance of the interpreter. But it is worth trying... and otherwise, use a different debugger :D
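The "appendChild in the head" variant mentioned at the start of this answer would look roughly like this (a sketch under the same assumptions, not the answerer's actual code):
function appendajaxscripts(obj) {
    // Copy each inline script's text into a fresh <script> element instead of eval'ing it,
    // so the debugger sees it as a real script in the document.
    var scripts = obj.getElementsByTagName('script');
    var head = document.getElementsByTagName('head')[0];
    for (var i = 0; i < scripts.length; i++) {
        var s = document.createElement('script');
        s.type = 'text/javascript';
        s.text = scripts[i].innerHTML;
        head.appendChild(s);
    }
}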
This might be a long shot, but I don't have access to IE right now to test.
Try naming the anonymous function, e.g.:
$.get(myUrl, null, function anon_temp1(result) {
    $('#myselector').html(result);
});
I'm surprised Firebug is not catching the 'debugger' statement. I've never had any problems, no matter how complicated the JS inclusion method was.
If this is JavaScript embedded within dynamically generated HTML, I can see where that might be a problem, since the debugger would not see it in the initial load. I am surprised that you could put it into a separate .js file and the debugger still failed to see the function.
It seems you could define a function in a separate static file, nominally "get_and_show" (or whatever, possibly nested in a namespace of sorts) with a parameter of myUrl, and then call the function from the HTML. Why won't that trip the breakpoint? (Did you try something like this -- the question is unclear as to whether the reference to the .js in the dynamic HTML was just a function call, or the actual script/load reference as well.) Be sure to first load the external script file from a "hard coded" reference in the HTML file. (View source on roboprogs.com/index.html -- it loads .js files, then runs a text insertion function.)
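In other words, something along these lines (a sketch; get_and_show and myUrl follow the naming above, and the selector parameter is added just for illustration):
// --- get_and_show.js, loaded from a hard-coded <script src> in the page head ---
function get_and_show(myUrl, selector) {
    $.get(myUrl, null, function(result) {
        $(selector).html(result);   // breakpoints set here live in a real, static file
    });
}
// --- the dynamically generated HTML then only contains a one-line call ---
// <script>get_and_show('/some/url', '#myselector');</script>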
We use Firebug to debug JavaScript, profile requests, write logs, etc.
You can download it from http://getfirebug.com/
If Firebug doesn't show your JavaScript source, post a URL so we can test your example case.
I hope I've been of some help!
If you add //# sourceURL=foo.js to the end of the script that you're injecting, then it should show up in the list of scripts in Firebug and the WebKit inspector.
jQuery could be patched to do this automatically, but the ticket was rejected.
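Applied to the $.get example from the question, that would look something like this (a sketch; scriptText stands in for whatever inline script you extract from the response, and the file name is arbitrary):
$.get(myUrl, null, function(result) {
    $('#myselector').html(result);
    // if you pull inline script out of the response and eval it, name it for the debugger:
    // eval(scriptText + '\n//# sourceURL=injected-snippet.js');
});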
Here's a related question: Is possible to debug dynamic loading JavaScript by some debugger like WebKit, FireBug or IE8 Developer Tool?