Optimizely and Visual Website Optimizer are two cool sites that allow users to perform simple A/B testing.
One of the coolest things they do is visual DOM editing: you can visually manipulate a webpage and save the changes offline. The changes are then applied during a random visitor's page view via a JavaScript load.
How do the visual editors work?
My name is Pete Koomen, and I'm one of the co-founders of Optimizely, so I can speak for how things work on our side. Let's say you want to create an experiment on http://www.mypage.com.
You might (this is optional) start by adding your Optimizely account snippet to that page, which looks like this and never changes:
<script src="//cdn.optimizely.com/js/XXXXXX.js"></script>
The Optimizely Editor loads http://www.mypage.com inside an iframe and uses window.postMessage to communicate with the page. This only works if that page already has a snippet like the one above on it. If it doesn't, the editor will time out while waiting for a message from the iframe'd page, and will load it again via a proxy that actually inserts the snippet onto the page. This loading process allows the editor to work with pages that (a) contain an account snippet, (b) do not contain an account snippet, or (c) sit behind a firewall (case (c) requires the snippet).
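That timeout-based handshake can be sketched roughly like this (the function names and plumbing here are illustrative, not Optimizely's actual code):

```javascript
// Editor-side sketch: ask the iframed page to identify itself. If no
// reply arrives before the timeout, assume the snippet is missing and
// fall back to reloading the page through the snippet-injecting proxy.
function waitForSnippet(timeoutMs, subscribe, onFallback) {
  var answered = false;
  subscribe(function () {
    answered = true;              // the snippet in the iframe replied
  });
  setTimeout(function () {
    if (!answered) onFallback();  // no reply: use the proxy instead
  }, timeoutMs);
}
```

In a real browser, `subscribe` would wrap `window.addEventListener("message", ...)` and the snippet inside the iframe would answer via `parent.postMessage(...)`.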
Our user at this point can make changes to the page, like modifying text, swapping out images, or moving elements around. Each change that is made with the editor is encoded as a line of JavaScript that looks something like the following:
$j("img#hplogo").css({"width":254, "height":112});
|__IDENTIFIER__||____________ACTION______________|
So, you can think of a "variation" of a page as a set of JavaScript statements that, when executed on that page, cause the desired variation to appear. If you're curious, you can actually view and edit this code directly by clicking on "Edit Code" in the bottom right-hand corner of the Optimizely Editor.
Now, when you've added your account snippet to this page and started your experiment, the JS file pointed to by the snippet will automatically bucket each incoming visitor and then execute the corresponding JavaScript as the page is loading.
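In spirit, that bucketing step might look like the following much-simplified sketch (the hash function and all names are mine, not Optimizely's actual code):

```javascript
// Deterministically map a visitor id to a bucket, so the same visitor
// always sees the same variation on repeat visits.
function hashCode(str) {
  var h = 0;
  for (var i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h;
}

function bucketVisitor(visitorId, numVariations) {
  return hashCode(visitorId) % numVariations;
}

// A variation is just a list of JavaScript statements to run on the page.
var variations = [
  [],                                                      // control: no changes
  ['$j("img#hplogo").css({"width":254, "height":112});'],  // variation 1
];

var bucket = bucketVisitor("visitor-abc-123", variations.length);
// The snippet would then execute variations[bucket] while the page loads.
```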
There's a lot more logic that goes into bucketing the visitor and executing these changes as quickly as possible during page load, but this should serve as a basic overview!
Pete
I manage a course management system (Blackboard, notably) and am having to fix an issue myself because Blackboard doesn't recognize it. To cut it short, a later Blackboard Service Pack has a problem rendering content that stems from a JavaScript issue. Here is my dilemma:
The frame's dimensions for this content area are set by a function called "setIFrameHeightAndWidth". I found the file in the Blackboard installation, and have posted its contents in a pastebin link:
http://pastebin.com/F4vVeD4v
I am constantly making edits to these two lines:
iframeElement.style.height = iframeElement.contentWindow.document.body.scrollHeight + frameHeight + 300 +'px';
iframeElement.style.width = '100%'; // the 100% change I made myself
But when I save this file and reload the page, the changes do not always stick. Here is what happens when I go to the specific page that loads content by calling a function in this file.
Blackboard takes that page.js file and copies it into 3 different files (for reasons unknown), names them 2F88F5F765F4753D1239E6FC3F898242.js, 04785022C06B7A2CD3E35B74D652973C.js, and A4B16A1C4776F93BE8C1A0BF21AB7C41.js, and puts them in our external Blackboard content directory, which for my dev server is e:\blackboard\content\vi\BBLEARN\branding__js__.
Those files seem to be copies of, or at least take on the contents of, the original page.js file. I've confirmed this, but sometimes when I reload the page the changes still do not stick. Here is why I think this is happening, and this is my question.
If I use Chrome's or Firefox's inspector to look at the resources for the page and search for the function setIFrameHeightAndWidth, it returns all 3 files - the ones with the alphanumeric names mentioned above - and inside them my changes are not reflected. I suspect that if the page was visited earlier, those 3 files get cached, and I do not want this to happen. Clearing my cache fixes the problem, but I do not want to burden our user base with this if it's possible not to. I noticed that at the top of the page.js file there is a section that says:
Only include the contents of this file once - for example, if this is included in a lightbox we don't want to re-run all of this - just use the loaded version.
Is there something in the code that is preventing this from being called more than once? Is there a way I can prevent this specific page from being cached so that changes always get instantly reflected? Thank you.
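One common fix for this symptom is to serve the generated files with no-cache headers so browsers revalidate them on every request. Assuming IIS serves that content directory directly (the e:\ path suggests Windows - verify this against your own setup, since Blackboard may front these files differently), a minimal web.config dropped into branding__js__ might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Send Cache-Control: no-cache so browsers revalidate these files -->
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>
```

This trades caching efficiency for freshness, so scope it to just that directory rather than the whole site.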
I do not know whether there is any way to solve this or not, but I'm asking because I believe SO is a place of genius people. Anyway, we all know that if we use the <noscript></noscript> tag within an HTML document, any user viewing the page with JavaScript disabled in their browser will see the noscript message.
But what if JavaScript has been disabled by a proxy? Many wifi networks require you to manually enter a proxy to use the internet, and many of them disable JS at the proxy server. In this case, if anybody visits the same page, it will look as though JavaScript is enabled in the browser but it has been disabled by the proxy.
Is there any way to check whether JavaScript has been disabled by a proxy (if one is in use) and to show an alert message in that case? Also, I would be glad if anybody can say how to implement it with WordPress and also without WordPress. :)
Thanks.
You can show the message by default and then remove or hide it with JavaScript, e.g.:
<div id="jsalert">JavaScript is disabled in your environment.</div>
<script>
(function() {
  var elm = document.getElementById("jsalert");
  elm.parentNode.removeChild(elm);
})();
</script>
<!-- Continue your content here -->
If script tags have been stripped by a proxy (which I'm fairly certain is very unusual; at least, I've never seen it), then of course the script won't be there to be run, and the div will show. If the script is present, it will remove the div.
By following the div with the script immediately (which is perfectly fine, no need for "DOM ready" stuff that will just delay things), the odds of the div "flashing" briefly on the page in the common case (where JavaScript is enabled and not stripped out) are very low. Not zero, but low.
If you believe the proxy doesn't strip out script tags but instead just blocks the downloads of JavaScript files (which would be dumb), you can change the above to use a JavaScript file, but beware that by doing that you either hold up the rendering of your page (if you use <script src="...">) or you increase (dramatically) the odds of the div "flashing" on the page briefly (if you load the script asynchronously).
This is just a specific use-case for a general practice called "progressive enhancement" (or sometimes "graceful degradation," but most people prefer the first). That's where you ensure that the page is presented correctly and usefully in the case where JavaScript is not available, and then use JavaScript to add behaviors to the page if JavaScript is enabled. In this case, the "useful" thing you're doing is saying that JavaScript isn't running for some reason, so it's a slightly different thing, but it's the same principle.
I have a page with a lot of JavaScript. However, once rendered, the page remains static: there are no moving things, special effects, etc. It should be possible to render the same HTML without any JavaScript at all, using only plain HTML and CSS. This is exactly what I want - a no-JavaScript version of the particular page. Naturally, I do not expect any dynamic behavior, so I am OK if buttons are dead, for example. I just want them rendered.
Now, I do not want an image. It needs to be an HTML with CSS, may be embedded with the HTML, which is fine too.
How can I do it?
EDIT
I am sorry, but I must have not been clear. My web site works with javascript and will not work without it. I do not want to check if it works without, I know it will not and I really do not care about it. This is not what I am asking. I am asking about a specific page, which I want to grab as pure HTML + CSS. The fact that its dynamic nature is lost is of no importance.
EDIT2
There is a suggestion to grab the HTML from the DOM inspector. This is the first thing I did - in Chrome's developer tools I copied the root html element as HTML and saved it to a file. Of course, this does not work, because it still references the CSS files on the web. I guess I should have mentioned that I want it to work from the file system.
Next I saved the page as complete with all its environment using some kind of Save menu (browser dependent). This saves the page and all the related files forming a closure, which can be opened from the file system. But the HTML has to be manually cleaned of all the JavaScript - tedious and error prone.
EDIT3
I seem to keep forgetting things. Images should be preserved, of course.
I have to do a similar task on a semi-regular basis. As yet I haven't found an automated method, but here's my workflow:
1. Open the page in Google Chrome (I imagine Firefox also has the relevant tools).
2. "Save Page As" (complete page), rename the HTML page to something nicer, delete any .js scripts which got downloaded, and move everything into a single folder.
3. On the original page, open the Elements tab (DOM inspector), find and delete any tags which I know cause problems (Facebook "like" buttons, for example; I also try to delete script tags at this stage because it's easier) and copy as HTML (right-click the <html> tag). Paste this into (replacing) the downloaded HTML file (remember to keep the DOCTYPE, which doesn't get copied).
4. Search all HTML files for any remaining script sections and delete them (also delete any noscript content), and search for " on" (with a space at the start) to remove handlers (onload, onclick, etc.).
5. Search for images (src=, url()), find common patterns in the image filenames, and use regular expressions to replace them globally, so for example src="/images/myimage.png" => |/images/||. This needs to be applied to all HTML and CSS files. Also make sure the CSS files have the correct path (href). While doing this I usually replace all hrefs (links) with #.
6. Finally, open the converted page in a browser (actually I tend to do this early on so that I can see if any change I make breaks it), use the Console tab to check for 404 errors (images that didn't get downloaded or had a different name) and the Network tab to check whether anything is still being loaded from the online version.
7. For any files which didn't get downloaded, I go back to the original page and use the Resources tab to find and download them manually.
8. (Optional) Cull any content which isn't needed (tracker images/iframes, unused CSS, etc.).
It's a big job. I'd love a tool which automated all that, but so far I haven't found one. The pages I download are quite badly made (shops) which have a lot of unusual code, so that's why there are so many steps. You might not need to follow every step.
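The script-removal step in particular lends itself to automation. Here is a rough sketch that strips <script> blocks, <noscript> blocks, and inline on* handlers from a saved HTML string using regexes - regexes are a blunt instrument for HTML, so treat this as a starting point rather than a parser:

```javascript
// Remove the three kinds of JavaScript the workflow above hunts for:
// script blocks, noscript fallbacks, and inline event-handler attributes.
function stripJavaScript(html) {
  return html
    .replace(/<script\b[\s\S]*?<\/script>/gi, "")       // <script>...</script>
    .replace(/<noscript\b[\s\S]*?<\/noscript>/gi, "")   // <noscript>...</noscript>
    .replace(/\son\w+\s*=\s*(".*?"|'.*?'|\S+)/gi, "");  // onload=, onclick=, ...
}
```

You could run it over each saved .html file (e.g. with Node's fs module) before fixing up image and CSS paths.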
I have a JavaScript greeting that greets new users with a drop-down banner like SO has. It only becomes visible after 3 seconds, and when the X is clicked it disappears. Since I have not put meta description tags on any page, Google shows that greeting as the description for every page. I don't understand why Google is using this, seeing as it is not loaded straight away; will this stop happening if I use a meta description?
Should I use a meta description? On the upside it might help with this problem, but then Google won't be able to dynamically fetch a description from the site (which happens to be a forum). It so happens that it is doing this anyway, and I don't know why.
Thanks!
My best guess is that the text from your greeting is being added to the page (by a WordPress plugin?) on the server side as visible content (so it appears even if JavaScript is disabled), hidden by JavaScript on page load, then simply shown after 3 seconds (i.e. it is really there already, and as such is the first major text Google finds).
Try changing your greeting plugin/code to generate the div containing the greeting message after page load, or at least to append it to the end of the document (or apply style="display:none;" as an inline style so Google can see that it is hidden) on the server side, then tweak the JS to show it. It would no longer greet visitors with JS disabled, but it would also allow Google to reach your main content without encountering the greeting first.
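A minimal sketch of the "generate it client-side and append it last" option (the element id and greeting text are made up; adapt them to your plugin's markup):

```javascript
// Build the greeting entirely in JavaScript and append it at the end of
// <body>, so it is never the first text a crawler sees in the HTML source.
function appendGreeting(doc, text) {
  var banner = doc.createElement("div");
  banner.id = "greeting";             // made-up id; match your own markup
  banner.textContent = text;
  banner.style.display = "none";      // start hidden
  doc.body.appendChild(banner);       // lands after the main content
  return banner;
}

// In the browser, call it on load and reveal the banner after 3 seconds:
//   window.addEventListener("load", function () {
//     var banner = appendGreeting(document, "Welcome!");
//     setTimeout(function () { banner.style.display = "block"; }, 3000);
//   });
```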
It does this because it's the first readable bit of text found when parsing the DOM. I'm not sure if there is a delay Google uses before it saves the page state to its cache, but that shouldn't matter. I actually use this 'feature' of Google to manipulate what the site listing says in the search results. If you don't want it to show up, just move the code for the message to the bottom of your <body>'s node list (i.e. put it just before you close the </body>).
display:none won't do anything; the markup has to be moved so that it's not in the first few readable lines of text when the DOM node tree is parsed.
I am not a coder, but I am able to get my way around code most of the time. However, I have found that this is the best place to ask questions relating to code.
I have been working on a website for a client and I am at 95% - the only problem I have is the Facebook like box. I have found several tutorials on the web for modifying the like box CSS, and I have implemented most of the recommendations, but with no favorable results.
Please - stackoverflow help!
I know jQuery/JavaScript is a very powerful language, and the Facebook like button uses a JavaScript iframe/XFBML.
What code would you use if you were to modify the like box CSS elements before loading them?
I say "load" because I am loading my like box via ".load" AJAX, so when a user clicks the Facebook button, jQuery loads it.
In short: how would I edit a CSS file on the fly and then load the edited version afterwards?
thanks
The key problem that you'll have here is that FB's Like button is loaded inside an iframe - a self-contained HTML document within your page (if you use Firebug or the WebKit inspector to inspect the like button, you'll see it has its own <html> and <body> nested inside the <iframe>).
The thing about these self-contained pages is that you can't access or manipulate them from the surrounding document (your page). You can change the 'src' attribute (telling the iframe to load a new page), but you can't apply or change styles on the elements inside the page. This is a security limitation that browsers have.
I know that it is possible to have a custom-styled like button, but I don't think it's done with the iframe method.