I manage a course management system (Blackboard, notably) and am having an issue that I have to fix myself because Blackboard doesn't recognize it. To cut it short, there is a problem with rendering content in a later Service Pack of Blackboard that stems from a JavaScript issue. Here is my dilemma:
The frame's dimensions for this content area are set by a function called "setIFrameHeightAndWidth". I found the file in the Blackboard installation, and have posted its contents in this Pastebin link:
http://pastebin.com/F4vVeD4v
I am constantly making edits to these two lines:
iframeElement.style.height = iframeElement.contentWindow.document.body.scrollHeight + frameHeight + 300 + 'px';
iframeElement.style.width = '100%';
(The 100% change I made myself.)
But when I save this file and reload the page, the changes do not always stick/apply. Here is what happens when I go to the specific page that loads content by calling a function in this file.
Blackboard takes that page.js file and copies it into 3 different files (for reasons unknown), names them 2F88F5F765F4753D1239E6FC3F898242.js, 04785022C06B7A2CD3E35B74D652973C.js, and A4B16A1C4776F93BE8C1A0BF21AB7C41.js, and puts them in our external Blackboard content directory, which for my dev server is e:\blackboard\content\vi\BBLEARN\branding__js__.
Those files seem to be copies of, or rather take on the contents of, the page.js file. I've confirmed this, but sometimes when I reload the page the changes do not stick. Here is why I think this is happening, and this is my question.
If I use Chrome's or Firefox's inspector to look at the page's resources and search for the function setIFrameHeightAndWidth, it returns all three files, the ones I mentioned above with the alphanumeric names, and inside them my changes are not reflected. I suspect that if the page was visited earlier, those three files get cached, and I do not want this to happen. Clearing my cache fixes the problem, but I do not want to burden our user base with this if it's possible not to. I noticed that at the top of the page.js file there is a comment that says:
Only include the contents of this file once - for example, if this is included in a lightbox we don't want to re-run all of this - just use the loaded version.
Is there something in the code that is preventing this from being called more than once? Is there a way I can prevent this specific file from being cached so that changes always get instantly reflected? Thank you.
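For reference, the general cache-busting technique is to change the script URL's query string so browsers treat it as a new resource; a made-up example (not Blackboard's actual markup):
<!-- a changed query string bypasses the cached copy of the file -->
<script src="/branding/page.js?v=2"></script>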
I have a website that has a fairly complex and often-changing structure. To give a sense of what this website is like, here is a pretend version of its file structure:
/Website Directory
    /HTML
        home.html
        /Area1
            area1home.html
            /sub1Area1
                sub1Area1home.html
                /sub1Sub1Area1
                    s1s1a1p1.html
                    s1s1a1p2.html
                    s1s1a1p3.html
                /subSubArea2
                    s1s1a2p1.html
                    s1s1a2p2.html
                    s1s1a2p3.html
                    s1s1a2p4.html
            /sub1Area2
                ... (similar to /sub1Area1)
        /Area2
            ... (similar to /Area1)
    /images
        ... (structured similar to /HTML)
    /CSS
        style.css
    /JavaScript
        script.js
I'll be frequently adding and removing files and folders as I develop the site, and each page should have links that allow the user to navigate the tree structure. I'll use a breadcrumbs link bar with dropdowns to accomplish this. However, because these files change so often, I don't want to manually write in the links on each page. Rather, when pages or folders are added or removed, the links on the surrounding pages should be populated automatically. So if you're at page s1s1a1p1.html, the breadcrumbs bar displays
Home > Area1 > Sub1Area1 > sub1Sub1Area1 > s1s1a1p1.html
and the Area1 button is a dropdown with the other areas (say, Area1, Area2, ... for options). Likewise for the Sub1Area1 button and all the others in the breadcrumbs bar.
And besides the links, I'll also have image files used to make a slide-show, and I'd like these to update automatically too. That would mean that when an image is added to the folder, the page should automatically include it in the slide-show.
My question is, what is the best way to accomplish this? Should I write some JavaScript or jQuery script that explores the file structure relative to where on the server the script was called from, and writes out the links in the HTML? I guess it would have to be a server-side script if so, right? Would the cost of running a script to re-build the HTML every time a page is accessed be substantial? There might be 100 pages altogether.
Or should I do something else entirely? Like not handle this on the server-side, but write all the files locally, and then after every edit, run a Python script that's designed to go through all the files and update the links--then the HTML files will have the links coded directly in, but I wouldn't have to manually edit them every time.
Per @Eric's comment, this seems ideal and not something I had been aware of: https://jekyllrb.com/tutorials/navigation/
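For what it's worth, the build-script idea above could be quite small; here is a rough Node.js sketch (the directory name and output format are assumptions, not part of the actual site):
// Walks the HTML tree and prints a breadcrumb trail for every page.
// A real build step would write these trails into each page instead of printing them.
const fs = require('fs');
const path = require('path');

function walk(dir, crumbs) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      walk(path.join(dir, entry.name), crumbs.concat(entry.name));
    } else if (entry.name.endsWith('.html')) {
      // e.g. "Home > Area1 > sub1Area1 > sub1Sub1Area1 > s1s1a1p1.html"
      console.log(['Home', ...crumbs, entry.name].join(' > '));
    }
  }
}

walk('HTML', []);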
I wanted to know if there was any way to control browser painting. For example, I'd like to load elements at the top of the page first so users can see content straight away. The elements at the bottom of the page can load last, as the user will not see them until they scroll down.
I'm looking to optimize my site, which currently has a 6 second load time, and I'd like to get it down to 1 second. This is mostly being caused by JS and images. I know that reducing both of these will mean I won't need to worry about directing the painting, but out of interest I just wanted to know if it was possible?
Apologies if my understanding of browser painting is very basic
It's not that difficult; all you need is Ajax. Load the initial markup and then load the rest of the page via Ajax.
Just load the page with the little markup you initially want to show to the user. Then, as the user scrolls down, you can make Ajax calls to get XML, JSON, or HTML files and render them on your page, for example:
$(window).on("scroll", function() {
    var $document = $(document);
    var $window = $(this);
    if ($document.scrollTop() >= $document.height() - $window.height() - 400) {
        // make ajax call here and load the data
    }
});
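For example, the call inside that handler might look like this (the URL and container id are placeholders):
// Fetch the next chunk of markup and append it to the page.
$.get("/content/next-section.html", function(html) {
    $("#content").append(html);
});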
Also read this
After looking into this further I found this article
http://www.feedthebot.com/pagespeed/prioritize-visible-content.html
which provides a good way of directing which parts of the page are rendered first. By separating your content into above-the-fold and below-the-fold content, you can decide what needs to be delivered first, i.e. your main content rather than sidebar ads. Using inline styles for your above-the-fold content will make it appear very quickly since it won't need to wait for an external request.
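A minimal sketch of that idea (selectors and file names are placeholders):
<head>
    <style>
        /* critical, above-the-fold rules inlined so the first paint doesn't wait on a request */
        header, .hero { margin: 0; }
    </style>
    <!-- defer the full stylesheet so it doesn't block the first paint -->
    <link rel="stylesheet" href="css/full-styles.css" media="print" onload="this.media='all'">
</head>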
But this is only good for simple CSS; if pages require complex CSS, then it's better to use an external file because:
"When you use external CSS files the entire file is cached (remembered) by the browser so it doesn't have to take the same steps over and over when a user goes to another page on your website. When you inline your CSS, this does not occur and the CSS is read and acted upon again when a visitor goes to another page of your website. This does not matter if your CSS is small and simple. If your CSS is large and complex, as they often tend to be, then you may want to consider that the caching of your CSS is a better choice."
http://www.feedthebot.com/pagespeed/inline-small-css.html
I have a page with a lot of JavaScript. However, once rendered, the page remains static; there are no moving things or special effects, etc. It should be possible to render the same HTML without any JavaScript at all, using only plain HTML and CSS. This is exactly what I want: I would like to get a no-JavaScript version of this particular page. Of course, I do not expect any dynamic behavior, so I am OK if buttons are dead, for example. I just want them rendered.
Now, I do not want an image. It needs to be HTML with CSS, maybe embedded in the HTML, which is fine too.
How can I do it?
EDIT
I am sorry, but I must not have been clear. My web site works with JavaScript and will not work without it. I do not want to check whether it works without JavaScript; I know it will not, and I really do not care about that. This is not what I am asking. I am asking about a specific page, which I want to grab as pure HTML + CSS. The fact that its dynamic nature is lost is of no importance.
EDIT2
There is a suggestion to grab the HTML from the DOM inspector. This is what I did first: in Chrome's developer tools I copied the root html element as HTML and saved it to a file. Of course, this does not work, because it continues to reference the CSS files on the web. I guess I should have mentioned that I want it to work from the file system.
Next, I tried saving the page as complete, with all its environment, using the browser's Save menu (browser-dependent). It saves the page and all the related files, forming a closure which can be opened from the file system. But the HTML has to be manually cleaned of all the JavaScript - tedious and error-prone.
EDIT3
I seem to keep forgetting things. Images should be preserved, of course.
I have to do a similar task on a semi-regular basis. As yet I haven't found an automated method, but here's my workflow:
Open the page in Google Chrome (I imagine Firefox also has the relevant tools);
"Save Page As" (complete page), rename the html page to something nicer, delete any .js scripts which got downloaded, move everything into a single folder;
On the original page, open the Elements tab (DOM inspector), find and delete any tags which I know cause problems (Facebook "like" buttons, for example; I also try to delete script tags at this stage because it's easier), and copy as HTML (right-click the <html> tag). Paste this into the downloaded HTML file, replacing its contents (remember to keep the DOCTYPE, which doesn't get copied);
Search all HTML files for any remaining script sections and delete them (also delete any noscript content), and search for " on" (with a leading space) to remove inline handlers (onload, onclick, etc);
Search for images (src=, url()), find common patterns in image filenames, and use regular expressions to replace them globally; for example, strip "/images/" so that src="/images/myimage.png" becomes src="myimage.png". This needs to be applied to all HTML and CSS files. Also make sure the CSS files have the correct path (href). While doing this I usually replace all href (links) with #;
Finally open the converted page in a browser (actually I tend to do this early on so that I can see if any change I make causes it to break), use the Console tab to check for 404 errors (images that didn't get downloaded or had a different name) and the Network tab to check if anything is still being loaded from the online version;
For any files which didn't get downloaded I go back to the original page and use the Resources tab to find them and download manually;
(Optional) Cull any content which isn't needed (tracker images/iframes, unused CSS, etc).
It's a big job. I'd love a tool which automated all that, but so far I haven't found one. The pages I download are quite badly made (shops) which have a lot of unusual code, so that's why there are so many steps. You might not need to follow every step.
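Part of it can be scripted, though. Something along these lines, run in the DevTools console on the original page, strips script tags and inline handlers before copying the markup (copy() is a Chrome console helper; this is only a sketch and doesn't replace the remaining steps):
// Remove script/noscript tags and on* handlers, then copy the cleaned markup to the clipboard.
document.querySelectorAll('script, noscript').forEach(function (el) { el.remove(); });
document.querySelectorAll('*').forEach(function (el) {
    Array.prototype.slice.call(el.attributes).forEach(function (attr) {
        if (attr.name.indexOf('on') === 0) el.removeAttribute(attr.name);
    });
});
copy('<!DOCTYPE html>\n' + document.documentElement.outerHTML);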
Okay, so first some background info: I am trying to embed a webpage within another page. The sub-page is basically a small web application written in javascript and html that takes in several screens of input (radio buttons, text boxes, etc.) and gives a screen with results at the end. Each of these screens can be a different size.
There are two methods I have tried using to do the embedding:
1) Copy all of the HTML and JavaScript from the sub-page into the main page and stick it in a div/table/whatever.
2) Keep the sub-page in its own file and embed it using embed/object/iframe.
Using the first method the page behaves as it should; the only real problem (aside from being kind of a messy solution) is that the sub-page I am embedding is actually generated by an external application, and every so often the page is replaced with a newer version. This more or less rules out using the first method as a long-term solution.
Now the second method has its own problems. Since the embedded JavaScript page changes in height, the frame holding it needs to vary in size with it. I'm able to change the size using any of the solutions given here; however, these do not update the size of the frame as the user progresses through each screen.
The closest solution I've been able to come up with so far is using a document.onclick handler to catch any clicking which might cause the next screen of the sub-page to come along. The handler pauses for a very short time (to allow the next screen to come up) and then calls the necessary resizing function. However, this feels like a very hacky solution, and there is also a slightly noticeable delay during which the scroll bar shows up on the side of the frame when it hasn't expanded yet to fit the new content. I'm thinking there must be a better way to do this.
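One alternative, assuming the iframe is same-origin (the element id below is a placeholder), would be to watch the embedded document for changes and resize whenever they happen; a rough sketch:
var frame = document.getElementById('sub-page-frame'); // placeholder id
frame.addEventListener('load', function () {
    var doc = frame.contentWindow.document;
    // Re-run the resize whenever the sub-page swaps screens.
    new MutationObserver(function () {
        frame.style.height = doc.body.scrollHeight + 'px';
    }).observe(doc.body, { childList: true, subtree: true, attributes: true });
});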
If the file is on the same server/domain, you could just load it in with jQuery. Here is some jQuery code:
<script type="text/javascript">
    $(document).ready(function() {
        $('#id-of-div').load('/path/to/page.html');
    });
</script>
Just change id-of-div to the id of the div that you want the page to be loaded into and change /path/to/page.html to the actual URL to the page. (you don't need the domain of it, just the path to it)
I hope this helps.
If this answers your question, please remember to click the checkmark next to this to accept this answer.
Optimizely & Visual Website Optimizer are two cool sites that allow users to perform simple A/B Testing.
One of the coolest things they do is visual DOM editing. You can visually manipulate a webpage and save the changes offline. The changes are then applied during a random visitor page view via a JS load.
How do the visual editors work?
My name is Pete Koomen, and I'm one of the co-founders of Optimizely, so I can speak for how things work on our side. Let's say you want to create an experiment on http://www.mypage.com.
You might (this is optional) start by adding your Optimizely account snippet to that page, which looks like this and never changes:
<script src="//cdn.optimizely.com/js/XXXXXX.js"></script>
The Optimizely Editor loads http://www.mypage.com inside an iframe and uses window.postMessage to communicate with the page. This only works if that page already has a snippet like the one above on it. If that's not the case, the editor will time out while waiting for a message from the iframe'd page, and will load it again via a proxy that actually inserts the snippet onto the page. This loading process allows the editor to work with pages that (a) contain an account snippet, (b) do not contain an account snippet, or (c) sit behind a firewall (case (c) requires the snippet).
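As a rough illustration of that handshake (not Optimizely's actual code; the message shape, the timeout, and loadPageViaProxy are made up):
// In the iframe'd page, the snippet announces itself to the editor...
window.parent.postMessage({ type: 'snippet-present' }, '*');

// ...and the editor waits for that message, falling back to the proxy if it never arrives.
var fallback = setTimeout(loadPageViaProxy, 5000); // loadPageViaProxy is hypothetical
window.addEventListener('message', function (e) {
    if (e.data && e.data.type === 'snippet-present') clearTimeout(fallback);
});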
Our user at this point can make changes to the page, like modifying text, swapping out images, or moving elements around. Each change that is made with the editor is encoded as a line of JavaScript that looks something like the following:
$j("img#hplogo").css({"width":254, "height":112});
|__IDENTIFIER__||____________ACTION______________|
So, you can think of a "variation" of a page as a set of JavaScript statements that, when executed on that page, cause the desired variation to appear. If you're curious, you can actually view and edit this code directly by clicking on "Edit Code" in the bottom right-hand corner of the Optimizely Editor.
Now, when you've added your account snippet to this page and started your experiment, the JS file pointed to by your account snippet will automatically bucket each incoming visitor and will then execute the corresponding JavaScript as the page is loading.
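Conceptually (a simplification, not Optimizely's real snippet; $j is the jQuery alias from the example above), that bucket-and-execute step amounts to something like:
// Pick a variation once per visitor, remember it in a cookie, and run its JS during load.
var variations = {
    original: function () { /* leave the page as-is */ },
    variation1: function () { $j("img#hplogo").css({ "width": 254, "height": 112 }); }
};
var match = document.cookie.match(/ab_bucket=(\w+)/);
var bucket = match ? match[1] : (Math.random() < 0.5 ? 'original' : 'variation1');
document.cookie = 'ab_bucket=' + bucket + '; path=/';
variations[bucket]();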
There's a lot more logic that goes into bucketing the visitor and executing these changes as quickly as possible during page load, but this should serve as a basic overview!
Pete