I know this is going to be difficult to get help for, but anyway:
In short: the page renders OK the first time on Safari (both Mac and iPhone/iPad); after a refresh, some things are not shown. Opening the page in Private mode always works. Opening it in Chrome always works.
In long: the page is a temporary solution and hacked together... Currently it is driven by Caspio (a no-code rapid development environment). The no-code approach comes at the price of limited possibilities, however. We are rewriting the system in a proper front-end/back-end environment, but in the meantime we need to get this page working.
The page consists of 2 blocks rendered by Caspio. I need to take some elements from block 2, put them into an element of block 1, and then hide block 2, all with JS. Again, the options for getting at the elements are limited by what Caspio provides; it's very dirty code which I hope to get rid of asap!
<script type="text/javascript">
window.addEventListener("load", function (e) {
  const dataPageId = "46629000bb2da6866c8b4cc09dc1";
  // Default image for the promotion (in case no image was uploaded)
  var theImage = document.createElement("img");
  theImage.setAttribute("src", "images/noImageFound.png");
  theImage.setAttribute("alt", "No Image");
  // Get the text from the placeholder virtual fields and hide them
  // (hiding them is not possible in Caspio itself)
  // First get the title
  var promoTitleVirtual = document.querySelectorAll("[id*='cbParamVirtual1']");
  var promoTitleParagraph = document.createElement("h3");
  var promoTitle = document.createTextNode(promoTitleVirtual[0].value);
  promoTitleParagraph.appendChild(promoTitle);
  promoTitleVirtual[0].style.display = "none";
  // Now the description
  var promoDescriptionVirtual = document.querySelectorAll("[id*='cbParamVirtual2']");
  var promoDescriptionParagraph = document.createElement("span");
  promoDescriptionParagraph.classList.add("w-screen");
  var promoDescription = document.createTextNode(promoDescriptionVirtual[0].value);
  promoDescriptionParagraph.appendChild(promoDescription);
  promoDescriptionVirtual[0].style.display = "none";
  // The image
  var images = document.getElementsByTagName("img");
  for (var i = 0; i < images.length; i++) {
    if (images[i].src.includes(dataPageId)) {
      theImage = images[i];
    }
  }
  var promotionImage = document.getElementById("promotionImage");
  // Reposition the radio buttons so they look better
  var promoAnswers = document.querySelectorAll("[class*='cbFormBlock24']");
  promotionImage.appendChild(promoTitleParagraph);
  promotionImage.appendChild(theImage);
  promotionImage.appendChild(promoDescriptionParagraph);
  // make the description span as wide as the image
  theImage.parentNode.lastChild.style.width = theImage.width + "px";
  promotionImage.appendChild(promoAnswers[0]);
});
</script>
The image is always shown; the Title and the Description appear only the first time.
Found the problem... this can be interesting for those using the Caspio environment!
Apparently Caspio doesn't load all the data immediately, so even though I wait until everything is loaded with:
window.addEventListener("load", function (e)
it is not...
It's hacky (but so is this code anyway; can't wait until we have it rewritten), but I added a:
setTimeout(function() {
    ...
}, 300)
to my code so it waits until Caspio is done with its business. (200 was OK; I just put 300 to be on the safe side.)
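For reference, a minimal sketch of how the delay wraps the original handler body (the 300 ms is just the empirically found value from above; everything inside is unchanged):
window.addEventListener("load", function (e) {
  // give Caspio extra time to finish injecting its data
  // before the element-moving code runs
  setTimeout(function () {
    // ... the entire body of the original load handler goes here ...
  }, 300);
});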
I've scouted many forums, blogs, questions, and sites but cannot seem to find a solution that works for me. I am trying to load images using pure JavaScript, without halting the rest of the page load and without relying on third-party libraries.
On the site I work on there may be anywhere between 0 and 30 images to load, of different resolutions, and as you may imagine, on slower connections that can slow performance to a halt (which is what I am trying to prevent now: I want the user to see the info on the page and worry less about images hogging its performance).
On my latest attempt:
(function () {
    // jQuery is unavailable here; using the plain JavaScript counterpart.
    var carouselDivs = document.querySelectorAll('#caruselImagesDivs div[data-url]');
    var carouselIndicators = document.querySelector('.carousel-indicators');
    var carouselInner = document.querySelector('.carousel-inner');
    for (var i = 0; i < carouselDivs.length; i++) {
        var liIndicator = document.createElement('LI');
        liIndicator.dataset.target = "#property_image_gallery";
        liIndicator.dataset.slideTo = i + 1;
        var divItem = document.createElement('DIV');
        divItem.className = "item";
        var image = document.createElement('IMG');
        image.dataset.src = carouselDivs[i].dataset.url;
        image.className = 'img-responsive center-block';
        // for some reason I thought this might work, but it hasn't.
        image.onload = function () {
            image.src = image.dataset.src;
            image.onload = function () { };
        };
        image.src = '/Images/blankbeacon.jpg';
        divItem.appendChild(image);
        carouselIndicators.appendChild(liIndicator);
        carouselInner.appendChild(divItem);
    }
})();
I also tried deferring the loading of the images (the code section at the top didn't have the onload event at that point):
function initImg() {
    var imgs = document.querySelectorAll('#property_image_gallery .carousel-inner .item img');
    for (var i = 0; i < imgs.length; i++) {
        var imgSource = imgs[i].dataset.src;
        imgs[i].src = imgSource;
    }
}
window.onload = initImg;
Two hours in, no results. I am stumped. What am I missing? How can I force the browser to just move on with life and load those images later on?
For a start, you can load the images one after another using a recursive function:
function addimg(imgs, i) {
    if (i >= imgs.length) return; // done: every image has been loaded
    imgs[i].onload = function () {
        imgs[i].onload = null; // kill closure -> free the js memory
        addimg(imgs, i + 1);   // only then start the next image
    };
    imgs[i].src = imgs[i].dataset.src; // start this download
}
Start it once the HTML has loaded completely:
window.onload = function () {
    addimg(document.querySelectorAll('img[data-src]'), 0);
};
(a sketch - it assumes the real URLs are stored in data-src, as in your code)
You can also use an image compressor tool to make the images load faster:
http://optimizilla.com/
This is a great article that might also help you
https://varvy.com/pagespeed/defer-images.html
A few suggestions:
If the images are not in the current viewport and are taking up too much initial bandwidth, then I suggest lazy loading the images when the user is in (or close to) the same viewport as the images (see the sketch below).
You can also try deferring the images like you are doing, but ensure the script is run right before the end body tag.
I also suggest making sure the images are correctly compressed and resized (you have an image there that is 225 kB, which isn't ideal).
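As a sketch of the first suggestion (not from the original answer - the img[data-src] markup and the 200px margin are assumptions), lazy loading can be done with an IntersectionObserver in browsers that support it:
// load an image only when it approaches the viewport
var lazyImages = document.querySelectorAll('img[data-src]');
var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
        if (entry.isIntersecting) {
            var img = entry.target;
            img.src = img.dataset.src; // start the real download
            observer.unobserve(img);   // one-shot per image
        }
    });
}, { rootMargin: '200px' }); // begin a little before the image scrolls in
for (var i = 0; i < lazyImages.length; i++) {
    observer.observe(lazyImages[i]);
}
Older browsers without IntersectionObserver would need a scroll-handler fallback instead.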
I'm using Edge's emulation mode to test some work, and one of the projects I work on requires IE8. The emulator is pretty useful for debugging stuff that the original IE8 does a good job of blackboxing. I'm trying to find a way around this bug, since Microsoft isn't willing to fix it.
The problem is that the IE8 emulator hangs on SVG image load. I'm currently using this SVG fallback library, which works great on the real IE8, but I was wondering whether there is a way to modify events or object prototypes using JavaScript to change the behavior of the browser before it tries to load SVG images while parsing the HTML. Is there a way to solve this issue, or should I just live with this bug? I have this dirty workaround which does the trick, but I'm hoping to find a more proactive solution.
var fixMySVG = setInterval(function () {
    var elements = document.getElementsByTagName('img');
    for (var i = 0; i < elements.length; i++) {
        var element = elements[i];
        element.src = element.src.replace(/^(.+)(\.svg)(\?.)*$/ig, '$1.' + 'png' + '$3');
    }
    if (document.readyState == 'complete') {
        clearInterval(fixMySVG);
    }
}, 100);
There is no error; the image is just stuck in an 'uninitialized' state (so I cannot use the onerror event). I'm also unaware of any onbeforeload event I could use.
Is using an interval the only solution?
Edit
I realize there is no perfect solution, but for basic <img> elements and backgroundImage styles, using an interval seems to do a good job without a performance hit. On top of that, the fallback images seem to load faster. I updated my SVG fallback to use an interval instead of onload events, which solves both the IE8 emulator and the real IE8.
It's a really odd bug, since there is no older-version emulation mode in Edge, just a mobile one and user-agent string emulation, which will just allow you "to debug errors caused by browser sniffing", but is in no way related to missing feature support.
Using your fallback is one option out of many, but there is no "clean" way to do this. On top of that, it will not handle SVG images used via <object>, <iframe> or <embed> elements, nor inline <svg> elements.
So this doesn't directly address your issue, which should be fixed by the IE team since it's a bug in their browser, but just for the hack, here is a way to change the src of an image before the fetching of the original one starts.
Disclaimer
Once again, this is a hack and should not be used on any production or development site, except maybe for an edge debugging case like yours and for experimentation, but that's all!
Note: this will work in modern browsers, including Edge with the IE8 user-agent string emulation set, but not in the original IE8.
Before the dump
This code should be called in the <head> of your document, preferably as the topmost element, since everything executed before it will be executed twice.
Read the comments.
<script id="replaceSrcBeforeLoading">
// We need to set an id on the script tag
// so we can avoid executing it in a loop.
(function replaceSrcBeforeLoading(oldSrc, newSrc) {
    // first stop the loading of the document
    if ('stop' in window) window.stop();
    // IE didn't implement window.stop()
    else if ('execCommand' in document) document.execCommand("Stop");
    // clear the document
    document.removeChild(document.documentElement);
    // the function to rewrite our actual page from the xhr response
    var parseResp = function (resp) {
        // create a new HTML doc
        var doc = document.implementation.createHTMLDocument(document.title);
        // set its innerHTML to the response
        doc.documentElement.innerHTML = resp;
        // search for the image you want to modify;
        // you may need to tweak it to search for multiple images, or even other elements
        var img = doc.documentElement.querySelector('img[src*="' + oldSrc + '"]');
        // change its src
        img.src = newSrc;
        // remove this script so it's not executed in a loop
        var thisScript = doc.getElementById('replaceSrcBeforeLoading');
        thisScript.parentNode.removeChild(thisScript);
        // clone the fetched document
        var clone = doc.documentElement.cloneNode(true);
        // append it to the original one
        document.appendChild(clone);
        // search for all script elements;
        // we need to create new script elements in order to get them executed
        var scripts = Array.prototype.slice.call(clone.querySelectorAll('script'));
        for (var i = 0; i < scripts.length; i++) {
            var old = scripts[i];
            var script = document.createElement('script');
            if (old.src) {
                script.src = old.src;
            }
            if (old.innerHTML) {
                script.innerHTML = old.innerHTML;
            }
            old.parentNode.replaceChild(script, old);
        }
    };
    // the request to fetch our current doc
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (this.readyState == 4 && (this.status == 200 || this.status == 0)) {
            var resp = this.responseText || this.response;
            parseResp(resp);
        }
    };
    xhr.open('GET', location.href);
    xhr.send();
})('oldSrc.svg', 'newSrc.svg');
</script>
And a live example, which won't work with the IE8 UA string since plnkr.co just doesn't allow this browser on its website :-/
I have a web page which is chock full of JavaScript, and a few references to resources like images for the JavaScript to work with. I use a websocket to communicate with the server; the JavaScript parses the socket's data and adjusts the page presentation accordingly. It all works fine, except when it doesn't.
The problem appears to be that the page contains images which I want to display parts of, under JavaScript control. No matter how I play with defer, there are apparently situations in which the images are not fully downloaded before the JavaScript tries to use them. The result is that images are missing when the page is rendered, some small percentage of the time.
I'm not very used to languages and protocols where you don't have strict control over what happens when, so the server and browser shipping and executing things in an uncontrolled, asynchronous order annoys me. I'd like to stop depending on apparently unreliable tricks like defer. What I'd like to do is download the whole page, then open my websocket and send my images and other resources down through it. When that process is complete, I'll know it's safe to accept other commands from the websocket and get on with doing whatever the page does. In other words, I want to subvert the browser's asynchronous handling of resources and handle it all serially under JavaScript control.
Pouring an image file from the server down a socket is easy, and I have no trouble coming up with protocols to do it. Capturing the data as byte arrays is also easy.
But how do I get them interpreted as images?
I know there are downsides to this approach. I won't get browser caching of my images and the initial page won't load as quickly. I'm ok with that. I'm just tired of 95% working solutions and having to wonder if what I did works in every browser imaginable. (Working on everything from IE 8 to next year's Chrome is a requirement for me.)
Is this approach viable? Are there better ways to get strict, portable control of resource loading?
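As for the "how do I get them interpreted as images" part, here is a minimal sketch, assuming the bytes arrive as a binary websocket frame and the MIME type is known (the endpoint URL is made up):
var ws = new WebSocket('wss://example.com/resources'); // hypothetical endpoint
ws.binaryType = 'arraybuffer'; // receive binary frames as ArrayBuffers
ws.onmessage = function (event) {
    // wrap the raw bytes in a Blob and let the browser decode it
    var blob = new Blob([event.data], { type: 'image/png' });
    var img = new Image();
    img.onload = function () {
        URL.revokeObjectURL(img.src); // free the object URL once decoded
    };
    img.src = URL.createObjectURL(blob);
    document.body.appendChild(img);
};
Note that Blob and URL.createObjectURL are not available in IE8, so there the bytes would have to be delivered as a base64 data: URI instead.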
You still haven't been very specific about what resources you are waiting for other than images, but if they are all images, then you can use this loadMonitor object to monitor when N images are done loading:
function loadMonitor(/* img1, img2, img3 */) {
    var cntr = 0, doneFn, self = this;

    function checkDone() {
        if (cntr === 0 && doneFn) {
            // clear out doneFn so nothing done in the doneFn callback
            // can accidentally cause the callback to get called again
            var f = doneFn;
            doneFn = null;
            f.call(self);
        }
    }

    function handleEvents(obj, eventList) {
        var events = eventList.split(" "), i;
        function handler() {
            --cntr;
            for (i = 0; i < events.length; i++) {
                obj.removeEventListener(events[i], handler);
            }
            checkDone();
        }
        for (i = 0; i < events.length; i++) {
            obj.addEventListener(events[i], handler);
        }
    }

    this.add = function(/* img1, img2, img3 */) {
        if (doneFn) {
            throw new Error("Can't call loadMonitor.add() after calling loadMonitor.start(fn)");
        }
        var img;
        for (var i = 0; i < arguments.length; i++) {
            img = arguments[i];
            if (!img.src || !img.complete) {
                ++cntr;
                handleEvents(img, "load error abort");
            }
        }
    };

    this.start = function(fn) {
        if (!fn) {
            throw new Error("must pass completion function as loadMonitor.start(fn)");
        }
        doneFn = fn;
        checkDone();
    };

    // process constructor arguments
    this.add.apply(this, arguments);
}
// example usage code
var cardsImage = new Image();
cardsImage.src = ...
var playerImage = new Image();
playerImage.src = ...
var tableImage = new Image();

var watcher = new loadMonitor(cardsImage, playerImage, tableImage);

// .start() tells the monitor that all images are now in the monitor
// and passes it our callback so it can now tell us when things are done
watcher.start(function() {
    // put code here that wants to run when all the images are loaded
});

// the .src value can be set before or after the image has been
// added to the loadMonitor
tableImage.src = ...
Note: you must make sure that every image you put in the loadMonitor gets a .src assigned, or the loadMonitor will never call its callback, because that image will never finish loading.
Working demo: http://jsfiddle.net/jfriend00/g9x74d2j/
I want to implement a plug-in in MooTools that downloads pictures serially. Let's say there are pictures in img tags inside a div with the class imageswrapper. Each image needs to be downloaded only after the previous one has finished loading, and so on until all the images are loaded.
window.addEvent('domready', function(){
    // get all images in the div with class 'imageswrapper'
    var imagesArray = $$('.imageswrapper img');
    var tempProperty = '';
    // hide them and move 'src' into the 'data-src' attribute to cancel the background download
    for (var i = 0; i < imagesArray.length; i++) {
        tempProperty = imagesArray[i].getProperty('src');
        imagesArray[i].removeProperty('src');
        imagesArray[i].setProperty('data-src', tempProperty);
    }
    tempProperty = '';
    var iterator = 0;
    // select the block into which we will inject the pictures
    var injDiv = $$('div.imageswrapper');
    // recursive function that executes itself after a new image has loaded
    function imgBomber() {
        // exit condition of the recursion
        if (iterator > (imagesArray.length - 1)) {
            return false;
        }
        tempProperty = imagesArray[iterator].getProperty('data-src');
        imagesArray[iterator].removeProperty('data-src');
        imagesArray[iterator].setProperty('src', tempProperty);
        imagesArray[iterator].addEvent('load', function() {
            imagesArray[iterator].inject(injDiv);
            iterator++;
            imgBomber();
        });
    }
    imgBomber();
});
There are several issues I can see here. You have not actually said what the issue is, so... this is more of a code review / ideas for you until you post the actual problems with it (or a jsfiddle).
You run this code in domready, where the browser may have already initiated the download of the images based upon the src property. You will be better off sending data-src from the server directly, before you even start.
Probably the biggest problem: var injDiv = $$('div.imageswrapper'); will return a COLLECTION - so [<div.imageswrapper></div>, ...] - which cannot take an inject, since the target can be multiple DOM nodes. Use var injDiv = document.getElement('div.imageswrapper'); instead.
There are issues with the load events and .addEvent('load') cross-browser. They need to be cleaned up after execution, as in IE < 9 load will fire every time an animated gif loops, for example. Also, you don't have onerror and onabort handlers, which means your loader will stop at a 404 or any other unexpected response (see the sketch below).
You should not use data-src to store the data; it's slow. MooTools has Element storage - use el.store('src', oldSource), el.retrieve('src') and el.eliminate('src'). Much faster.
You expose the iterator to the upper scope.
Use the MooTools API - use .set() and .get(), not .getProperty() and .setProperty().
for (var i) iterators are unsafe to use for async operations. The control flow of the app will continue to run, and different operations may reference the wrong iterator index. Looking at your code, this shouldn't be the case here, but you should use the MooTools .each(fn(item, index), scope) method from Elements / Array.
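To illustrate the event-cleanup point, a minimal sketch (not from the original answer; the names are made up) of a one-shot MooTools loader that advances whatever the server returns:
function loadOne(img, src, done) {
    // one handler for every outcome, so a 404 or an abort can't stall the queue
    var handler = function () {
        // detach all three handlers after the first event fires;
        // in IE < 9 'load' would otherwise fire again on every gif loop
        img.removeEvents({ load: handler, error: handler, abort: handler });
        done();
    };
    img.addEvents({ load: handler, error: handler, abort: handler });
    img.set('src', src); // attach handlers first, then set src
}
Chaining loadOne calls via the done callback gives the pipelined behaviour without exposing an iterator to the outer scope.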
Anyway, your problem has already been solved at several layers.
E.g., I wrote pre-loader - a framework-agnostic image loader plugin that can download an array of images either in parallel or pipelined (like you are trying to do), with onProgress etc. events - see http://jsfiddle.net/dimitar/mFQm6/ and the screenshots at the bottom of the readme.md.
MooTools also solves this (without the wait on the previous image) via Asset.js - http://mootools.net/docs/more/Utilities/Assets#Asset:Asset-image - and Asset.images for multiple images. See the source for inspiration: https://github.com/mootools/mootools-more/blob/master/Source/Utilities/Assets.js
Here's an example doing this via my pre-loader class: http://jsfiddle.net/dimitar/JhpsH/
(function(){
    var imagesToLoad = [],
        imgDiv = document.getElement('div.injecthere');

    $$('.imageswrapper img').each(function(el){
        imagesToLoad.push(el.get('src'));
        el.erase('src');
    });

    new preLoader(imagesToLoad, {
        pipeline: true, // sequential loading like yours
        onProgress: function(img, imageEl, index){
            imgDiv.adopt(imageEl);
        }
    });
}());
Here's a conundrum I've discovered.
I have a script that opens a file in InDesign, does some work to it, then closes it. To help speed it up, I have turned off displaying the file by using the false argument while opening the file, like so:
var document = app.open(oFile, false);
Sometimes, while doing some work on an open file, the script may need to resize a certain page from 11 inches tall to 12.5 inches tall, thusly:
if (padPrinted) {
    for (var p = 0; p < outputRangeArray.length; p++) {
        var padPage = document.pages.item(outputRangeArray[p]);
        if (padPage.bounds[2] - padPage.bounds[0] === 11) {
            padPage.select();
            var myY1 = padPage.bounds[0] -= 0.75;
            var myX1 = padPage.bounds[1];
            var myY2 = padPage.bounds[2] += 0.75;
            var myX2 = padPage.bounds[3];
            padPage.reframe(CoordinateSpaces.INNER_COORDINATES, [[myX1*72, myY1*72], [myX2*72, myY2*72]]);
        }
    }
}
This has been working flawlessly for me for quite some time, but now it sometimes errors on the line padPage.select() with the message:
No document windows are open.
If I go back to the line which opens the file and delete the false argument, then the script works fine.
So, I'd like to know if there's any way to get around this. I'd like to have the documents open without displaying them, but still have the ability to resize a page when I need to. Any ideas?
Why do you call padPage.select();? It doesn't look like your code needs it.
Edit:
On page 42 of the Adobe InDesign CS6 Scripting Guide: JavaScript, there is a sample snippet that reframes the page and doesn't call select(). The snippet comes from a sample script in the InDesign CS6 Scripting SDK (scroll to the bottom).
The path of the sample script is Adobe InDesign CS6 Scripting SDK\indesign\scriptingguide\scripts\JavaScript\documents\PageReframe.jsx
Inspecting this script, we see that it never calls select(). In fact, PageResize.jsx never calls select() either.
Also, while InDesign Server can resize and reframe pages, you'll notice that the select() function is missing entirely. It would seem that select() affects only the GUI.
In the face of all this evidence, I would wager that the scripting guide is wrong when it says "you must select the page". Try removing that line and see if it works.
Edit 2
On an unrelated note, the following lines might be troublesome:
var myY1 = padPage.bounds[0] -= 0.75;
var myX1 = padPage.bounds[1];
var myY2 = padPage.bounds[2] += 0.75;
The += and -= operators will attempt to modify the bounds directly, but the bounds are read-only and can only be modified with methods such as resize or reframe. I would recommend changing it to this:
var myY1 = padPage.bounds[0] - 0.75;
var myX1 = padPage.bounds[1];
var myY2 = padPage.bounds[2] + 0.75;
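Putting both edits together (drop the select() call and avoid mutating bounds), the loop from the question would look something like this sketch (untested):
if (padPrinted) {
    for (var p = 0; p < outputRangeArray.length; p++) {
        var padPage = document.pages.item(outputRangeArray[p]);
        if (padPage.bounds[2] - padPage.bounds[0] === 11) {
            // no padPage.select() - reframe() doesn't need a selection,
            // and select() fails when the document has no window
            var myY1 = padPage.bounds[0] - 0.75; // read, don't mutate
            var myX1 = padPage.bounds[1];
            var myY2 = padPage.bounds[2] + 0.75;
            var myX2 = padPage.bounds[3];
            padPage.reframe(CoordinateSpaces.INNER_COORDINATES,
                [[myX1 * 72, myY1 * 72], [myX2 * 72, myY2 * 72]]);
        }
    }
}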