I've got pjax up and running on my test site, and it works a treat. However, it relies heavily on a lot of JavaScript widgets and hence leaks memory.
Since I don't have time right now to rewrite every widget, I thought a simple solution would be to do a normal page load after, say, 20 pjax page transitions. A simple plan... but it doesn't seem to be possible.
$.pjax.disable();
...still fetches the content via AJAX, but doesn't change the page.
$(document).pjax();
...doesn't change the behaviour
$.pjax.handleClick = function (event, container, options) { return; };
...doesn't change the behaviour
$.pjax.state.timeout = 0;
...doesn't change the behaviour
delete $.pjax;
...breaks navigation
$.pjax.defaults.timeout=0;
...doesn't change the behaviour
How do I suspend pjax?
If you add a listener for pjax:beforeSend, you can capture the requested URL, set location.href yourself and return false to cancel the pjax behavior. That is how I'm doing it with the following code:
var pageLoadCounter = 0;
var MAX_PAGE_LOADS = 20;

$(".pjaxContainer").on("pjax:beforeSend", function (e, xhr, settings) {
    if (++pageLoadCounter > MAX_PAGE_LOADS) {
        // URI can be found at https://github.com/medialize/URI.js
        var uri = URI(settings.url);
        // Remove _pjax from query string before reloading
        uri.removeSearch("_pjax");
        location.href = uri.toString();
        return false;
    }
});
I've discovered that changing the id of the pjax container div gives me the desired result, although this seems like a bit of a kludge. It would also be possible by changing the timeout of the AJAX request to 0, but I still need to work out how to do this.
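For anyone curious, a minimal sketch of the id kludge (the pjax-container id is hypothetical; the counter is the same idea as in the answer above):

var pageLoadCounter = 0;
var MAX_PAGE_LOADS = 20;

$(document).on("pjax:end", function () {
    if (++pageLoadCounter >= MAX_PAGE_LOADS) {
        // Renaming the container means pjax can no longer find its target,
        // so subsequent navigation falls back to normal page loads.
        document.getElementById("pjax-container").id = "pjax-container-retired";
    }
});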
I did ask about this on the pjax GitHub page but so far have not received a response.
Preliminary context sharing
I am asked to manually perform a very repetitive action on a website that I do not own and for which I do not have any API access.
The only hope I have of automating these actions is to write some JavaScript and execute it in the browser, just to automate the actions that I would otherwise be doing manually.
Apologies in advance if this question already has an answer somewhere else; I'm a backend developer, and with my limited knowledge of front-end development I didn't manage to find an equivalent.
Explanation of the issue
Say I have to post several entries, one by one, into a form. I have written the following code (oversimplified just for demonstration purposes):
// This array of JSON objects is produced by an upstream service
var inputs = [
    { /* ... */ },
    { /* ... */ },
    { /* ... */ }
];

for (var i = 0; i < inputs.length; i++) {
    fillSomeForms(inputs[i]);
    clickSubmit(); // <-- this will make the page reload, and so the script execution stops
}
The problem I have here is very basic: after the first iteration of the loop, when I invoke clickSubmit(), the page reloads (because the submission is a POST followed by a redirect to a "submit next" page) and so the JS stops executing.
I have tried to look around the web for similar issues, and I've seen people tweaking localStorage in order to resume the execution of their script.
However, that seems to assume the script is a resource of the front-end code, which is not the case for me (I don't own the code; I simply inject this JS into the browser's developer console and execute it to save some time).
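For reference, that pattern looks roughly like this (a sketch; the pendingInputs key is made up, fillSomeForms/clickSubmit are the helpers from above, and it relies on something re-running the script after each reload, which is exactly what I can't do):

// Sketch of the localStorage-resume pattern
var remaining = JSON.parse(localStorage.getItem('pendingInputs') || '[]');
if (remaining.length > 0) {
    var next = remaining.shift();
    localStorage.setItem('pendingInputs', JSON.stringify(remaining));
    fillSomeForms(next);
    clickSubmit(); // the page reloads here; the script must run again to continue
}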
Is there any way to achieve this? I am not necessarily looking for a clean solution, just for something that could make this work and spare us some monkey work (nothing of what I'm doing here is clean, but the system administrators do not want to provide access to the REST APIs that the platform actually provides).
When you inject into the console, load a copy of the page into an iframe, and submit your forms from that copy:
const inputs = [ /* a convenient inputs array */ ];
const pageCopy = document.body.appendChild( document.createElement( "iframe" ) );

pageCopy.addEventListener( "load", () => {
    // The page copy has finished loading / reloading, let's submit more stuff
    if( inputs.length > 0 ) {
        const moreInput = inputs.pop();
        console.log( "Submitting inputs: ", moreInput );
        // This shouldn't work, but let's clone the current DOM into the iframe...
        pageCopy.contentDocument.body.parentElement.innerHTML =
            document.body.parentElement.innerHTML;
        fillSomeFormsInPageCopy( pageCopy.contentDocument, moreInput );
        pageCopy.contentDocument.querySelector( "#submitButtonId" ).click();
        console.log( "Clicked submit. Will wait for iframe to finish reloading..." );
        // Okay, we clicked and the iframe is reloading. This event will fire
        // again as soon as it's done reloading, ready to submit more form data.
    }
    else if( inputs.length === 0 ) {
        console.log( "Finished submitting all the inputs in the array!" );
    }
} );

pageCopy.src = document.location.href;
Please understand I can't test this code. (I'm not even sure the click() event can be fired across an iframe boundary, for security, but I hope it can.)
Hopefully you can understand how to use the pageCopy's document to find your form elements and set their values. E.g., you can use
pageCopy.contentDocument.getElementById( "form-entry-id-1" ).value =
    moreInput[ "form-entry-id-1" ];
In case it may help someone in the future: I was finally able to work around the problem by opening a new tab (and working in that tab) for each iteration of my loop.
Something like this:
while (inputs.length > 0) {
    const singleInput = inputs.pop();
    // Note: the browser may block repeated window.open calls unless
    // pop-ups are allowed for the site
    const newWindow = window.open('about:blank', '_blank');
    newWindow.addEventListener('load', () => {
        newWindow.document.body.parentElement.innerHTML = document.body.parentElement.innerHTML;
        fillForm(newWindow.document, singleInput); // <-- fillForm uses the document passed as a parameter to perform the different gets/sets
        newWindow.document.getElementById("submit-button").click();
    });
}
I have a function named back() which is used for AJAX calls. I have an array stack containing the last 5 search results, and that back function switches to the previous result set (according to that array stack); it even changes the URL using window.history.pushState() when you click on the back button.
That back button I was talking about is an element inside the page which invokes the back() function. Now I want to invoke the back() function also when the user clicks the back button of the browser. Something like this:
window.onhashchange = function() {
    back(); // this function also changes the url
}
But sadly window.onhashchange fires twice when I click the back button of the browser, because window.onhashchange is also invoked when you change the URL using window.history.pushState().
Anyway, how can I detect what changed the URL? Was it my JS code or the back button of the browser?
You can use performance.navigation.type.
At any given point, for example on document.onload, you can read the value of type and check which of the following it is (a short sketch follows the list):
0: The page was accessed by following a link, a bookmark, a form submission, a script, or by typing the URL in the address bar.
1: The page was accessed by clicking the Reload button or via the Location.reload() method.
2: The page was accessed by navigating into the history.
255: Any other way.
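A minimal sketch of that check, reusing the back() function from the question (type 2 is TYPE_BACK_FORWARD):

window.addEventListener('load', function () {
    // 2 means the page was reached by navigating into the history (back/forward)
    if (performance.navigation.type === 2) {
        back();
    }
});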
Just beware that support is limited according to the compatibility table.
However, from the looks of it, the table seems outdated: it says performance.navigation is not supported on Chrome, but I just tested it and it works as expected on my Chrome version (67.0).
One solution is to implement the onunload event together with localStorage.
This is off the top of my head, so it may need corrections, but it's a base to build on!
// Named urlHistory so it doesn't shadow the built-in window.history
var urlHistory = [];

window.onload = function(){
    var handler;
    if ( localStorage.getItem('history') == null ) {
        // FIRST TIME
        urlHistory[0] = window.location.href;
        localStorage.setItem("history", JSON.stringify(urlHistory));
    }
    else {
        handler = localStorage.getItem('history');
        handler = JSON.parse(handler);
        urlHistory = handler;
        // Just compare now
        if (urlHistory[urlHistory.length-1] == window.location.href) {
            // no change
        } else {
            urlHistory.push(window.location.href);
        }
    }
}

window.onunload = function(){
    localStorage.setItem('history', JSON.stringify(urlHistory));
}
Note:
Since 25 May 2011, the HTML5 specification states that calls to window.alert(), window.confirm(), and window.prompt() methods may be ignored during this event. See the HTML5 specification for more details.
Hey guys, just testing our pages out using the grunt-phantomcss plugin (it's essentially a wrapper for PhantomJS & CasperJS).
We have some stuff on our sites that comes in dynamically (random profile images for users and random advertisements), so technically the page looks different each time we load it, meaning the build fails. We would like to be able to jump in and, using good ol' DOM API techniques, 'grey out'/make transparent these images so that Casper/Phantom doesn't see them and the build passes.
We've already looked at pageSettings.loadImages = false, and although that technically works, it also takes out every image, meaning that even our non-ad, non-profile images get filtered out.
Here's a very basic sample test script (doesn't work):
casper.start( 'http://our.url.here.com' )
    .then(function(){
        this.evaluate(function(){
            var profs = document.querySelectorAll('.profile');
            profs.forEach(function( val, i ){
                val.style.opacity = 0;
            });
            return;
        });
        phantomcss.screenshot( '.profiles-box', 'profiles' );
    });
Would love to know how other people have solved this because I am sure this isn't a strange use-case (as so many people have dynamic ads on their sites).
Your script might actually work. The problem is that profs is a NodeList. It doesn't have a forEach function. Use this:
var profs = document.querySelectorAll('.profile');
Array.prototype.forEach.call(profs, function( val, i ){
    val.style.opacity = 0;
});
It is always a good idea to register for page.error and remote.message to catch such errors.
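For example, a small sketch of those handlers (event names as documented in the CasperJS events API):

casper.on("page.error", function(msg, trace){
    // JavaScript errors thrown inside the page
    this.echo("Page error: " + msg, "ERROR");
});
casper.on("remote.message", function(msg){
    // console.log() output from the page
    this.echo("Remote console: " + msg);
});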
Another idea would be to employ the resource.requested event handler to abort all the resources that you don't want loaded. It uses the underlying onResourceRequested PhantomJS function.
casper.on("resource.requested", function(requestData, networkRequest){
if (requestData.url.indexOf("mydomain") === -1) {
// abort all resources that are not on my domain
networkRequest.abort();
}
});
If your page handles unloaded resources well, then this should be a viable option.
I want to implement serial downloading of pictures as a plug-in in MooTools. Let's say there are pictures with the img tag inside a div with the class imageswrapper. The images need to be downloaded one by one: each image starts loading only after the previous one has finished, and so on until all the images are loaded.
window.addEvent('domready', function(){
    // get all images in div with class 'imageswrapper'
    var imagesArray = $$('.imageswrapper img');
    var tempProperty = '';
    // hide them and set them to the attribute 'data-src' to cancel the background download
    for (var i=0; i<imagesArray.length; i++) {
        tempProperty = imagesArray[i].getProperty('src');
        imagesArray[i].removeProperty('src');
        imagesArray[i].setProperty('data-src', tempProperty);
    }
    tempProperty = '';
    var iterator = 0;
    // select the block in which we will inject Pictures
    var injDiv = $$('div.imageswrapper');
    // recursive function that executes itself after a new image is loaded
    function imgBomber() {
        // exit conditions of the recursion
        if (iterator > (imagesArray.length-1)) {
            return false;
        }
        tempProperty = imagesArray[iterator].getProperty('data-src');
        imagesArray[iterator].removeProperty('data-src');
        imagesArray[iterator].setProperty('src', tempProperty);
        imagesArray[iterator].addEvent('load', function() {
            imagesArray[iterator].inject(injDiv);
            iterator++;
            imgBomber();
        });
    }
    imgBomber();
});
There are several issues I can see here. You have not actually said what the issue is so... this is more of a code review / ideas for you until you post the actual problems with it (or a jsfiddle with it)
you run this code in domready, where the browser may have already initiated the download of the images based upon the src property. You would be better off sending data-src from the server directly, before you even start.
Probably the biggest problem: var injDiv = $$('div.imageswrapper'); will return a COLLECTION - so [<div.imageswrapper></div>, ..] - which cannot take an inject, since the target can be multiple DOM nodes. Use var injDiv = document.getElement('div.imageswrapper'); instead.
there are cross-browser issues with load events and .addEvent('load'). They need to be cleaned up after execution because in IE < 9, for example, load will fire every time an animated gif loops. Also, you don't have onerror and onabort handlers, which means your loader will stop at a 404 or any other unexpected response.
you should not use data-src to store the data, it's slow. MooTools has Element storage - use el.store('src', oldSource), el.retrieve('src') and el.eliminate('src'). Much faster (see the sketch after this list).
you expose the iterator to the upper scope.
use the MooTools API - use .set() and .get(), not .getProperty() and .setProperty()
for (var i) iterators are unsafe for async operations. Control flow of the app will continue to run and different operations may reference the wrong iterator index. Looking at your code this shouldn't be the case, but you should use the MooTools .each(fn(item, index), scope) method from Elements / Array.
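For instance, a quick sketch of the Element storage point (operating on a single, hypothetical img element):

var img = $$('.imageswrapper img')[0]; // first image, just for illustration
img.store('src', img.get('src')); // stash the original source in element storage
img.erase('src');                 // stop the browser from downloading it
// ... later, when it's this image's turn:
img.set('src', img.retrieve('src'));
img.eliminate('src');             // clean up the storage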
Anyway, your problem has already been solved at several levels.
E.g., I wrote pre-loader - a framework-agnostic image loader plugin that can download an array of images either in parallel or pipelined (like you are trying to), with onProgress etc. events - see http://jsfiddle.net/dimitar/mFQm6/ and the screenshots at the bottom of the readme.md.
MooTools also solves this (without waiting on the previous image) via Asset.js - see http://mootools.net/docs/more/Utilities/Assets#Asset:Asset-image, and Asset.images for multiple images. See the source for inspiration: https://github.com/mootools/mootools-more/blob/master/Source/Utilities/Assets.js
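If you do want the sequential behaviour with Asset.image, a rough, untested sketch (the URL array is made up):

var sources = ['1.jpg', '2.jpg', '3.jpg']; // hypothetical image URLs
var target = document.getElement('div.imageswrapper');

function loadNext(){
    if (!sources.length) return;
    var img = Asset.image(sources.shift(), {
        onLoad: function(){
            target.adopt(img); // show the loaded image, then fetch the next one
            loadNext();
        }
    });
}
loadNext();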
Here's an example doing this via my pre-loader class: http://jsfiddle.net/dimitar/JhpsH/
(function(){
    var imagesToLoad = [],
        imgDiv = document.getElement('div.injecthere');

    $$('.imageswrapper img').each(function(el){
        imagesToLoad.push(el.get('src'));
        el.erase('src');
    });

    new preLoader(imagesToLoad, {
        pipeline: true, // sequential loading like yours
        onProgress: function(img, imageEl, index){
            imgDiv.adopt(imageEl);
        }
    });
}());
I am looking for a quick way to grab some data off of one Web page and throw it into another. I don't have access to the query string in the URL of the second page, so passing the data that way is not an option. Right now, I am using a Greasemonkey user script in tandem with a JS bookmarklet trigger: javascript:doIt();
// ==UserScript==
// @include public_site
// @include internal_site
// ==/UserScript==
if (document.location.host.match(internal_site)) {
    var datum1 = GM_getValue("d1");
    var datum2 = GM_getValue("d2");
}

unsafeWindow.doIt = function() {
    if (document.location.host.match(public_site)) {
        // Hypothetical selectors standing in for "page element 1/2"
        var d1 = document.getElementById("element1").innerHTML;
        var d2 = document.getElementById("element2").innerHTML;
        // Next two lines use setTimeout to bypass the GM_setValue restriction
        window.setTimeout(function() {GM_setValue("d1", d1);}, 0);
        window.setTimeout(function() {GM_setValue("d2", d2);}, 0);
    }
    else if (document.location.host.match(internal_site)) {
        document.getElementById("field1").value = datum1;
        document.getElementById("field2").value = datum2;
    }
}
While I am open to another method, I would prefer to stay with this basic model if possible, as this is just a small fraction of the code in doIt(), which is used on several other pages, mostly to automate date-based form fills; people really like their "magic button."
The above code works, but there's an interruption to the workflow: in order for the user to know which page on the public site to grab data from, the internal page has to be opened first. Then, once the GM value is set from the public page, the internal page has to be reloaded to get the proper information into the internal-page variables. I'm wondering if there's any way to call GM_getValue() at bookmarklet click time to prevent the need for a refresh. Thanks!
Can you move the bookmarklet to a button or link -- that Greasemonkey will add to the page(s)?
Then you could set click-event handlers to fire GM_getValue().
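A rough sketch of that idea (the button itself is hypothetical; field1/field2 and the "d1"/"d2" keys are from your code):

var magicButton = document.createElement("button");
magicButton.textContent = "Magic button";
magicButton.addEventListener("click", function () {
    // Runs in the privileged GM scope at click time, so no reload is needed
    document.getElementById("field1").value = GM_getValue("d1");
    document.getElementById("field2").value = GM_getValue("d2");
}, false);
document.body.appendChild(magicButton);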
It looks like the current method is exploiting a "security hole" -- one that may be closed in the future. You might consider doing everything in a Firefox extension, instead.
Possibly useful link: http://articles.sitepoint.com/article/ten-tips-firefox-extensions/1