Failed to clear temp storage - javascript

Failed to clear temp storage: It was determined that certain files are unsafe for access within a Web application, or that too many calls are being made on file resources. SecurityError
I'm getting this error in the console. I have a script named script.js that makes AJAX calls to retrieve data from PHP.
Any idea why?
Here's my jQuery script:
$(document).ready(function() {
    var loading = false;
    var docHeight = $(window).height();

    $('.timeline').css({minHeight: docHeight});

    function get_tl_post() {
        if (loading == false) {
            loading = true;
            $.ajax({
                type: "POST",
                url: "timeline.php",
                data: "data=instagram",
                beforeSend: function() {
                    $('.loader').fadeIn("slow");
                },
                complete: function() {
                    loading = false;
                    $('.loader').fadeOut("slow");
                },
                success: function(data) {
                    if (data == "error") {
                        get_tl_post();
                    }
                    $(data).hide().appendTo(".timeline").fadeIn(1000);
                }
            });
        }
    }

    $(window).scroll(function() {
        if ($(window).scrollTop() == $(document).height() - $(window).height()) {
            get_tl_post();
        }
    });
});

This is due to network mapping of your resources.
In other words, you might have added a workspace folder in Chrome DevTools.
When you make changes to some files, DevTools sends the request to the file system. This works fine for a while, but in some scenarios the network mapping gets removed.
Then, when you try to open that web page in the browser, it may or may not ask you to remap the resources, and will still try to update the file system.
That's when you get this error.
There is nothing wrong with your script.
The only solution may be to clear the cache and restart your system. If the problem still persists, reinstalling Chrome should fix it.
Moreover, network mapping can sometimes cause other issues as well, for example inflating a CSS file to a whopping 75 MB or more. So take precautions when playing with network mapping.
Optionally, if you are on a Mac (or even on Windows with sh commands available), run this in your terminal to find the culprit files over 50 MB; you can then remove them:
sudo find / -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
Note: the command above finds all individual files larger than 50 MB and prints them to your terminal one by one.

If I had to guess, I would say your timeline.php script is always returning "error", so you are making too many recursive calls and the browser blocks them.
Try eliminating the recursive function call and see if that fixes the problem.
Remove the following three lines and try again:
if (data == "error") {
    get_tl_post();
}
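With those three lines removed, the success handler is simply (the rest of the call unchanged):

success: function(data) {
    $(data).hide().appendTo(".timeline").fadeIn(1000);
}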

If your AJAX call fails for some reason, this could lead to too many recursive calls of get_tl_post();.
I suggest that you use the error property for error handling, to avoid situations where your function recurses without bound. One idea is to set a policy like: "if the request fails or the data contains errors, wait for an amount of time, then retry; if X retries have been made, show an error and stop requesting".
Below is an example of untested code, in order to show you the idea:
var attempts = 0;

$.ajax({
    //Rest of properties
    success: function(data) {
        if (data == "error") {
            if (attempts < 3) {
                setTimeout(function() {
                    get_tl_post();
                    ++attempts;
                }, 2000);
            } else {
                //output failure here.
            }
        }
        //Rest of code....
    }
});
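For completeness, the same retry policy can also be wired into the error property the answer mentions, so that network-level failures are retried too (a sketch along the same lines; the log message is illustrative):

error: function(xhr, textStatus) {
    //Network-level failures land here rather than in success
    if (attempts < 3) {
        setTimeout(function() {
            ++attempts;
            get_tl_post();
        }, 2000);
    } else {
        console.log('giving up after 3 attempts: ' + textStatus);
    }
}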

Related

Retrieve html content of a page several seconds after it's loaded

I'm writing a script in Node.js to automatically retrieve data from an online directory.
Since I had never done this before, I chose JavaScript because it is a language I use every day.
From the few tips I could find on Google, I therefore use request with cheerio to easily access components of the page's DOM.
I found and retrieved all the necessary information; the only missing step is to recover the link to the next page, except that the link is generated 4 seconds after the page loads and contains a hash, so this step is unavoidable.
What I would like to do is recover the DOM of the page 4-5 seconds after it loads, to be able to recover the link.
I looked on the internet, and much of the advice was to use PhantomJS for this manipulation, but I cannot get it to work after many attempts with Node.
This is my code:
#!/usr/bin/env node
require('babel-register');

import request from 'request'
import cheerio from 'cheerio'
import phantom from 'node-phantom'

phantom.create(function(err, ph) {
    return ph.createPage(function(err, page) {
        return page.open(url, function(err, status) {
            console.log("opened site? ", status);
            page.includeJs('http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js', function(err) {
                //jQuery loaded.
                //Wait for a bit for AJAX content to load on the page. Here, we are waiting 5 seconds.
                setTimeout(function() {
                    return page.evaluate(function() {
                        var tt = cheerio.load($this.html())
                        console.log(tt)
                    }, function(err, result) {
                        console.log(result);
                        ph.exit();
                    });
                }, 5000);
            });
        });
    });
});
but I get this error:
return ph.createPage(function (page) {
^
TypeError: ph.createPage is not a function
Is what I am about to do the best way to achieve what I want? If not, what is the simplest way? If so, where does my error come from?
If you don't have to use PhantomJS, you can use Nightmare instead.
It is a pretty neat library for solving problems like yours: it uses Electron as the web browser, and you can run it with or without showing a window (you can also open developer tools, like in Google Chrome).
Its only flaw is that to run it on a server without a graphical interface, you must install at least a framebuffer.
Nightmare has a wait(cssSelector) method that will wait until an element appears on the website.
Your code would be something like:
const Nightmare = require('nightmare');
const nightmare = Nightmare({
    show: true,        // will show browser window
    openDevTools: true // will open dev tools in browser window
});
const url = 'http://hakier.pl';
const selector = '#someElementSelectorWitchWillAppearAfterSomeDelay';

nightmare
    .goto(url)
    .wait(selector)
    .evaluate(selector => {
        return {
            nextPage: document.querySelector(selector).getAttribute('href')
        };
    }, selector) // the selector variable is injected into the evaluate callback;
                 // required variables must be injected like this because the callback
                 // runs in the browser scope and has no access to Node.js variables
    .then(extracted => {
        console.log(extracted.nextPage); // your extracted data from evaluate
    });
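Depending on the Nightmare version, you may also need to append .end() to the chain before .then, so the underlying Electron process shuts down once the work is done (a small addition not in the original snippet):

nightmare
    .goto(url)
    .wait(selector)
    .evaluate(/* ... as above ... */)
    .end() // closes the Electron instance when the queue finishes
    .then(extracted => console.log(extracted.nextPage));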
Happy hacking!

ajax call to localhost from site loaded over https fails in chrome

-------------------- UPDATE 2 ------------------------
I see now that what I am trying to accomplish is not possible with Chrome. But I am still curious: why is the policy stricter in Chrome than in, for example, Firefox? Or is it perhaps that Firefox doesn't actually make the call either, but JavaScript-wise deems the call failed instead of altogether blocked?
---------------- UPDATE 1 ----------------------
The issue indeed seems to be about calling http from an https site; this error is produced in the Chrome console:
Mixed Content: The page at 'https://login.mysite.com/mp/quickstore1' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/biztv_local/video/video_check.php?video=253d01cb490c1cbaaa2b7dc031eaa9f5.mov&fullscreen=on'. This request has been blocked; the content must be served over HTTPS.
Then the question is why Firefox allows it, and whether there is a way to make Chrome allow it. It has indeed worked fine until just a few months ago.
Original question:
I have some jQuery making an AJAX call to http (the site making the call is loaded over https).
Moreover, the call from my https site is to a script on localhost on the client's machine, but the file starts with
<?php header('Access-Control-Allow-Origin: *'); ?>
so that's fine. A peculiar setup, you might say, but the client is actually a media player.
It has always worked fine before, and still works fine in Firefox, but since about two months back it isn't working in Chrome.
Has there been a revision to policies in Chrome regarding this type of call? Or is there an error in my code below that Firefox manages to parse but Chrome doesn't?
The error only occurs when the file is NOT present on localhost (i.e. if a regular web user goes to this site with their own browser, naturally they won't have the file on their localhost; most won't even have a localhost). So one theory might be that since the file isn't there, the Access-Control-Allow-Origin: * header is never encountered, and therefore the call in its entirety is deemed insecure or not allowed by Chrome, and is never completed.
If so, is there an event handler I can attach to my jQuery.ajax method to catch that outcome instead? As of now, complete is never run if the file on localhost isn't there.
before: function(self) {
    var myself = this;
    var data = self.slides[self.nextSlide - 1].data;
    var html = myself.getHtml(data);

    $('#module_' + self.moduleId + '-slide_' + self.slideToCreate).html(html);

    //This is the fullscreen-always version of the video template
    var fullscreen = 'on';
    //console.log('runnin beforeSlide method for a video template');

    var videoCallStringBase = "http://localhost/biztv_local/video/video_check.php?"; //to call the mediaplayer's localhost
    var videoContent = 'video=' + data['filename_machine'] + '&fullscreen=' + fullscreen;
    var videoCallString = videoCallStringBase + videoContent;

    //TODO: works when file video_check.php is found, but if it isn't, it will wait for a video to play. It should skip then as well...
    //UPDATE: Isn't this fixed already? Debug once env. is set up
    console.log('checking for ' + videoCallString);

    jQuery.ajax({
        url: videoCallString,
        success: function(result) {
            //...if it isn't, we can't play back the video, so skip the next slide
            if (result != 1) {
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                //success, proceed as normal
                self.beforeComplete();
            }
        },
        complete: function(xhr, data) {
            if (xhr.status != 200) {
                //we could not find the check-video file on localhost, so skip the next slide
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                //success, proceed as normal
                self.beforeComplete();
            }
        }, //the above would cause a double slide-skip, I think. Removed for now; that should be trapped by the fail clause anyway.
        async: true
    });
}
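One way to catch the blocked or failed request, rather than relying on complete, is jQuery's error callback: a request blocked by the mixed-content policy (or one whose host is unreachable) surfaces as a network error with xhr.status of 0. A minimal sketch of the idea, assuming jQuery 1.5+ and the surrounding before context above (untested against this setup):

jQuery.ajax({
    url: videoCallString,
    success: function(result) {
        //file found; proceed as normal
        self.beforeComplete();
    },
    error: function(xhr, textStatus) {
        //blocked mixed content and a missing localhost both land here with xhr.status === 0
        console.log('request blocked or failed (' + textStatus + '), skipping slide ' + self.nextSlide);
        self.skip();
    }
});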

Where are screenshots from phantom.js saved?

Just starting out with PhantomJS after installing via Homebrew on my Mac.
I'm trying out the examples to save screenshots of websites via https://github.com/ariya/phantomjs/wiki/Quick-Start
var page = require('webpage').create();
page.open('http://google.com', function () {
    page.render('google.png');
    phantom.exit();
});
But I don't see the images anywhere. Will they be in the same directory as the .js file?
PhantomJS usually renders the images to the same directory as the script that you're running. So yes, it should be in the same directory as the JavaScript file that you're running using PhantomJS.
EDIT
It appears that that particular example is flawed. The problem is that page.render(...); takes some time to render the page, but you're calling phantom.exit() before it has finished rendering. I was able to get the expected output by doing this:
var page = require('webpage').create();
page.open('http://google.com', function () {
    page.render('google.png');
    setTimeout(function() { phantom.exit(); }, 5000); // wait five seconds and then exit
});
Unfortunately this isn't ideal, so I was able to come up with something that's a hair better. I say a "hair", because I'm basically polling to see when the page has finished rendering:
var done = false; //flag that tells us if we're done rendering
var page = require('webpage').create();

page.open('http://google.com', function (status) {
    //If the page loaded successfully...
    if (status === "success") {
        //Render the page
        page.render('google.png');
        console.log("Site rendered...");
        //Set the flag to true
        done = true;
    }
});

//Start polling every 100ms to see if we are done
var intervalId = setInterval(function() {
    if (done) {
        //If we are done, let's say so and exit.
        console.log("Done.");
        phantom.exit();
    } else {
        //If we're not done we're just going to say that we're polling
        console.log("Polling...");
    }
}, 100);
The code above works because the callback isn't immediately executed. So the polling code will start up and start to poll. Then when the callback is executed, we check to see the status of the page (we only want to render if we were able to load the page successfully). Then we render the page and set the flag that our polling code is checking on, to true. So the next time the polling code runs, the flag is true and so we exit.
This looks to be a problem with the way PhantomJS is running the webpage#render(...) call. I suspected that it was a non-blocking call, but according to the author in this issue, it is a blocking call. If I had to hazard a guess, perhaps the act of rendering is a blocking call, but the code that does the rendering might be handing off the data to another thread, which handles persisting the data to disk (so this part might be a non-blocking call). Unfortunately, this call is probably still executing when execution comes back to the main script and executes phantom.exit(), which means that the aforementioned asynchronous code never gets a chance to finish what it's doing.
I was able to find a post on the PhantomJS forums that deals with what you're describing. I can't see any issue that has been filed, so if you'd like you can go ahead and post one.
I have the very same issue as the author of this post, and none of the code examples worked for me. It's kind of disorienting to have the second example in the PhantomJS documentation not work. Installed using Homebrew on Snow Leopard.
I found a working example:
var page = require("webpage").create();
var homePage = "http://www.google.com/";

page.settings.javascriptEnabled = false;
page.settings.loadImages = false;
page.open(homePage);

page.onLoadFinished = function(status) {
    var url = page.url;
    console.log("Status: " + status);
    console.log("Loaded: " + url);
    page.render("google.png");
    phantom.exit();
};
Just a quick help for people who come here looking for the directory where PhantomJS or CasperJS screenshots are saved: it is the script's directory by default. However, you do have control.
If you want to control where they are saved, you can just alter the filename, like so:
page.render('screenshots/google.jpg'); // saves to scriptLocation/screenshots/
page.render('/opt/google.jpg');        // saves to /opt (an absolute path)
Or if you use CasperJS you can use:
casper.capture('/opt/google.jpg',
    undefined,
    { // imgOptions
        format: 'jpg',
        quality: 25
    });
Hope this saves someone the trouble!
I'm not sure if something has changed, but the example on http://phantomjs.org/screen-capture.html worked fine for me:
var page = require('webpage').create();
page.open('http://github.com/', function() {
    page.render('github.png');
    phantom.exit();
});
I am running phantomjs-2.1.1-windows.
However, what led me to this thread was that initially the image file was never getting created on my disk, just like in the original question. I figured it was some kind of security issue, so I logged into a VM and started by putting everything in the same directory. I was using a batch file to kick off phantomjs.exe with my .js file passed in as a parameter. With all of the files in the same directory on my VM, it worked great. Through trial and error, I found that my batch file had to be in the same directory as my .js file. Now all is well on my host machine as well.
So in summary... it was probably a security-related problem for me. No need for any kind of timeout.
My answer to the original question would be: IF you have everything set up correctly, the file will be in the same directory as the .js file that phantomjs.exe runs. I tried specifying a fully qualified path for the image file, but that didn't seem to work.
The root cause is that page.render() may not be ready to render the image even during the onLoadFinished() event. You may need to wait upwards of several seconds before page.render() can succeed. The only reliable way I found to render an image in PhantomJS is to repeatedly invoke page.render() until the method returns true, indicating it successfully rendered the image.
Solution:
var page = require("webpage").create();
var homePage = "http://www.google.com/";

page.onLoadFinished = function(status) {
    var rendered, started = Date.now(), TIMEOUT = 30 * 1000; // 30 seconds
    while (!((rendered = page.render('google.png')) || Date.now() - started > TIMEOUT));
    if (!rendered) console.log("ERROR: Timed out waiting to render image.");
    phantom.exit();
};

page.open(homePage);
Stolen from the rasterize.js example; it works more reliably than the accepted answer for me.
var page = require('webpage').create();
page.open('http://localhost/TestForTest/', function (status) {
    console.log("starting...");
    console.log("Status: " + status);
    if (status === "success") {
        window.setTimeout(function () {
            page.render('myExample.png');
            phantom.exit();
        }, 200);
    } else {
        console.log("failed for some reason.");
    }
});
Plenty of good suggestions here. The one thing I'd like to add:
I was running a PhantomJS Docker image, and the default user was "phantomjs", not root. I was therefore trying to write to a location I didn't have permission on (it was the pwd on the Docker host)...
> docker run -i -t -v $(pwd):/pwd --rm wernight/phantomjs touch /pwd/foo.txt
touch: cannot touch '/pwd/foo.txt': Permission denied
The code samples above all run without error, but if they don't have permission to write to the destination, they will silently ignore the request...
So for example, taking #vivin-paliath's code (the current accepted answer):
var done = false; //flag that tells us if we're done rendering
var page = require('webpage').create();

page.open('http://google.com', function (status) {
    //If the page loaded successfully...
    if (status === "success") {
        //Render the page
        page.render('google.png');
        console.log("Site rendered...");
        //Set the flag to true
        done = true;
    }
});

//Start polling every 100ms to see if we are done
var intervalId = setInterval(function() {
    if (done) {
        //If we are done, let's say so and exit.
        console.log("Done.");
        phantom.exit();
    } else {
        //If we're not done we're just going to say that we're polling
        console.log("Polling...");
    }
}, 100);
And running it as the default user produces:
docker run -i -t -v $(pwd):/pwd -w /pwd --rm wernight/phantomjs phantomjs google.js
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Polling...
Site rendered...
Done.
But no google.png and no error. Simply adding -u root to the docker command solves this, and I get the google.png in my CWD.
For completeness, the final command:
docker run -u root -i -t -v $(pwd):/pwd -w /pwd --rm wernight/phantomjs phantomjs google.js

How to throw an exception from JavaScript's callback if it is executed via Selenium's executeAsyncScript

I'm using Selenium WebDriver to automate some scripts. Most of the functionality is executed using JavaScript endpoints and I could not move this logic to Python:
# Python Selenium script
data = self.driver.execute_async_script('selenium.get_data(arguments)')

// JavaScript placed somewhere on the page being tested
selenium.get_data = function(args) {
    $.ajax({
        url: 'http://servername/ajax_data',
        success: function(data) {
            args[0](data);
        },
        error: function(data) {
            // here I would like to notify selenium that everything is broken
        }
    });
};
There is usually some point in the JavaScript where I know that everything is already broken and I would like to raise exception in Selenium script.
Of course I could wait for timeout, but my scripts' timeouts are pretty long and I could not provide additional debug information in the error message.
I've already checked documentation (http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/JavascriptExecutor.html) and tests' source code (http://code.google.com/p/selenium/source/browse/trunk/java/client/test/org/openqa/selenium/ExecutingAsyncJavascriptTest.java?r=15810).
For now I'm using a workaround that just does not seem good:
error: function(data) {
    window.alert(JSON.stringify(data));
}
Do you know any better solutions?
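One pattern worth considering (a sketch, untested against this setup; the envelope field names are illustrative): always invoke the callback that executeAsyncScript injects, passing a success/error envelope, so the error path resolves immediately instead of waiting for the timeout:

selenium.get_data = function(args) {
    var done = args[0]; // the callback injected by executeAsyncScript
    $.ajax({
        url: 'http://servername/ajax_data',
        success: function(data) {
            done({ ok: true, data: data });
        },
        error: function(xhr, textStatus) {
            // resolve right away with debug details instead of timing out
            done({ ok: false, error: textStatus, status: xhr.status });
        }
    });
};

On the Python side, check the returned value and raise your own exception (with the error details in the message) when ok is false. This avoids waiting for the long script timeout and carries the extra debug information you wanted.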
