I've tried:
node-webshot
phantomjs
I could do it locally but I couldn't take screenshots of other websites that are based on angularjs.
Bounty
Be able to take a screenshot of any angularjs app which includes jquery and angular on the page. Every single site here: http://builtwith.angularjs.org/ should look as if I loaded it in my browser.
Must be able to get the screenshot via the terminal so it could be run in a background process like a worker or something.
Any random server (or whatever) should be able to go to an offsite website and take a screenshot of it.
It just needs to take a URL that will inevitably host an angularjs app and output what you'd expect to see in your browser.
Does not need to be phantomjs or node-webshot.
Update 1
As of last night this is how I'm doing it.
node-webkit (nodejs inside of chromium) compiled to linux-32
leave open on a random laptop
when it detects that a screenshot needs to be taken (via Firebase, temporarily) it opens an iframe with that URL
waits 10 seconds (a reasonable time to load a site/app)
uses node-webkit api to screenshot itself
I have some work to do on this solution.
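For reference, the capture step itself is only a few lines. A minimal sketch, assuming the legacy node-webkit 0.x nw.gui API (capturePage hands the callback a base64 data URL; exact behavior may vary between versions):
// Sketch of the capture step (assumption: legacy node-webkit 0.x nw.gui API)
var fs = require('fs');
var gui = require('nw.gui');
var win = gui.Window.get();
function takeSnapshot() {
  // capturePage returns the rendered window as a base64-encoded data URL
  win.capturePage(function (img) {
    var base64 = img.replace(/^data:image\/(png|jpg);base64,/, '');
    fs.writeFile('snapshot.png', base64, 'base64', function (err) {
      if (err) console.error(err);
    });
  }, 'png');
}
// wait 10 seconds for the app loaded in the iframe to render, then capture
setTimeout(takeSnapshot, 10000);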
Update 2
These appear to be potential solutions, but I've found that most of them require opening a real browser to take the screenshot rather than using a headless browser like phantomjs.
http://browsershots.org/documentation#HowToCreateANewScreenshotFactory
Browserstack.com
Update 3
I'm continuing development on a production-ready solution for this on GitHub.
https://github.com/clouddueling/angular-snapshot
If you take this code and build it with node-webkit.app you will be able to run a screenshot server.
Have you tried wkhtmltopdf? It comes with a tool called wkhtmltoimage. It uses QtWebKit (a Qt port of the WebKit rendering engine) to render a web page and convert the result to PDF or the image format of your choice, all done server-side.
Because it uses WebKit, it renders everything (images, CSS and even JavaScript) just like a modern browser does. You can fine-tune parameters such as the JavaScript execution grace period.
In my use case, the results have been very satisfying and are almost identical to what browsers would render.
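If you drive it from Node as a background worker (as the question implies), a minimal sketch could look like this; it uses only switches from the help text below, and the URL and output path are placeholders:
// Shell out to wkhtmltoimage from Node (sketch; URL and paths are placeholders)
const { execFile } = require('child_process');
execFile('wkhtmltoimage', [
  '--javascript-delay', '10000', // give the Angular app time to bootstrap
  '--width', '1280',
  '--quality', '90',
  'http://builtwith.angularjs.org/',
  'snapshot.png'
], (err, stdout, stderr) => {
  if (err) return console.error(stderr);
  console.log('saved snapshot.png');
});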
Here's a list of command options:
Name:
wkhtmltoimage 0.11.0 rc2
Synopsis:
wkhtmltoimage [OPTIONS]... <input file> <output file>
Description:
Converts an HTML page into an image,
General Options:
--allow <path> Allow the file or files from the specified
folder to be loaded (repeatable)
--checkbox-checked-svg <path> Use this SVG file when rendering checked
checkboxes
--checkbox-svg <path> Use this SVG file when rendering unchecked
checkboxes
--cookie <name> <value> Set an additional cookie (repeatable)
--cookie-jar <path> Read and write cookies from and to the
supplied cookie jar file
--crop-h <int> Set height for croping
--crop-w <int> Set width for croping
--crop-x <int> Set x coordinate for croping
--crop-y <int> Set y coordinate for croping
--custom-header <name> <value> Set an additional HTTP header (repeatable)
--custom-header-propagation Add HTTP headers specified by
--custom-header for each resource request.
--no-custom-header-propagation Do not add HTTP headers specified by
--custom-header for each resource request.
--debug-javascript Show javascript debugging output
--no-debug-javascript Do not show javascript debugging output
(default)
--encoding <encoding> Set the default text encoding, for input
-H, --extended-help Display more extensive help, detailing
less common command switches
-f, --format <format> Output file format
--height <int> Set screen height (default is calculated
from page content) (default 0)
-h, --help Display help
--htmldoc Output program html help
--images Do load or print images (default)
--no-images Do not load or print images
-n, --disable-javascript Do not allow web pages to run javascript
--enable-javascript Do allow web pages to run javascript
(default)
--javascript-delay <msec> Wait some milliseconds for javascript
finish (default 200)
--load-error-handling <handler> Specify how to handle pages that fail to
load: abort, ignore or skip (default
abort)
--disable-local-file-access Do not allowed conversion of a local file
to read in other local files, unless
explecitily allowed with --allow
--enable-local-file-access Allowed conversion of a local file to read
in other local files. (default)
--manpage Output program man page
--minimum-font-size <int> Minimum font size
--password <password> HTTP Authentication password
--disable-plugins Disable installed plugins (default)
--enable-plugins Enable installed plugins (plugins will
likely not work)
--post <name> <value> Add an additional post field (repeatable)
--post-file <name> <path> Post an additional file (repeatable)
-p, --proxy <proxy> Use a proxy
--quality <int> Output image quality (between 0 and 100)
(default 94)
--radiobutton-checked-svg <path> Use this SVG file when rendering checked
radiobuttons
--radiobutton-svg <path> Use this SVG file when rendering unchecked
radiobuttons
--readme Output program readme
--run-script <js> Run this additional javascript after the
page is done loading (repeatable)
--disable-smart-width Use the specified width even if it is not
large enough for the content
--enable-smart-width Extend --width to fit unbreakable content
(default)
--stop-slow-scripts Stop slow running javascripts (default)
--no-stop-slow-scripts Do not Stop slow running javascripts
--transparent Make the background transparent in pngs
--user-style-sheet <url> Specify a user style sheet, to load with
every page
--username <username> HTTP Authentication username
-V, --version Output version information an exit
--width <int> Set screen width, note that this is used
only as a guide line. Use
--disable-smart-width to make it strict.
(default 1024)
--window-status <windowStatus> Wait until window.status is equal to this
string before rendering page
--zoom <float> Use this zoom factor (default 1)
Specifying A Proxy:
By default proxy information will be read from the environment variables:
proxy, all_proxy and http_proxy. Proxy options can also be specified with the
-p switch
<type> := "http://" | "socks5://"
<serif> := <username> (":" <password>)? "@"
<proxy> := "None" | <type>? <serif>? <host> (":" <port>)?
Here are some examples (In case you are unfamiliar with the BNF):
http://user:password@myproxyserver:8080
socks5://myproxyserver
None
Contact:
If you experience bugs or want to request new features please visit
<http://code.google.com/p/wkhtmltopdf/issues/list>, if you have any problems
or comments please feel free to contact me: <uuf6429@gmail.com>
Use browserstack to test your application in all browsers without having to install each one, including mobile browsers, different phones, tablets, etc.
There is support for Selenium automated testing and screenshots. Local testing is supported, no public URL is needed.
The screenshots API is available for configuring the screenshots you need; Screenshooter is a tool for generating BrowserStack screenshots from the command line.
There is a trial period as it's a commercial product, but it's very well made and worth every penny. You can subscribe for only one month. I have used it personally and I highly recommend it.
Although I have not tried it personally, I have seen a service deployed in production that takes screenshots using WebDriver from Selenium.
Build the selenium Webdriver https://code.google.com/p/selenium/
Use the RESTful API to communicate with the server. There are specific calls to issue a request to fetch a website and take a screenshot of the current instance.
Everything is done in the background, so I think it fits your requirement.
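For illustration, here is a minimal sketch using the Node bindings (the selenium-webdriver npm package; these modern promise-based bindings postdate the original answer, and the 10-second wait mirrors the approach above):
// Capture a screenshot of a rendered Angular app via Selenium WebDriver
const fs = require('fs');
const { Builder } = require('selenium-webdriver');
(async () => {
  // a real browser is driven, so JavaScript-heavy apps render fully
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://builtwith.angularjs.org/');
    await driver.sleep(10000); // give the Angular app time to bootstrap
    const png = await driver.takeScreenshot(); // base64-encoded PNG
    fs.writeFileSync('screenshot.png', png, 'base64');
  } finally {
    await driver.quit();
  }
})();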
Probably this will help https://bitbucket.org/vodolaz095/site-shooter
This is a nodejs+phantomjs application for making site screenshots.
You need a heroku free tier service to run this.
BTW, you can try this application - https://pageshooter.herokuapp.com
I think it can take screenshots of angularjs sites.
Node-Webshot uses PhantomJS which in turn uses QtWebkit which doesn't work with AngularJS.
More info: https://github.com/angular/angular.js/issues/2985
Suggestion: make sure the PhantomJS bundled within Node-Webshot is absolutely the latest version. If not, replace it with the latest version and pray they have fixed it by now.
If you have access to the command line options of PhantomJS, you could try a few of them in here: https://github.com/ariya/phantomjs/wiki/API-Reference
The ones particularly ringing a bell are listed below, followed by a usage sketch:
--ignore-ssl-errors=true
--local-to-remote-url-access=true
--web-security=false
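For completeness, here is a minimal capture script you could pass to PhantomJS along with those switches (standard webpage-module API; the 10-second delay is an assumption about Angular bootstrap time):
// rasterize.js: run as phantomjs --web-security=false rasterize.js <url>
var page = require('webpage').create();
var system = require('system');
var url = system.args[1] || 'http://builtwith.angularjs.org/';
page.viewportSize = { width: 1280, height: 800 };
page.open(url, function (status) {
  if (status !== 'success') {
    console.error('Failed to load ' + url);
    phantom.exit(1);
  }
  // give the Angular app time to bootstrap before rendering
  window.setTimeout(function () {
    page.render('snapshot.png');
    phantom.exit(0);
  }, 10000);
});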
Related
I integrated Sentry with my website a few days ago and noticed that sometimes users receive this error in their console:
ChunkLoadError: Loading chunk <CHUNK_NAME> failed.
(error: <WEBSITE_PATH>/<CHUNK_NAME>-<CHUNK_HASH>.js)
So I investigated the issue around the web and discovered some similar cases, but related to missing chunks caused by release updates during a session or caching issues.
The main difference between those cases and mine is that the failed chunks are actually reachable from the browser, so the loading error does not depend on the after-release refresh of the chunk hashes but (I guess) on some network-related issue.
This assumption is reinforced by this stat: around 90% of the devices involved are mobile.
Finally, I come to the question: should I manage the issue in some way (e.g. retrying the chunk loading if it fails), or is it better to simply ignore it and let the user refresh manually?
2021.09.28 edit:
A month later, the issue is still occurring but I have not received any report from users, also I'm constantly recording user sessions with Hotjar but nothing relevant has been noticed so far.
I recently had a chat with Sentry support that helped me exclude the network-related hypothesis:
Our React SDK does not have an offline cache by default; when an error is captured it will be sent at that point. If the app is not able to connect to Sentry to send the event, it will be discarded and the SDK will not try to send it again.
Rodolfo from Sentry
I can confirm that the issue is quite unusual. I'll share another interesting stat: the users affected since the first occurrence are 882 out of 332,227 unique visitors (~0.26%), but I noticed that 90% of the occurrences are from iOS (not generic mobile devices, as I noted a month ago), so if I calculate the same proportion against iOS users (794, i.e. 90% of 882, out of 128,444) we are near 0.62%. Still small, but definitely more relevant on iOS.
This is most likely happening because the browser is caching your app's main HTML file, like index.html which serves the webpack bundles and manifest.
First I would ensure your web server is sending the correct HTTP response headers to not cache the app's index.html file (let's assume it is called that). If you are using NGINX, you can set the appropriate headers like this:
location ~* ^.+\.html$ {
  add_header Cache-Control "no-store, max-age=0";
}
This file should be relatively small in size for a SPA, so it is ok to not cache this as long as you are caching all of the other assets the app needs like the JS and CSS, etc. You should be using content hashes on your JS bundles to support cache busting on those. With this in place visits to your site should always include the latest version of index.html with the latest assets including the latest webpack manifest which records the chunk names.
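With webpack, for example, that means content hashes in the output file names. A minimal sketch (option names per webpack 4+; adapt to your build):
// webpack.config.js: content-hashed bundle names for cache busting
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].chunk.js',
  },
};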
If you want to handle the Chunk Load Errors you could set up something like this:
import { ErrorBoundary } from '@sentry/react'
const App = ({ children }) => (
  <ErrorBoundary
    fallback={({ error, resetError }) => {
      if (/ChunkLoadError/.test(error.name)) {
        // Option 1: if this happens during a release, show a new-version alert
        // return <NewVersionAlert />

        // Option 2: if you are certain the chunk is on your web server or CDN,
        // try reloading the page once. The localStorage flag guards against a
        // reload loop in case the chunk really is not available.
        if (!localStorage.getItem('chunkErrorPageReloaded')) {
          localStorage.setItem('chunkErrorPageReloaded', 'true')
          window.location.reload()
        }
      }
      return <ExceptionRedirect resetError={resetError} />
    }}>
    {children}
  </ErrorBoundary>
)
If you do decide to reload the page I would present a message to the user beforehand.
That the chunk is reachable doesn't mean the user's browser can parse it. For example, the user's browser may be old while the chunk contains new syntax.
Webpack loads chunks via JSONP: it inserts a <script> tag into <head>. If the JS chunk file is downloaded but cannot be parsed, a ChunkLoadError will be thrown.
You can reproduce it by following these steps: write some new syntax, such as the nullish assignment below, don't transpile it, and ensure it is output to a chunk.
const obj = {};
obj.sub ??= {};
Open your app in Chrome 79 or Safari 13.0. The full error message looks like this:
SyntaxError: Unexpected token '?' // 13.js:2
MAX RELOADS REACHED // chunk-load-handler.js:24
ChunkLoadError: Loading chunk 13 failed. // trackConsoleError.js:25
(missing: http://example.com/13.js)
I'm launching Firefox via the command line and I'd like to launch a specific Firefox profile with a proxy. According to this answer on Stack Overflow, Firefox proxy settings are stored in prefs.js in the Firefox profile folder, and it is necessary to edit this file to launch FF with a proxy.
I've edited the file as follows:
user_pref("network.proxy.ftp", "1.0.0.1");
user_pref("network.proxy.ftp_port", 00000);
user_pref("network.proxy.gopher", "1.0.0.1");
user_pref("network.proxy.gopher_port", 00000);
user_pref("network.proxy.http", "1.0.0.1");
user_pref("network.proxy.http_port", 22222);
user_pref("network.proxy.no_proxies_on", "localhost, 1.0.0.1");
user_pref("network.proxy.socks", "1.0.0.1");
user_pref("network.proxy.socks_port", 00000);
user_pref("network.proxy.ssl", "1.0.0.1");
user_pref("network.proxy.ssl_port", 00000);
user_pref("network.proxy.type", 1);
Note: the IP address and port used above are for demonstration purposes.
However, I'm encountering two problems:
1) Firefox completely ignores these settings and launches FF without any proxy at all
2) When Firefox exits, the text modifications are reverted/deleted
Note: When I edited the text file above, Firefox was not running. I know there's a disclaimer at the top of prefs.js:
If you make changes to this file while the application is running, the
changes will be overwritten when the application exits.
But there were no live instances of Firefox running at the time I edited the above file.
Manually creating different FF Profiles (as suggested by another user) with different proxies is not an option as everything needs to be done programmatically, without manual intervention.
Does Firefox still support setting a proxy via prefs.js? If not, what is the current working solution to launch Firefox via the command line with a proxy in Java?
Thanks
A proxy-autoconfig file is what you are looking for.
Docs here.
Define a file name.pac that contains the JavaScript function
function FindProxyForURL(url, host)
Inside the file you can use any JavaScript you'd like to decide which proxy to use. Set the path to your .pac file in the Firefox settings, under automatic proxy configuration. Remember to use a file:// URL.
To set up automatic file switching, simply configure Firefox to point at a single file and overwrite that file programmatically every time you want it to change. You could keep copies of all the options and simply copy one into the target file right before running (a sketch of this follows below).
An example of a super simple pac file is this:
function FindProxyForURL (url, host) {
return 'PROXY proxy.example.com:8080; DIRECT';
}
It will always return the same proxy for all endpoints.
Passwords are not explicitly supported by the PAC standard, but there are different ways to approach this. Firefox will prompt you for a login if it thinks it needs one, and you can also embed the password into the URL (username:password@proxy.example.com). Additionally, a tool like Proxy Login Automator could allow you to use passwords and to dynamically set the proxy without having to fight with Firefox.
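As a sketch of the file-switching idea above: Firefox reads the PAC location from the network.proxy.autoconfig_url preference when network.proxy.type is 2, so point that at a fixed path and rewrite the file before each run (the path here is a placeholder):
// Rewrite the PAC file Firefox points at before each run (Node sketch)
const fs = require('fs');
function setProxy(proxyHost, proxyPort) {
  const pac = "function FindProxyForURL(url, host) {\n" +
    "  return 'PROXY " + proxyHost + ":" + proxyPort + "; DIRECT';\n" +
    "}\n";
  fs.writeFileSync('/path/to/current.pac', pac); // placeholder path
}
setProxy('proxy.example.com', 8080);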
Is there a way I can put some code on my page so when someone visits a site, it clears the browser cache, so they can view the changes?
Languages used: ASP.NET, VB.NET, and of course HTML, CSS, and jQuery.
If this is about .css and .js changes, then one way is "cache busting" by appending something like "_versionNo" to the file name for each release. For example:
script_1.0.css // This is the URL for release 1.0
script_1.1.css // This is the URL for release 1.1
script_1.2.css // etc.
or after the file name:
script.css?v=1.0 // This is the URL for release 1.0
script.css?v=1.1 // This is the URL for release 1.1
script.css?v=1.2 // etc.
You can check this link to see how it could work.
Look into the cache-control and expires META tags.
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="Mon, 22 Jul 2002 11:12:01 GMT">
Another common practices is to append constantly-changing strings to the end of the requested files. For instance:
<script type="text/javascript" src="main.js?v=12392823"></script>
Update 2012
This is an old question but I think it needs a more up to date answer because now there is a way to have more control of website caching.
In Offline Web Applications (which is really any HTML5 website) applicationCache.swapCache() can be used to update the cached version of your website without the need for manually reloading the page.
This is a code example from the Beginner's Guide to Using the Application Cache on HTML5 Rocks explaining how to update users to the newest version of your site:
// Check if a new cache is available on page load.
window.addEventListener('load', function(e) {
window.applicationCache.addEventListener('updateready', function(e) {
if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
// Browser downloaded a new app cache.
// Swap it in and reload the page to get the new hotness.
window.applicationCache.swapCache();
if (confirm('A new version of this site is available. Load it?')) {
window.location.reload();
}
} else {
// Manifest didn't change. Nothing new on the server.
}
}, false);
}, false);
See also Using the application cache on Mozilla Developer Network for more info.
Update 2016
Things change quickly on the Web.
This question was asked in 2009 and in 2012 I posted an update about a new way to handle the problem described in the question. Another 4 years passed and now it seems that it is already deprecated. Thanks to cgaldiolo for pointing it out in the comments.
Currently, as of July 2016, the HTML Standard, Section 7.9, Offline Web applications includes a deprecation warning:
This feature is in the process of being removed from the Web platform.
(This is a long process that takes many years.) Using any of the
offline Web application features at this time is highly discouraged.
Use service workers instead.
So does Using the application cache on Mozilla Developer Network that I referenced in 2012:
Deprecated This feature has been removed from the Web standards.
Though some browsers may still support it, it is in the process of
being dropped. Do not use it in old or new projects. Pages or Web apps
using it may break at any time.
See also Bug 1204581 - Add a deprecation notice for AppCache if service worker fetch interception is enabled.
Not as such. One method is to send the appropriate headers when delivering content to force the browser to reload:
Making sure a web page is not cached, across all browsers.
If you search for "cache header" or something similar here on SO, you'll find ASP.NET-specific examples.
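For reference, the commonly recommended set of response headers from that linked answer looks like this:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0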
Another, less clean but sometimes the only way, if you can't control the headers on the server side, is adding a random GET parameter to the resource being called:
myimage.gif?random=1923849839
I had a similar problem and this is how I solved it:
In the index.html file I added a manifest:
<html manifest="cache.manifest">
In the <head> section I included a script that updates the cache:
<script type="text/javascript" src="update_cache.js"></script>
In the <body> section I inserted the onload function:
<body onload="checkForUpdate()">
In cache.manifest I put all the files I want to cache. What's important is that, in my case (Apache), it works just by updating the "version" comment each time. It is also an option to name files with "?ver=001" or something similar at the end, but that's not needed. Changing just # version 1.01 triggers the cache update event.
CACHE MANIFEST
# version 1.01
style.css
imgs/logo.png
#all other files
It's important to include points 1., 2. and 3. only in index.html. Otherwise
GET http://foo.bar/resource.ext net::ERR_FAILED
occurs because every "child" file tries to cache the page while the page is already cached.
In update_cache.js file I've put this code:
function checkForUpdate()
{
if (window.applicationCache != undefined && window.applicationCache != null)
{
window.applicationCache.addEventListener('updateready', updateApplication);
}
}
function updateApplication(event)
{
if (window.applicationCache.status != 4) return;
window.applicationCache.removeEventListener('updateready', updateApplication);
window.applicationCache.swapCache();
window.location.reload();
}
Now you just change your files, and in the manifest you update the version comment. The next visit to the index.html page will update the cache.
The parts of this solution aren't mine; I found them on the internet and put them together so that it works.
For static resources, the right approach to caching is to use query parameters with the value of each deployment or file version. This has the effect of clearing the cache after each deployment.
/Content/css/Site.css?version={FileVersionNumber}
Here is an ASP.NET MVC example.
<link href="@Url.Content("~/Content/Css/Reset.css")?version=@this.GetType().Assembly.GetName().Version" rel="stylesheet" type="text/css" />
Don't forget to update assembly version.
I had a case where I would take photos of clients online and would need to update the div if a photo was changed. The browser was still showing the old photo, so I used the hack of appending a random GET variable, which would be unique every time. Here it is if it could help anybody:
<img src="/photos/userid_73.jpg?random=<?php echo rand() ?>" ...
EDIT
As pointed out by others, the following is a much more efficient solution, since it reloads images only when they change, identifying the change by the file's modification time:
<img src="/photos/userid_73.jpg?modified=<?php echo filemtime("/photos/userid_73.jpg") ?>"
A lot of answers are missing the point - most developers are well aware that turning off the cache is inefficient. However, there are many common circumstances where efficiency is unimportant and default cache behavior is badly broken.
These include nested, iterative script testing (the big one!) and workarounds for broken third-party software. None of the solutions given here are adequate to address such common scenarios. Most web browsers cache far too aggressively and provide no sensible means to avoid these problems.
Updating the URL to the following works for me:
/custom.js?id=1
By adding a unique number after ?id= and incrementing it for new changes, users do not have to press CTRL+F5 to refresh the cache. Alternatively, you can append a hash or a string version of the current time or epoch after ?id=
Something like ?id=1520606295
<meta http-equiv="pragma" content="no-cache" />
Also see https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
Here is the MSDN page on setting caching in ASP.NET.
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.SetValidUntilExpires(False)
Response.Cache.VaryByParams("Category") = True
If Response.Cache.VaryByParams("Category") Then
'...
End If
Not sure if this really helps you, but this is how caching should work in any browser: when the browser requests a file, it should always send a request to the server unless there is an "offline" mode. The server reads parameters like the modified date or ETags.
The server returns a 304 NOT MODIFIED response if the content is unchanged, and the browser has to use its cache. If the ETag doesn't validate on the server side or the modified date is outdated, the server returns the new content with a new modified date or ETag, or both.
If no caching data is sent to the browser, I guess the behavior is undetermined: the browser may or may not cache files that don't say how they should be cached. If you set caching parameters in the response, it will cache your files correctly, and the server may then choose to return a 304 NOT MODIFIED or the new content.
This is how it should be done. Using random params or version numbers in URLs is more of a hack than anything; an example request/response exchange is sketched after the links below.
http://www.checkupdown.com/status/E304.html
http://en.wikipedia.org/wiki/HTTP_ETag
http://www.xpertdeveloper.com/2011/03/last-modified-header-vs-expire-header-vs-etag/
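To make the exchange concrete, a typical conditional request and 304 response look like this (header values are illustrative):
GET /styles.css HTTP/1.1
Host: example.com
If-None-Match: "abc123"
If-Modified-Since: Tue, 01 Mar 2016 10:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "abc123"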
After reading further, I saw that there is also an expiry date. If you have a problem, it might be that you have an expiry date set up. In other words, once the browser caches your file, since the file has an expiry date, the browser shouldn't have to request it again before that date: it will never ask the server for the file and will never receive a 304 NOT MODIFIED; it will simply use the cache until the expiry date is reached or the cache is cleared.
So that is my guess: you have some sort of expiry date, and you should use Last-Modified, ETags, or a mix of it all, and make sure there is no expiry date.
If people tend to refresh a lot and the file doesn't change much, then it might be wise to set a big expiry date.
My 2 cents!
I implemented this simple solution that works for me (not yet in a production environment):
function verificarNovaVersio() {
var sVersio = localStorage['gcf_versio'+ location.pathname] || 'v00.0.0000';
$.ajax({
url: "./versio.txt"
, dataType: 'text'
, cache: false
, contentType: false
, processData: false
, type: 'post'
}).done(function(sVersioFitxer) {
console.log('Versió App: '+ sVersioFitxer +', Versió Caché: '+ sVersio);
if (sVersio < (sVersioFitxer || 'v00.0.0000')) {
localStorage['gcf_versio'+ location.pathname] = sVersioFitxer;
location.reload(true);
}
});
}
I have a little file located where the HTML files are:
"versio.txt":
v00.5.0014
This function is called in all of my pages, so when loading it checks if the localStorage's version value is lower than the current version and does a
location.reload(true);
...to force reload from server instead from cache.
(obviously, instead of localStorage you can use cookies or other persistent client storage)
I opted for this solution for its simplicity, because maintaining only the single file "versio.txt" will force the full site to reload.
The query-string method is harder to implement and is also cached (if you change from v1.1 back to a previous version, it will load from cache; that means the cache is not flushed, and all previous versions are kept in the cache).
I'm a bit of a newbie and I'd appreciate your professional check & review to ensure my method is a good approach.
Hope it helps.
In addition to setting Cache-control: no-cache, you should also set the Expires header to -1 if you would like the local copy to be refreshed each time (some versions of IE seem to require this).
See HTTP Cache - check with the server, always sending If-Modified-Since
There is one trick that can be used: append a parameter/string to the file name in the script tag and change it when the file changes.
<script src="myfile.js?version=1.0.0"></script>
The browser interprets the whole string as the file path, even though what comes after the "?" are parameters. So the next time you update your file, just change the number in the script tag on your website (for example <script src="myfile.js?version=1.0.1"></script>) and each user's browser will see that the file has changed and grab a new copy.
Force browsers to clear the cache or reload the correct data? I have tried most of the solutions described on Stack Overflow. Some work, but after a little while the browser caches eventually and displays the previously loaded script or file. Is there another way that would clear the cache (CSS, JS, etc.) and actually work on all browsers?
I found so far that specific resources can be reloaded individually if you change the date and time of your files on the server. "Clearing cache" is not as easy as it should be. Instead of clearing the cache in my browsers, I realized that "touching" the cached server files will actually change the date and time of the source files on the server (tested on Edge, Chrome and Firefox), and most browsers will then automatically download the most current, fresh copy of what's on your server (code, graphics, any multimedia too). I suggest you just copy the most current scripts to the server and "do the touch thing" before your program runs, so it changes the date of all your problem files to the most current date and time; the browser then downloads a fresh copy:
<?php
touch('/www/sample/file1.css');
touch('/www/sample/file2.js');
?>
then ... the rest of your program...
It took me some time to resolve this issue (many browsers act differently to different commands, but they all check the time of files and compare it to the downloaded copy in the browser; if the date and time differ, they refresh). If you can't go the supposedly right way, there is always another usable and better solution. Best regards and happy camping. By the way, touch() or an equivalent exists in many programming languages, including JavaScript, Bash and PHP, and you can include or call them from HTML.
For webpack users:
I added the time along with chunkhash in my webpack config. This solved my problem of invalidating the cache on each deployment. We also need to take care that index.html/the asset manifest is not cached, either by your CDN or the browser. The chunk-name config in webpack looks like this:
filename: `[chunkhash]-${Date.now()}.js`
or, if you are using contenthash, then
filename: `[contenthash]-${Date.now()}.js`
This is the simple solution I used to solve in one of my applications using PHP.
All JS and CSS files are placed in a folder with version name. Example : "1.0.01"
root\1.0.01\JS
root\1.0.01\CSS
Created a helper and defined the version number there
<?php
function system_version()
{
return '1.0.07';
}
And linked the JS and CSS files like below
<script src="<?= base_url(); ?>/<?= system_version();?>/js/generators.js" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" href="<?= base_url(); ?>/<?= system_version(); ?>/css/view-checklist.css" />
Whenever I make changes to any JS or CSS file, I change the system version in the helper, rename the folder, and deploy.
I had the same problem. All I did was change the names of the files linked in my index.html file, then went into index.html and updated the references. Not the best practice, but if it works it works. The browser sees them as new files, so they get re-downloaded onto the user's device.
example:
Say I want to update a CSS file named styles.css: I rename it to styless.css.
Then I go into index.html and update <link rel="stylesheet" href="styles.css">, changing it to <link rel="stylesheet" href="styless.css">.
In case anyone is interested, I've found my solution to get browsers to refresh .css and .js in the context of .NET MVC (.NET Framework 4.8) and the use of bundles.
I wanted to make browsers refresh cached files only after a new assembly is deployed.
Building on Paulius Zaliaduonis' response, my solution is as follows:
Store your application base URL in the web.config app settings (the HttpContext is not yet available at runtime during RegisterBundle...), then make this parameter change according to the configuration (debug, staging, release...) via XML transforms.
In BundleConfig.RegisterBundles, get the assembly version by means of reflection, and...
...change the default tag format of both styles and scripts so that the bundling system generates link and script tags with a query-string parameter appended.
Here is the code
public static void RegisterBundles(BundleCollection bundles)
{
string baseUrl = System.Configuration.ConfigurationManager.AppSettings["by.app.base.url"].ToString();
string assemblyVersion = Assembly.GetExecutingAssembly().GetName().Version.ToString();
Styles.DefaultTagFormat = $"<link href='{baseUrl}{{0}}?v={assemblyVersion}' rel='stylesheet'/>";
Scripts.DefaultTagFormat = $"<script src='{baseUrl}{{0}}?v={assemblyVersion}'></script>";
}
You'll get tags like
<script src="https://example.org/myscriptfilepath/script.js?v={myassemblyversion}"></script>
You just need to remember to build a new version before deploying.
Ciao
Do you want to clear the cache, or just make sure your current (changed?) page is not cached?
If the latter, it should be as simple as
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
I've installed Pimcore on a VPS through Liquid Web. I loaded the sample-data install, which also uses the nightly build code. Everything installed fine, and the public-facing website appears fine and functions well, as does the login screen for the admin panel. But once you log in, you see three black pulsing dots in the middle of a white screen; eventually they disappear and you're simply left with a white screen.
Upon inspection of the error console, I'm seeing this error:
Failed to load resource: the server responded with a status of 404 (Not Found)
/website/var/tmp/minified_javascript_core_b18dd1d6984052da2ab5abc79f0c4a17.js?_dc=3704
Other scripts are also failing because this script isn't being loaded, so I'm fairly sure that once this script loads the others will work just fine.
When I try to directly access this JS file, I see this message:
HTTP/1.1 404 Not Found Filtered by error handler (static file exception)
I have verified that the file exists in the filesystem, so I know for sure that it's there, leading me to believe that the filesystem has that directory and/or file locked down. Permissions, etc., are all set to their appropriate values.
Pimcore Version 4
It's been a few years and this project surfaced in our pipeline again. The actual cause for why this breaks was because we are also running the ModSecurity suite on our host. Accessing the interface .js file was triggering rule 2000009 where the pattern /var/tmp was being matched.
Possible solution (if you're using WHM/CPanel as we are):
Configure your /etc/apache2/conf.d/modsec2/whitelist.conf file to include the following rule (add more in the same place if needed).
<LocationMatch '/website'>
SecRuleRemoveById 2000009
</LocationMatch>
Be sure that you restart your HTTP service after making this update.
Enjoy!
Is it possible to take a screenshot of a webpage with JavaScript and then submit that back to the server?
I'm not so concerned with browser security issues, etc., as the implementation would be for an HTA. But is it possible?
Google is doing this in Google+, and a talented developer reverse-engineered it and produced http://html2canvas.hertzen.com/. To work in IE you'll need a canvas support library such as http://excanvas.sourceforge.net/
I have done this for an HTA by using an ActiveX control. It was pretty easy to build the control in VB6 to take the screenshot. I had to use the keybd_event API call because SendKeys can't do PrintScreen. Here's the code for that:
Declare Sub keybd_event Lib "user32" _
(ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Long, ByVal dwExtraInfo As Long)
Public Const CaptWindow = 2
Public Sub ScreenGrab()
keybd_event &H12, 0, 0, 0
keybd_event &H2C, CaptWindow, 0, 0
keybd_event &H2C, CaptWindow, &H2, 0
keybd_event &H12, 0, &H2, 0
End Sub
That only gets you as far as getting the window to the clipboard.
Another option, if the window you want a screenshot of is an HTA, would be to just use an XMLHttpRequest to send the DOM nodes to the server, then create the screenshots server-side (a sketch follows).
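A minimal sketch of that idea ('/screenshot' is a placeholder endpoint on your server):
// POST the current DOM to a server-side renderer (sketch)
var xhr = new XMLHttpRequest();
xhr.open('POST', '/screenshot', true);
xhr.setRequestHeader('Content-Type', 'text/html');
xhr.send(document.documentElement.outerHTML);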
Another possible solution that I've discovered is http://www.phantomjs.org/ which allows one to very easily take screenshots of pages and a whole lot more. Whilst my original requirements for this question aren't valid any more (different job), I will likely integrate PhantomJS into future projects.
Pondering: is this possible to do by drawing the whole body element into a canvas and then using canvas2image?
http://www.nihilogic.dk/labs/canvas2image/
A possible way to do this, if you are running on Windows and have .NET installed:
public Bitmap GenerateScreenshot(string url)
{
// This method gets a screenshot of the webpage
// rendered at its full size (height and width)
return GenerateScreenshot(url, -1, -1);
}
public Bitmap GenerateScreenshot(string url, int width, int height)
{
// Load the webpage into a WebBrowser control
WebBrowser wb = new WebBrowser();
wb.ScrollBarsEnabled = false;
wb.ScriptErrorsSuppressed = true;
wb.Navigate(url);
while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }
// Set the size of the WebBrowser control
wb.Width = width;
wb.Height = height;
if (width == -1)
{
// Take Screenshot of the web pages full width
wb.Width = wb.Document.Body.ScrollRectangle.Width;
}
if (height == -1)
{
// Take Screenshot of the web pages full height
wb.Height = wb.Document.Body.ScrollRectangle.Height;
}
// Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control
Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));
wb.Dispose();
return bitmap;
}
And then via PHP you can do:
exec("CreateScreenShot.exe -url http://.... -save C:/shots domain_page.png");
Then you have the screenshot in the server side.
This might not be the ideal solution for you, but it might still be worth mentioning.
Snapsie is an open-source ActiveX object that enables Internet Explorer screenshots to be captured and saved. Once the DLL is registered on the client, you should be able to capture the screenshot and upload the file to the server within JavaScript. Drawbacks: the DLL needs to be registered on the client, and it works only with Internet Explorer.
We had a similar requirement for reporting bugs. Since it was for an intranet scenario, we were able to use browser addons (like Fireshot for Firefox and IE Screenshot for Internet Explorer).
This question is old but maybe there's still someone interested in a state-of-the-art answer:
You can use getDisplayMedia:
https://github.com/ondras/browsershot
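A minimal sketch of grabbing a single frame with it (the user must grant the capture prompt, and browser support varies):
// Capture one frame of the user-selected screen/window as a PNG data URL
async function captureFrame() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play(); // resolves once frames are flowing
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  stream.getTracks().forEach(function (track) { track.stop(); }); // end capture
  return canvas.toDataURL('image/png');
}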
SnapEngage uses a Java applet (1.5+) to take a browser screenshot. AFAIK, java.awt.Robot should do the job; the user just has to permit the applet to do it (once).
And I have just found a post about it:
Stack Overflow question JavaScript code to take a screenshot of a website without using ActiveX
Blog post How SnapABug works – and what they should do
I found that dom-to-image did a good job (much better than html2canvas). See the following question & answer: https://stackoverflow.com/a/32776834/207981
This question asks about submitting this back to the server, which should be possible, but if you're looking to download the image(s) you'll want to combine it with FileSaver.js, and if you want to download a zip with multiple image files all generated client-side take a look at jszip.
You can achieve that using an HTA and VBScript: just call an external tool to do the screenshotting. I forget its name, but on Windows Vista there is a built-in tool for taking screenshots, so you don't even need an extra install.
As for automation, it totally depends on the tool you use. If it has an API, I am sure you can trigger the screenshot and saving process through a couple of Visual Basic calls without the user knowing.
Since you mentioned HTA, I am assuming you are on Windows and (probably) know your environment (e.g. OS and version) very well.
If you are willing to do it on the server side, there are options like PhantomJS, which is now deprecated. The better way to go is Headless Chrome with something like Puppeteer on Node.js. Capturing a web page with Puppeteer is as simple as:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
await browser.close();
})();
However, it requires Headless Chrome to be able to run on your servers, which has some dependencies and might not be suitable in restricted environments. (Also, if you are not using Node.js, you might need to handle installing/launching the browsers yourself.)
If you are willing to use a SaaS service, there are many options such as
Restpack
UrlBox
Screenshot Layer
A great solution for screenshot taking in Javascript is the one by https://grabz.it.
They have a flexible and simple-to-use screenshot API which can be used by any type of JS application.
If you want to try it, first you should get the authorization app key + secret and the free SDK.
Then, in your app, the implementation steps would be:
// include the grabzit.min.js library in the web page you want the capture to appear
<script src="grabzit.min.js"></script>
//use the key and the secret to login, capture the url
<script>
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com").Create();
</script>
Screenshot could be customized with different parameters. For example:
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com",
{"width": 400, "height": 400, "format": "png", "delay", 10000}).Create();
</script>
That's all.
Then simply wait a short while and the image will automatically appear at the bottom of the page, without you needing to reload the page.
There are other functionalities to the screenshot mechanism which you can explore here.
It's also possible to save the screenshot locally. For that you will need to utilize GrabzIt server side API. For more info check the detailed guide here.
As of today, April 2020, there is the GitHub library html2canvas:
https://github.com/niklasvh/html2canvas
GitHub 20K stars | Azure Pipelines: succeeded | Downloads 1.3M/mo
Quote: "JavaScript HTML renderer. The script allows you to take 'screenshots' of webpages or parts of them, directly in the user's browser. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation, as it does not make an actual screenshot but builds the screenshot based on the information available on the page."
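Basic usage is a one-liner, per the project README:
// Render the current page body to a canvas and attach it to the document
html2canvas(document.body).then(function (canvas) {
  document.body.appendChild(canvas);
});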
I made a simple function that uses rasterizeHTML to build an SVG and/or an image with the page contents.
Check it out:
https://github.com/orisha/tdg-screen-shooter-pure-js