I'm using Google Maps JS API v3 for a project. Is there a way to ask the map to cache tiles on the client's machine so that when they refresh the browser, the tiles don't have to all download again?
Many of my clients are on cellular connections where redownloading the map takes a considerable amount of time.
Thanks!
By default, Google Maps returns cached images (you can see this in the Network tab of the browser console).
If your users are having trouble caching the images, it's probably because they have disabled the cache.
This is actually possible with HTML5 and its cache-manifest feature. I'd suggest this question (and answer) be updated.
Google coders themselves have tackled this problem and unfortunately the information isn't well disseminated.
Required Readings
First take a look at the Google Code blogpost here: http://googlecode.blogspot.com/2010/04/google-apis-html5-new-era-of-mobile.html
Then have a read at Missouri State's own post: http://blogs.missouristate.edu/web/2010/05/12/google-maps-api-v3-developing-for-mobile-devices/
The Technique
You must cache every URL used by Google Maps
Employ measures to battle Chrome's and Firefox's stubborn caching behaviour by removing the site from "offline websites"
All customizations must be client-side, in JavaScript
Your cache manifest file will look like this (as per Missouri State):
CACHE MANIFEST
/map/mobile/examples/template.aspx
/map/mobile/examples/template.css
/map/mobile/examples/template.js
NETWORK:
http://maps.gstatic.com/
http://maps.google.com/
http://maps.googleapis.com/
http://mt0.googleapis.com/
http://mt1.googleapis.com/
http://mt2.googleapis.com/
http://mt3.googleapis.com/
http://khm0.googleapis.com/
http://khm1.googleapis.com/
http://cbk0.googleapis.com/
http://cbk1.googleapis.com/
http://www.google-analytics.com/
http://gg.google.com/
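To activate the manifest, you reference it from the html element of the page you want cached; a minimal sketch (the file name map.manifest and the page contents are hypothetical):
<!DOCTYPE html>
<html manifest="map.manifest">
<head><title>Offline map</title></head>
<body><div id="map"></div></body>
</html>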
Caveats
You will need to be entirely HTML5-based and recognize the impact this will have on your users. This approach works best where either your users are up to date on browser standards/devices or you have control over their choices.
Hope this helps.
The previous answer re the cache-manifest feature is incorrect. If you read the spec at http://www.w3.org/TR/html5/offline.html, under "5.7.3 The cache manifest syntax" you'll see that the NETWORK section of the manifest file actually lists resources that should NOT be cached:
# here is a file for the online whitelist -- it isn't cached, and
# references to this file will bypass the cache, always hitting the
# network (or trying to, if the user is offline).
NETWORK:
comm.cgi
The previous poster's example is actually saying:
1) cache the following files:
/map/mobile/examples/template.aspx
/map/mobile/examples/template.css
/map/mobile/examples/template.js
2) fetch the following from the network:
http://maps.gstatic.com/
http://maps.google.com/
http://maps.googleapis.com/
http://mt0.googleapis.com/
http://mt1.googleapis.com/
http://mt2.googleapis.com/
http://mt3.googleapis.com/
http://khm0.googleapis.com/
http://khm1.googleapis.com/
http://cbk0.googleapis.com/
http://cbk1.googleapis.com/
http://www.google-analytics.com/
http://gg.google.com/
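In other words, under the spec, files you actually want cached belong in the (implicit or explicit) CACHE section. A corrected sketch of the manifest above, with the NETWORK section reduced to the usual online whitelist wildcard:
CACHE MANIFEST
CACHE:
/map/mobile/examples/template.aspx
/map/mobile/examples/template.css
/map/mobile/examples/template.js

NETWORK:
*
Note that CACHE entries must be specific URLs, not prefixes, so the Google tile hosts listed above cannot simply be moved into the CACHE section.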
The number of related questions is huge. I spent a lot of time and probably looked at each of them; however, I haven't found a solution for my case.
There is a domain (let's call it domain.com). Google Maps has been connected to it for years with a key: <script src="//google.com/jsapi?key=the_key_is_here"></script>
There is a new domain (let's call it domain-new.com) - the full clone of the first domain. But the Google Map doesn't work on it and there is an error in the console: Google Maps API error: MissingKeyMapError.
Methods I have tried:
Creating a new key
Enabling all Google Maps APIs in console.cloud.google.com
Specifying all possible referrers in console.cloud.google.com:
domain.com
*.domain.com/*
domain.com/contact
domain-new.com
*.domain-new.com/*
domain-new.com/contact
Specifying the versions of the included script (?v=3/?v=3.exp/?v=3.26)
Replacing the included script with: <script async defer src="//maps.googleapis.com/maps/api/js?v=3.exp&libraries=places&key=the_key_is_here"></script>
All the changes were made simultaneously for domain.com and domain-new.com. The map worked successfully on domain.com, but there still was an error on domain-new.com.
Therefore, the exact same map with the same key currently works on domain.com, but it still doesn't work on domain-new.com. I have read all the relevant Google Maps API documentation and found no suitable solution. I hope you can help me.
My client side application uses the Facebook SDK for JavaScript loaded directly from the officially-documented URL (https://connect.facebook.net/en_US/sdk.js) and specifies v2.9 as the version during initialization.
The init snippet looks like this:
FB.init({
  appId: '[redacted]',
  cookie: true,
  xfbml: false,
  version: 'v2.9'
});
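For context, the SDK file itself is pulled in with the standard async loader from Facebook's documentation (shown here with the same redacted app ID), so the only script host I expect to be involved is connect.facebook.net:
// Standard async loader from Facebook's JS SDK docs.
window.fbAsyncInit = function () {
  FB.init({
    appId: '[redacted]',
    cookie: true,
    xfbml: false,
    version: 'v2.9'
  });
};

(function (d, s, id) {
  var js, fjs = d.getElementsByTagName(s)[0];
  if (d.getElementById(id)) { return; }
  js = d.createElement(s);
  js.id = id;
  js.src = 'https://connect.facebook.net/en_US/sdk.js';
  fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));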
The Graph API docs reference graph.facebook.com in all example HTTP request snippets and make no reference to z-m-graph.facebook.com; however, I have begun observing requests to https://z-m-graph.facebook.com/v2.9/.
It appears that endpoints are pre-configured in the sdk.js script; here's a snippet I found in sdk.js while searching for "z-m-graph" under the Sources tab in Chrome dev tools:
__d("UrlMapConfig", [], {
"www": "www.facebook.com",
"m": "m.facebook.com",
"connect": "connect.facebook.net",
"business": "business.facebook.com",
"api_https": "z-m-api.facebook.com",
"api_read_https": "z-m-api.facebook.com",
"graph_https": "z-m-graph.facebook.com",
"an_https": "an.facebook.com",
"fbcdn_http": "z-m-static.xx.fbcdn.net",
"fbcdn_https": "z-m-static.xx.fbcdn.net",
"cdn_http": "staticxx.facebook.com",
"cdn_https": "staticxx.facebook.com"
});
I cannot reproduce this config mapping deterministically. Sometimes the api_ endpoints use z-m-* and sometimes they do not.
UPDATE (2017-10-17T15:36+00:00)
re: Why do I care that the SDK is attempting to access a different Graph API host than expected?
I use the SDK to make Graph API calls as part of a registration / login experience. Due to the security-sensitive nature of this page, I follow OWASP guidelines and implement a strict Content Security Policy (CSP).
Following the security principle of least privilege, the CSP only allows connections to and assets from hosts I expect the application to require. As z-m-graph.facebook.com is not documented or referenced anywhere and graph.facebook.com is used specifically and exclusively in all examples and instructions, graph.facebook.com is permitted while z-m-graph.facebook.com is not.
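For reference, the relevant directives look roughly like this (a sketch of my policy, trimmed to just the hosts named above):
Content-Security-Policy: script-src 'self' https://connect.facebook.net; connect-src 'self' https://graph.facebook.com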
I'd love help chasing down answers / leads to the following:
How can I force the SDK to always use graph.facebook.com?
What is z-m-graph.facebook.com?
Where can I find documentation for z-m-graph.facebook.com?
Are there other hosts that the JS SDK may attempt to use for Graph API requests? What are they?
Thanks for taking a look!
From what I've noticed, there are:
graph.facebook.com
z-m-graph.facebook.com
b-graph.facebook.com
z-m should mean "zero mode", which is used in countries where network providers let you use Facebook for free with capped data and (most of the time) no viewable media. Facebook made this mode for mass adoption in developing countries. Note: free.facebook.com does not use z-m-graph.facebook.com but a pure HTML document, since the Free Basics program specifies only HTML and no JavaScript, for very limited phones.
b should mean "basic". I don't know much about it, but it seems to have some similarities with the above, or is used on slow or limited connections (not so sure).
What I've noticed from messing around with the whitehat settings in the Facebook app is that they all end up at the same endpoint on the server (95% sure) and have the same body parameters in the POST. Most importantly, assuming your ISP supports it: if you toggle free mode, the app switches to z-m (zero mode), sometimes to b (basic) if photos don't show up or the network is slow, and back to graph.facebook.com when you toggle data mode again. z-m-graph.facebook.com has no docs because very few devs would need it, since it is very ISP-dependent: in Nigeria, of the top three major ISPs, only two support Facebook's z-m graph endpoint, so I can't use it on the other network. It doesn't make sense to write docs for it, but it behaves like the others. Hope this is good for you ;)
What domains/protocols in the img-src directive of the Content-Security-Policy header are required to allow Google AdWords conversion tracking?
From testing, when we call google_trackConversion, it looks like the browser creates an image with a src that follows a chain of 302 redirects between various domains...
www.googleadservices.com ->
googleads.g.doubleclick.net ->
www.google.com ->
www.google.co.uk
The final .co.uk looks suspicious to me. As we're testing from the UK, we're concerned that tracking called from other countries will redirect to other domains.
What is the complete list of domains that we need to open up in order for the tracking to work?
As requested in comments, an example path component of the first request is:
pagead/conversion/979383382/?random=1452934690748&cv=8&fst=1452934690748&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=18&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
and repeating the conversion a second time, the path component of the first request is
pagead/conversion/979383382/?random=1452934959209&cv=8&fst=1452934959209&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=26&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
I used a free VPN service to connect from a couple of countries (the Netherlands and Singapore), and the last redirect doesn't occur: the final request to www.google.com is a 200. However, I obviously haven't tried connecting from every country, so my original question stands.
Unfortunately, there aren't many ways around this. Resources require either whitelisting (in the case of remote resources, like this one) or inlining tricks (i.e. nonce or sha256-...) when CSP is active. At the end of the day, though, CSP can probably still make your site safer and protect most resources.
Depending on what you are trying to do, though, you may still be able to achieve your goal.
Here are some options:
Whitelist all images.
Of course, you could simply place a "*" in your img-src directive, but I imagine you already know that and are choosing not to because it defeats CSP's protection for images.
Load the image via alternate means.
If all you are after is specifically locking down images, and, say, don't care so much about XMLHttpRequest, you could load the pixel via POST or even via a <script> tag with a custom type (using the AdWords image tag tracking method). This takes advantage of the fact that Google only needs the browser to complete the HTTP request/response (and redirect) cycle for analytics purposes, and you don't really care about parsing or executing the resulting content, which is a 1x1 transparent pixel anyways. This allows you to lock down your img-src directive (if that is indeed your goal) while still allowing whatever domain Google would like to use for redirects.
I know this only moves your problem, but it's useful if your main threat is malicious images.
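A sketch of that alternate-means idea, reusing the conversion ID and label from the question's example requests (whether Google records a POSTed pixel identically to a GET <img> request is an assumption): firing the pixel URL via fetch() moves the request under connect-src instead of img-src.
// Sketch: fire the AdWords conversion pixel without an <img> element.
// The ID and label are from the question's examples; adjust for your account.
var params = new URLSearchParams({
  random: String(Date.now()),
  label: 'jvoMCNP4umIQ1uiA0wM',
  fmt: '3',
  url: location.href
});
fetch('https://www.googleadservices.com/pagead/conversion/979383382/?' + params, {
  method: 'POST', // per the suggestion above; we only need the request cycle
  mode: 'no-cors' // the 1x1 pixel response is never read
});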
Place all Google domains in your img-src.
As suggested below. Header lengths will be a problem (even if the specs say you're fine, implementors are not always so generous), and more importantly, you may encounter spurious failures as Google changes their list of domains, which is certainly not a public or easily noticeable action (besides your ad conversions not coming through!). Since I imagine your job isn't to update that list constantly, you probably don't want to go with this option.
Report failures for a few months and then roll with it.
Because CSP supports reporting URIs and the Content-Security-Policy-Report-Only variant, you can roll it out in report-only mode and wait for reports to come in. If you already have good data about your userbase (and it doesn't change much), this can be a good option - once you see those reports stabilize on a list of domains, seal it in a regular CSP header. Optionally, you can place a reporting URI on the final header to catch any additional failures. The downside of this strategy, of course, is that you don't get protection while in report-only mode, and when you switch to enforcing it, failures cause lost conversion data and you're playing catch up.
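Here is a minimal sketch of that rollout, assuming an Express server (the /csp-report endpoint is hypothetical):
// Report-only CSP: nothing is blocked yet, but violations are reported.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    "img-src 'self' *.google.com *.doubleclick.net *.googleadservices.com; " +
    'report-uri /csp-report'
  );
  next();
});

// Collect violation reports so the domain list can be stabilized over time.
app.post('/csp-report', express.json({ type: 'application/csp-report' }), (req, res) => {
  console.log('CSP violation:', JSON.stringify(req.body));
  res.sendStatus(204);
});

app.listen(3000);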
Static pixel with reverse proxy
Okay. Well, with the above options not being so great (I admit it), it's time to think outside the box. The problem here is that HTTP optimization techniques applied by Google (sharding/geo-pinning domains) are at odds with good security practice (i.e. CSP). The root cause of the domain ambiguity is the client's geographic location, so why not pin it yourself?
Assuming you have advanced control of your own HTTP server, you could use the static pixel approach for tracking and proxy the request yourself, like so:
User ---> GET http://your-page/
User <--- <html>...
pixel: http://your-page/pixel?some=params
User ---> http://your-page/pixel?some=params
---> fetch http://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/?some=params
<--- redirect to http://google.com, or http://google.co.uk
User <--- return redirect
Using a static pixel (like approach #2) and putting your proxy, say, in the US or UK should ensure that the source IP is geographically pinned there, and Google's anycast frontend should route you to a stable endpoint. Placing a proxy in between the user and Google also gives you a chance to force-rewrite the redirect if you want to.
To simplify proxy setup (and add some performance spice), you could opt for something like Fastly with Origin Shielding instead of building it yourself. If you add the DoubleClick backend and proxy from there, you can pin the originating requests from the CDN to come only from a certain geographic region. Either way, your user should see a stable set of redirects, and you can trim down that list of Google domains to just img-src 'self' *.google.com *.doubleclick.net *.googleadservices.net.
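A sketch of that proxy in Node/Express (the conversion ID and paths are hypothetical): the server makes the upstream request itself and always hands the browser a local 1x1 GIF, so img-src 'self' suffices for the pixel.
const express = require('express');
const https = require('https');
const app = express();

// 1x1 transparent GIF, returned to the browser regardless of Google's redirects.
const PIXEL = Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

app.get('/pixel', (req, res) => {
  // Forward the conversion parameters upstream.
  const qs = new URLSearchParams(req.query).toString();
  https.get(
    'https://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/?' + qs,
    (upstream) => {
      upstream.resume(); // the upstream request happening is all Google needs
      res.type('image/gif').send(PIXEL);
    }
  ).on('error', () => res.type('image/gif').send(PIXEL)); // fail open for the user
});

app.listen(8080);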
Edit: It is also worth noting that Fastly (and a growing list of other CDN providers) peer directly with Google Cloud at a few of their Points-of-Presence, offering an optimized path into Google's networks for your proxied traffic.
What are you trying to achieve by locking down your img-src?
CSP is a great security option, but most issues are with JavaScript (which can cause all sorts of issues), CSS (which can be used to hide or overlay elements with injected content) or framing options (which can be used for click-jacking by similarly overlaying content). Images are a much smaller risk IMHO.
There are few security risks that I can think of with loading images, which boil down to:
Tracking, and the privacy implications of that. Though you are already using Google AdWords, which tracks so much, and those who care about this typically block it in their browser.
Loading of insecure content (I presume you are using HTTPS exclusively or this whole conversation is a bit pointless?). This can be remediated with a more loose CSP policy of just https for img-src.
Loading an image and subsequently overlaying part of your website with that rogue image. But that requires JavaScript and/or CSS injection too - which should be locked down in CSP.
Ultimately unless you have a XSS vulnerability people shouldn't be able to easily load images into your pages. And even if they could I think the risks are small.
So, I would be tempted to just have an "img-src 'self' https:;" rather than try any of the workarounds the others have suggested - all of which have downsides and are not very future-proof.
Ultimately, if you are so concerned about the security of your site that locking down images is a high priority, I would question whether you should be running Google AdWords at all.
However, if there is a specific threat you are trying to protect against while still allowing AdWords, then provide details of it and there may be other ways around it. At the moment you've asked for a solution to a particular problem without explaining the actual underlying problem, which may have solutions other than the one you are asking about.
You can use Wikipedia's list of Google domains. It contains many domains unrelated to Google AdWords, but I don't think allowing domains like youtube.com would cause problems.
Currently the list is:
google.com
google.ac
google.ad
google.ae
google.com.af
google.com.ag
google.com.ai
google.al
google.am
google.co.ao
google.com.ar
google.as
google.at
google.com.au
google.az
google.ba
google.com.bd
google.be
google.bf
google.bg
google.com.bh
google.bi
google.bj
google.com.bn
google.com.bo
google.com.br
google.bs
google.bt
google.co.bw
google.by
google.com.bz
google.ca
google.com.kh
google.cc
google.cd
google.cf
google.cat
google.cg
google.ch
google.ci
google.co.ck
google.cl
google.cm
google.cn
g.cn
google.com.co
google.co.cr
google.com.cu
google.cv
google.com.cy
google.cz
google.de
google.dj
google.dk
google.dm
google.com.do
google.dz
google.com.ec
google.ee
google.com.eg
google.es
google.com.et
google.fi
google.com.fj
google.fm
google.fr
google.ga
google.ge
google.gf
google.gg
google.com.gh
google.com.gi
google.gl
google.gm
google.gp
google.gr
google.com.gt
google.gy
google.com.hk
google.hn
google.hr
google.ht
google.hu
google.co.id
google.iq
google.ie
google.co.il
google.im
google.co.in
google.io
google.is
google.it
google.je
google.com.jm
google.jo
google.co.jp
google.co.ke
google.ki
google.kg
google.co.kr
google.com.kw
google.kz
google.la
google.com.lb
google.com.lc
google.li
google.lk
google.co.ls
google.lt
google.lu
google.lv
google.com.ly
google.co.ma
google.md
google.me
google.mg
google.mk
google.ml
google.com.mm
google.mn
google.ms
google.com.mt
google.mu
google.mv
google.mw
google.com.mx
google.com.my
google.co.mz
google.com.na
google.ne
google.com.nf
google.com.ng
google.com.ni
google.nl
google.no
google.com.np
google.nr
google.nu
google.co.nz
google.com.om
google.com.pk
google.com.pa
google.com.pe
google.com.ph
google.pl
google.com.pg
google.pn
google.co.pn
google.com.pr
google.ps
google.pt
google.com.py
google.com.qa
google.ro
google.rs
google.ru
google.rw
google.com.sa
google.com.sb
google.sc
google.se
google.com.sg
google.sh
google.si
google.sk
google.com.sl
google.sn
google.sm
google.so
google.st
google.sr
google.com.sv
google.td
google.tg
google.co.th
google.com.tj
google.tk
google.tl
google.tm
google.to
google.tn
google.com.tr
google.tt
google.com.tw
google.co.tz
google.com.ua
google.co.ug
google.co.uk
google.com.uy
google.co.uz
google.com.vc
google.co.ve
google.vg
google.co.vi
google.com.vn
google.vu
google.ws
google.co.za
google.co.zm
google.co.zw
admob.com
adsense.com
adwords.com
android.com
blogger.com
blogspot.com
chromium.org
chrome.com
chromebook.com
cobrasearch.com
googlemember.com
googlemembers.com
com.google
feedburner.com
doubleclick.com
igoogle.com
foofle.com
froogle.com
googleanalytics.com
google-analytics.com
googlecode.com
googlesource.com
googledrive.com
googlearth.com
googleearth.com
googlemaps.com
googlepagecreator.com
googlescholar.com
gmail.com
googlemail.com
keyhole.com
madewithcode.com
panoramio.com
picasa.com
sketchup.com
urchin.com
waze.com
youtube.com
youtu.be
yt.be
ytimg.com
youtubeeducation.com
youtube-nocookie.com
like.com
google.org
google.net
466453.com
gooogle.com
gogle.com
ggoogle.com
gogole.com
goolge.com
googel.com
duck.com
googlee.com
googil.com
googlr.com
googl.com
gmodules.com
googleadservices.com
googleapps.com
googleapis.com
goo.gl
googlebot.com
googlecommerce.com
googlesyndication.com
g.co
whatbrowser.org
localhost.com
withgoogle.com
ggpht.com
youtubegaming.com
However, if you want to be sure that these are really all the domains, you should ask Google directly.
Unfortunately there is no clean workaround; CSP only accepts wildcards (*) at the left of the domain.
You can disable this feature in GTM or Universal Analytics, but if you use Google Ads it needs this to calculate the segments to target your ads; otherwise your ads will be very expensive (and not targeted).
So: you can check all valid Google domains here: https://www.google.com/supported_domains
and add them to the whitelist in img-src and connect-src in your CSP policy, and cross your fingers that Google will not add more (you could monitor this URL for changes with any of the services that do this, or with the sketch below).
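A sketch of such a monitor, assuming Node 18+ (for the built-in fetch): pull the list and print it as a CSP source fragment, then diff it against what you have deployed.
// supported_domains returns one domain suffix per line, e.g. ".google.com".
(async () => {
  const res = await fetch('https://www.google.com/supported_domains');
  const suffixes = (await res.text()).trim().split('\n');
  const sources = suffixes.map((d) => '*' + d.trim()).join(' ');
  console.log("img-src 'self' " + sources + ';');
})();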
This nightmare ends in mid-2023, when they deprecate Universal Analytics; GA4 does not use this.
Not sure if you are using anything to report CSP failures; we discovered the service https://report-uri.com/ - the free tier gives a reasonable endpoint to report failures. Once we went live we burned our quota in two days though... but it did help to find holes in our CSP.
It also crashed our server: we had to increase the HTTP header size once we had added all the Google domains twice (in img-src and connect-src).
I've tried:
node-webshot
phantomjs
I could do it locally, but I couldn't take screenshots of other websites that are based on AngularJS.
Bounty
Be able to take a screenshot of any AngularJS app which includes jQuery and Angular on the page. Every single site here: http://builtwith.angularjs.org/ should look as if I loaded it in my browser.
Must be able to get the screenshot via the terminal, so it can be run in a background process like a worker or something.
One random server (or whatever) should be able to go to an offsite website and take a screenshot of it.
It just needs to take a URL that will inevitably host an AngularJS app and output what you'd expect to see in your browser.
Does not need to be PhantomJS or Node-Webshot.
Update 1
As of last night this is how I'm doing it.
node-webkit (nodejs inside of chromium) compiled to linux-32
leave open on a random laptop
when it detects that a screenshot needs to be taken (via Firebase, temporarily), it opens an iframe with that URL
waits 10 seconds (a reasonable time to load a site/app)
uses node-webkit api to screenshot itself
I have some work to do on this solution.
Update 2
This appears to be a potential solution, but I've found that most of these solutions require opening a real browser and taking the screenshot, versus using a headless browser like PhantomJS.
http://browsershots.org/documentation#HowToCreateANewScreenshotFactory
Browserstack.com
Update 3
I'm continuing development on a production-ready solution for this on GitHub.
https://github.com/clouddueling/angular-snapshot
If you take this code and build it with node-webkit.app you will be able to run a screenshot server.
Have you tried wkhtmltopdf? It comes with a tool called wkhtmltoimage. It uses QtWebKit (a Qt port of the WebKit rendering engine) to render a web page, and converts the result to PDF or the image format of your choice, all done server-side.
Because it uses WebKit, it renders everything (images, CSS and even JavaScript) just like a modern browser does. You can fine-tune the parameters, such as tweaking the JavaScript execution grace period.
In my use case, the results have been very satisfying and are almost identical to what browsers would render.
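For example, a single invocation along these lines (the URL and delay are just placeholders) renders a page after giving its JavaScript five seconds to settle, using the --javascript-delay option documented below:
wkhtmltoimage --javascript-delay 5000 http://builtwith.angularjs.org/ screenshot.png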
Here's a list of command options:
Name:
wkhtmltoimage 0.11.0 rc2
Synopsis:
wkhtmltoimage [OPTIONS]... <input file> <output file>
Description:
Converts an HTML page into an image.
General Options:
--allow <path> Allow the file or files from the specified
folder to be loaded (repeatable)
--checkbox-checked-svg <path> Use this SVG file when rendering checked
checkboxes
--checkbox-svg <path> Use this SVG file when rendering unchecked
checkboxes
--cookie <name> <value> Set an additional cookie (repeatable)
--cookie-jar <path> Read and write cookies from and to the
supplied cookie jar file
--crop-h <int> Set height for cropping
--crop-w <int> Set width for cropping
--crop-x <int> Set x coordinate for cropping
--crop-y <int> Set y coordinate for cropping
--custom-header <name> <value> Set an additional HTTP header (repeatable)
--custom-header-propagation Add HTTP headers specified by
--custom-header for each resource request.
--no-custom-header-propagation Do not add HTTP headers specified by
--custom-header for each resource request.
--debug-javascript Show javascript debugging output
--no-debug-javascript Do not show javascript debugging output
(default)
--encoding <encoding> Set the default text encoding, for input
-H, --extended-help Display more extensive help, detailing
less common command switches
-f, --format <format> Output file format
--height <int> Set screen height (default is calculated
from page content) (default 0)
-h, --help Display help
--htmldoc Output program html help
--images Do load or print images (default)
--no-images Do not load or print images
-n, --disable-javascript Do not allow web pages to run javascript
--enable-javascript Do allow web pages to run javascript
(default)
--javascript-delay <msec> Wait some milliseconds for javascript to
finish (default 200)
--load-error-handling <handler> Specify how to handle pages that fail to
load: abort, ignore or skip (default
abort)
--disable-local-file-access Do not allow conversion of a local file
to read in other local files, unless
explicitly allowed with --allow
--enable-local-file-access Allow conversion of a local file to read
in other local files (default)
--manpage Output program man page
--minimum-font-size <int> Minimum font size
--password <password> HTTP Authentication password
--disable-plugins Disable installed plugins (default)
--enable-plugins Enable installed plugins (plugins will
likely not work)
--post <name> <value> Add an additional post field (repeatable)
--post-file <name> <path> Post an additional file (repeatable)
-p, --proxy <proxy> Use a proxy
--quality <int> Output image quality (between 0 and 100)
(default 94)
--radiobutton-checked-svg <path> Use this SVG file when rendering checked
radiobuttons
--radiobutton-svg <path> Use this SVG file when rendering unchecked
radiobuttons
--readme Output program readme
--run-script <js> Run this additional javascript after the
page is done loading (repeatable)
--disable-smart-width Use the specified width even if it is not
large enough for the content
--enable-smart-width Extend --width to fit unbreakable content
(default)
--stop-slow-scripts Stop slow running javascripts (default)
--no-stop-slow-scripts Do not stop slow running javascripts
--transparent Make the background transparent in pngs
--user-style-sheet <url> Specify a user style sheet, to load with
every page
--username <username> HTTP Authentication username
-V, --version Output version information and exit
--width <int> Set screen width, note that this is used
only as a guide line. Use
--disable-smart-width to make it strict.
(default 1024)
--window-status <windowStatus> Wait until window.status is equal to this
string before rendering page
--zoom <float> Use this zoom factor (default 1)
Specifying A Proxy:
By default proxy information will be read from the environment variables:
proxy, all_proxy and http_proxy; proxy options can also be specified with the
-p switch
<type> := "http://" | "socks5://"
<userinfo> := <username> (":" <password>)? "@"
<proxy> := "None" | <type>? <userinfo>? <host> (":" <port>)?
Here are some examples (In case you are unfamiliar with the BNF):
http://user:password@myproxyserver:8080
socks5://myproxyserver
None
Contact:
If you experience bugs or want to request new features please visit
<http://code.google.com/p/wkhtmltopdf/issues/list>, if you have any problems
or comments please feel free to contact me: <uuf6429@gmail.com>
Use BrowserStack to test your application in all browsers without having to install each one, including mobile browsers, different phones, tablets, etc.
There is support for Selenium automated testing and screenshots. Local testing is supported, no public URL is needed.
The Screenshots API is available for configuring the screenshots you need; Screenshooter is a tool for generating BrowserStack screenshots from the command line.
There is a trial period, as it's a commercial product, but it's very well made and worth every penny. You can subscribe for only one month. I have used it personally and I highly recommend it.
Although I have not tried it myself, I have seen a service deployed in production that takes screenshots using WebDriver from Selenium.
Build the Selenium WebDriver: https://code.google.com/p/selenium/
Use the RESTful API to communicate with the server. There are specific calls with which you can issue a request to fetch a website and take a screenshot of the current instance.
Everything is done in the background, so I think it fits your requirement.
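For what it's worth, a minimal sketch of that approach with the selenium-webdriver Node bindings (it assumes a local Chrome/chromedriver; the URL and wait time are placeholders):
// Drive a real browser via Selenium WebDriver and save a screenshot.
const fs = require('fs');
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://builtwith.angularjs.org/');
    await driver.sleep(10000);                 // crude wait for Angular to render
    const png = await driver.takeScreenshot(); // base64-encoded PNG
    fs.writeFileSync('screenshot.png', png, 'base64');
  } finally {
    await driver.quit();
  }
})();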
Probably this will help: https://bitbucket.org/vodolaz095/site-shooter
This is a nodejs + phantomjs application for taking site screenshots.
You need a Heroku free-tier service to run it.
BTW, you can try this application - https://pageshooter.herokuapp.com
I think it can take screenshots of AngularJS sites.
Node-Webshot uses PhantomJS, which in turn uses QtWebKit, which doesn't work with AngularJS.
More info: https://github.com/angular/angular.js/issues/2985
Suggestion: make sure the PhantomJS bundled within Node-Webshot is absolutely the latest version. If not, replace it with the latest version and pray that they have fixed the issue by now.
If you have access to the command line options of PhantomJS, you could try a few of them in here: https://github.com/ariya/phantomjs/wiki/API-Reference
The ones particularly ringing the bell are:
--ignore-ssl-errors=true
--local-to-remote-url-access=true
--web-security=false
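For instance, combining those flags with the rasterize.js example script that ships with PhantomJS (the URL is a placeholder):
phantomjs --ignore-ssl-errors=true --web-security=false examples/rasterize.js http://builtwith.angularjs.org/ screenshot.png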
I run into this problem quite a lot: some of my users end up with a corrupt application cache (HTML5).
I update the manifest file every time there is a new release, yet sometimes some users still get a corrupt application cache.
In such a case I want to fully clear what is in their application cache and load all the fresh content from the server.
Is there a way to do that using JavaScript?
According to the following article on
http://www.w3schools.com/html5/html5_app_cache.asp
there are three ways in which the application cache will be reset; these are:
The user clears the browser cache
The manifest file is modified
The application cache is programmatically updated
More information about programmatically updating the application cache can be found here:
http://www.html5rocks.com/en/tutorials/appcache/beginner/
It looks something like this:
var appCache = window.applicationCache;
appCache.update(); // this will attempt to update the user's cache and change the application cache status to UPDATEREADY
if (appCache.status == window.applicationCache.UPDATEREADY) {
appCache.swapCache(); //replaces the old cache with the new one.
}
This one is quite old, but as I see a wrong answer being up-voted, I felt like giving a hint...
If one takes the trouble of looking at the spec, you can see that there's no way for code to force the browser to reload the cache unless there's a change in the manifest - and that's when appCache.status == window.applicationCache.UPDATEREADY becomes true.
Look here: http://www.w3.org/TR/2011/WD-html5-20110525/offline.html
"updateready: The resources listed in the manifest have been newly redownloaded, and the script can use swapCache() to switch to the new cache."
So, reading it carefully, you find that the applicationCache reaches that status when the resources have just been downloaded... that is, a previous "downloading" event occurred, and before that one, a "checking" event.
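To tie this back to the question, here is a sketch of the flow the spec describes: wait for the updateready event instead of polling status right after update().
// Listen for updateready; swapCache() is only valid once the new cache is ready.
var appCache = window.applicationCache;

appCache.addEventListener('updateready', function () {
  if (appCache.status === appCache.UPDATEREADY) {
    appCache.swapCache();     // switch to the freshly downloaded cache
    window.location.reload(); // reload so the page actually uses it
  }
}, false);

appCache.update(); // kick off a manifest check (fires checking/downloading first)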