Context:
I am creating a Chrome extension that hides certain elements of certain sites
In this specific case, I'm trying to hide the main feed of YouTube's home and trending pages
The script works fine on all other sites, including Twitter, Facebook, etc.
But on YouTube, it's causing the page to crash
Roughly speaking, what the script does is:
Observes any mutation on document (childList: true, subtree: true, characterData: false)
Searches for the existence of certain nodes in the document
Changes some of their styles to hide them (or if already hidden, does nothing)
Adds a small menu into the node with a button to unhide the node
The MutationObserver is never disconnected because it needs to keep watching in the case of single-page apps where the page stays the same but different nodes come and go
So it keeps checking that the hidden nodes are still hidden every time there's any new mutation to the document or its subtree (heavy load on performance, I know - but it works fine on every other site)
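For concreteness, the core of the content script looks roughly like this (the selector is a hypothetical stand-in for the extension's real per-site configuration):

// Hide matching nodes whenever anything in the document changes.
const SELECTOR = 'ytd-rich-grid-renderer'; // hypothetical example target

function hideTargets() {
  document.querySelectorAll(SELECTOR).forEach(function (node) {
    if (node.dataset.hiddenByExtension) return; // already hidden, do nothing
    node.dataset.hiddenByExtension = 'true';
    node.style.visibility = 'hidden';
    // ...inject the small "unhide" menu into the node here...
  });
}

const observer = new MutationObserver(hideTargets);
observer.observe(document, { childList: true, subtree: true, characterData: false });
hideTargets(); // handle nodes that are already present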
YouTube issue:
YouTube always throws up a warning as follows, even when I am not running my script on it (in other words, YouTube's code is already a bit suspect):
[Violation] Added non-passive event listener to a scroll-blocking <some> event. Consider marking event handler as 'passive' to make the page more responsive. See <URL>
The specific event is either touchstart or wheel. This warning can appear hundreds of times even when I am not running my script.
When I run my script, this warning appears even more often than usual
Eventually, the entire page crashes or takes far longer to load than it should (but it does sometimes eventually make it all the way, showing that my extension is not completely breaking down)
There's also another warning that tends to show, [Violation] 'readystatechange' handler took <N>ms
This warning shows far fewer times than the other (see screenshot below)
Interestingly, loading the youtube.com home page in a fresh tab is usually fine, and my extension successfully hides (i.e. changes styles on and injects some extra HTML into) the node it's meant to hide
I then get a crash or extremely slow page load when I try to navigate within YouTube, e.g. specifically going to the Trending page using the left-hand side menu, OR occasionally when I hit refresh on the page
Things I've tried:
Overriding the default addEventListener method on EventTarget.prototype to force these listeners to be passive (see the sketch after this list). So far I have failed to do this successfully - I'm not sure I understand how, despite trying a few approaches from SO
Blocking the script the warning originates from (desktop_polymer_inlined_html_polymer_flags_v2.js) using the chrome.webRequest API, but that doesn't work because it breaks the whole page
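The sort of override I was attempting looks roughly like the sketch below (pieced together from SO suggestions); it has to be injected into the page's own context before YouTube's scripts run, which is the part I haven't managed, and forcing passive will break any handler that relies on preventDefault:

// Wrap addEventListener so scroll-blocking events are registered as passive.
(function () {
  const original = EventTarget.prototype.addEventListener;
  const forcePassive = new Set(['wheel', 'touchstart', 'touchmove']);

  EventTarget.prototype.addEventListener = function (type, listener, options) {
    if (forcePassive.has(type)) {
      if (typeof options === 'object' && options !== null) {
        options = Object.assign({}, options, { passive: true });
      } else {
        options = { capture: Boolean(options), passive: true };
      }
    }
    return original.call(this, type, listener, options);
  };
})();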
Questions:
Is it likely that this 'non-passive event listener' warning is interplaying with my script to cause the crashing of the page? Or that my script is causing more listeners to be added than the page would usually?
How can I stop this error from happening (e.g. how do I prevent the event listeners from being created by YouTube's JS)?
Does anyone know anything about the way YouTube is built that would make it crash if you try to 1) modify a style on an element directly 2) add another element into a parent element 3) continually check styles on an element? Builtwith.com was not much help.
Is there something else I am missing here? Another way I can change my content script to make it interplay better with YouTube?*
*I know a tempting answer will be 'don't observe the document'/'observe it less' but this is more or less non-negotiable in terms of the way the browser extension needs to work.
Screenshot:
Chrome profiling:
Note: having looked into them individually, none of the functions that are taking up the huge amount of time are part of my extension. So perhaps YouTube is reacting badly to the DOM modifications that my extension performs.
Related
I recently had an issue whereby a user was complaining that they couldn't access a certain page because the link wasn't where it was supposed to be.
After some head-scratching, I had them disable all browser extensions and sure enough the problem went away. Re-enable extensions one by one...
AdBlock.
For some reason, it was blocking the links to the pages the user wanted to access.
Now, I don't run ads and never plan to, so usually I just tell people with this problem to whitelist the site and all is well. But if someone never knew there was a problem to begin with, I would actually lose traffic because of this. So how can I avoid this?
The only thing I can really think of is to detect AdBlock and pop up a small notice explaining that AdBlock is known to corrupt the website and that, since we don't run ads, they may want to disable it for the site. I mean, the site is a game, and this isn't the first time a browser extension has broken it, but I don't think first-time visitors would be too happy to see a popup asking to disable their blocker, you know?
So any solution that would actually prevent AdBlock from corrupting the site in the first place would be great.
You cannot prevent Chrome extensions from running. They operate in their own isolated context, with a privileged API, hidden from page scripts.
Detecting adblockers is awkward. The easiest way is to create a 'sacrificial element' - a div with a class like 'ad_unit', for example - add it to the DOM and then wait a frame to see if it has been hidden (with display: none, for example, or a getBoundingClientRect check).
Element checking is tricky, though, because strictly speaking there's no guarantee an adblocker will run synchronously or before your checking code.
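A rough sketch of that check (the class names and the delay are guesses at what commonly matches filter lists, not guarantees):

// Insert a bait element, wait briefly, then see whether something hid it.
function detectAdblock(callback) {
  const bait = document.createElement('div');
  bait.className = 'ad_unit adsbox';   // classes that typical filter lists target
  bait.style.cssText = 'position:absolute;left:-9999px;width:10px;height:10px;';
  document.body.appendChild(bait);

  setTimeout(function () {
    const hidden = bait.offsetParent === null ||
                   bait.offsetHeight === 0 ||
                   getComputedStyle(bait).display === 'none';
    bait.remove();
    callback(hidden);
  }, 100); // arbitrary grace period - adblockers give no timing guarantee
}

detectAdblock(function (blocked) {
  if (blocked) {
    // show the notice about the blocker possibly breaking the site
  }
});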
Adblockers generally do their hiding via injected stylesheets rather than by mutating your DOM directly, so there is little for page scripts to observe. To put it more simply: you can't rely on DOM mutation events (or a MutationObserver) to spy on a foreign extension messing with your page.
The other option is to try and load a 'sacrificial file' - an image with a URI that looks like an advert, say - and then attach an onError handler to the element. If it throws an error that looks suspicious (I think it's ERR_BLOCKED_BY_CLIENT on Chrome), then you show your warning message.
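Sketched out, that might look something like this (the ad-looking path and showAdblockNotice are placeholders for whatever your site would actually use):

// Request a file whose URL looks like an advert and watch for failure.
const bait = new Image();
bait.onload = function () {
  // the request went through - either no blocker, or this URL isn't on its lists
};
bait.onerror = function () {
  // the request was blocked (Chrome's console reports net::ERR_BLOCKED_BY_CLIENT);
  // show the warning explaining that the blocker may break the site
  showAdblockNotice(); // hypothetical function
};
bait.src = '/ads/banner.gif?' + Date.now(); // hypothetical ad-looking path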
Your final choice is to try to avoid incurring AdBlock's wrath in the first place. Adblockers generally use open blacklists of URIs and CSS selectors, such as EasyList (https://easylist.to/easylist/easylist.txt), which Adblock Plus and a fair few others rely on. You could try to make sure your DOM elements never have IDs or classes that collide with any of those selectors. It's a big list, though, and it can change at any time.
IE at its best:
There is a USB stick with an HTML document on it. When the user opens it in IE11 and scripts are blocked, a prompt appears to allow those scripts to run.
When you click allow, the site seems to get reloaded, but it also looks like a new tab is opened and closed.
As soon as JS is enabled, you get redirected to an online version of the site.
Now, on the site there is a video which starts autoplaying after 10 seconds. But in IE11, a few seconds later the same video starts playing in parallel, so you hear the sound twice.
When you check the DOM and remove the <video> tag (there is only one), one video stops playing. The one that started later, though, keeps playing. Even when I visit another website the video keeps playing.
Only closing the browser stops the video.
This behaviour does not occur when I allow scripts to be executed directly.
Using video.js and jQuery.
Any ideas?
HTML elements do not require JavaScript and/or ActiveX to serve their content; the browser handles them automatically.
After the page is loaded and rendered as plain HTML, allowing JavaScript triggers DOM construction and re-renders the content. In the process, the DOM may create a duplicate instance of the media object and start a parallel stream - a dupe of the stream already started by the plain-HTML rendering - and that duplicate is not visible in the DOM.
So there's no new tab being opened or failing to close; it's simply the original, plain-HTML instance still handling the initial stream.
And this will happen only when running HTML pages locally.
Finally:
The best way of avoiding this - expected but unwanted behavior - would be to:
Remove the "autoplay" attribute from the <video> element (or set its autoplay property to false from script); autoplay is a boolean attribute, so setting it to "false" in markup does not disable it.
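In script terms, roughly:

// Make sure autoplay is off before scripts re-render the page.
var video = document.querySelector('video');
if (video) {
  video.removeAttribute('autoplay'); // the attribute's mere presence enables autoplay
  video.autoplay = false;            // clear the reflected property as well
}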
p.s.:
About the following issue of which you say: "As soon as JS is enabled, you get redirected to an online version of the site."
That's something no browser can or will do for you on its own. So you'll have to remove the code that is triggering the browser navigation to the online content yourself.
Good luck.
Is it possible to make Google Chrome stop painting a page... as in, to keep the page exactly the same, with no animations or content changes?
The reason I ask is because I have created an extension for people who find it difficult to read webpages when things are animating/flashing/changing/etc.
It currently works by taking a screenshot and layering it over the page (position absolute, with a high value z-index).
But because captureVisibleTab cannot capture the whole page (issue 45209), the screenshot needs to be re-created every time the user scrolls the page.
However, the change in iOS 8 Safari to no longer pause painting while scrolling got me thinking there may be another way around this, by trying to emulate the pre-iOS 8 behaviour (something I preferred, as Reader View does not always work, or stop animated GIFs).
You cannot stop the execution thread; the browser decides that.
To save CPU cycles, Chrome does pause the JavaScript execution thread when a window is blurred. But since you are showing the captured screenshot with a higher z-index, your window will still be active.
One possible way:
Disable JavaScript for that URL when the page is loaded.
You might miss dynamic content, but as you asked for "no animations or content changes": any DOM or style manipulation by JavaScript causes a repaint of the page, so disabling it might be one solution. I'm not sure how to stop CSS animations, however.
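If you go the extension route, chrome.contentSettings can switch scripts off per site; a rough sketch (the URL pattern is only an example, and the "contentSettings" permission is required):

// Block JavaScript for a given site, then reload so the page renders without it.
chrome.contentSettings.javascript.set(
  {
    primaryPattern: 'https://example.com/*', // example pattern, not a specific recommendation
    setting: 'block'
  },
  function () {
    chrome.tabs.reload(); // reload the current tab so its scripts no longer run
  }
);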
I have also seen extensions that can capture the full webpage as an image or PDF. You could capture the full page and show that, regardless of whatever is changing in the background.
I'm playing around with making my first Chrome extension: a small extension that monitors the web requests a page makes. This means I'm listening to the chrome.webRequest.onBeforeRequest event.
I am a little confused about how to execute this code on every page I load. It works on any page if I open the extension's page and run the code in that context, but I would like it to run without having that page open. How do I go about doing this?
I looked at content_scripts, but I haven't figured out whether they are the proper path to take - and even if they are, how do I send a message from my content script to my extension page telling it to run the code? As far as I understand it, the content script only runs after the page has loaded, so it wouldn't matter if I called my page and added the listeners then, because the show is already over - is this correct?
The way I understand it, I cannot add these listeners in the content script - hence the need for the messaging thing - is this correct?
Thank you.
You would put the onBeforeRequest listener in a background page, specifically the persistent variant of it. When the event is invoked, whatever you have in the handler will be run.
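A minimal sketch of that setup (the URL filter and the log line are placeholders to adapt):

// manifest.json (relevant parts only):
//   "background": { "scripts": ["background.js"], "persistent": true },
//   "permissions": ["webRequest", "<all_urls>"]

// background.js
chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    console.log('Request:', details.url); // runs for every matching request; no page needs to be open
  },
  { urls: ['<all_urls>'] } // narrow this filter to the sites you actually care about
);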
I am researching the possibility that I might be able to use a Chrome extension to automate browsing and navigation (conditionally). My hope is that the extension can load a remote page (in the background) and inject a JavaScript that evaluates the clickable links, clicks (by calling the click method) the appropriate link (chosen by some JavaScript logic), and then repeats the process for the resulting page.
My JavaScript ability is not the problem - but I am struggling to discern whether or not a Chrome extension can load pages in the background and inject script into them (making the DOM accessible).
I would be pleased if anyone could confirm (or deny) the ability to do so - and if so, some helpful pointers on where I should research next.
@Rob W - it seems the experimental features fit the bill perfectly. But my first tests suggest the features are still very experimental... i.e. no objects get returned to callbacks:
background.html
function getAllosTabs(osTabs) {
    var x = osTabs;
    alert(x.length); // error: osTabs is undefined
}

function createOffScreenTabCallback(offscreenTab) {
    document.write("offscreen tab created");
    chrome.experimental.offscreenTabs.getAll(getAllosTabs);
    alert(offscreenTab); // error: offscreenTab is undefined
}

var ostab = chrome.experimental.offscreenTabs.create({"url": "http://www.google.com"}, createOffScreenTabCallback);
alert(ostab); // error: ostab is undefined
Some further digging into the Chromium source code on GitHub revealed a limitation on creating an offscreenTab from a background page:
Note that you can't create offscreen tabs from background pages, since they
don't have an associated WebContents. The lifetime of offscreen tabs is tied
to their creating tab, so requiring visible tabs as the parent helps prevent
offscreen tab leaking.
So far it seems unlikely that I can create an extension that browses (automatically and conditionally) in the background, but I'll keep trying - perhaps creating it from script in the popup might work. It won't run automatically at computer startup, but it will run when the browser is open and the user clicks the browser action.
Any further suggestions are highly welcome.
Some clarifications:
there's no "background" tabs except extension's background page (with iframes) but pages loaded in iframes can know they are being loaded in frames and can break or break at even the best counter-framebreaker scripts
offscreenTab is still experimental and is very much visible, as its intended use is different from what you need it for
content scripts and chrome.tabs.update() are the only way to handle the automated navigation part (a rough sketch is at the end of this answer); aside from being extremely harsh to program, the problems and limitations are numerous, including CSP (Content-Security-Policy), their isolated context isolating event data, etc.
Alternatives... not many, really. The thing is, you're using your user's computer and browser to do your things, and regardless of how dirty they are or aren't, Chrome's dev team still won't like it and will throw many things at you and your extension (like the v2 manifest).
You can use NPAPI to start another instance: chrome.exe --load-extension=iMacros.
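For completeness, a rough sketch of the content-script half of the navigation approach mentioned above (pickLink is a hypothetical stand-in for your actual decision logic):

// Content script: choose a link and ask the extension to navigate to it.
function pickLink() {
  const links = Array.from(document.querySelectorAll('a[href]'));
  return links.length ? links[0].href : null; // placeholder logic
}

const next = pickLink();
if (next) {
  chrome.runtime.sendMessage({ navigateTo: next });
}

// Background page counterpart, roughly:
// chrome.runtime.onMessage.addListener(function (msg, sender) {
//   if (msg.navigateTo) chrome.tabs.update(sender.tab.id, { url: msg.navigateTo });
// });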