I am in the process of developing a web extension (for Firefox) and use console.log a lot during development. I do not want my extension to be detected by the website itself, hence my question:
Can the website's own JavaScript capture the console.log output I generate from within a content script?
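To make the concern concrete, this is the kind of interception I mean - a page script wrapping console.log (a rough sketch; capturedByPage is just a hypothetical stand-in for whatever the site might do with the data):

// page script (the website's own code) wrapping console.log
const originalLog = console.log;
console.log = function (...args) {
    capturedByPage(args); // hypothetical: the page inspects or reports the arguments
    return originalLog.apply(console, args);
};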
Thanks!
What would be the fastest and least resource-consuming (CPU, RAM) way to take a URL, get the JavaScript-rendered HTML of the page in an ordinary browser running headless (Google Chrome or Firefox), and save it to disk?
The idea is also to be able to change the browser's proxy settings per request.
I'm well aware of Selenium, Puppeteer, PhantomJS and similar solutions. This needs to be done with a REAL browser, remotely managed through some API in a Linux environment.
I've only found the JS APIs for building add-ons, and haven't found any ready solutions except Remote Browser, which I'm not sure is still maintained.
Any pointers, snippets or whatever are more than welcome since I can't find anything.
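For reference, the rough direction I had in mind with the add-on JS APIs is something like the sketch below (assuming Firefox's browser.tabs and browser.downloads WebExtension APIs; untested, and the blob-URL download step in particular may need adjusting). Per-request proxying would presumably be layered on top with something like browser.proxy.onRequest.

// background script of a hypothetical WebExtension
async function saveRenderedPage(url, filename) {
    // open the page in a background tab and wait for it to finish loading
    const tab = await browser.tabs.create({ url, active: false });
    await new Promise(resolve => {
        const listener = (tabId, info) => {
            if (tabId === tab.id && info.status === "complete") {
                browser.tabs.onUpdated.removeListener(listener);
                resolve();
            }
        };
        browser.tabs.onUpdated.addListener(listener);
    });

    // grab the JavaScript-rendered DOM out of the tab
    const [html] = await browser.tabs.executeScript(tab.id, {
        code: "document.documentElement.outerHTML"
    });

    // hand the result to the downloads API so it lands on disk
    const blob = new Blob([html], { type: "text/html" });
    await browser.downloads.download({
        url: URL.createObjectURL(blob),
        filename
    });

    await browser.tabs.remove(tab.id);
}

Whether something along these lines can be driven remotely over an API, and how well it holds up headless, is exactly what I can't figure out.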
Is it necessary for the JavaScript-rendered HTML page to be functional after it is saved?
Just take a screenshot using Python and save it to disk.
HTML:
<html><script src="http://someurl.com/jscript.js"></script></html>
I'm trying to extract the source code of jscript.js in a Chrome extension.
But there is no attribute on the DOM object that holds the source of the script.
Is there a way to extract the source code of the JavaScript that is loaded on the page?
(By using a DOM object or some internal object, without re-downloading the script.)
The reason: some web servers return different source code depending on the request (usually BAD servers do that), so if I re-download it with a different request, I can't get the same code that was loaded in the browser.
According to "Is external JavaScript source available to scripting context inside HTML page?", it's not normally possible without re-downloading, since it's not exposed to the DOM.
An extension, however, can hook into information available to the browser.
The simplest would be to create a DevTools extension. It would only work when the DevTools are open on the page, but then you can easily access the source with chrome.devtools.inspectedWindow.getResources().
Somewhat harder, but one can use the chrome.debugger API to achieve the same while DevTools are closed. It's a low-level API, but it allows doing everything DevTools can do. I don't have a ready example, but the Debugger Protocol docs will help.
Neither is possible from a content script.
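For the DevTools route, the core of it is roughly this (untested sketch for a devtools page declared via devtools_page in the manifest; the URL check is only an example):

// devtools page script - only runs while DevTools are open on the inspected tab
chrome.devtools.inspectedWindow.getResources(function (resources) {
    resources.forEach(function (resource) {
        if (resource.url.indexOf("jscript.js") !== -1) {
            resource.getContent(function (content, encoding) {
                // "content" is the source exactly as the browser loaded it,
                // so a server that varies its responses per request doesn't matter
                console.log(resource.url, content);
            });
        }
    });
});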
You can also go directly to the extension on your file system:
Where to find extensions installed folder for Google Chrome on Mac?
For example, on my MacBook:
pwd
output: ~/Library/Application\ Support/Google/Chrome/Default/Extensions/hkbhjllliedcceblibllaodamehmbfgm/1.7.1_0
My goal is to control my Chrome web browser with a Python program.
In particular, I'd like to directly get the HTML of the web page that I see.
Currently I'm using a Chrome extension + a Python program to do this.
My current system works like this:
the Chrome extension copies document.documentElement.innerHTML into the clipboard,
and the Python program retrieves the clipboard data and does some work (mouse clicks or keystrokes).
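The content-script side of that copy step is essentially this (simplified; it assumes the extension has clipboard write access and the page is focused):

// content script: put the rendered HTML on the clipboard for the Python side
const html = document.documentElement.innerHTML;
navigator.clipboard.writeText(html).then(() => console.log("HTML copied to clipboard"));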
I know about the webbrowser and urllib modules in Python.
However, the webbrowser module only opens pages in my browser, and urllib does not interact directly with my Chrome browser.
Is there any way to get the HTML from my active Chrome browser directly, like this?
I am currently creating a Chrome extension (which mainly uses JavaScript) that allows users to scrape the images on a webpage and download them. I have finished the link-scraping part, and the code returns an array like:
["http://example.com/image1.jpg","http://example.com/image2.jpg"]
But how do I download all of the links in ONE CLICK? I tried listing all the photos in a new tab and letting users press Ctrl+S to save the page, but that greatly affects the UI and I don't like it. I don't host a web page, so a server-side script isn't an option. Any other solutions?
As far as I know, Chrome extensions technically can't save files to disk the way Firefox extensions can.
The only way to do this is by using NPAPI.
Unfortunately, extensions using NPAPI will most likely not be accepted into the Web Store due to security problems. Of course, it'll be fine if you use it yourself or host the extension on your own website.
You can install and examine the code of the following extensions; maybe you can even use the NPAPI plugin they provide:
Screen Capture (by Google) https://chrome.google.com/webstore/detail/cpngackimfmofbokmjmljamhdncknpmg
Chrome Toolbox (by Google) https://chrome.google.com/webstore/detail/fjccknnhdnkbanjilpjddjhmkghmachn
Awesome Screenshot: Capture & Annotate https://chrome.google.com/webstore/detail/alelhddbbhepgpmgidjdcjakblofbmce
Download Assistant (by Google) - got killed, I guess.
I have a project that requires delivery on a DVD and through the web. I have been using Flash to drive a menu system and JavaScript to load pages or trigger other actions on the web. However, when I move it to a DVD I get Security Error 2060 - the SWF is unable to communicate with the HTML page it is loaded into, so none of the JavaScript calls go through. I am using ExternalInterface calls and jQuery on the HTML page.
Searching online, I have made sure that Flash is published with "Allow local files only", and on the HTML page I'm using SWFObject with an allowScriptAccess param of "always" - looking at the generated code on the pages, the allowScriptAccess is there.
Is there some security setting I can program in that will let my Flash application work the same from a DVD as it does from the web and communicate with JavaScript? If I need to compile two different SWFs, that would be okay.
I suspect you are having a local sandbox problem. Have you gone through the information at http://www.adobe.com/devnet/flashplayer/security.html?
If you can do PC-only, then investigate Server2Go. This is a standalone WAMP stack that works well from a CD/DVD. Your page will then run in the internet zone, and you should not run into the same security problems.