Alright, first off, this is not a malicious question. I have no intention of using any of this information for ill gain.
I have an application that contains an embedded browser. This browser runs within the application's process, so I can't access it via Selenium WebDriver or anything like that. I know that it's possible to dynamically append scripts and html to loaded web pages via WebDriver, because I've done it.
In the embedded browser, I don't have access to the pages that get loaded. Instead, I can create my own html/javascript pages and execute them, to manipulate the application that houses the browser. I'm having trouble manipulating the existing pages within the browser.
Is there a way to dynamically add javascript to a page when you navigate to it and have it execute right after the page loads?
Something like
page1.navigateToUrl(executeThisScriptOnLoad)
page2 then executes the passed script.
I guess it is not possible to do this without knowledge of the destination site, although you can send data to the site and then use the eval() function to evaluate the sent data on the destination page.
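For example, reusing the pseudocode API from the question (a rough sketch only; the URL and the script being passed are made up, and it only works if you also control the destination page so it knows to look for the data):

// On the navigating side: pass the script to run in the URL fragment
var script = "document.body.style.background = 'yellow';";
page1.navigateToUrl("http://example.com/page2.html#" + encodeURIComponent(script));

// On the destination page (page2.html): run whatever was passed, once the page has loaded
window.onload = function () {
    var passed = decodeURIComponent(window.location.hash.slice(1));
    if (passed) {
        eval(passed); // evaluates the sent data in the destination page's context
    }
};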
I am new to javascript and Node.js, but I am trying to figure out if there is an alternative to document.getElementById() in Node that does the same thing. If it cannot be done in Node, is it possible to create a pure JS file to manipulate the DOM and a separate Node file? For extra information, what I am trying to do is convert CSV lines into a JSON object and then update the webpage with the new information, which is why I want to use document.getElementById().
document.getElementById() is a function that exists in a browser. There is no such function in nodejs.
It is possible to get a 3rd party module that will parse an HTML web page, create a DOM and then allow you to access the DOM programmatically to see what's in the web page. Cheerio and Puppeteer are two such 3rd party modules, each with differing levels of features. Puppeteer actually uses the Chromium browser engine and can even run Javascript in the page and generate screenshots. Cheerio parses the HTML and lets you access just what it creates (without Javascript running).
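For instance, the Cheerio side might look like this (a minimal sketch; the HTML string here just stands in for whatever you fetched or read from disk):

// npm install cheerio
const cheerio = require('cheerio');

const html = '<html><body><h1 id="title">Hello</h1></body></html>';
const $ = cheerio.load(html);        // parse the HTML into a queryable, DOM-like structure
console.log($('#title').text());     // "Hello" - roughly what document.getElementById('title').textContent gives you in a browser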
It sounds like maybe you're a bit confused about how web pages work. A browser running on the end user's computer loads a web page. Once the page is loaded, at that point the server's job is done. The web page exists only in the browser on the user's computer. The server can't directly, on its own, change that web page.
To change that web page (without reloading it), you would have to have supporting Javascript code in the web page (that runs in the user's browser). For example, you could have your Javascript make an Ajax call from the web page that would request certain data from the server. When the server gets that request, it could generate the data and return JSON back to the browser. The Javascript in the browser would then receive that JSON, parse it into a Javascript object and then use the DOM to insert new objects into the existing web page based on the data it received.
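A minimal sketch of that round trip, from the browser's side (the /api/latest endpoint, the JSON shape, and the 'updates' element are all hypothetical):

// Runs in the browser, inside the already-loaded page
fetch('/api/latest')                                         // ask your server for fresh data
    .then(function (response) { return response.json(); })  // parse the JSON the server sent back
    .then(function (data) {
        var item = document.createElement('li');
        item.textContent = data.message;                     // data supplied by the server
        document.getElementById('updates').appendChild(item); // insert it into the existing page via the DOM
    });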
Note that all changes to the existing web page in the browser are made by the Javascript running in the web page in the user's browser, not directly by the server. The server can supply data, but cannot directly change the user's web page itself. Of course, the user could request an update page and the browser would request a new version of the whole page and the server could then supply a page that had different data in it, but that would involve reloading the whole page.
There are also template engines that exist for nodejs so that when your server is generating a web page, the template engine can help you create a set of HTML for that web page that incorporates dynamic data. This doesn't dynamically change a web page that is already sitting in a browser being displayed. Instead, it helps you generate a web page from scratch that incorporates dynamic data into the web page when it is first downloaded. Examples of template engines that work with Express in nodejs are Pug, EJS, Nunjucks, Handlebars and many others.
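For instance, with Express and EJS (a sketch only; the view name and the data are invented):

// npm install express ejs
const express = require('express');
const app = express();

app.set('view engine', 'ejs');           // Express will render .ejs templates from the views/ folder

app.get('/', (req, res) => {
    // the template receives this data and bakes it into the HTML before it is sent to the browser
    res.render('index', { title: 'Latest figures', rows: [12, 34, 56] });
});

app.listen(3000);

The matching views/index.ejs would then refer to <%= title %> and loop over rows when producing the page.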
I have a Chrome extension that used to run a background script that would call an API for a website using the user's session or cookie.
It'd simply perform a GET request and then pull various image URLs from the page using Cheerio.
The owners of the site though have now changed how the pages work. On load, they call a JSON API, and the source of the page uses JavaScript to render the page.
The issue I have now is that when I make a GET request, it simply gets the page source rather than the rendered HTML.
Does anyone know how I can get the rendered HTML? I'd rather not open a tab with the page in Chrome, grab the data with a content script and then close it (automatically, of course), as there are hundreds of pages to go through and that's quite intensive on CPU resources.
I am trying to set up a web application that contains 2 HTML pages. One is the login page and the other presents the data. However, I have difficulty reloading or doing DOM manipulation once the user clicks the login button (which calls a server-side function). I have read in previous instructions that Google Apps Script can only host one HTML page and does not support DOM manipulation. I tried to reload the page but it does not work. Is there any way to bypass the limitation?
I have an ActionScript program that I want to access some external JavaScript functions. By external, I mean that the ActionScript/swf aren't going to be loaded via the HTML/JavaScript. Everything I see recommends ExternalInterface, but that seems to imply that your JS loads your swf. Is there a way to call a JavaScript function by URL?
I'm not sure what you mean by calling a JavaScript function by URL; what you probably need is a JSON-based web interface / service.
How / where do you plan to run the Flash content if not embedded in the HTML? You'll need one place or another to actually run that JS code, be it on the client side in the browser or on the server side (in which case you need the web service).
So your page will contain some JavaScript to execute, but your Flash app will not be running in a browser? Can't you just use navigateToURL to open an HTML page containing JavaScript that executes on page load?
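For example, the page you navigate to could simply do its work in an onload handler (a sketch; doWork is a made-up placeholder for whatever the page needs to run):

<script type="text/javascript">
  function doWork() {
      // whatever the page needs to do once it is open (hypothetical)
      document.title = "Opened from the swf";
  }
  window.onload = doWork;   // runs as soon as the page opened by navigateToURL has loaded
</script>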
I am trying to write a web widget which will allow users to display customized information (from my website) in their own web page. The mechanism I want to use (for creating the web widget) is javascript.
So basically, I want to be able to write some javascript code like this (this is what the end user copies into their HTML page, to get my widget displayed in their page)
<script type="text/javascript">
/* javascript here to fetch page from remote url and insert into DOM */
</script>
I have two questions:
how do I write javascript code to fetch the page from the remote URL?
Ideally this will be PLAIN javascript (i.e. not using jQuery etc. - since I don't want to force the user to load third-party scripts like jQuery, which may conflict with other scripts on their page, etc.)
The page I am fetching contains inline javascript, which gets executed in a body.onLoad event, as well as other functions which are used in response to user actions - my questions are:
i). Will the body.onLoad event be triggered for the retrieved document?
ii). If the retrieved page is dumped directly into the DOM, then the document will contain two <body> sections, which is no longer valid (X)HTML - however, I need the body.onLoad event to be triggered for the page to be set up correctly, and I also need the other functions in the retrieved page, for the retrieved page to be able to respond to user interaction.
Any suggestions/tips on how I can solve these problems?
There are two approaches to this.
The host site uses an <iframe> tag to include your page in a fixed-size box inside their page. It operates in its own document with its own <body> and onload event; it is in your site's security context so it can use AJAX to call back to your server if it needs to for some reason.
This is easy; the guest page doesn't even especially need to know it is being included in an iframe.
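The host page would just include something like this (the URL and size are placeholders):

<iframe src="http://your-site/widget.html" width="300" height="250" style="border:0"></iframe>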
The host site uses <script src="http://your-site/thing.js"></script> to run a script from your server. Your script creates a load of content directly inside the host document using document.write() or DOM methods. Either way you know when you've finished putting them in place so you don't need onload.
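As a rough sketch of what thing.js could do with DOM methods (assuming a browser that supports document.currentScript; the id and text are made up):

// thing.js - runs inside the host page's document
(function () {
    var box = document.createElement('div');
    box.id = 'my-widget';                                  // hypothetical id
    box.textContent = 'Hello from your-site';              // content served as part of the script
    var here = document.currentScript;                     // the <script> tag that loaded this file
    here.parentNode.insertBefore(box, here.nextSibling);   // drop the widget in right after it
})();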
You are running in the host's security context, so you can't AJAX to your server or look at your server's cookies directly; any such data must be served as part of the script. (You can look at the host server's cookies and cross-site-script into any of their pages, and conversely if there is any sensitive data in your script the host site gets to see it too. So there is an implicit trust relationship any time one site takes scripting content from another.)