HTML file call to calculation function - javascript

This is my first time delving into an HTML editor with the goal of learning it. I opened the website option-price.com and right-clicked in Chrome to view the source.
I am not able to figure out where the real calculation happens when I hit the Calculate button.

Mostly such content is embedded, and most web pages don't make it easy to see the source code. But Firebug is a discontinued free and open-source web browser extension for Mozilla Firefox that facilitated the live debugging, editing, and monitoring of any website's CSS, HTML, DOM, XHR, and JavaScript.

You need to open the console (Option + Cmd + J on a Mac in Chromium-based browsers), not the source.
Then you might want to look at the Sources tab and try to find the JavaScript (a separate file, or inline in the HTML, here index.php) responsible for what you want to debug. However, it seems that for this particular site the Calculate button makes a server call to do the calculation.

Such calculations would not typically be on the front end; they would likely be happening on whatever back end they are using (e.g. PHP, Django, etc.) by making a server call, which is definitely not displayed in the source.
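Purely as an illustration of that pattern (the endpoint, parameters, and element IDs below are made up, not option-price.com's actual code), a front-end "Calculate" handler that delegates to the back end typically looks something like this, and the resulting request is what you would see in the Network tab of the developer tools:

    // Purely illustrative: the endpoint, parameters, and element IDs are made up.
    // The real request appears in the Network tab when you click Calculate.
    document.getElementById('calculate').addEventListener('click', async () => {
      const response = await fetch('/calc.php', {            // hypothetical back-end endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ spot: 100, strike: 105, volatility: 0.2 })
      });
      const result = await response.json();                  // price computed on the server
      document.getElementById('result').textContent = result.price;
    });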

Related

Building a bot that fetches data from browser and saves it as text file

My problem is a bit complex. Hard to explain with words, so I broke it down into steps with pictures at each step.
1. Select a single date from these boxes. Hit submit.
2. I will land on a page with a table. Copy the <tbody> element from the developer console.
3. Paste it into a text file. Save the text file with the date that was selected.
4. Repeat steps 1-3 as many times as needed, selecting a new date each time (01-15-2018, 01-14-2018, 01-13-2018, and so on...)
Is it even possible to build a bot that does this? If yes, what tools would I use?
I know a fair amount of JavaScript and Python, so I'd prefer to use those 2 if possible.
We would need to know the URL you're looking at, or to look at the page source. If the date is supplied as any part of a request, and the response contains the data you're looking for, it should be simple to fetch and analyze that data from a script.
Walk through your clicks with the network tab of your browser's developer tools and you should see a request go out when you hit submit. Expedia just uses query parameters, and so the entire URL that you'll need pops up in the URL bar of your browser after hitting submit...
Tools:
If request-based:
Python
Requests module
If something cached/more complicated, there are tools for automating clicks and saving the results...I would guess that this won't be necessary though...
Update:
AJAX calls are HTTP requests and responses, so you should be able to observe them in the Network tab of your web browser's developer tools, and then mimic that request from a script rather than from your browser.
The readability of the requests/responses, and/or any measures the organization has implemented to prevent applications other than a browser from getting the same response, would be potential impediments, but even those should be imitable. If your browser is making the request, there is no reason your script can't make the same request.
The method you seem to be interested in, although it sounds more complicated to me, is possible with automation tools like Selenium, as the other poster answered. Best of luck.
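As a rough sketch of the request-based approach (in JavaScript, which the question also mentions; it assumes Node 18+ for the global fetch, and the URL and query parameter are placeholders you would copy from the Network tab):

    // Sketch: mimic the request the browser sends when you hit Submit.
    // The URL and the "date" query parameter are placeholders -- copy the real
    // ones from the Network tab of your browser's developer tools.
    const fs = require('fs');

    async function fetchForDate(date) {
      const response = await fetch(`https://example.com/report?date=${date}`); // placeholder URL
      const html = await response.text();
      fs.writeFileSync(`${date}.txt`, html); // save the raw response; parse the <tbody> later
    }

    // Steps 1-4, automated: loop over the dates instead of clicking through the page.
    ['01-15-2018', '01-14-2018', '01-13-2018'].forEach(fetchForDate);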
It is possible:
Take a look at the Selenium library for Python (it's commonly used for automated testing). It should be able to select single dates, hit the submit button, then go through the HTML and grab the data in the tag. After that you can use Python by itself to store this data in a text file with a name of your choice in a location of your choice.
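The answer above refers to the Python bindings; as a sketch of the same flow with the JavaScript bindings (selenium-webdriver), where the URL and CSS selectors are hypothetical:

    // Sketch with the JavaScript bindings (npm install selenium-webdriver).
    // The URL and CSS selectors are hypothetical -- adjust them to the real page.
    const fs = require('fs');
    const { Builder, By, until } = require('selenium-webdriver');

    (async () => {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://example.com/report');                    // placeholder URL
        await driver.findElement(By.css('#date-input')).sendKeys('01-15-2018');
        await driver.findElement(By.css('#submit-button')).click();
        await driver.wait(until.elementLocated(By.css('tbody')), 10000);   // wait for the table
        const tbody = await driver.findElement(By.css('tbody')).getAttribute('outerHTML');
        fs.writeFileSync('01-15-2018.txt', tbody);                         // step 3: save as text
      } finally {
        await driver.quit();
      }
    })();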

Get live html feed from website

When a webpage like https://poloniex.com/exchange#btc_eth is opened in the browser, we see that the browser constantly shows updated buy and sell orders. Also, in the Elements section in the chrome console, these updates are visible in the HTML tables.
Is there a way I can use a nodejs script run on my pc (so not in the browser console) to get these live html table updates from that website, without having to do a GET request every time?
If the Chrome browser is able to do it, Node.js / jQuery / AJAX should be able to do it as well. I tried the XMLHttpRequest npm module but no luck yet.
It's possible they are using token authentication which means you wouldn't be able to get all the connection info you need just from their client-side code. Have you downloaded it and looked at it yet?
If you find it's not possible to call their services, there are other free products designed for webscraping. AutoHotKey is one that can open a web page and traverse its DOM. I believe it has the ability to run in the background, but don't quote me.
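Sites that update the page live usually push data over a WebSocket rather than answering repeated GET requests, so check the WS filter of the Network tab or the site's client-side code for such an endpoint. A minimal Node sketch with the ws package, where the endpoint and subscription message are placeholders:

    // Sketch: subscribe to a WebSocket feed from Node (npm install ws).
    // The endpoint and subscription message are placeholders -- find the real
    // ones under the WS filter of the Network tab or in the site's client-side code.
    const WebSocket = require('ws');

    const ws = new WebSocket('wss://example.com/realtime');   // placeholder endpoint

    ws.on('open', () => {
      // Hypothetical subscription message; real feeds define their own format.
      ws.send(JSON.stringify({ command: 'subscribe', channel: 'BTC_ETH' }));
    });

    ws.on('message', (data) => {
      // Each pushed message is an order-book update; no polling GET requests needed.
      console.log(JSON.parse(data.toString()));
    });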

Web scraping of modal window(dialogue box) using jsoup

I am working on a project in which I have to extract data from a website. The project is in Java and the website uses JavaScript. I am using Jsoup to extract the data from the website, but there are some modal windows (dialogue boxes, pop-up windows) present in the web page. So is it possible to extract the data of the modal windows using Jsoup?
If the answer is yes, then how could I do it? Please provide links. If not, then what are the other best ways to do it?
Thanks for your help. I really appreciate it.
I assume that the modal is generated by Javascript.
Jsoup is just a parser. This means that it will make an HTTP request (GET or POST, whatever you tell it to do) and the server (website) will respond with the initial html. By saying initial, I mean the html before any javascript is executed.
Javascript can generate html (like the modal in question), but this is not visible to Jsoup because a parser can only read, it cannot execute code. The browser is able to generate the modal because it includes a Javascript execution engine that parses and executes Javascript.
When you visit a web page you don't know what is dynamic (generated by Javascript) and what is static (fetched by the server as is).
A little trick to check what is dynamic and what is static (static is visible to Jsoup) is to do the following:
Visit the web page you want to parse (with Chrome if possible; Firefox will work too, I think).
Press Ctrl + U. This will open a new tab.
The new tab will contain a mesh of HTML, CSS and JS. This is what the server sends to the browser and is also what is visible to Jsoup.
If the modal is in there, then great, it is visible to Jsoup. If not, then you have to use a library that acts as a headless browser.
A headless browser is essentially a browser without the graphical interface. It can parse and execute Javascript. It "sees" what a normal browser sees.
The most common library used is Selenium WebDriver. Be careful: Selenium is a testing framework that has a lot of parts; what you need is the WebDriver.
There are a lot of examples out there with ready-made code to get you started.
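As a rough sketch of the headless-browser approach (shown with the JavaScript bindings for brevity; the Java bindings follow the same pattern, and the URL and selector below are hypothetical):

    // Sketch with the JavaScript bindings (npm install selenium-webdriver);
    // the Java bindings follow the same pattern. URL and selector are hypothetical.
    const { Builder, By, until } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    (async () => {
      // A browser without a graphical interface ("headless") that still executes JavaScript.
      const driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(new chrome.Options().addArguments('--headless'))
        .build();
      try {
        await driver.get('https://example.com/page-with-modal');              // placeholder URL
        const modal = await driver.wait(until.elementLocated(By.css('.modal')), 10000);
        console.log(await modal.getText());   // text generated by JavaScript, invisible to Jsoup
      } finally {
        await driver.quit();
      }
    })();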

Javascript or other method to read entire HTML page with speech (offline)

I'm trying to add a visually impaired option to an HTML5-based kiosk that runs offline. The idea is that once a button is clicked, each page's text is read out loud (there is only one text box per page, which is loaded from an external txt file). speak.js seems like an option, but the voice quality isn't that great. I had a look at some Chrome plugins, but they all require you to select the text first. I'd like to try jTalk but am still waiting on a download link from the creator; I'm not sure it would work anyway, as this needs to run locally on the Windows 7 PC serving as the kiosk. The Chrome plugins seem to work nicely, but they all require the text to be selected, and I'm not sure how I could control whether text is read via an HTML/JS link anyway. Therefore, ideally I'd like a JavaScript library that would let me execute the command on a page-per-page basis.
Any ideas / suggestions?
Thanks!
I think http://www.chromevox.com might have what you are looking for. It's a Chrome extension for vision-impaired users and has an API you can use.
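If ChromeVox turns out to be more than you need, another option worth testing on the kiosk is the browser's built-in speechSynthesis (Web Speech API), which can be driven per page from plain JavaScript without selecting text. Whether usable voices are available offline depends on the voices installed on the Windows 7 machine, so treat this as a sketch to verify (the element IDs are placeholders):

    // Sketch: read the page's single text box aloud when an accessibility button is clicked.
    // The element IDs are placeholders; offline voice quality/availability depends on the
    // voices installed on the kiosk machine, so test before relying on it.
    document.getElementById('read-aloud-button').addEventListener('click', () => {
      const text = document.getElementById('page-text').textContent;
      window.speechSynthesis.cancel();                      // stop anything already speaking
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = 1.0;                                 // adjust reading speed if needed
      window.speechSynthesis.speak(utterance);
    });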

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a web site? For example, the right-side Activity/Chat bar on Facebook. It doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
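A bare-bones sketch of that hook pattern (the URLs and element IDs here are made up):

    // Sketch of the hook pattern; the URLs and element IDs are made up.
    // The link's href is a real page, so it still works with JavaScript disabled.
    document.querySelector('a.profile-link').addEventListener('click', async (event) => {
      event.preventDefault();                                  // skip the full page load
      const response = await fetch('/profile/123/fragment');   // hypothetical partial-HTML endpoint
      document.getElementById('main-content').innerHTML = await response.text();
      // Everything outside #main-content -- like the Activity/Chat bar -- is left untouched.
    });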
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example, in a database). Then, when the user changes the page, you can restore the state of the conversation on the server side (with PHP) or by querying your server as you do for the chat (JavaScript + AJAX).
This isn't done in Javascript. It needs to be done using your back-end scripting language.
In PHP, for example, you use Sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server side persistence such as session or database writes, a simple form POST, VIEWSTATE in .net, etc..)
You can reopen your last closed web page by pressing Ctrl+Shift+T. Then you can save its content as you like. For example: if I closed a web page related to document sharing and am now on a travel web page, then pressing Ctrl+Shift+T automatically reopens my last web page. This works in Firefox, Internet Explorer, Opera and more. Hope this answer is helpful to you.
