How to know if web content cannot be handled by Scrapy? - javascript

I apologize if my question sounds too basic or general, but it has puzzled me for quite a while. I am a political scientist with little IT background. My own research on this question does not solve the puzzle.
It is said that Scrapy cannot scrape web content generated by JavaScript or AJAX. But how can we know if certain content falls in this category? I once came across some text that shows up in Chrome's Inspect but could not be extracted by XPath (I am 99.9% certain my XPath expression was correct). Someone mentioned that the text might be hidden behind some JavaScript, but this is still speculation; I can't be totally sure it wasn't due to a wrong XPath expression. Are there any signs that can make me certain that this is something beyond Scrapy and can only be dealt with by programs such as Selenium? Any help appreciated.
-=-=-=-=-=
Edit (1/18/15): The webpage I'm working with is http://yhfx.beijing.gov.cn/webdig.js?z=5. The specific piece of information I want to scrape is circled in red ink (see screenshot below. Sorry, it's in Chinese).
I can see the desired text in Chrome's Inspect, which suggests that the XPath expression to extract it should be response.xpath("//table/tr[13]/td[2]/text()").extract(). However, the expression doesn't work.
I examined response.body in the Scrapy shell. The desired text is not in it. I suspect that it is JavaScript or AJAX at work here, but in the HTML I did not see any signs of JavaScript or AJAX. Any idea what it is?

It is said that Scrapy cannot scrape web content generated by JavaScript or AJAX. But how can we know if certain content falls in this category?
Browsers do a lot of things when you open a web page. I will oversimplify the process here:
1. Performs an HTTP request to the server hosting the web page.
2. Parses the response, which in most cases is HTML content (a text-based format). We will assume we get an HTML response.
3. Starts rendering the HTML, executes the JavaScript code, and retrieves external resources (images, CSS files, JS files, fonts, etc.). Not necessarily in this order.
4. Listens to events that may trigger more requests and inject more content into the page.
Scrapy provides tools to do 1. and 2. Selenium and other tools like Splash handle 3., let you do 4., and give you access to the rendered HTML.
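To make this concrete, here is a minimal sketch of what steps 1. and 2. look like in Scrapy (the URL and the XPath rule are made-up examples); anything that JavaScript injects afterwards never reaches this code:
# Minimal Scrapy sketch of steps 1. and 2.; the URL and XPath are hypothetical.
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://example.com/page"]   # step 1: plain HTTP request

    def parse(self, response):
        # step 2: parse the HTML exactly as the server sent it; content that
        # JavaScript would add later (steps 3. and 4.) is simply not here
        for row in response.xpath("//table//tr"):
            yield {"cells": row.xpath("./td/text()").extract()}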
Now, I think there are three basic cases you face when you want to extract text content from a web page:
1. The text is in plain HTML, for example as a text node or an HTML attribute: <a>foo</a>, <a href="foo" />. The content could be visually hidden by CSS or JavaScript, but as long as it is part of the HTML tree we can extract it via XPath/CSS rules.
2. The content is located in JavaScript code. For example: <script>var cfg = {code: "foo"};</script>. We can locate the <script> node with an XPath rule and then use regular expressions to extract the string we want (see the sketch after this list). There are also libraries that allow us to parse pieces of JavaScript so we can load objects easily. A more complex solution is executing the JavaScript code via a JavaScript engine.
3. The content is located in an external resource and is loaded via Ajax/XHR. Here you can emulate the XHR request with Scrapy and then parse the output, which can be a nice JSON object, arbitrary JavaScript code, or simply HTML content. If it gets tricky to reverse engineer how the content is retrieved/parsed, then you can use Selenium or Splash as a proxy for Scrapy, so you can access the rendered content and still use Scrapy for your crawler.
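For case 2., a rough sketch of the <script> plus regular expression approach (the inline HTML and the cfg variable are just the toy example from above; the object here happens to be valid JSON, looser JavaScript literals would need a JavaScript-aware parser):
# Case 2. sketch: locate the <script> node with XPath, then pull the value
# out with a regular expression. The HTML string is the toy example above.
import json
import re
from scrapy.selector import Selector

html = '<html><body><script>var cfg = {"code": "foo"};</script></body></html>'
sel = Selector(text=html)

script = sel.xpath("//script/text()").extract_first()   # locate the <script> node
match = re.search(r"var cfg = (\{.*?\});", script)      # grab the object literal
if match:
    cfg = json.loads(match.group(1))   # works because this literal is valid JSON
    print(cfg["code"])                 # -> foo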
How do you know which case you have? You can simply look up the content in the response body:
$ scrapy shell http://example.com/page
...
>>> 'foo' in response.text.lower()
True
If you see foo in the web page via the browser but the test above returns False, then it's likely the content is loaded via Ajax/XHR. You have to check the network activity in the browser and see what requests are being made and what the responses are. Otherwise you are in case 1. or 2. You can simply view the source in the browser and search for the content to figure out where it is located.
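If it does turn out to be Ajax/XHR, the usual next step is to copy the request you see in the network panel and replay it from Scrapy. A sketch, assuming the panel showed a JSON endpoint (the URL and the response shape here are made up):
# Sketch of replaying an XHR request found in the browser's network panel.
# The endpoint URL and the "items" key are hypothetical.
import json
import scrapy

class XhrSpider(scrapy.Spider):
    name = "xhr_example"

    def start_requests(self):
        yield scrapy.Request(
            "http://example.com/api/data?page=1",           # copied from the network tab
            headers={"X-Requested-With": "XMLHttpRequest"},  # some endpoints check for this
            callback=self.parse_api,
        )

    def parse_api(self, response):
        data = json.loads(response.text)   # many XHR endpoints return JSON
        for item in data.get("items", []):
            yield item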
Let's say the content you want is located in HTML tags. How do you know if your XPath expression is correct? (By correct here we mean that it gives you the output you expect.)
Well, if you do scrapy shell and response.xpath(expression) returns nothing, then your XPath is not correct. You should reduce the specificity of your expression until you get an output that includes the content you want, and then narrow it down.
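A concrete way to do that in the shell, using the table expression from the question as an illustration. One common gotcha worth knowing: the inspector shows the DOM after the browser has fixed it up, and browsers insert <tbody> elements into tables, so a path derived from Inspect can fail against the raw response even when the content is there; //table//tr instead of //table/tr sidesteps that:
$ scrapy shell http://example.com/page
...
>>> response.xpath("//table/tr[13]/td[2]/text()").extract()   # too specific, nothing matches
[]
>>> response.xpath("//table").extract()                        # start broad: is there a table at all?
['<table>...']
>>> response.xpath("//table//tr").extract()                    # "//tr" also matches rows inside <tbody>
['<tr>...', ...]
>>> response.xpath("//table//tr[13]/td[2]/text()").extract()   # then narrow down again
['...']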

Related

Convert XML document to HTML and set <head>

I have an xml document loaded into the browser. I need to use it as a template to generate and display as an html page in its place, with all the work happening in JavaScript on the client.
I've got it mostly done:
The xml document loads a JavaScript file.
The JavaScript reads the document and generates the html document that I want to display.
The JavaScript replaces the document's innerHTML with the new html document.
The one thing that I'm missing is that I'd like to also supply the head of the new document.
I can create the head's content, of course. But, I cannot find any way to set it back into the browser's document. All my obvious attempts fail, hitting read-only elements of the document, or operations that are not supported on non-HTML documents.
Is there any way to do this, or am I barking up the wrong tree?
Alternative question: Even if this is really impossible, maybe it doesn't matter. Possibly I can use my JavaScript to accomplish everything that might be controlled by the head (e.g., viewport settings). After all, the JavaScript knows the full display environment and can make all needed decisions. Is this a sane approach, or will it lead to thousands of lines code to handle browser-specific or device-specific special cases?
Edited - added the following:
I think the real question is broader: The browser (at least Chrome and Chromium) seems to make a sharp distinction between pages loaded as .xml and pages loaded as .html. I'm trying to bend these rules: I load a page as .xml, but then I use JavaScript to change it into .html.
But, the browser still wants to view the page as .xml. This manifests in many ways: I can't add a <head>; I can't load CSS into the page; formatting tags are not interpreted as html; etc.
How can I convince the browser that my page is now bona fide html?

What HTML tags would be considered dangerous if stored in SQL Server?

Considering issues like CSRF, XSS, SQL Injection...
Site: ASP.net, SQL Server 2012
I'm reading a somewhat old page from MS: https://msdn.microsoft.com/en-us/library/ff649310.aspx#paght000004_step4
If I have a parametrized query, and one of my fields is for holding HTML, would a simple replace on certain tags do the trick?
For example, a user can type into a WYSIWYG textarea, make certain things bold, or create bullets, etc.
I want to be able to display the results from a SELECT query, so even if I HTMLEncoded it, it'll have to be HTMLDecoded.
What about a UDF that cycles through a list of scenarios? I'm curious as to the best way to deal with the seemingly sneaky ones mentioned on that page:
Quote:
An attacker can use HTML attributes such as src, lowsrc, style, and href in conjunction with the preceding tags to inject cross-site scripting. For example, the src attribute of the <img> tag can be a source of injection, as shown in the following examples.
<img src="javascript:alert('hello');">
<img src="java
script:alert('hello');">
<img src="java&#10;script:alert('hello');">
An attacker can also use the <style> tag to inject a script by changing the MIME type as shown in the following.
<style TYPE="text/javascript">
alert('hello');
</style>
So ultimately two questions:
Best way to deal with this from within the INSERT statement itself.
Best way to deal with this from code-behind.
Best way to deal with this from within the INSERT statement itself.
None. That's not where you should do it.
Best way to deal with this from code-behind.
Use a white-list, not a black-list. HTML encode everything, then decode specific tags that are allowed.
It's reasonable to be able to specify some tags that can be used safely, but it's not reasonable to be able to catch every possible exploit.
What HTML tags would be considered dangerous if stored in SQL Server?
None. SQL Server does not understand, nor try to interpret, HTML tags. An HTML tag is just text.
However, HTML tags can be dangerous if output to an HTML page, because they can contain script.
If you want a user to be able to enter rich text, the following approaches should be considered:
Allow users (or the editor they are using) to generate BBCode, not HTML directly. When you output their BBCode markup, convert any recognised tags to HTML without attributes that can contain script, and convert any raw HTML to entities (& to &amp;, < to &lt;, etc.).
Use a tried and tested HTML sanitizer to remove "unsafe" markup from your stored input, in combination with a Content Security Policy. You must do both, otherwise any gaps (and there will be gaps) in the sanitizer could allow an attack, and not all browsers fully support CSP yet (IE).
Note that both of these should be done at the point of output. Store the text "as is" in your database, and simply encode and process it for the correct format when output to the page.
Sanitize html both on the client and on the server before you stuff any strings into SQL.
Client side:
TinyMCE - does this automatically
CKEditor - does this automatically
Server side:
Pretty easy to do this with Node, or the language/platform of your choice.
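As an illustration of that server-side step: the question's stack is ASP.net, but the idea looks the same everywhere, so here is a sketch in Python with the bleach library (the tag list below is only an example, not a vetted policy; a .NET equivalent would be a library such as HtmlSanitizer):
# Whitelist-based sanitizing sketch, shown in Python with bleach purely as
# an illustration; the allowed tags/attributes below are an example policy.
import bleach

ALLOWED_TAGS = ["b", "i", "strong", "em", "ul", "ol", "li", "p", "br"]
ALLOWED_ATTRS = {}   # no attributes at all, so src/href/style/onload never survive

def sanitize(user_html):
    # Anything not on the whitelist is stripped rather than trusted.
    return bleach.clean(user_html, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRS, strip=True)

print(sanitize('<b>ok</b><img src="javascript:alert(\'hello\')">'))
# -> <b>ok</b>   (the img tag and its src are gone)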
https://www.realwebsite.com
The link above shows www.realwebsite.com while it actually takes you to www.dangerouswebsite.com...
<a '
href="https://www.dangerouswebsite.com">
https://www.realwebsite.com
<'/a>
Do not include the stray ' characters in the code; I put them there to stop the code from activating, so you can see the markup instead of just the link. (By the way, most websites block this, or anything where you add things like onload="alert('TEXT')", but it can still be used to trick people into going to dangerous websites. Although the real destination shows up at the bottom of your browser, some people don't check it or don't understand what it means.)

Getting access to the original HTML in HtmlUnit HtmlElement?

I am using HtmlUnit to read content from a web site.
Everything works perfectly to the point where I am reading the content with:
HtmlDivision div = page.getHtmlElementById("my-id");
Even div.asText() returns the expected String object, but I want to get the original HTML inside <div>...</div> as a String object. How can I do that?
I am not willing to change HtmlUnit to something else, as the web site expects the client to run JavaScript, and HtmlUnit seems to be capable of doing what is required.
If by original HTML you mean the HTML code that HTMLUnit has already formatted then you can use div.asXml(). Now, if you really are looking for the original HTML the server sent you then you won't find a way to do so (at least up to v2.14).
Now, as a workaround, you could get the whole text of the page that the server sent you with this answer: How to get the pure raw HTML of a page in HTMLUnit while ignoring JavaScript and CSS?
As a side note, you should probably think twice about why you need the HTML code. HtmlUnit will let you get the data out of the page, so there shouldn't be any need to store the source code, but rather the information contained in it. Just my 2 cents.

Parsing HTML using JavaScript

I'm working a page that needs to fetch info from some other pages and then display parts of that information/data on the current page.
I have the HTML source code that I need to parse in a string. I'm looking for a library that can help me do this easily. (I just need to extract specific tags and the text they contain)
The HTML is well formed (All closing/ending tags present).
I've looked at some options but they are all being extremely difficult to work with for various reasons.
I've tried the following solutions:
jkl-parsexml library (The library js file itself throws up HTTPError 101)
jQuery.parseXML Utility (Didn't find much documentation/many examples to figure out what to do)
XPATH (The Execute statement is not working but the JS Error Console shows no errors)
And so I'm looking for a more user-friendly library, or anything (tutorials/books/references/documentation) that can help me use the aforementioned tools better, more easily and efficiently.
An ideal solution would be something like BeautifulSoup, available in Python.
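For reference, the kind of workflow I mean is roughly this (the HTML string and tag names are only an illustration):
# Roughly the BeautifulSoup workflow referred to above; the HTML string
# and the tag/class names are illustrative only.
from bs4 import BeautifulSoup

html = "<div><h2>Title</h2><p class='intro'>Some text</p></div>"
soup = BeautifulSoup(html, "html.parser")

print(soup.find("h2").get_text())              # -> Title
for p in soup.find_all("p", class_="intro"):   # pick out specific tags
    print(p.get_text())                        # -> Some text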
Using jQuery, it would be as simple as $(HTMLstring); to create a jQuery object with the HTML data from the string inside it (this DOM would be disconnected from your document). From there it's very easy to do whatever you want with it, and traversing the loaded data is, of course, a cinch with jQuery.
You can do something like this:
$("string with html here").find("jquery selector")
$("string with html here") this will create a document fragment and put an html into it (basically, it will parse your HTML). And find will search for elements in that document fragment (and only inside it). At the same time it will not put it in page DOM

Is there a way to validate the HTML of a page after AJAX operations are performed on it?

I'm writing a web app that inserts and modifies HTML elements via AJAX using JQuery. It works very nicely, but I want to be sure everything is ok under the bonnet. When I inspect the source of the page in IE or Chrome it shows me the original document markup, not what has changed since my AJAX calls.
I love using the W3C validator to check my markup, as it occasionally reminds me that I've forgotten to close a tag, etc. How can I use this to check the markup of my page after the original source served from the server has been changed via JavaScript?
Thank you.
Use the developer tools in Chrome to explore the DOM: they will show you all the HTML you've added in JavaScript.
You can now copy it and paste it in any validator you want.
Or, instead of inserting the code with jQuery, log it to the console; the browser will then not be able to close tags for you.
console.log(myHTML)
Both previous answers make good points about the fact that the browser will 'fix' some of the HTML you insert into the DOM.
Back to your question: you could add the following to a bookmark in your browser. It will write out the contents of the DOM to a new window; copy and paste that into a validator.
javascript:window.open("").document.open("text/plain", "").write(document.documentElement.outerHTML);
If you're just concerned about well-formedness (missing closing tags and such), you probably just want to check the structure of the chunks AJAX is inserting. (Once it's part of the DOM, it's going to be well-formed... just not necessarily the structure you intended.) The simplest way to do that would probably be to attempt to parse it using an XML library. (one with an HTML mode that can be made strict, if you're not using XHTML)
Actual validation (Testing the "You can't put tag X inside tag Y" rules which browsers generally don't care too much about) is a lot trickier and, depending on how much effort you're willing to put into it, may not be worth the trouble. (Because, if you validate them in isolation, you'll get a lot of "This is just a fragment" false positives)
Whichever you decide to use, you need to grab the AJAX responses before the browser parses them if you want a reliable test result. (While they're still just a string of text rather than a DOM tree)
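One way to run that kind of check on a fragment before it goes into the DOM, sketched here in Python with lxml since the idea is language-agnostic (the dummy <root> wrapper lets a standalone fragment be parsed at all; the fragment strings are only examples):
# Well-formedness check for a markup fragment via a strict XML parse.
# The <root> wrapper is only there so a standalone fragment can be parsed.
from lxml import etree

def is_well_formed(fragment):
    try:
        etree.fromstring("<root>%s</root>" % fragment)
        return True
    except etree.XMLSyntaxError as err:
        print(err)   # reports what is unclosed or mismatched
        return False

print(is_well_formed("<p>ok</p>"))           # True
print(is_well_formed("<p>missing close"))    # False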
