I'm building a JavaScript preview function for a blog back-end (much like the one used on this website), and I'd like to be able to parse some custom tags that are normally parsed by PHP. I'm wondering if it's possible to use the JS XML parser to parse content from a textarea that would look like:
<img=1>
Use for
<url=http://apwit.com>testing</url>
purposes only!
I read in another question here once that using regex to parse things like this is a bad idea because of the many exceptions there could be. What do you think?
Use this: http://www.w3schools.com/Xml/tryit.asp?filename=tryxml_parsertest2
It parses XML from a string using the browser's fast native XML parsing engine.
Explanation and discussion:
http://www.w3schools.com/Xml/xml_parser.asp
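In modern browsers that native engine is exposed directly as DOMParser. Here is a minimal sketch; note that the custom <img=1>/<url=http://...> syntax above is not well-formed XML, so you would have to normalize it into attribute form first (the element and attribute names below are just illustrative):
var text = "<post><url href='http://apwit.com'>testing</url> purposes only!</post>";
var doc = new DOMParser().parseFromString(text, "text/xml");
// Walk the result like any other DOM.
var url = doc.getElementsByTagName("url")[0];
console.log(url.getAttribute("href")); // "http://apwit.com"
console.log(url.textContent);          // "testing"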
Related
I am using a REST API to get XML data from a database and am trying to display it in HTML using XSLT. Unfortunately, the XML data comes back with a few namespaces that are not defined. I can get the stylesheet to work just fine on a local copy of the data if I strip the namespaces or define them. Stripping the namespaces feels like a hack and not the correct way to do this.
This is essentially an example of the data I get back:
<root>
<entity:Entity ns1:atrib="foo">
<g:Value>foo1</g:Value>
<g:Name>fooName</g:Name>
</entity:Entity>
</root>
I'm using XMLHttpRequest in JS to get this information and XSLTProcessor to transform it, then adding the result into the page. It's not displaying the transformed information, and I'm 100% positive it's the namespaces that are causing the issue.
I've googled everything I can think of with no luck. Roadblocks like this are almost always due to me missing something fundamental.
XSLT will only operate on XML that is well-formed, and it requires all namespaces to be declared. If you want to process this data you should ideally fix it at source; if you can't do that you need to repair it before processing.
There are some XML parsers that allow you to process non-namespace-aware XML, and you could use such a parser as the basis of your repair tool, but this is such an unusual requirement that I'll have to leave you to research how to do that yourself.
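If fixing it at source really isn't possible, here is a rough client-side repair sketch: inject declarations for the undeclared prefixes into the text before parsing, then transform as usual. The prefixes come from your sample; the placeholder URIs are assumptions, and your stylesheet would have to match whatever URIs you choose.
// Hypothetical repair step: declare the missing prefixes on the root element.
function repairAndTransform(xmlText, xslDoc) {
  var repaired = xmlText.replace("<root>",
    "<root xmlns:entity='urn:example:entity' " +
    "xmlns:ns1='urn:example:ns1' xmlns:g='urn:example:g'>");
  var xmlDoc = new DOMParser().parseFromString(repaired, "text/xml");
  var processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  return processor.transformToFragment(xmlDoc, document);
}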
I'm working on an FAQ type project using AngularJS. I have a number of questions and answers I need to import into the page and thought it would be a good idea to use a service/directive to load the content in dynamically from JSON.
The text strings are quite unwieldy (639+ characters), and my overall hesitation is about adding HTML into the JSON object to format the text (line breaks etc.).
Is pulling HTML from JSON considered bad practice, and is there a better way to solve this? I'd prefer to avoid using multiple templates, but it's starting to seem like a better approach.
Thanks
If you're using AngularJS and already have a build step, html2js can help you turn HTML templates into JS, which can then be concatenated and minified.
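Roughly speaking, html2js turns each template file into an Angular module that primes $templateCache, along these lines (the module and file names here are made up):
// Approximate shape of html2js output for one template file.
angular.module("templates/faq-item.html", []).run(["$templateCache",
  function ($templateCache) {
    $templateCache.put("templates/faq-item.html",
      "<div class=\"faq-item\"><h3>{{item.question}}</h3><p>{{item.answer}}</p></div>");
  }
]);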
You could try parsing the incoming JSON before sending it to the page and just adding in a <br /> everywhere you run into a \n. That way the JSON is more universally usable if you ever decide you want to port the data to another medium.
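A small sketch of that idea (the faqItems variable and the answer field are assumptions about your JSON's shape):
var faqItems = JSON.parse(jsonString);
faqItems.forEach(function (item) {
  // Turn newlines into explicit line breaks for HTML rendering.
  item.answer = item.answer.replace(/\n/g, "<br />");
});
With AngularJS you would then need to render the field with ng-bind-html (marking it trusted via $sce or ngSanitize) so the markup isn't escaped.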
I am trying to create an application that reads JSON strings.
Right now: I can input a JSON string through Java, write it to an HTML document, and have a JavaScript application read it, which then parses it and writes it back to the same HTML document. I need to know how, using Java, to read the HTML that it gets written to so I can use that data. It is important to note that this HTML file is entirely generated by code, so there is no actual text file to read.
I realize this is a roundabout way of doing it, but up until this point it has worked. My question is simple: how can I read a part of an HTML page that is not in a <form>, through either regular Java or a servlet?
You can do that only by parsing the HTML in Java, and there are some open source libraries that do this job for you.
Here is one that you can use.
http://jsoup.org/
I have data sets similar to this:
<NDL>
<REPLICA 4925770B:0025BA85>
<VIEW OF64623968:A2336DB0-ON49256C46:002ACF42>
<NOTE OFA52D3E8C:0ED3F84A-ON605F586A:5D1C1FAA>
<HINT>CN=YW8LN6/O=TDK-JP</HINT>
<REM>Database 'Shunya Sato', View '受信ボックス', Document '[Requirement management system - Feature #125] (New) Collect example of LN link'</REM>
</NDL>
I need to retrieve the content enclosed by the <HINT> tag, and the pseudo-attributes in the <REPLICA>, <VIEW> and <NOTE> tags. Is there some lib that could help me out with this, or is the best approach to hope that everything will always be in this order and use split/find/other built-in stuff?
Unfortunately, unless you write a custom parser that can turn what you have into XML, you won't be able to use any traditional XML libraries to read your data. The only reason that people can perform XML queries over HTML is because there are clearly defined ways to convert HTML into a DOM, which can then be converted into XML. The same cannot be said for your data.
While your data may resemble XML, the only thing it has in common is the use of < and > to delimit fields. As such, you are probably better off using string searching and splitting to get the fields you need.
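As a sketch of that approach in JavaScript, assuming the raw text is in a string called ndlText (the tag names come from your sample; everything else is illustrative):
// Content of the <HINT> tag.
var hint = ndlText.match(/<HINT>([\s\S]*?)<\/HINT>/);
console.log(hint && hint[1]); // "CN=YW8LN6/O=TDK-JP"
// Pseudo-attributes: whatever follows the tag name up to the closing ">".
var replica = ndlText.match(/<REPLICA\s+([^>]+)>/);
var view = ndlText.match(/<VIEW\s+([^>]+)>/);
var note = ndlText.match(/<NOTE\s+([^>]+)>/);
console.log(replica && replica[1]); // "4925770B:0025BA85"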
I'm working on a page that needs to fetch info from some other pages and then display parts of that information/data on the current page.
I have the HTML source code that I need to parse in a string. I'm looking for a library that can help me do this easily. (I just need to extract specific tags and the text they contain)
The HTML is well formed (All closing/ending tags present).
I've looked at some options, but they have all been extremely difficult to work with for various reasons.
I've tried the following solutions:
jkl-parsexml library (The library js file itself throws up HTTPError 101)
jQuery.parseXML Utility (Didn't find much documentation/many examples to figure out what to do)
XPATH (The Execute statement is not working but the JS Error Console shows no errors)
And so I'm looking for a more user-friendly library, or anything (tutorials/books/references/documentation) that can help me use the aforementioned tools better, more easily and efficiently.
An Ideal solution would be something like BeautifulSoup available in Python.
Using jQuery, it would be as simple as $(HTMLstring); to create a jQuery object with the HTML data from the string inside it (this DOM would be disconnected from your document). From there it's very easy to do whatever you want with it, and traversing the loaded data is, of course, a cinch with jQuery.
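A small sketch (the markup and selectors are made up for illustration):
var html = "<div><h1>Title</h1><p class='intro'>Hello</p></div>";
var $doc = $(html);                      // detached from the page's document
var title = $doc.find("h1").text();      // "Title"
var intro = $doc.find("p.intro").text(); // "Hello"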
You can do something like this:
$("string with html here").find("jquery selector")
$("string with html here") this will create a document fragment and put an html into it (basically, it will parse your HTML). And find will search for elements in that document fragment (and only inside it). At the same time it will not put it in page DOM