Is it possible to obtain the content of an external script as a string? Something equivalent to myInlineScript.textContent?
The scenario is that I'm just starting to get into WebGL and all the tutorials I'm finding store shaders as inline <script type="x-shader/x-foo"> tags. The element itself isn't important, however — the shaders are created from plain strings. These are easily extracted from inline scripts via .textContent, but I'd of course prefer to keep my code separate from my markup.
Is there some equivalent for getting the source / content of an external script? A quick scan of the DOM docs and a Google search didn't yield anything enlightening.
Update
The WebGL shaders aren't a huge problem — there are a number of ways of getting strings in JavaScript! But this isn't the only time I've encountered script tags being used to inline non-scripts, and it got me curious about a better way to do it.
If it's on the same domain, you can just make a normal AJAX request for it and get the file back as a string. This works for any kind of text file.
Also, I am not familiar with WebGL, but if your shaders are in external files then I assume the "x-shader" script type is simply a way to put them inline. Otherwise, it's just a string you pass to a method somewhere. So don't over-think this too much.
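A minimal sketch of the AJAX approach, assuming a fetch-capable browser and a shader file served from the same origin (the function name and file path here are placeholders, not part of any tutorial):

```javascript
// Fetch an external file and hand its contents over as a string,
// much like reading .textContent from an inline <script> tag.
function loadTextFile(url) {
  return fetch(url).then(function (response) {
    if (!response.ok) {
      throw new Error('Failed to load ' + url + ': ' + response.status);
    }
    return response.text();
  });
}

// Usage in a browser, e.g. for a shader:
// loadTextFile('shaders/vertex.glsl').then(function (source) {
//   gl.shaderSource(vertexShader, source);
// });
```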
Related
I have an xml document loaded into the browser. I need to use it as a template to generate and display as an html page in its place, with all the work happening in JavaScript on the client.
I've got it mostly done:
The xml document loads a JavaScript file.
The JavaScript reads the document and generates the html document that I want to display.
The JavaScript replaces the document's innerHTML with the new html document.
The one thing that I'm missing is that I'd like to also supply the head of the new document.
I can create the head's content, of course. But, I cannot find any way to set it back into the browser's document. All my obvious attempts fail, hitting read-only elements of the document, or operations that are not supported on non-HTML documents.
Is there any way to do this, or am I barking up the wrong tree?
Alternative question: Even if this is really impossible, maybe it doesn't matter. Possibly I can use my JavaScript to accomplish everything that might be controlled by the head (e.g., viewport settings). After all, the JavaScript knows the full display environment and can make all needed decisions. Is this a sane approach, or will it lead to thousands of lines of code to handle browser-specific or device-specific special cases?
Edited - added the following:
I think the real question is broader: The browser (at least Chrome and Chromium) seems to make a sharp distinction between pages loaded as .xml and pages loaded as .html. I'm trying to bend these rules: I load a page as .xml, but then I use JavaScript to change it into .html.
But, the browser still wants to view the page as .xml. This manifests in many ways: I can't add a <head>; I can't load CSS into the page; formatting tags are not interpreted as html; etc.
How can I convince the browser that my page is now bona fide html?
The title is maybe a bit misleading. I need to work with huge files (the smallest is 210 KB), so fetching them through AJAX isn't very fast. I tried including them in a plain-text script tag, and it worked, but these files are a couple of thousand lines each, so they made the HTML code very, very long. The IDE froze when I tried to delete three at once. My question is basically:
Is there a way to read code from a separate file, where I could just do
<script src="myFile.ext" id="myFile"></script>
and it would work the same as having it inline in the main HTML?
You will not be able to use the data in a file included with a script tag unless it is JavaScript and actually calls a function defined on your page.
Read about JSONP.
Your best bet is to use AJAX.
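To make the script-tag route concrete, here is a rough JSONP-style sketch; the file name, callback name, and payload shape are all made up for illustration:

```javascript
// On your page, define the callback before the script tag loads:
var received = null;
function handleData(payload) {
  received = payload; // stash or process the data here
}

// Then include the file with:
// <script src="myFile.js"></script>
//
// ...where myFile.js wraps your data in a call to that callback:
// handleData({ lines: ["first line", "second line"] });
```

The point is that the external file is not raw data; it is a JavaScript statement that delivers the data to your page.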
I think the only overhead of loading with AJAX is the extra HTTP request headers, and that's negligible compared to your huge file size. I don't understand why you think it's faster to load it together with the rest of the HTML.
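In case it helps, the request itself is only a few lines; a rough browser sketch using XMLHttpRequest (the file name and callback are placeholders):

```javascript
// Request a file and pass its contents to a callback as a string.
function loadFile(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    if (xhr.status === 200) {
      onDone(xhr.responseText); // the whole file as one string
    }
  };
  xhr.send();
}

// loadFile('myFile.ext', function (text) { /* use the text */ });
```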
I have an XSL page which includes a number of templates covering all I need to create the webpages I want. I call the templates using nodes in another XSL file.
I need to call and collect the templates into a webpage, selected via dropdown lists, instead of from XSL.
How can I achieve that?
It doesn't seem easy, so any thoughts would help!
Thanks in advance!
I find it pretty tricky too and don't have a full answer for you.
Displaying the templates should be the easy part. You could fetch them via XQuery or JavaScript as XML elements from an XML file (the XSL stylesheet, which is itself XML).
As for calling just some specific templates, I don't know...
One way to achieve your goal might be to use web services to call an XSL transform. You could do that easily with eXist, for example (http://en.wikibooks.org/wiki/XQuery/XQuery_and_XSLT#Creating_an_XSLT_service). eXist's embedded web services provide such functions (i.e., calling XSLT inside a web context). You may have similar functionality in JavaScript (I guess...).
Maybe using XQuery (or anything else) to dynamically generate a simple template stylesheet (i.e., extracting the template and creating an XSLT file containing only it) and executing it could be a solution.
Another way may be to use the mode attribute of templates. You can set an execution mode for an XSLT transform when launching it. But you may find yourself with one specific mode for each template...
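For illustration, the mode idea might look something like this (template names, match patterns, and mode names are all made up; the fragment assumes the usual `xsl` namespace declaration on the stylesheet root):

```xml
<!-- Each template gets its own mode... -->
<xsl:template match="page" mode="header">
  <h1><xsl:value-of select="title"/></h1>
</xsl:template>

<!-- ...and the caller picks which one runs: -->
<xsl:apply-templates select="page" mode="header"/>
```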
Hope this could help.
I have a webpage that works, and all is swell. It is coded using mostly good practices: external CSS files and minimal inline styles/code.
Now, however, I want to send that page as HTML text only, such as in an email, so there should be no references to external resources at all. That means I now must move my beautiful external references inline.
I am thinking I can write a JavaScript function that finds every class on an element, removes the element from that class, and then gives that element inline style attributes equal to what the class provided.
But I was wondering if anyone else has other suggestions.
The end goal is to get a wall of text, that when pasted in a non-internet connected browser with no cache or anything, will display exactly what I have on the screen of my "normal operations" page.
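The class-to-inline-style idea described above might be sketched like this; the property list is deliberately tiny and the `computed` parameter stands in for what `window.getComputedStyle` returns (both are assumptions, not a complete inliner):

```javascript
// Build a style="" value from a computed-style object for a fixed
// list of properties. `computed` must expose getPropertyValue(),
// like the object returned by window.getComputedStyle.
function toInlineStyle(computed, props) {
  return props.map(function (prop) {
    return prop + ': ' + computed.getPropertyValue(prop) + ';';
  }).join(' ');
}

// Browser usage: walk every element, bake its computed style in,
// then drop the now-redundant class attribute.
// var props = ['color', 'font-size', 'margin', 'padding'];
// document.querySelectorAll('*').forEach(function (el) {
//   el.setAttribute('style', toInlineStyle(getComputedStyle(el), props));
//   el.removeAttribute('class');
// });
```

Note that a real inliner also has to cope with media queries and pseudo-classes, which is why a ready-made tool is usually the better choice.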
There is a Perl CPAN module for this:
CSS::Inliner
you can also find the source on github:
https://github.com/kamelkev/CSS-Inliner
I'm working on a page that needs to fetch info from some other pages and then display parts of that information/data on the current page.
I have the HTML source code that I need to parse in a string. I'm looking for a library that can help me do this easily. (I just need to extract specific tags and the text they contain)
The HTML is well formed (All closing/ending tags present).
I've looked at some options, but they have all been extremely difficult to work with for various reasons.
I've tried the following solutions:
jkl-parsexml library (The library js file itself throws up HTTPError 101)
jQuery.parseXML Utility (Didn't find much documentation/many examples to figure out what to do)
XPATH (The Execute statement is not working but the JS Error Console shows no errors)
And so I'm looking for a more user-friendly library, or anything (tutorials/books/references/documentation) that can help me use the aforementioned tools more easily and efficiently.
An Ideal solution would be something like BeautifulSoup available in Python.
Using jQuery, it would be as simple as $(HTMLstring); to create a jQuery object with the HTML data from the string inside it (this DOM would be disconnected from your document). From there it's very easy to do whatever you want with it--and traversing the loaded data is, of course, a cinch with jQuery.
You can do something like this:
$("string with html here").find("jquery selector")
$("string with html here") creates a document fragment and puts the HTML into it (basically, it parses your HTML). find then searches for elements inside that fragment (and only inside it). Note that this does not insert anything into the page's DOM.