GetLayoutObjectAttribute("webviewer" ; "Content") doesn't reflect JavaScript DOM changes? - javascript

I'm trying to use TinyMCE in a web viewer in FileMaker and save the resulting HTML into a database field.
I am aware of the standard practice of using an fmp:// URL with a script and parameter, but that won't work on Windows (the HTML content returned as the parameter would likely exceed the 2048-character limit).
I am using a JavaScript function to change the HTML DOM, putting the contents of the TinyMCE editor into another div on the page. However, when I use GetLayoutObjectAttribute ( "webviewer" ; "Content" ), it shows the content of the unmodified (pre-JavaScript) page, not the page after JavaScript has altered the DOM.
Sample file: http://cris.lc/sxti2
Is this expected behavior? Am I doing something incorrectly?

This is the expected behavior in FileMaker Pro and FileMaker WebDirect.
FileMaker Go is different: GetLayoutObjectAttribute ( "webviewer" ; "Content" ) DOES return the current DOM in FileMaker Go.
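The copy step the question describes, pulling the editor's HTML into another div with JavaScript, can be sketched roughly as follows. This is a hypothetical sketch: the element ids ("editor", "output") are assumptions, not taken from the sample file, and getContent() is TinyMCE's documented accessor for the current editor HTML.

```javascript
// Hypothetical sketch: mirror the TinyMCE editor's current HTML into a
// plain div so the markup exists somewhere in the page's DOM.
function mirrorEditorContent(editor, targetDiv) {
  targetDiv.innerHTML = editor.getContent(); // getContent() is TinyMCE's API
  return targetDiv.innerHTML;
}

// In the web viewer page, e.g. wired to the editor's "change" event:
// mirrorEditorContent(tinymce.get('editor'), document.getElementById('output'));
```

Note that even with this in place, the answer above still applies: on FileMaker Pro and WebDirect, "Content" reflects the originally loaded source, not the mirrored DOM.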

Related

Cross-browser: how to check for HTML editor changes when the attribute order changes?

My company uses the TinyMCE editor for a content-editing feature.
Problem: when content is saved (as a bulk HTML string) from one browser, say Chrome, and then viewed in Firefox,
the attribute order changes, as you can see in this => differences in HTML between Chrome & Firefox
Our problem hinges on the content: if the content changes, the business logic changes as well.
But in this case the user doesn't change the content; the browser does.
Scenario
- TinyMCE is loaded inside a popup
- the user edits content & closes the editor popup
- we render the edited HTML in a div element (part of a form)
- part of the server-side form validation is checking for content (HTML) changes
- we use C# to compare the saved vs. edited HTML content as two strings
Do you have any ideas on how to find the actual changes, or could you give us a hint about how to solve this?
This occurs because of how the HTML content is parsed by the browser. At this point in time, TinyMCE cannot guarantee the order of the attributes.
To resolve the issue for your use case, I would suggest parsing the HTML on the server (preferably) or the client side before storing the data.
Depending on the technology stack you're on, there is a range of HTML parsers written in languages from PHP and Java to Ruby. Prettier is one of the "go-to" parsers these days - unfortunately there are no ASP.NET options as far as I can see.
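The normalization idea can be illustrated with a minimal sketch: rewrite each tag so its attributes appear in a fixed (sorted) order before comparing the two strings. This is a regex-based toy for simple, well-formed markup only; as the answer says, a real HTML parser is the safer route for production.

```javascript
// Sketch: sort the attributes inside each opening tag so that Chrome's and
// Firefox's serializations of the same content compare equal as strings.
// Handles only simple tags with double-quoted attribute values.
function normalizeAttributeOrder(html) {
  return html.replace(
    /<(\w+)((?:\s+[\w-]+(?:="[^"]*")?)*)\s*(\/?)>/g,
    function (match, tag, attrs, selfClose) {
      var sorted = (attrs.match(/[\w-]+(?:="[^"]*")?/g) || []).sort().join(' ');
      return '<' + tag + (sorted ? ' ' + sorted : '') + (selfClose ? ' /' : '') + '>';
    }
  );
}
```

Comparing normalizeAttributeOrder(saved) with normalizeAttributeOrder(edited) then ignores browser-induced attribute reshuffling while still catching real content edits.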

Go lang executing javascript for retrieving text in page

I'm trying to retrieve text that is loaded dynamically on a web page, using Go.
The text to retrieve is on this page:
https://www.protectedtext.com/testretrieve?1234
This text is encrypted with a password, then decrypted on the client side and loaded dynamically into the page.
I already tried goquery, selecting the 'textarea' element, but I can't get the text because it's loaded dynamically.
How can I achieve this? By executing JS from Go? It works in my Chrome console, but I have no idea how to do that in Go.
A lightweight solution would be best for my project. Alternatively, is there another website that can store and edit the same text without changing the URL?
You may need a headless browser to execute the JavaScript, for example phantomgo.
However, looking at the page's source code, we can see that it uses SHA-512 for the tab title and AES for the textarea field.
The page you shared, https://www.protectedtext.com/testretrieve?1234, contains only one element with the class textarea-contents,
so you can simply select that class with goquery and take the 0th match.
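The selection step, as it works in the Chrome console (where the asker confirms the text is visible), can be sketched like this. The class name comes from the answer above; whether the decrypted text lives in textContent or in a textarea's value depends on the page's actual markup, so treat that as an assumption. Plain goquery only sees the original, still-encrypted response, which is why a headless browser has to run the page's scripts first.

```javascript
// Sketch: grab the text of the first element with a given class, as one
// would do interactively in the browser's console after decryption ran.
function firstTextByClass(doc, className) {
  var matches = doc.getElementsByClassName(className);
  return matches.length ? matches[0].textContent : null;
}

// In the Chrome console:
// firstTextByClass(document, 'textarea-contents');
```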

Using JavaScript to scan dynamically rendered elements?

I apologize in advance if I've formulated this question poorly; I'm fairly new to web coding.
My goal is to use JavaScript to scan a webpage and determine whether or not a particular string is present. The difficulty here is that the page is dynamically rendered, so the string in question will never appear in the source code.
Would the string appear in the DOM if it is rendered on the page? If I were to scan the DOM would I find it there, and are there any special considerations to take into account if so?
Essentially I am looking for a simple way to scan the text that has been rendered on a page, not the source code. This must be possible somehow because my browser's "find on page" function works on the dynamically rendered page in question. Would there be a way to access the rendered elements on the page through the browser API itself? (I'm using Chrome.)
Try this:
var stringToSearchFor = 'foobar';
var searchThisString = document.body.innerText || document.body.textContent;
var found = (searchThisString.indexOf(stringToSearchFor) >= 0);
This extracts the text from the page, ignoring all markup, then does a simple scan of the resulting string.
In some versions of Firefox this will include the contents of inline script tags.

Why doesn't the Facebook "Like" button honor its parameters when inserted via JavaScript?

I just discovered that the iframe version of the Facebook Like button doesn't honor its query parameters when the iframe is created with JavaScript, rather than included directly in the document's HTML.
Please have a look at this jsFiddle that I created:
http://jsfiddle.net/qQsCC/
I generated a Like button at the URL linked above and first included the HTML exactly as it was provided. Then, I broke it down into the JavaScript code needed to create and append an identical element to the DOM.
In the "Result" window, you'll see the HTML version of the button on top, and the JavaScript-created version below. While the value of the src attribute is identical for both (as well as all other HTML attributes), the lower button doesn't appear to honor any of the parameters that I've passed, such as colorscheme or font.
Does anyone know why this is happening, or have any suggestions for how I might avoid this behavior?
The use case here is that I'm creating HTML ads that will include the iframe version of the "Like" button; a requirement is that the ad can only load 50KB of data initially, then up to 1MB after window.onload has fired. Since the "Like" button weighs in over 50KB alone, I need to construct the iframe using JavaScript after window.onload rather than just including the <iframe> element in the ad's HTML.
When you add the URL using HTML, HTML entities are automatically decoded. This doesn't happen in JavaScript, so you need to decode the URL before passing it to JavaScript, e.g.:
like.src = 'http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.facebook.com%2F&send=false&layout=standard&width=450&show_faces=false&action=like&colorscheme=dark&font=arial&height=35';
Hope this helps.
Updated jsFiddle:
http://jsfiddle.net/qQsCC/1/
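The decoding step the answer describes can be sketched as below. Note the distinction: the src in the copied embed code contains HTML entities such as &amp;amp;, which the HTML parser decodes automatically, while a string assigned via JavaScript is used verbatim. This toy decoder handles only the common named entities; a full solution should use a proper HTML parser.

```javascript
// Sketch: decode the handful of named entities found in typical embed code
// before assigning the URL to iframe.src from JavaScript.
function decodeHtmlEntities(s) {
  return s
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&quot;/g, '"')
    .replace(/&#39;/g, "'")
    .replace(/&amp;/g, '&'); // last, so "&amp;lt;" decodes once, to "&lt;"
}

// like.src = decodeHtmlEntities(srcCopiedFromEmbedCode);
```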

Is there a tool to capture all DOM elements generated by browser-side JavaScript as HTML, for a full-page HTML archive?

If a whole bunch of elements is generated in my browser by JavaScript (from JSON data, or just out of thin air), I am not able to fully archive such a page by saving its source. I already tried saving it as an .mht file in IE, but that does not work: IE does not save the dynamically generated elements either.
An example of such a page is here: http://www.amazon.com/gp/bestsellers/wireless/ref=zg_bs_nav - notice that the "price" and "X new" elements do not exist in the source HTML but are generated dynamically.
If I wanted to parse this, I could work directly with the DOM by various means, yadda-yadda. But if I want to automagically save the page as an HTML document such that all the dynamically generated elements render nicely even while JavaScript is turned off, so far I am SOL.
Any suggestions?
In Firefox there's the Web Developer extension: https://addons.mozilla.org/en-US/firefox/addon/web-developer/
Once installed, you can use View Source -> View Generated Source to access the JavaScript-modified HTML.
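The same "generated source" can also be captured without an extension, from any page's developer console: serialize the live DOM instead of re-reading the original server response. A minimal sketch:

```javascript
// Sketch: serialize the current (post-JavaScript) DOM as an HTML string,
// which is what "View Generated Source" shows.
function captureGeneratedSource(doc) {
  return '<!DOCTYPE html>\n' + doc.documentElement.outerHTML;
}

// In Chrome's DevTools console (copy() is a DevTools-only helper):
// copy(captureGeneratedSource(document));
```

Saving that string as an .html file gives a static snapshot that renders the dynamically generated elements with JavaScript turned off, though external resources (images, CSS) still need to be saved separately.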