Editing Facebook Like Box CSS on the Fly? - javascript

I am not a coder, but I am able to find my way around code most of the time, and I have found that this is the best place to ask code-related questions.
I have been working on a website for a client and I am at 95%. The only problem I have left is the Facebook Like box. I have found several tutorials on the web for modifying the Like box CSS, and I have implemented most of the recommendations, but with no favorable results.
Please, Stack Overflow, help!
I know jQuery/JavaScript is a very powerful language, and the Facebook Like box uses a JavaScript iframe/XFBML.
What code would you use if you were to modify the Like box CSS elements before loading them?
I say "load" because I am loading my Like box via a .load() AJAX call, so when a user clicks the Facebook button, jQuery loads it.
In short: how would I edit a CSS file on the fly, and then load the edited version afterwards?
Thanks

The key problem you'll have here is that FB's Like button is loaded inside an iframe - a self-contained HTML document within your page (if you use Firebug or the WebKit inspector to inspect the Like button, you'll see it has its own <html> and <body> inside the <iframe>).
The thing about these self-contained pages is that, because the iframe's content comes from a different origin (facebook.com), you can't access or manipulate it from the surrounding document (your page). You can change the 'src' attribute (telling the iframe to load a new page), but you can't apply or change styles on the elements inside the page. This is the same-origin policy, a security limitation enforced by browsers.
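For what it's worth, here is a minimal sketch of the restriction (the element id is hypothetical, and the URL is just an illustrative plugin address):

    var frame = document.getElementById('fb-like-frame');

    // Allowed: pointing the iframe at a new page.
    frame.src = 'https://www.facebook.com/plugins/like.php';

    // Blocked: for a cross-origin frame, contentDocument is null (or
    // access throws a SecurityError), so restyling its contents fails.
    try {
        frame.contentDocument.body.style.background = 'red';
    } catch (e) {
        console.log('Cross-origin access denied:', e);
    }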
I know that it is possible to have a custom-styled Like button, but I don't think it's done with the iframe method.

Related

Include code from jQuery load() onto page source code [duplicate]

Many aspects of my site are dynamic. I am using jQuery.
I have a div which, once the DOM is ready, is populated using load().
Then if a button is clicked, using load() once again, this value is replaced by another value.
This kind of setup is common across my site. My homepage is essentially lots of dynamically loaded, refreshed, and changeable content.
What are the repercussions of this for SEO?
I've seen sites where each page is loaded using load() and then displayed using the animation functions... It looks awesome!
People have posed this question before, but no one has answered it properly.
So, any ideas? jQuery and SEO?
Thanks
EDIT
Very interesting points. I don't want to overdo my site with JavaScript, just use it where necessary to make it look good; my homepage, however, is one place of concern.
So when the DOM is ready, it loads content into a div. On clicking a tab, this content is changed. I.e., no JS, no content.
The beauty here for me is that there is no duplicated code. Is the suggestion that I should simply 'print' some default content, then have the tabs link to pages (with the same content) for when JS is disabled? I.e., sacrifice a little duplicate code for SEO?
As far as degrading goes, my only other place of concern is tabs on the same page. I have 3 divs, all containing content. On this page two divs are hidden until a tab is clicked. I used this method before I started playing with JS. Would it perhaps be best to load() these tabs, then have the tab buttons link to where the content is pulled from?
Thanks
None of the content loaded via JavaScript will be crawled.
The common and correct approach is to use Progressive Enhancement: all links should be normal <a href="..."> links to actual pages, so that your site "makes sense" to a search spider; then a click() handler overrides the normal navigation with load(), so normal users with JavaScript enabled see the "enhanced" version of your site.
If your content is navigable when JavaScript is turned off, you'll be a good ways toward being visible to search engines.
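A minimal sketch of that pattern (the selectors and URL are placeholders):

    // keep real, crawlable links in the markup:
    //   <a class="tab" href="/tab2.html">Tab 2</a>
    $('a.tab').click(function (e) {
        e.preventDefault(); // JS users skip the full page load
        // pull the same URL the spider would crawl into the content div
        $('#content').load($(this).attr('href') + ' #content > *');
    });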
Note that search engine crawlers won't be submitting any forms on your site, so if you have any <form> elements (or controls that submit them) that are meant to navigate between your site's content pages, that content is not navigable by search engines.
Here are guidelines on how to get Google to crawl content loaded with AJAX: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
I use jQuery load() for asynchronous page loads. It greatly improves the user experience, but it is not SEO-friendly. Here's the only solution I have found so far:
On the first load I do not use jQuery load(), and I try to write a cookie with JavaScript: document.cookie = 'checkjs=on';
On the next page load, if the PHP script finds this cookie, it means that JavaScript is enabled and jQuery load() can be used. If there's no such cookie, then JavaScript is off (probably a spider came along), so jQuery load() is not used.
    if (!isset($_COOKIE['checkjs']) || $_COOKIE['checkjs'] !== 'on') { echo 'js is off, hello Google!'; }
    else { echo 'js is on, can use jquery load'; }
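And the client-side cookie write might look like this (a sketch; the path attribute is an addition that keeps the cookie visible site-wide rather than only under the current path):

    // run on every page; harmless to set repeatedly
    document.cookie = 'checkjs=on; path=/';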
This way I can be sure that most users benefit from asynchronous page-block loading, except for the very first load. And spiders get all the content too.
In your case you could just load the same page with a new parameter that makes another tab active. The spider is going to be happy.

Chat widget with react.js

I am writing a chat widget that will be distributed to end users with a little code to put in their website. The usual routine.
My widget is going to be written in React. I know several ways to achieve this. Let me list the ways I could think of:
Give a code snippet with the iframe and source URL directly in it. The problem with this approach is that it can only be used if the widget is embedded inline; if the widget needs to be a popup, that flexibility is lost.
Give a code snippet with a JavaScript file that is loaded asynchronously. The JavaScript creates an iframe in the parent webpage and sets its src. This widget JavaScript can carry a little intelligence. This is the usual approach followed by most widget developers.
Of course, the source URL will render a React page bundled by webpack in either case.
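A minimal sketch of the second approach (the URL and filenames are hypothetical):

    (function () {
        // async loader snippet the customer pastes into their page
        var s = document.createElement('script');
        s.async = true;
        s.src = 'https://widget.example.com/loader.js';
        document.head.appendChild(s);
    })();

    // inside loader.js: inject the iframe that hosts the React bundle
    // var f = document.createElement('iframe');
    // f.src = 'https://widget.example.com/chat.html'; // webpack-built page
    // document.body.appendChild(f);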
I wanted to know the best practices for developing a widget, so I went through popular implementations. I liked Intercom's widget very much. It is written in React. I analyzed how it works.
The minimal JavaScript is loaded async on the webpage. It injects an iframe with the id intercom-frame. That iframe has a script in its head with a source URL; obviously it is the React bundle.
The thing that I don't understand is that below this iframe, a div is created with three iframes in it: one to show the chat bubble, another to show the chat bubble icon, and the last one to show the actual chat window. Those iframes don't have a source URL, and I guess the bundle is served from the first iframe created by the widget JavaScript.
I came across this SO question, which partially answers my question. From the answer,
expose some API between your customer webpage and your iframe, using window messaging.
the main code (the iframe code) is then loaded by this first script
asynchronously, and not included in it.
What I don't understand is:
1.) How would they have achieved it with window messaging?
2.) How would they have managed to create a div with iframes in it from another iframe's script? The widget JavaScript is not creating those elements based on its source. It must have been done by the React bundle in the iframe generated by the widget JS.
3.) How can a React bundle inside an iframe create React elements in the parent DOM?
None of the iframes created by Intercom's script has a src attribute, which means they default to about:blank and share the parent page's origin, so they are not subject to cross-origin restrictions. Therefore, they can modify the parent page's HTML and vice versa.
However, I don't understand why they need to have separate iframes, and why they use a script to inject another script which injects the main HTML content. Doesn't the first script have enough ability to inject HTML content by itself? I'd love to be enlightened about these things.
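To illustrate the srcless-iframe behavior described above, here is a minimal sketch (all names and URLs are hypothetical):

    // a srcless iframe inherits the parent's origin, so the parent can
    // write into it and scripts inside it can reach back out
    var frame = document.createElement('iframe');
    document.body.appendChild(frame); // no src attribute set

    var doc = frame.contentDocument; // accessible - same origin
    doc.open();
    doc.write('<div id="chat-bubble"></div>' +
              '<script src="https://widget.example.com/bundle.js"><\/script>');
    doc.close();

    // code running inside the frame can touch the parent directly:
    //   window.parent.document.body.appendChild(el);
    // for a cross-origin frame you would fall back to window messaging:
    //   frame.contentWindow.postMessage({ type: 'open-chat' }, 'https://widget.example.com');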

How to sandbox a div element?

I am trying to build a content editor. This content editor will load an HTML document (with JavaScript) into, for example, a #result element. The problem with this is that if this HTML contains, for example, $("input").hide();, then all of the inputs are hidden throughout the whole page, not just inside the loaded HTML (which is my goal).
What I want to do with the editor is that when a client clicks on an element that represents something in the database, the info for this element will pop up and the user will be able to edit it. (So, if a user hovers over a form with the class "contact-form", which is in the database and connected to the loaded page, a new window will pop up with information about this specific form element.)
Also, I cannot completely disable JavaScript, since the loaded HTML might contain JavaScript for styling etc.
My goal: remove JavaScript that can be annoying when a user loads an HTML file, like an alert();. Also, remove the ability for the JavaScript to edit anything outside its own DOM.
P.S. I am open to better workarounds like using an iframe for this, BUT I want to be able to hover over elements and interact with them.
Edit: It seems that this question might be a bit too broad, looking at the comments. Summary of my question: how can I disable alert() for a specific div, and how can I create a sandbox so that code inside a div can only change elements inside that div?
What you're looking for is HTML sanitization. This is the process by which you remove any dangerous content from a snippet of HTML on the server, before it's loaded in the browser. There are plenty of sanitization libraries out there that can strip script tags, object tags, etc. Just remember, you can't rely on sanitizing with client-side JavaScript, because by the time your script is injected, another malicious script may have already loaded and run.
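As one concrete example (this is a sketch with a library the answer doesn't name: DOMPurify, which can run server-side in Node via jsdom, consistent with the advice above):

    // assumes the dompurify and jsdom npm packages
    const createDOMPurify = require('dompurify');
    const { JSDOM } = require('jsdom');

    const window = new JSDOM('').window;
    const DOMPurify = createDOMPurify(window);

    const dirty = '<div>ok</div><script>$("input").hide();<\/script>';
    const clean = DOMPurify.sanitize(dirty); // => '<div>ok</div>'
    // send `clean` to the browser; the script never gets a chance to run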
The only way to effectively sandbox a JavaScript environment is with iframes. You'll notice that websites like CodePen, JS Bin and JSFiddle use them extensively. There's also something called the Shadow DOM, which is the basis of Web Components, but it isn't very well supported yet.
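A minimal sketch of the iframe approach using the sandbox attribute (the element id is hypothetical); leaving out the allow-scripts token blocks all JavaScript in the frame, including alert():

    var frame = document.createElement('iframe');
    frame.setAttribute('sandbox', ''); // no tokens: scripts, forms, popups all blocked
    frame.srcdoc = '<div>loaded content</div>' +
                   '<script>alert("never runs")<\/script>';
    document.getElementById('result').appendChild(frame);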
To make it possible to run your own frontend scripts that enable hovering, inject your script after the sanitization process. That way, if the content is loaded inside an iframe, your script will be loaded along with it.
Finally, alert() doesn't belong to any element in the DOM; you can trigger an alert as soon as the page loads, for example. However, if you're trying to prevent alerts from popping up on user interactions, you could try removing all event listeners from a particular element. This won't be necessary if you strip script tags during sanitization, though, since the scripts never get a chance to load, so there won't be any event listeners.
You can use the Shadow DOM to load an HTML document into a host node. See also WHY SHADOW DOM?

Is it possible to convert a dynamic HTML page with a lot of javascript to a page without javascript?

I have a page with a lot of JavaScript. However, once rendered the page remains static: there are no moving things, special effects, etc. It should be possible to render the same HTML without any JavaScript at all, using only plain HTML and CSS. This is exactly what I want; I would like to get a no-JavaScript version of this particular page. Naturally, I do not expect any dynamic behavior, so I am OK if buttons are dead, for example. I just want them rendered.
Now, I do not want an image. It needs to be HTML with CSS; the CSS may be embedded in the HTML, which is fine too.
How can I do it?
EDIT
I am sorry, but I must not have been clear. My web site works with JavaScript and will not work without it. I do not want to check whether it works without; I know it will not, and I really do not care about that. This is not what I am asking. I am asking about a specific page, which I want to grab as pure HTML + CSS. The fact that its dynamic nature is lost is of no importance.
EDIT2
There was a suggestion to grab the HTML from the DOM inspector. This is the first thing I did: in Chrome's developer tools I copied the root html element as HTML and saved it to a file. Of course, this does not work, because it still references the CSS files on the web. I guess I should have mentioned that I want it to work from the file system.
Next I tried saving the page as complete, with all its environment, using some kind of Save menu (browser dependent). It saves the page and all the related files forming a closure, which can be opened from the file system. But the HTML has to be manually cleaned of all the JavaScript, which is tedious and error-prone.
EDIT3
I seem to keep forgetting things. Images should be preserved, of course.
I have to do a similar task on a semi-regular basis. As yet I haven't found an automated method, but here's my workflow:
Open the page in Google Chrome (I imagine Firefox also has the relevant tools);
"Save Page As" (complete page), rename the HTML page to something nicer, delete any .js scripts which got downloaded, and move everything into a single folder;
On the original page, open the Elements tab (DOM inspector), find and delete any tags which I know cause problems (Facebook "like" buttons, for example; I also try to delete script tags at this stage because it's easier), and copy as HTML (right-click the <html> tag). Paste this into (replacing) the downloaded HTML file (remember to keep the DOCTYPE, which doesn't get copied);
Search all HTML files for any remaining script sections and delete them (also delete any noscript content), and search for " on" (with a leading space) to find and remove inline handlers (onload, onclick, etc.) - a console sketch automating this step appears after these steps;
Search for images (src=, url()), find common patterns in the image filenames, and use regular expressions to replace them globally, for example stripping a "/images/" prefix so that src="/images/myimage.png" becomes src="myimage.png" and resolves locally. This needs to be applied to all HTML and CSS files. Also make sure the CSS files have the correct path (href). While doing this I usually replace all link hrefs with #;
Finally, open the converted page in a browser (actually I tend to do this early on so that I can see if any change I make causes it to break), use the Console tab to check for 404 errors (images that didn't get downloaded or had a different name) and the Network tab to check if anything is still being loaded from the online version;
For any files which didn't get downloaded, I go back to the original page and use the Resources tab to find and download them manually;
(Optional) Cull any content which isn't needed (tracker images/iframes, unused CSS, etc.).
It's a big job. I'd love a tool which automated all that, but so far I haven't found one. The pages I download are quite badly made (shops) with a lot of unusual code, which is why there are so many steps. You might not need to follow every one.
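A rough sketch of automating the script and handler removal (steps 3-4), run from the browser console on the live page before copying the HTML:

    // delete script and noscript elements from the live DOM
    document.querySelectorAll('script, noscript').forEach(function (el) {
        el.remove();
    });

    // strip inline handlers (onclick, onload, ...) from every element;
    // slice first, since removing from a live NamedNodeMap mid-loop is unsafe
    document.querySelectorAll('*').forEach(function (el) {
        Array.prototype.slice.call(el.attributes).forEach(function (attr) {
            if (/^on/i.test(attr.name)) el.removeAttribute(attr.name);
        });
    });

    // then copy the result (remember to re-add the DOCTYPE by hand);
    // copy() is a Chrome DevTools console utility
    copy(document.documentElement.outerHTML);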

Ajax entire page - display only one div and retain CSS and other header material?

A client wants a merch shop on their site, and has set one up. I could iframe the whole merch page in, but frankly the merch site is an eyesore, and the client's site has a very particular feel to it. So I'm considering using an AJAX GET to grab the whole page, then JavaScript to display only the div with the merchandise in it. However, there are a lot of JavaScript includes (etc.) on the merch site that I'd need to make sure are still present for the div to work correctly.
Any feeling on whether this would work or not? Would the displayed div pick up its stylesheets and scripts from the AJAX'd page? Could I put the div in an iframe instead?
Opinions?
It sounds like an ugly solution. Isn't it better to do this server-side instead, for example by letting a PHP script read in the page and do whatever magic it takes to display it?
Using AJAX to load entire pages is ugly for a couple of reasons, including:
It breaks the URLs (can be worked around but requires extra work)
It's hard for search engines to crawl your site
It breaks some GUI elements in the browser, such as loading visualisations
Looks like you can use the jQuery load() function, which can also extract just a fragment of the fetched page: http://docs.jquery.com/Ajax/load
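A minimal sketch of that (the selectors and URL are placeholders; note this only works if the merch page is on the same origin or permits cross-origin requests):

    // fetch the merch page and keep only the merchandise div
    $('#shop-container').load('/merch/shop.html #merch-div', function () {
        // note: jQuery strips <script> tags from a fetched fragment,
        // and the merch site's stylesheets are not applied - only the
        // fragment's HTML arrives
    });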
