I use a WordPress blog format for the news section of my website, but not for the main part, and I want a "most recent" widget that draws from the WordPress section but displays on a page that is not using the blog; that page is written mostly in HTML and JavaScript. Is this possible, and if so, does anyone have any examples they could point me to?
Thanks!
I don't think it's possible - the Widget API is loaded somewhere in the middle of the WordPress loading process, so you would need a PHP rendering engine that loads the whole of WP and evaluates the widget code at the end. You will need some PHP to get the data from the database, or at least AJAX code calling external scripts that pull posts from the MySQL database.
It may be possible with an iframe, but I (as would most developers) highly advise against that. W3C compliance with iframes can be tricky, but if you're not worried about that then give it a go. On second thought, iframes wouldn't even be a viable option here. Just pull the RSS into your HTML pages using something like this technique (http://www.htmlgoodies.com/beyond/css/displaying-rss-feeds-with-xhtml-css-and-jquery.html) from HTMLGoodies.com.
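For example, since a WordPress install already exposes an RSS feed at /feed/, something like the sketch below would work (my rough version, not the exact HTMLGoodies code). It assumes the feed is on the same domain as your static pages or is CORS-enabled; the feed URL and the markup are placeholders:

<ul id="recent-posts"></ul>
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
// WordPress feeds are RSS 2.0: each post is an <item> with <title> and <link>
$.get('/news/feed/', function (data) {
  $(data).find('item').slice(0, 5).each(function () {
    var title = $(this).find('title').text();
    var link = $(this).find('link').text();
    $('#recent-posts').append(
      $('<li>').append($('<a>').attr('href', link).text(title))
    );
  });
});
</script>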
Another alternative could be to use a method outlined by FLD Trace here: http://www.fldtrace.com/wordpress/display-latest-post-outside-of-wordpress-with-json-and-jquery, and showcased here: http://www.fldtrace.com/lab/jsonAPI.html.
I've used it before with mild success on a few projects, and styled the FLD Trace widget accordingly. Be sure to give credit where credit is due if you use this and it fits the bill of what you're looking for, though I don't see any licensing restrictions on FLD Trace's method.
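For reference, the approach boils down to something like the sketch below (my rough reconstruction, not the article's exact code). It assumes the WordPress JSON API plugin is active on the blog; the URL, count parameter, and field names are whatever your endpoint actually returns, so treat them as placeholders:

// JSONP gets around the same-origin policy if the blog is on another domain
$.ajax({
  url: 'http://example.com/news/?json=get_recent_posts&count=5',
  dataType: 'jsonp',
  success: function (response) {
    $.each(response.posts, function (i, post) {
      $('#recent-posts').append(
        '<li><a href="' + post.url + '">' + post.title + '</a></li>'
      );
    });
  }
});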
We have a web app whose content is generated by JavaScript. Can Google index those pages?
Whenever we investigate this issue, we only find solutions on old pages about using "#!" in links.
In our app the links are like this:
domain.com/paris
domain.com/london
When we use these kinds of links, JavaScript populates the content.
Is it wise to use HTML snapshots, or do you have any other suggestions?
Short answer
Yes, they can crawl JavaScript-generated content, as long as you are using pushState.
Detailed answer
It depends on your setup. Google and Bing CAN crawl JavaScript and AJAX-based content if you are using pushState. If you do, they will handle content coming from AJAX calls, updates to the page title or meta tags made with JavaScript, and similar things.
Most frontend frameworks like Angular, Ember or Backbone already work with pushState, so in those cases you don't need to do anything. Check whatever system you are using to see how it does things. If you are not using pushState, you will need to implement it on your own or go the whole _escaped_fragment_ HTML snapshot route.
So if you use pushState, then yes, search engines can crawl your page just fine. If you don't, then no; you will need to implement pushState or do HTML snapshots.
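To make that concrete, "using pushState" means navigation roughly like this sketch. loadCity() is a placeholder for whatever fetches and renders your content, and the server must also answer direct requests for /paris or /london with a real page:

// Intercept link clicks and update the address bar with a real URL (no "#!")
$('a.city').on('click', function (e) {
  e.preventDefault();
  var url = $(this).attr('href'); // e.g. "/paris" or "/london"
  history.pushState({}, '', url);
  loadCity(url); // placeholder: fetch and render the content for this URL
});

// Keep the back/forward buttons in sync with the rendered content
window.addEventListener('popstate', function () {
  loadCity(location.pathname);
});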
Bonus info - unfortunately Facebook does not handle pushState, so the Facebook crawler needs either non-dynamic og: tags or HTML snapshots.
"Generated by JavaScript" is ambiguous. That could mean that you are running a JS script on the server or it could mean that you are making an AJAX call with a JS API. The difference appears to matter as far as Googlebot is concerned. But you don't have to take my word for it, as there is empirical proof of what Googlebot will and won't currently cache as far as JavaScript content in the form of live experiments using both the XMLHTTPRequest API and the Fetch API. So, as you can see, server-side rendering is still going to be the best way to go for SEO.
I intend to create an HTML report from a Perl script on the Linux/Unix side. The report contains various statistics, mainly in tabular format. Since there are many such tables, I want to split them into categories using tabs. The report will then be sent to some email IDs as an attachment.
The questions are:
Is there a good example of HTML + JavaScript to create such tabs? I could not find a complete example.
Libraries like jQuery fit the bill, except that I need to ship the .js file as well, which becomes a bit tedious. Is it possible to somehow embed jQuery (or any other library) in the HTML?
Thanks in Advance!
I hope this answers your questions
Use jQuery UI, which is an extension of the jQuery library, or you can use ExtJS,
and there are lots of other UI libraries depending on how much you want.
Why does shipping a JS file become tedious? Use script tags to call the external JS files for these libraries. Embedding JS in your HTML will clutter it, and it's a BAD practice.
Check out
jQuery UI's tabs example: http://jqueryui.com/demos/tabs/
ExtJS tabs example: http://dev.sencha.com/deploy/ext-4.1.0-gpl/examples/tabs/tabs.html
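A minimal self-contained version of the jQuery UI tabs demo looks roughly like this (the CDN URLs are examples; if the report really must be a single file, paste the contents of the .js and .css files into inline <script>/<style> tags instead of linking them):

<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="https://code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
  <script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
  <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.min.js"></script>
</head>
<body>
  <div id="tabs">
    <ul>
      <li><a href="#summary">Summary</a></li>
      <li><a href="#details">Details</a></li>
    </ul>
    <div id="summary"><table><!-- summary stats table --></table></div>
    <div id="details"><table><!-- detailed stats table --></table></div>
  </div>
  <script>
    // Turn the markup above into tabs
    $(function () { $('#tabs').tabs(); });
  </script>
</body>
</html>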
EDIT:
If you are planning to use JS in emails, forget it. A lot of email clients remove JS content.
Instead
Share a Google spreadsheet link in the email
Generate a PDF that has the report; there are a lot of libraries that convert HTML to PDF, so use one of them to convert the HTML tables to PDF.
Take a teaser approach to this one, where it's a hybrid of inline HTML + actual links out to the real content:
Just take a screenshot of the real tabs and place it as the header image in your email
Below the tabs image, place only the first page of the tab content
Clicking the tabs in the email takes the reader to the actual page
The URL can be tokenized and served over HTTPS, so it will be somewhat secure to view via the link
The real tabs can use jQuery UI as others have suggested.
Here's my problem: I want to build a website, mostly static but with some dynamic parts (a little blog for news, etc.).
My webserver can only serve static files (it's actually a public Dropbox directory!), but I don't want to repeat the layout in every HTML page!
Now, I see two possible solutions here: either I create an index.htm page that emulates site navigation with JavaScript and AJAX, or I create all the different HTML pages and then somehow import the layout bits with JavaScript.
I'd like ideas and suggestions on how to implement this and which libraries to use, or maybe there even exists something tailored exactly to what I need?
Thanks!!
I would define the site layout in your index.html file, and then use JavaScript and Ajax to load the actual content into a content div on the page. That way your content files (fetched by Ajax) will be more or less plain HTML, with CSS classes defined in index.html. Also, I wouldn't recommend building a blog in pure HTML and JavaScript. It wouldn't be very interactive; no comments, ratings, etc. You could store your blog content in XML and then fetch and display it with Ajax and JavaScript, however.
While on the subject of XML, you could implement all your site content in XML. You should also store the list of pages (for generating navigation) as XML.
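For instance, a hypothetical pages.xml (the file name and element names here are mine, just for illustration) could drive both the navigation and the Ajax loading:

// pages.xml might look like:
// <pages>
//   <page href="about.html" title="About"/>
//   <page href="blog.html" title="Blog"/>
// </pages>
$.get('pages.xml', function (xml) {
  $(xml).find('page').each(function () {
    var href = $(this).attr('href');
    $('<a>').text($(this).attr('title'))
      .attr('href', '#')
      .on('click', function (e) {
        e.preventDefault();
        $('#content').load(href); // pull the plain-HTML fragment into the layout
      })
      .appendTo('#nav');
  });
});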
Just one other way: you can generate static HTML on your computer and upload the result to Dropbox. Look at Emacs Muse.
jQuery allows you to easily load a section of one page into another page. I recommend loading common navigation sections into the different pages, rather than the other way around to avoid back/forward problems. Layout can be done with a separate CSS file rather than with tables to minimize the amount of repeated code. For the blog, you could put each blog entry in a separate file and load each section individually.
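Roughly like this; the file names and selectors are hypothetical:

// .load() can fetch a whole file, or just one fragment of another page
$('#nav').load('shared.html #nav > *'); // common navigation, kept in one place
$('#entry-3').load('blog/entry-3.html'); // one blog entry per file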
However, I would just use something already available. TiddlyWiki, for example, is a self-contained wiki that is all in one file. It's very customizable, and there's already a blog plug-in available for it. You can work on the site on your hard drive or USB drive, and then you can upload it to the web when done. There's nothing more to it.
Have you considered using publishing software on your computer to combine your content with a template, resulting in a set of static pages that you can then upload to the Dropbox folder?
Some options in this regard come to mind:
Movable Type - can output static HTML which can then be uploaded to the server
Adobe Dreamweaver
Apple iWork Pages
To handle comments, you can use Disqus. It inserts a complete comment system into your site using just JavaScript.
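The standard Disqus universal embed looks roughly like this (replace YOUR_SHORTNAME with the shortname you register at disqus.com):

<div id="disqus_thread"></div>
<script>
// Disqus loads everything else itself once this script is injected
(function () {
  var d = document, s = d.createElement('script');
  s.src = 'https://YOUR_SHORTNAME.disqus.com/embed.js';
  s.setAttribute('data-timestamp', +new Date());
  (d.head || d.body).appendChild(s);
})();
</script>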
You can use Google Closure Templates. It's one of the fastest and most versatile JavaScript templating solutions around.
I noticed that, like Gmail, Facebook's page source shows nothing but JavaScript. Why do they use JS to write the page?
This allows them to render pages extremely fast: they load some JavaScript that renders what's on screen first and then loads the rest.
They call it BigPipe. You can read more here: http://www.facebook.com/note.php?note_id=389414033919
Pretty interesting reading.
Because their pages are extremely dynamic; most of the content has to be constructed dynamically.
All their content is populated using AJAX, giving it a dynamic and desktop-ish look and feel (e.g. the instant messaging features).
Because AJAX (Asynchronous JavaScript and XML) adds dynamic features to web pages: multiple parts of a single page can load or update simultaneously, which gives great flexibility and speed to how pages load and work.
I had read that SEO applies to static websites, which hold their information in the initial page itself. Is it possible to get search engines to index dynamically added information?
I use AJAX for loading information. In this situation, how can I optimize the site for search engines?
You have to make all your content accessible without JavaScript (i.e. without AJAX). Otherwise the search engine spiders cannot index your content.
The proper way to use JavaScript and AJAX is to first code your pages and deliver content without JavaScript. All content should show in a logically organized manner. Once this is done, you can use JS/AJAX to provide superior usability to the visitors who have JS enabled.
That will benefit all your users, javascript enabled and disabled, and the search engines.
As long as each page has a unique URL (either by URL rewriting or by query string parameters) and uses that URL to drive the content being displayed, SEO will work.
I've done this a number of times in the past.
Ensure that your content is accessible to clients without JavaScript. You may have JavaScript on your pages that changes the content based on the URL.
I don't really know much about this, but IMHO using semantic markup and submitting a sitemap to Google helps a lot.
You can create a website that has AJAX and is search-engine compatible, but it must be built so that the same information can be accessed without AJAX through the same URL. Search engines cannot execute JavaScript, and as such any content only available through JavaScript will be inaccessible to them.
You need to either provide this content within the <noscript> tag, or have it in the page by default and have the JavaScript hide it for your AJAX version.
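A sketch of that second approach, with the real content in the page by default (initAjaxVersion() is a placeholder for your own code):

<div id="products">
  <!-- full, crawlable content rendered by the server -->
</div>
<script>
// Browsers with JS hide the static version and get the AJAX-driven UI;
// crawlers and no-JS browsers keep the markup above.
document.getElementById('products').style.display = 'none';
initAjaxVersion(); // placeholder: build the dynamic version
</script>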
You cannot deliver a different page to a search engine such as Google: they will generally crawl a page both as their bot and also masquerading as a user, by sending a user-agent string purporting to be, say, Internet Explorer. This is their method of ensuring you're not trying to game the search engines and that they're seeing the same content as a regular user.
To solve this problem, I create a sitemap of the site.
For example, in my sitemap I have
www.example.com/level_one/level_two/page1.html,
www.example.com/level_one/level_two/page2.html,
...
So the crawlers (Google, Yahoo, Bing, etc.) know what to look for. But when a user goes to www.example.com, they always get the pure AJAX site.
So you need to make the pages in the sitemap accessible like a static site.
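The sitemap itself is just the standard sitemap.org XML, e.g. using the URLs above:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/level_one/level_two/page1.html</loc></url>
  <url><loc>http://www.example.com/level_one/level_two/page2.html</loc></url>
</urlset>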
Another way to solve this (more work) is to make the pages work without JavaScript and then, if the user can execute JavaScript, rewrite all the hrefs to "#" (for example).
Please check: http://www.mattcutts.com/blog/give-each-store-a-url/
SEO is ultimately based on having a good site.
Things that will help you are links from other good sites, and having descriptive, friendly URLs, page titles, and H1 headings.
Submitting sitemaps to Google and using their webmaster tools is a great starting place.