We have developed a website that uses a JavaScript library to query a database and display the data in an HTML page. When you go to the website, you need to search for something in order to retrieve any data.
So by default the website doesn't display any data; it needs the user to perform an action first.
The search result data is not visible in the HTML view source, since it is rendered by JavaScript.
As a result, search engines have no visibility into what our website is used for or what data it serves, and so cannot direct more visitors to us.
Secondly, I wonder how search bots/engines crawl websites with non-static content and understand enough about the website to direct users to it.
From what I see in your question, what you need to do is send requests to your server to query data from your database and show it to your client in real time. For that I would recommend using web sockets (such as socket.io) or AJAX, so that you can update your website seamlessly.
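For example, a bare-bones AJAX version using the Fetch API might look like the sketch below. The `/api/search` endpoint, the shape of the JSON response, and the element IDs are all assumptions about your app:

```javascript
// Client-side: send the search term to the server and render the results.
// Assumes a hypothetical /api/search endpoint returning JSON like
// { results: [{ title: "...", body: "..." }, ...] }.
async function search(term) {
  const response = await fetch(`/api/search?q=${encodeURIComponent(term)}`);
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  const data = await response.json();

  const container = document.getElementById('results');
  container.innerHTML = ''; // clear the previous search
  for (const item of data.results) {
    const div = document.createElement('div');
    div.textContent = `${item.title}: ${item.body}`;
    container.appendChild(div);
  }
}

document.getElementById('search-form').addEventListener('submit', (e) => {
  e.preventDefault(); // stay on the page instead of doing a full reload
  search(document.getElementById('search-input').value);
});
```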
From what I have researched, crawlers generally don't read dynamically loaded content. Instead, sites work around this with a technique called dynamic rendering.
Dynamic rendering happens on the server itself: it inspects each request, and if it determines the requester to be a bot, it sends pre-rendered static HTML content to the bot. Otherwise, it sends the normal dynamic content to the user.
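A rough sketch of that check in an Express server follows; the user-agent list is deliberately incomplete, and the two render functions are hypothetical stand-ins for your real rendering code:

```javascript
const express = require('express');
const app = express();

// Incomplete list of crawler user agents -- real dynamic rendering setups
// use a maintained list or a service such as Rendertron.
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider/i;

// Hypothetical stand-ins for your real rendering code.
const renderStaticHtml = (path) =>
  `<html><body><h1>Pre-rendered content for ${path}</h1></body></html>`;
const renderApp = (path) =>
  `<html><body><div id="app"></div><script src="/app.js"></script></body></html>`;

app.get('*', (req, res) => {
  const isBot = BOT_PATTERN.test(req.get('user-agent') || '');
  // Bots get static HTML they can index; browsers get the JS-driven page.
  res.send(isBot ? renderStaticHtml(req.path) : renderApp(req.path));
});

app.listen(3000);
```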
Also, Google and other search engines make use of meta tags. With meta tags you can define a short description of the webpage, for example `<meta name="description" content="A short summary of this page">`, which will often be shown on the search results page.
As for the question in the title, you would need to send the search information to a server. From there, you would process the data server-side and send the results back to the client, where JavaScript would render them.
You should use AJAX for this.
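For illustration, a minimal sketch of the server side (Express here, with a hypothetical `searchDatabase` standing in for your real database query):

```javascript
const express = require('express');
const app = express();

// Hypothetical stand-in for your real database query.
async function searchDatabase(term) {
  return [{ title: `Result for "${term}"`, body: '...' }];
}

// Take the search term from the query string, run the query, and return
// the results as JSON for the client-side JavaScript to render.
app.get('/api/search', async (req, res) => {
  const results = await searchDatabase(req.query.q || '');
  res.json({ results });
});

app.listen(3000);
```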
Resources:
https://ignitevisibility.com/dynamic-rendering-seo-details-need-know/
https://developer.mozilla.org/en-US/docs/Web/Guide/AJAX
https://developer.mozilla.org/en/docs/Web/HTML/Element/meta
I'm kind of new to this and am looking to learn how to pull API results and display them on a page, whether it's from a blog resource or content generation.
For example, I want to pull from VirusTotal's API and display the returned content. What is the best way to capture the input in an input tag and display the result in a div? And what if there were an option to pull from different APIs based on a drop-down selection?
An example of the API to pull content from is here, https://developers.virustotal.com/reference#api-responses, under the /file/report section.
To get the data from the API, you need to send a request. However, there is a catch: the browser's Same Origin Policy. By default, a page can only make requests back to its own origin; requests to other origins are blocked unless the remote server explicitly allows them with CORS headers. So you generally can't call the API directly from a page on your local machine.
There are two ways to approach this.
The simplest one is to write a program that calls the API and outputs an HTML file. You then open that HTML file to read the contents; to update the info, you run the program again manually. You could easily build this from the Python examples they provide.
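The docs show Python, but the same idea in Node might look like this sketch. The endpoint, parameters, and response fields follow VirusTotal's v2 /file/report documentation, so double-check them against the current reference:

```javascript
const fs = require('fs');
const https = require('https');

// One-shot script: call the API, write the result into a static HTML file,
// then open report.html in a browser. Rerun the script to refresh it.
const apiKey = 'YOUR_API_KEY';
const resource = 'FILE_HASH_TO_LOOK_UP';
const url = `https://www.virustotal.com/vtapi/v2/file/report?apikey=${apiKey}&resource=${resource}`;

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const report = JSON.parse(body);
    // positives/total are v2 response fields -- verify against the docs.
    const html = `<h1>Report for ${report.resource}</h1>
<p>${report.positives} of ${report.total} engines flagged this file.</p>`;
    fs.writeFileSync('report.html', html);
    console.log('Wrote report.html');
  });
});
```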
The other, slightly more complex way is to host a server on your PC. When you go to a page on that server, it sends a request to the API and serves the latest information. There are tons of frameworks and ways to do this; for an absolute beginner, ExpressJS is a good start. Make a hello-world program first, and once that works, figure out how to call the API whenever a page is loaded and display the results.
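A hedged sketch of that second approach: Express serves your page plus a small proxy route that calls the API server-side, which also sidesteps CORS because the browser only ever talks to your own server. The route path and env variable are assumptions:

```javascript
const express = require('express');
const app = express();

app.use(express.static('public')); // your HTML page with the input and div

// Proxy route: the browser calls our server, our server calls the API.
// The request to VirusTotal happens server-side, so CORS never applies.
app.get('/report/:hash', async (req, res) => {
  const url = `https://www.virustotal.com/vtapi/v2/file/report` +
    `?apikey=${process.env.VT_API_KEY}&resource=${req.params.hash}`;
  const apiRes = await fetch(url); // fetch is built into Node 18+
  res.json(await apiRes.json());
});

app.listen(3000, () => console.log('http://localhost:3000'));
```

To support the drop-down idea, you could add a query parameter to the route that selects which upstream API the server calls.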
I am working on a seemingly simple Facebook integration to allow my client (a local restaurant) to display their daily specials on their website. To make this work properly, I need to sort through their posts, find the most recent post that contains a certain string of characters (to distinguish it from their other posts), and display that post on the webpage. I know how to do this part, but I am having trouble getting the data in the first place.
My question: how do I access the data for this Facebook page's posts? This needs to work:
Without logging into Facebook
Preferably client-side
I would also like to use the JSON from the Graph API, but according to all the research I have done so far, I need an access token, which would force me to go server-side. The Facebook social plugins are not specific enough to do what I want. Any help using the Graph API, or another method of retrieving the page's posts, is very much appreciated.
Consider that I have a URL. I want to show some information associated with that URL on my page, the same way Facebook or other websites such as LinkedIn do: you submit a URL, and data about the website is retrieved and displayed. I am using jQuery and HTML for an application and want to know how to do this. My application has a few URLs retrieved from different sources, and I want to show some information about them instead of plain URLs. How can I build such a thing using jQuery?
You cannot access external URLs directly via AJAX calls because of the Same Origin Policy. What you'll have to do is submit a request to your own server, and have some server-side code request the external URL and retrieve the information.
How that is best achieved depends on what server-side setup you're running.
.NET example
PHP example
(basically just google "Screen scraping" + your language of choice)
You need to process the whole page to search for images or useful information.
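Whatever the stack, the shape is the same; here is a minimal Node sketch of such an endpoint. The `/preview` route is made up, and the regex-based extraction is fragile (a real HTML parser would be more robust), but it shows the flow:

```javascript
const express = require('express');
const app = express();

// The page's jQuery calls this endpoint instead of the external URL,
// so the Same Origin Policy is never violated in the browser.
app.get('/preview', async (req, res) => {
  const pageRes = await fetch(req.query.url); // fetch is built into Node 18+
  const html = await pageRes.text();

  // Crude extraction of the title and meta description.
  const title = (html.match(/<title[^>]*>([^<]*)<\/title>/i) || [])[1] || '';
  const description = (html.match(
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i
  ) || [])[1] || '';

  res.json({ title, description });
});

app.listen(3000);
```

From the page, jQuery could then call it with something like `$.getJSON('/preview', { url: link }, render)`.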
I have a client request on one of my projects where they want to be able to enter a URL and have it pull in some information from the site whose URL they entered and save it in the database.
So the user enters http://www.example.com/2342342, and my controller visits that site, gets the content of the first <h1>Tag</h1> on the site, and saves it in the database. Is this possible? If so, how would I go about doing it? Would I use some Rails commands, or something else, like jQuery?
Nokogiri is a great parser and can work directly with a URL.
So two steps there:
Instantiate a Nokogiri object with the URL as a param
Parse the HTML page to get what you expect
Find instructions here: http://nokogiri.org/tutorials/parsing_an_html_xml_document.html
Because you'll be working with another website, keep in mind two pieces of advice:
wrap your queries in a rescue so you can recover if the website is down
consider making the request via AJAX, because it could take a long time
I would check out the Railscast here:
http://railscasts.com/episodes/190-screen-scraping-with-nokogiri
It explains very well how to use Nokogiri and scrape content from other sites.
I am having trouble embedding AJAX-loaded HTML into the HTML page itself; I need this AJAX response to be visible in the page source.
I have two servers: one runs the web application, and the other is responsible for performing search queries (the searcher). The application server sends the HTML page to the client's browser, which then sends search queries to the searcher through AJAX; after a successful reply, the browser inserts the HTML results into the page.
The problem is that the search results do not exist in the HTML source, which is bad for SEO: Google's crawlers will have no idea what is being searched for.
The other problem is that if I make the application server issue the request and wait for the searcher's results, the page will take many seconds to load.
I am not sure what to do. I really need to make the website SEO-friendly and also need the page to load quickly!
Any pointers or ideas will be appreciated.
Thanks a lot,
Wa'el
It's impossible to get AJAX-provided data to appear in the "source" in this case, because the source is always the original page as requested from the server, before any client-side changes.
And any kind of client that does NOT support JavaScript, like search engine crawlers, will never see any AJAX-loaded data.
If you need the information to be indexable, you need to:
1: Serve the page fully rendered from the server, with no client-side loading (a sketch follows below).
2: Not reach the data through posted forms; search engines do not follow POSTs, only GET links.
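As a sketch of point 1, an Express route might look like this; `querySearcher` is a hypothetical stand-in for the call to your searcher server. The point is that the results are part of the HTML the server sends, so they show up in the page source and can be indexed:

```javascript
const express = require('express');
const app = express();

// Hypothetical stand-in for the call to your searcher server.
async function querySearcher(q) {
  return [{ title: `Result for ${q}` }];
}

// Reached through a plain GET link like /search?q=pizza, which crawlers
// can follow. The results are rendered into the HTML on the server.
app.get('/search', async (req, res) => {
  const q = req.query.q || '';
  const results = await querySearcher(q);
  const items = results.map((r) => `<li>${r.title}</li>`).join('');
  // NOTE: escape q and the result fields in real code to avoid XSS.
  res.send(`<html><body><h1>Results for ${q}</h1><ul>${items}</ul></body></html>`);
});

app.listen(3000);
```

If the searcher is slow, caching its responses on the application server is one common way to keep server-rendered pages fast.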