escaped_fragment in sitemap or hashbang? - javascript

My site is a single-page application and uses a lot of JavaScript. I've set up my server to generate the site the user sees with JavaScript, so Google sees the same content. Google knows about this via the fragment meta tag and converts links with the hashbang (#!) to _escaped_fragment_ URLs to fetch that rendered site. Since I want Google to know about not-so-recent articles on my site that no longer have a link from the main page, I added a sitemap.
The question is whether I should add URLs with the hashbang or convert them to _escaped_fragment_ URLs.
http://www.example.com/#!veryAwsomeDynamicPage
or
http://www.example.com/?_escaped_fragment_=veryAwsomeDynamicPage
My goal: I want Google to get my content via _escaped_fragment_, but I want the link that Google shows in its search results to use the hashbang (so that users don't get redirected).

As outlined in Google's specification under 'Role of the Search Engine Crawler', it states:
The search engine agrees to display in the search results the corresponding pretty URLs:
Thus, http://www.example.com/#!veryAwsomeDynamicPage is displayed in the search results while Google fetches the content from http://www.example.com/?_escaped_fragment_=veryAwsomeDynamicPage.
So http://www.example.com/#!veryAwsomeDynamicPage is what belongs in the sitemap.xml.
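A minimal sketch of a sitemap entry for such a page (using the placeholder URL from the question; Google rewrites the hashbang to ?_escaped_fragment_= by itself when crawling):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/#!veryAwsomeDynamicPage</loc>
  </url>
</urlset>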

Related

Google's IMPORTXML is returning "Imported content is empty", even though content loads without JavaScript [duplicate]

I am trying to import data from the following website to Google Sheets. I want to import all the matches for the day.
https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches
I have tried importxml and importhtml, but it seems this does not work as the website uses JavaScript. I have also tried to use Apipheny without any success.
When using Apipheny, the error message is
'Failed to fetch data - please verify your API Request: {DNS error'
TL;DR
Adapted from my answer to How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website? (also posted by me)
Please spend some time learning how to use the browser's developer tools so you will be able to identify
whether the data is already included in the source code of the webpage as JSON / a literal JavaScript object or in some other form
whether the webpage is doing GET or POST requests to retrieve the data, and when those requests are done (i.e. at some point during page parsing, or on an event)
whether the requests require data from cookies
Brief guide on how to use the web browser to find useful details about the webpage / data to import
Open the source code and check whether the required data is included. Sometimes the data is included as JSON and added to the DOM using JavaScript. In this case it might be possible to retrieve the data by using the Google Sheets functions or the URL Fetch Service from Google Apps Script.
Let's say that you use Chrome. Open the Dev Tools, then look at the Elements tab. There you will see the DOM. It might be helpful to identify whether the data that you want to import, besides being in visible elements, is included in hidden / not visible elements like <script> tags.
Look at Source; there you might be able to see the JavaScript code. It might include the data that you want to import as a JavaScript object (commonly referred to as JSON).
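If the data does turn out to be embedded that way, it can often be pulled into a sheet with a short Google Apps Script sketch along these lines; the URL, the regular expression and the shape of the parsed object below are placeholder assumptions to adapt to the actual page.
// Fetch a page with UrlFetchApp and extract a JSON blob embedded in a
// <script> tag, e.g. <script>window.__DATA__ = {...};</script>
function importEmbeddedJson() {
  var url = 'https://example.com/page-with-embedded-json'; // placeholder
  var html = UrlFetchApp.fetch(url).getContentText();
  var match = html.match(/window\.__DATA__\s*=\s*(\{[\s\S]*?\});/);
  if (!match) throw new Error('Embedded JSON not found');
  var data = JSON.parse(match[1]);
  // Write one row per item into the active sheet (the shape is an assumption).
  var sheet = SpreadsheetApp.getActiveSheet();
  data.items.forEach(function (item) {
    sheet.appendRow([item.name, item.value]);
  });
}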
There are a lot of questions about google-sheets + web-scraping that mention problems using importhtml and/or importxml and already have answers; many even include code (JavaScript snippets, Google Apps Script functions, etc.) that might save you from having to use a specialized web-scraping tool with a steeper learning curve. At the bottom of this answer there is a list of questions about using Google Sheets built-in functions, including annotations of the workarounds proposed.
Is there a way to get a single response from a text/event-stream without using event listeners? asks about using EventSource. While this can't be used in server-side code, the answer shows how to use the HtmlService to use it in client-side code and retrieve the result to Google Sheets.
As you already realized, the Google Sheets built-in functions importhtml(), importxml(), importdata() and importfeed() only work with static pages that do not require signing in or other forms of authentication.
When the content of a public page is created dynamically by using JavaScript, it cannot be accessed with those functions; on the other hand, the website's webmaster may also have purposefully prevented web scraping.
How to identify if content is added dynamically
To check if the content is added dynamically, using Chrome,
Open the URL of the source data.
Press F12 to open Chrome Developer Tools
Press Control+Shift+P to open the Command Menu.
Start typing javascript, select Disable JavaScript, and then press Enter to run the command. JavaScript is now disabled.
JavaScript will remain disabled in this tab so long as you have DevTools open.
Reload the page to see if the content that you want to import is shown. If it is shown, it can be imported by using the Google Sheets built-in functions; otherwise it's not possible with them, but might be possible by using other means of web scraping.
According to Wikipedia,
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.
Use of robots.txt to block Web crawlers
Webmasters could use a robots.txt file to block crawler access to their website. In that case the result will be #N/A Could not fetch URL.
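For example, a robots.txt like the following blocks all crawlers from the entire site:
User-agent: *
Disallow: /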
Use of User agent
The webpage could be designed to return a custom message instead of the data, depending on the requesting user agent.
Below there are more details about how the Google Sheets built-in "web-scraping" functions work.
IMPORTDATA, IMPORTFEED, IMPORTHTML and IMPORTXML are able to get content from resources hosted on websites that are:
Publicly available. This means that the resource doesn't require authorization / being logged in to any service to access it.
The content is "static". This means that if you open the resource using the view-source option of modern web browsers it will be displayed as plain text.
NOTE: Chrome's Inspect tool shows the parsed DOM; in other words, the actual structure/content of the web page, which could be dynamically modified by JavaScript code or browser extensions/plugins.
The content has the appropriate structure.
IMPORTDATA works with structured content such as CSV or TSV, regardless of the file extension of the resource.
IMPORTFEED works with marked-up content such as ATOM/RSS.
IMPORTHTML works with marked-up content such as HTML that includes properly marked-up lists or tables.
IMPORTXML works with marked-up content such as XML or any of its variants, like XHTML.
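As a quick reference, minimal calls for each of those functions look like this (the example.com URLs are placeholders):
=IMPORTDATA("https://example.com/data.csv")
=IMPORTFEED("https://example.com/feed.xml")
=IMPORTHTML("https://example.com/page", "table", 1)
=IMPORTXML("https://example.com/page", "//h2")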
The content doesn't exceed the maximum size. Google hasn't disclosed this limit, but the following error will be shown when the content exceeds it:
Resource at url contents exceeded maximum size.
Google servers are not blocked by means of robots.txt or the user agent.
On the W3C Markup Validator there are several tools to check whether resources have been properly marked up.
Regarding CSV, check out Are there known services to validate CSV files?
It's worth noting that the spreadsheet
should have enough room for the imported content; Google Sheets has a limit of 10 million cells per spreadsheet, according to this post a limit of 18,278 columns, and 50,000 characters per cell, whether as a value or a formula.
doesn't handle large in-cell content well; the "limit" depends on the user's screen size and resolution, since it's now possible to zoom in/out.
References
https://developers.google.com/web/tools/chrome-devtools/javascript/disable
https://en.wikipedia.org/wiki/Web_scraping
Related
Using Google Apps Script to scrape Dynamic Web Pages
Scraping data from website using vba
Block Website Scraping by Google Docs
Is there a way to get a single response from a text/event-stream without using event listeners?
Software Recommendations
Web scraping tool/software available for free?
Recommendations for web scraping tools that require minimal installation
Web Applications
The following question is about a different result, #N/A Could not fetch URL
Inability to use IMPORTHTML in Google sheets
Similar questions
Some of these questions might be closed as duplicates of this one
Importing javascript table into Google Docs spreadsheet
Importxml Imported Content Empty
scrape table using google app scripts
One answer includes Google Apps Script code using the URL Fetch Service
Capture element using ImportXML with XPath
How to import Javascript tables into Google spreadsheet?
Scrape the current share price data from the ASX
One of the answers includes Google Apps Script code to get data from a JSON source
Guidance on webscraping using Google Sheets
How to Scrape data from Indiegogo.com in google sheets via IMPORTXML formula
Why importxml and importhtml not working here?
Google Sheet use Importxml error could not fetch url
One answer includes Google Apps Script code using the URL Fetch Service
Google Sheets - Pull Data for investment portfolio
Extracting value from API/Webpage
IMPORTXML shows an error while scraping data from website
One answer shows the xhr request found using browser developer tools
Replacing =ImportHTML with URLFetchApp
One answer includes Google Apps Script code using the URL Fetch Service
How to use IMPORTXML to import hidden div tag?
Google Sheet Web-scraping ImportXml Xpath on Yahoo Finance doesn't works with french stock
One of the answers includes Google Apps Script code to get data from a JSON source. As of January 4th, 2023, it's no longer working, very likely because Yahoo! Finance is now encrypting the JSON. See Tainake's answer to How to pull Yahoo Finance Historical Price Data from its Object with Google Apps Script? for a script using Crypto.js to handle this.
How to fetch data which is loaded by the ajax (asynchronous) method after the web page has already been loaded using apps script?
One answer suggests reading the data from the server instead of scraping it from a webpage.
Using ImportXML to pull data
Extracting data from web page using Cheerio Library
One answer suggests the use of an API and Google Apps Script
ImportXML is good for basic tasks, but it won't get you very far if you are serious about scraping:
The approach only works with the most basic websites (no SPAs rendered in the browser can be scraped this way; any basic web-scraping protection or connectivity issue breaks the process, and there isn't any control over HTTP request geolocation or the number of retries) - and Yahoo Finance is not a simple website.
If the target website data requires some cleanup post-processing, it gets very complicated, since you are now "programming with Excel formulas", a rather painful process compared to writing regular code in a conventional programming language.
There isn't any proper launch and cache control, so the function may re-run sporadically, and if the HTTP request fails, cells will be populated with ERR! values.
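A minimal sketch of one mitigation for that last point, assuming you move the fetch into Google Apps Script and keep the last good response in CacheService (the function name and cache policy here are illustrative):
// Serve the last good response from the cache when the live request fails,
// instead of letting cells fill with error values.
function cachedFetch(url) {
  var cache = CacheService.getScriptCache();
  var key = 'fetch:' + url;
  try {
    var body = UrlFetchApp.fetch(url).getContentText();
    // CacheService caps values at roughly 100 KB; skip caching bigger bodies.
    if (body.length < 100000) {
      cache.put(key, body, 21600); // keep for 6 hours, the allowed maximum
    }
    return body;
  } catch (e) {
    var stale = cache.get(key);
    if (stale !== null) {
      return stale; // fall back to the last good copy
    }
    throw e; // nothing cached either, so re-raise
  }
}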
I recommend using proper tools (an automation framework and a scraping engine that can render JavaScript-powered websites) and using Google Sheets just for basic storage purposes:
https://youtu.be/uBC752CWTew (Pipedream for automation and ScrapeNinja engine for scraping)

How to scrape info from a specific page of a website that has a single URL for multiple pages?

I am trying to write a program that automatically clicks on certain places based on what is displayed. The website I am trying to do this on is Gimkit. I am using a Python web scraper and auto-clicker to perform this task. The problem I am running into is that the player-side URL is the same no matter what HTML is loaded. This leads to the scraper only getting the HTML for the first page, which is the game PIN page. This means that my scraper is getting no useful information and is simply stuck. When I fill out the information on my device and check the URL, it is the same: there is no difference in the URL when I am entering the PIN or playing the game, but when I inspect element, the HTML is obviously different. Right now, I am using Requests, BeautifulSoup4, and lxml to get and format the HTML from the site. How can I access the HTML for the gameplay page instead of the PIN page?
Related: How can I keep the same url in the address bar for every page?
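Since the gameplay view is rendered client-side at the same URL, a plain HTTP fetch can only ever return the initial PIN page; getting the rendered HTML requires driving a real browser engine. A minimal sketch with Node.js and Puppeteer, assuming a placeholder URL and selectors (the real ones have to be read from the page with the browser's dev tools):
// Drive a headless browser so client-side rendering actually runs,
// then read the DOM of the gameplay view. URL and selectors are placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.gimkit.com/join', { waitUntil: 'networkidle0' });

  // Fill in the game PIN and join; '#pin-input' and '#join-button' are
  // assumed selector names, not the site's real ones.
  await page.type('#pin-input', '123456');
  await page.click('#join-button');

  // Wait until the gameplay view exists in the DOM, then grab the HTML.
  await page.waitForSelector('#gameplay-root');
  const html = await page.content();
  console.log(html);

  await browser.close();
})();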

json-ld for Javascript popup window content

On our website, product detail pages open via a JavaScript popup window.
The same product page may also be opened with a direct link to the page with the popup window already open.
In scenario #2 above, my JSON-LD data is loaded fine and the Google Structured Data Testing Tool picks up the information.
However, in the most common scenario, i.e. scenario #1 above, the JSON-LD data doesn't seem to load and the product information is null.
Example - scenario 1: http://www.beride.net/school/guincho-adventours
Example - scenario 2: http://www.beride.net/school/guincho-adventours?course=62
I used Google Tag Manager to fire the JSON-LD scripts.
Does anyone know how I can get the JSON-LD information to load in scenario #1 above?
The Structured Data Testing Tool is not 100% reliable at coping with JavaScript, including Google Tag Manager. Either paste the rendered HTML into the tool or look at the Structured Data report in Google Search Console.
Your first example is a list of products. Google's guidelines indicate that you should not mark up complete entities when they are listed as summaries that link to the details. They suggest you mark up a list of links.
https://developers.google.com/search/docs/guides/mark-up-listings
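A minimal sketch of that list-of-links markup, using schema.org's ItemList (the URL is the question's second example; the position value is illustrative):
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "url": "http://www.beride.net/school/guincho-adventours?course=62"
    }
  ]
}
</script>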

How to display a webpage in an HTML page from a URL?

I want to display two webpages in a single aspx webpage. Is it possible?
for example
user -- opens link to www.mywebsite.com
On my homepage I want to display both www.google.com and www.bing.com.
In the background I call two different URLs, and they should display on the same aspx page.
To integrate websites into other websites, use an iframe like this:
<iframe src="http://www.bing.com"></iframe>
More information: w3schools
Google
This will not work for Google because of their terms of service:
1.3 Your Obligations. You shall receive a Query from the End User and shall forward that Query to Google. You may not in any way frame, cache or modify the Results produced by Google, except as otherwise agreed to between You and Google.
So if you want to integrate Google search to your website, you can read more about the API here Google Api
Bing
To embed search results from Bing, take a look at the Bing API. 5,000 queries per day are free.

How to display an HTML5 conversion of a PDF on a page without using an iframe?

I do not want to embed the PDF directly, as the PDF itself is indexed by Google and returned directly in Google search results (if the PDF content is displayed on the page instead, I can benefit from advertising clicks).
As suggested elsewhere on StackOverflow I have used this conversion tool:
http://www.idrsolutions.com/example_conversions/
However, the output from this tool in the iframe is not indexed by Google.
Here is an example of the output I desire:
http://www.manualsdir.com/manuals/132858/jaguar-s-type.html
How can I replicate this functionality?
Google tries to associate framed content with the page containing the frames, but we don't guarantee that we will.
From here: https://support.google.com/webmasters/answer/34445?hl=en
Speculating, what this probably means is that the content should be on the same domain (to ensure it's your content) and only iframed once or a small number of times (so that multiple pages cannot gain credit for the same piece of content).
Did you follow the above ideas and give Google enough time to index the content?
For disclosure, I am a developer at IDRsolutions.
We do have customers who are iframing converted content and having it indexed by Google. You might like to try the singlefile mode that puts all pages into one file so Google is able to easily index content on all of the pages.
If you have any questions specific to the conversion, there are support forums here: http://support.idrsolutions.com/forums/forum/pdf-conversion-forum/
