Is there a way or API to register a URL with Google? For example, when a page is created in the project, its address would automatically be submitted to Googlebot.
There isn't really a way to submit it automatically (otherwise the manual submission form wouldn't need a captcha).
However, why do you want to submit every page to Googlebot? It is sufficient to point the bot at your project's address once the project goes online (if Google hasn't already found it). Googlebot then visits your project regularly, automatically adding new pages and removing old ones as soon as a link somewhere in your project points to the new page.
I'm implementing Invisible reCAPTCHA on my website, and to reduce the number of third-party JS files loaded with the page I'd like to download the https://www.google.com/recaptcha/api.js file only once the user submits a form. Once it loads, I would then use grecaptcha.render followed by grecaptcha.execute. I've implemented and tested this, and it works as expected.
My concern is that delaying this file load may negatively affect Google's ability to confirm the user is not a robot. For example, perhaps Google's JS picks up user events while they're using the page and uses those actions to verify the humanness of a user. By delaying the JS load, Google won't have those user events to take into account, which may result in more legitimate users getting annoying image challenges.
The docs don't mention anything about this. Does anyone have experience with this implementation? I'd think Google relies on other info like IP address and cookies for this kind of verification, but I want to confirm.
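For reference, a minimal sketch of the kind of deferred load I'm describing (the form ID, container ID, and site key are placeholders, not my actual code):

    // Load api.js only on the first submit, then render the invisible
    // widget and execute it. Assumes an empty <div id="recaptcha-container">
    // inside the form.
    function loadRecaptcha(onReady) {
      window.onRecaptchaLoad = onReady; // api.js calls this once it has loaded
      var script = document.createElement('script');
      script.src = 'https://www.google.com/recaptcha/api.js?onload=onRecaptchaLoad&render=explicit';
      script.async = true;
      document.head.appendChild(script);
    }

    document.getElementById('my-form').addEventListener('submit', function (e) {
      e.preventDefault(); // hold the real submit until we have a token
      var form = e.target;
      loadRecaptcha(function () {
        var widgetId = grecaptcha.render('recaptcha-container', {
          sitekey: 'YOUR_SITE_KEY',
          size: 'invisible',
          callback: function (token) {
            // the token is submitted with the form and verified server-side
            form.submit();
          }
        });
        grecaptcha.execute(widgetId);
      });
    });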
I've been scratching my head since yesterday trying to figure this out.
When I navigate to the account settings page and view the source code, there is literally no user-specific data like name, email, gender, etc., but when I check via inspect element it's there. The same happens with other pages like order history.
I'm assuming the data is being generated dynamically (am I right?).
I have two questions about this.
How do developers do this?
What's the purpose of doing this? Since developers go through the extra pain of generating the data dynamically, it must be solving a problem - otherwise, why would they do it?
By generating the page dynamically, developers can improve the user experience. For example, if you had a separate HTML file for your settings page, the user would have to make a call to your server to receive that file before seeing the page (maybe 1/3 of a second). However, if the developer dynamically generates pages using JavaScript or some framework, everything needed is already stored locally on the user's machine, meaning the page loads significantly quicker (~1/500 of a second).
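As a rough sketch of how this typically looks (the endpoint and element IDs below are made up): the server sends a static HTML shell, and the user-specific data is fetched as JSON and injected into the DOM afterwards. That is also why the data shows up in inspect element (the live DOM) but not in view source (the original HTML the server sent).

    // The HTML shell ships with empty placeholders; the user data comes
    // from a (hypothetical) JSON endpoint after the page loads.
    fetch('/api/me', { credentials: 'same-origin' })
      .then(function (res) { return res.json(); })
      .then(function (user) {
        document.getElementById('account-name').textContent = user.name;
        document.getElementById('account-email').textContent = user.email;
      });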
Hope this helps.
I made a webpage which provides some direct downloads. Therefore I only want real humans, not crawlers, to download my files. I tried to use Google reCAPTCHA, but it is just part of the webpage - visitors can still use the download links and don't have to worry about the reCAPTCHA at all. Is there a way to make visitors pass the verification first? For example, is it possible to pop up the reCAPTCHA before the whole page is loaded? If that's doable, how can I do it? Thanks!
What I can recommend here is to keep the captcha form on the page you already have and create a new page with the download links that is not indexable.
Once the captcha response has been verified, use header('Location: download.php'); or something similar to redirect the user.
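A rough sketch of that flow, shown here in Node/Express rather than PHP (route names and keys are placeholders, and Node 18+ is assumed for the built-in fetch): the browser posts the reCAPTCHA token, the server verifies it against Google's siteverify endpoint, and only then redirects to the download page.

    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    app.post('/verify', async (req, res) => {
      // Verify the token the reCAPTCHA widget submitted with the form.
      const params = new URLSearchParams({
        secret: process.env.RECAPTCHA_SECRET,         // your secret key
        response: req.body['g-recaptcha-response'],   // token from the widget
      });
      const result = await fetch('https://www.google.com/recaptcha/api/siteverify', {
        method: 'POST',
        body: params,
      }).then(function (r) { return r.json(); });

      if (result.success) {
        res.redirect('/download');                    // equivalent of header('Location: download.php');
      } else {
        res.status(403).send('Captcha verification failed');
      }
    });

    app.listen(3000);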
A captcha before loading a webpage is possible, but it always relies on client-side code such as JavaScript, which bots can easily bypass, so the download links themselves should only become reachable after the server-side check.
I am creating an Angular app that is hosted on a webserver that doesn't allow me to edit .htaccess files or web.config. There is no server-side language option available, which means no middleware for creating HTML snapshots. This is a high-dollar CRM with a webstore and no option of switching hosts.
So I have come up with my own "solution" to the issue. Would it be considered OK to create hyperlinks that link to URLs that generate the same view the onClick event updates in place? That way the user sees the content loaded immediately, while bots have to reload the page at the new URL to see the page content.
Example:
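Roughly, a link like this, where showView2() stands in for whatever handler swaps the view in place, and the href gives bots a real URL to follow:

    <a href="/view2" onclick="showView2(); return false;">View 2</a>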
I'm struggling to find a good solution to this issue, and I know others must be in the same situation as me when it comes to development. The code above is just a visual reference to what I am referring to.
Have you looked at grunt-html-snapshot?
After implementing and testing this, it does work well. Google sees them as new pages, and the user never has to worry about loading new content.
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non-JavaScript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep link to that URL, and each piece of content can have its own specific Open Graph data.
However, the issue I hit is the following. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-JavaScript / web crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone directly to the root of the site and navigated from there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work, but I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router shows the correct content - a rough sketch follows this list. (Does Google punish automatic JavaScript redirects?)
render the no-JavaScript page, and add some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks on a link
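A rough sketch of the first hack (renderView stands in for whatever the client-side router exposes):

    // On any static content page: stash the intended path in the #! and
    // bounce to the root, where the JavaScript router picks it up.
    if (window.location.pathname !== '/') {
      window.location.replace('/#!' + window.location.pathname); // e.g. /#!/gallery/lovely-cat
    }

    // On the root page: read the #! fragment and render the matching view.
    var hash = window.location.hash;
    if (hash.indexOf('#!') === 0) {
      renderView(hash.slice(2)); // hypothetical router call
    }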
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non-JavaScript / web crawler version of the site" the same as the JavaScript version. Just build the HTML on the server instead of the DOM on the client.
Rendering the entire JavaScript app for each page doesn't appear to be a solution to me,
That is the robust approach.
as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat (note the / at the front).
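For illustration, a click handler along these lines keeps the paths absolute, so the URLs never compound (renderView and the data-internal attribute are made up for the example):

    document.addEventListener('click', function (e) {
      var link = e.target.closest('a[data-internal]'); // hypothetical marker for in-app links
      if (!link) return;
      e.preventDefault();
      var path = link.getAttribute('href');            // e.g. "/gallery/another-lovely-cat"
      history.pushState({}, '', path);                 // absolute path: note the leading /
      renderView(path);                                // hand off to the client-side router
    });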
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/