How does Next.js improve SEO in comparison with CRA? [closed] - javascript

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
I'm new to this framework and, after completing the interactive tutorial, I have some questions about the way Next.js handles SEO issues.
Next.js's standout capability is its ability to render React components on the server side. However, as far as I can tell, it only renders the first request on the server; subsequent navigation is rendered on the client. So how does it do better than a usual CRA app in terms of being SEO friendly?
Based on that assumption, a crawler like Google hits our site and receives the first response as a full HTML page, but for the other links inside that page it would have to execute client-side rendering with JavaScript (the main Achilles heel for SEO). So there is no difference between CRA and Next.js except for the first request, right?
If the previous assumptions are correct, how can we use links inside our pages that are SEO friendly for crawlers like Google?

As you mentioned, Next.js server-renders only the first request.
Modern crawlers are able to crawl client-side apps, so in that sense there is no difference between Next.js and CRA. However, there are restrictions (such as the size of the initial download) that will likely make a crawler fall back to the "old" crawl mode: collect all the links on a page and request each of them separately. In that mode Next.js will SSR every request, while CRA won't.
In addition, many crawlers now pay attention to other signals such as performance (for example First Paint), where the SSR version will get a better score.
Nobody can say for certain that the Google bot behaves like this; it may work differently than I described in #1:
Render the first page (SSR version), collect the links, and render each linked page separately (SSR version as well).
The nice thing about the Next.js Link component is that it wraps a proper anchor and sets a correct href. That means bots can operate in the "old" way even without JavaScript enabled, and when JavaScript is enabled navigation is rendered much faster (Next.js applies several optimisations, such as preloading the next pages).
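For illustration, a minimal sketch of that behaviour (the page path is an example; in newer Next.js versions the Link component renders the anchor for you, so the nested <a> may not be needed):

import Link from "next/link";

export default function Nav() {
  return (
    <nav>
      {/* With JS enabled, Next.js intercepts the click for a fast client-side
          transition (and may preload the target page); without JS, a crawler
          simply sees and follows a normal <a href="/blog"> link. */}
      <Link href="/blog">
        <a>Blog</a>
      </Link>
    </nav>
  );
}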

Related

Javascript/jQuery best practice for web app: centralization vs. specification [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Imagine a web app that has dozens of page groups (pg1, pg2, ...), and some of these page groups need some JavaScript code specific only to them, but not to the entire app. For example, some visual fixes on window.resize() might be relevant only in pg2 but nowhere else.
Here are some possible solutions:
1/ Centralized: have one script file for the entire app that deals with all page groups. It's quite easy to check whether the relevant DOM object is present, so all irrelevant pages simply pay for a minor extra if() (a sketch of this check appears after the options below).
The biggest advantage is that all JS is loaded once for the entire web app and no modification of the HTML is needed. The disadvantage is that additional checks are added to irrelevant pages.
2/ Mixed: the centralized script checks for the existence of a page-specific function and launches it if it exists. For example, we could add
if (typeof page_specific_resize === 'function') page_specific_resize();
The specific page group in this case will have:
<script>
  function page_specific_resize() {
    // ...
  }
</script>
The advantage is that the code exists only for relevant pages and so isn't tested on every page. The disadvantage is additional size in the HTML for the entire page group. If there is more than a few lines of code, the page group could load an additional script specific to it, but then we're adding an HTTP call there to possibly save a few kilobytes in the centralized script.
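For reference, a minimal sketch of the centralized option's check, assuming jQuery and a hypothetical #pg2-visual-panel element that exists only in page group 2:

// One app-wide script: wire up behaviour only when the page group's markup exists.
$(window).on("resize", function () {
  var $panel = $("#pg2-visual-panel"); // present only in page group 2
  if ($panel.length) {
    $panel.height($(window).height() - 100); // hypothetical visual fix for pg2
  }
});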
Which is the best practice? Please comment on these solutions or suggest your own. Adding some resources to support your claims about what's better (considering performance and ease of maintenance) would be great. The most detailed answer will be selected. Thanks.
It's a bit tough to pick a best solution since this is a hypothetical scenario and we don't have numbers to crunch: which pages are loaded most often, how many there are, the total script size, etc.
That said, I didn't find the one specific answer, the Best Practice™, but there are general points people agree on.
1) Caching:
According to this:
https://developer.yahoo.com/performance/rules.html
"Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser"
2) Minification
Still according to the Yahoo link:
"[minification] improves response time performance because the size of the downloaded file is reduced"
3) HTTP Requests
It's best to reduce HTTP calls (based on community answer).
One big javascript file or multiple smaller files?
Best practice to optimize javascript loading
4) Do you need that specific script at all?
According to: https://github.com/stevekwan/best-practices/blob/master/javascript/best-practices.md
"JavaScript should be used to decorate your site with additional functionality, and should not be required for your site to be operational."
It depends on the resources you have to load, on how frequently a specific page group is loaded, and on how frequently you expect it to be requested. Is the web app a single page? What does each specific script do?
If the script loads a form, the user will not need to visit the page more than once, and they will need an internet connection to post the data later anyway.
But if it's a script to resize a page and the user has connection hiccups (for example, visiting your web app on a mobile device while riding the subway), it may be better to have the code already loaded so the user can navigate freely. According to the GitHub link I posted earlier:
"Events that get fired all the time (for example, resizing/scrolling)"
is one thing that should be optimized because it's related to performance.
Minifying all the code into one JS file that is cached early will reduce the number of requests made. Also, it may take a few seconds for a connection to be established, but only milliseconds to process a bunch of "if" statements.
However, if you have a heavy JS file for just one feature that is not the core of your app (say, this single file is almost n% of the size of all the other scripts combined), then there is no need to make the user wait for that specific feature.
"If your page is media-heavy, we recommend investigating JavaScript techniques to load media assets only when they are required."
This is the holy grail of JS and hopefully what modules in ECMAScript 6/7 will solve!
There are client-side module loaders such as JSPM, and there are now hot-swapping JS compilers for Node.js that can be used on the client, with code split into chunks via webpack.
Last year I experimented successfully with creating a simple application that loaded only the required chunk of code/file as needed at runtime.
It was a simple test project I called praetor.js on GitHub: github.com/magnumjs/praetor.js
The gh-pages-code branch shows how it works:
https://github.com/magnumjs/praetor.js/tree/gh-pages-code
In the main.js file you can see how this is done with webpackJsonp once compiled:
JS:
module.exports = {
  getPage: function (name, callback) {
    // Webpack turns each AMD-style require([...]) call into its own chunk,
    // so a page group's code is only downloaded when it is requested.
    // (No better way to do this with webpack for chunk files?)
    if (name.indexOf("demo") !== -1) {
      require(["./modules/demo/app.js"], function (require) {
        callback(name, require("./modules/demo/app.js"));
      });
    }
    // ... other page groups follow the same pattern
  }
};
To see it working live, go to http://magnumjs.github.io/praetor.js/, open the network panel, and watch the chunks being loaded at runtime as you navigate via the select menu on the left.
Hope that helps!
It is generally better to have fewer HTTP requests, but it depends on your web app.
I usually use RequireJS to include the right models and views I need in each file.
While it saves me considerable debugging time during development (because I know which files are present), in production this could be problematic given the number of requests.
So I use r.js, which is designed specifically for RequireJS, to compile my JS files when it's time for deployment.
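For reference, a minimal r.js build profile might look something like this (paths and module names are examples):

// build.js -- run with: node r.js -o build.js
({
  baseUrl: "js",                 // where the source modules live
  mainConfigFile: "js/main.js",  // reuse the requirejs.config() paths and shims
  name: "main",                  // entry module to trace dependencies from
  out: "dist/main.min.js",       // single optimized, minified bundle
  optimize: "uglify2"
})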
Hope this helps

Somebody help me to answer me why do we use script in asp.net? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
When I use ASP.NET to build a website, ASP.NET on the server calls SQL Server (via ADO.NET, LINQ, or Entity Framework), gets the data back, and sends it to the client side. I use controls such as GridView to show the data. I also do web optimization (SQL Server: stored procedures, indexes, partitioning for faster data access; website: a simple UI without too many effects).
But why, on the client side, when the server returns data, do so many people use scripts (such as JavaScript, jQuery, Node.js, AngularJS, Bootstrap, React, google o/i...) to show it in the web page?
So, is it slower or faster than using a GridView?
And since browsers let users stop scripts on the client side, why do we use them (*.js) when a user can simply stop the script?
Even many people who use ASP.NET (the new 2013 version) on the server also use scripts there. So is ASP.NET plus scripts faster or slower than ASP.NET alone?
Please help me answer.
(I apologize, my English is not good.)
Thank you so much.
In the early years of web development, JavaScript on the client side provided considerable enhancement of the client's "user experience" that static HTML delivered from the server did not. This includes things such as enabling or disabling certain interface features based on user input, or showing and hiding certain regions of a display based on user input or a combination of other pieces of data.
As web development evolved, the need for even more robust client-side interaction with back-end web servers became evident, and the "frameworks" you mention all work in various ways to improve the design, responsiveness, and behavior of a web-based application beyond just enabling or disabling a button. This amounts to complex data binding, callbacks to web services, fewer server round-trips, and rich client interfaces, to name only a few.
They're all tools, each with their own role, each working to make web applications a bit more robust than those of the generation before them.
If I understand your question right, the answer comes down to speed and preference.
Firstly, if you disable client-side JavaScript, your ASP.NET controls aren't really going to work anyway. You'll find few places that still disable it, so it's not really a concern people have anymore.
Secondly, it comes down to where you want to focus development effort and what kind of developers you have. If you have a lot of people used to working on the back end (C#) who want to stay there, then using ASP.NET controls and the like makes development easier.
If you have JavaScript developers, or people who want to use it, then you have more options that let you decouple your server-side code from your front-end code. This can work out well for maintenance purposes.
The real point is that if you can use AJAX (http://www.w3schools.com/ajax/default.asp) within your web application, you can make it a lot more responsive. ASP.NET controls often cause your page to refresh and trigger unnecessary server-side computation to fetch the data and re-render the entire page (or a partial page with ASP.NET MVC). Using newer technologies like Angular and the others you listed, you can focus computation and network traffic only on what's important.
For example, if a table needs to change what data is loaded, you can make an AJAX request for JUST the data you need and then render only that portion on the client.
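A minimal sketch of that idea with jQuery (the /api/orders endpoint and the #orders-table markup are made up for illustration):

// Refresh only the table body via AJAX instead of doing a full postback.
$("#status-filter").on("change", function () {
  $.getJSON("/api/orders", { status: $(this).val() }, function (rows) {
    var html = rows.map(function (r) {
      return "<tr><td>" + r.id + "</td><td>" + r.customer + "</td></tr>";
    }).join("");
    $("#orders-table tbody").html(html); // only this fragment is re-rendered
  });
});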
First of all, every "script" you mentioned (jQuery, AngularJS, Bootstrap, React) is a library written in JavaScript. Except node, which isn't even front-end. And I'm not sure what did you mean by google o/i... JavaScript is currently the only language which works in all browsers.
Initial purpose of JavaScript was to check form values before sending data to servers. It quickly evolved past that, although the usage was throttled by browser adoption, which is still a problem today.
Nowadays we can use JavaScript to render whole webpages. First when opening the page, it can help with rendering, meaning that server doesn't have to do all the work, but can just send plain data, usually in JSON. It's also used to add content to page later, without reloading the page (AJAX). Most well-known examples are real-time chat systems, like the one on facebook. This greatly improves user experience, I can't imagine how terrible it would be if whole page would reload to display a single new message.
Although user can disable JavaScript in their browsers and this would mean the page probably won't work, except if there is fallback design for such cases, I do not know why would someone disable it. And to be honest, probably most of the regular users don't even know this can be done and where is the setting to disable JavaScript.

Is there an easy way to make Javascript apps SEO friendly? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I've been looking at using a new workflow for web development. Yeoman, Grunt, and Bower with AngularJS seem like a great solution for front-end development. The only downside is that the SEO is absolutely horrible. This seems like a HUGE component of the business decision driving adoption of these tools, yet I can't find any solutions.
What's a solid solution for making SEO-friendly javascript apps?
The current standard practice for making AJAX-heavy sites/apps SEO friendly is to use snapshots. See the Google tutorials on this here: https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot and here: https://developers.google.com/webmasters/ajax-crawling/docs/specification
To summarize, you add the tag <meta name="fragment" content="!"> to your DOM. The crawler will see this and redirect itself from www.example.com to www.example.com?_escaped_fragment_=, where it will expect a snapshot of the page.
You could manually copy the HTML from your site after all the AJAX has finished and create the snapshot files yourself, but that could be quite a nuisance. Instead, you can use PhantomJS to automate the process. Personally, I am going to use .htaccess to send the _escaped_fragment_ requests to a single PHP file that serves cached markup created by the content manager when the edits were made. This lets it recreate the markup for crawlers to view (but with no functionality for humans).
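A minimal sketch of the PhantomJS approach might look like this (the URL, output path, and fixed two-second wait are assumptions; a real script would detect when the AJAX calls have finished):

// snapshot.js -- run with: phantomjs snapshot.js
var page = require("webpage").create();
var fs = require("fs");

page.open("http://www.example.com/#!/about", function (status) {
  if (status !== "success") { phantom.exit(1); }
  // crude wait for the app's AJAX calls and rendering to finish
  setTimeout(function () {
    fs.write("snapshots/about.html", page.content, "w");
    phantom.exit();
  }, 2000);
});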
Here's a relevant piece of info from Debunking 10 common KnockoutJS myths. I assume it applies more or less equally to Angular.
Graceful degradation in the absence of JavaScript depends on the way your application has been architected. Although KO, being a pure JavaScript library, does not offer any support for graceful degradation in the absence of JavaScript, unlike many of the competing technologies it does not hinder it either.
To create a KO application that degrades gracefully, just ensure that the initial state of the page rendered by the server suffices to convey the information a user should see in the absence of JavaScript. Fallback mechanisms (e.g. simple forms and links) should be available that provide the complete (or partial) application functionality in the absence of JavaScript. Then, when you create your view models, you can instantiate them from the data already available in the DOM, and future data can be loaded via AJAX without refreshing the page.
A good example of this is a grid. The basic HTML page served by the server can contain a simple HTML table with traditional links for pagination. You can then create your view models from the data present in the table (or via AJAX, if a bit of redundant data load doesn't matter to you) and use KO for the interactive bindings.
Since KO does not use special inline markup or custom HTML tags, but rather simple data-bind attributes which are not visible in the absence of JavaScript anyway, it does not hinder graceful degradation.
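To make that grid example concrete, here is a rough sketch (assuming jQuery and Knockout are loaded; the #products table and its columns are hypothetical). The server renders a plain HTML table, and the view model is seeded from the rows already in the DOM:

// Build the view model from the server-rendered table, then bind KO to it.
function GridViewModel() {
  var self = this;
  self.rows = ko.observableArray(
    $("#products tbody tr").map(function () {
      var cells = $(this).children("td");
      return { name: cells.eq(0).text(), price: cells.eq(1).text() };
    }).get()
  );
  // later pages can be appended via AJAX without a full page refresh
}
ko.applyBindings(new GridViewModel(), document.getElementById("products"));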

How might I block an IP address using JavaScript or jQuery? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I use the www.Spruz.com network for my website. I am not looking to use PHP or SSL blocking methods, because as things stand I can't. I can't seem to find JavaScript or jQuery code to hide a DIV element or redirect without having to use PHP or SSL methods. I am getting hit by foreign spammers/advertisers who are hitting up my messengers and whatnot. I need to block a few IPs but I am lost. How might I achieve my goal?
I suggest you use server-side scripting for this. I think you are misunderstanding what JavaScript actually does: it is a client-side scripting language, which means it runs on the client machine. So although you can hide the div, simply changing the CSS properties will reveal everything (a regular user wouldn't do that, but you can't say anything about a malicious user).
Ideally you'd get access to PHP or whatever server-side language is available so you could run IP blocks from there. If you can't, you can try the solution here to get the IP via JavaScript:
How to get client's IP address using javascript only?
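For completeness, a rough sketch of that client-side approach (using the public ipify service as an assumption; the blocked addresses below are placeholders from the documentation ranges). Keep in mind it only hides content cosmetically and is trivial to bypass:

var blockedIps = ["203.0.113.7", "198.51.100.23"]; // example addresses

$.getJSON("https://api.ipify.org?format=json", function (data) {
  if (blockedIps.indexOf(data.ip) !== -1) {
    $("#messenger").hide(); // or redirect: window.location.href = "/blocked.html";
  }
});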
But the best solution is probably to rebuild whatever form is being abused with either reCAPTCHA or some other spam blocking script.
Before trying to figure this out, just for grins I fed "spruz web blocking" into Google. The very first hit was http://my.spruz.com/forums/?page=post&id=9B8BE09C-1D18-4C9D-8510-C1D3035BDA44&lastp=1, which looks like it explains in detail exactly how to do what you want on the server side. I haven't actually tested it or investigated in detail whether it is implemented in what I would consider a correct way, but it seems worth a shot.
As others have expressed, client-side blocking won't do more than slightly annoy serious spammers. The logic is to (a) send them the requested pages and then (b) have the pages recognize they're inside a hostile environment and self-destruct. Once the spammers already have the pages (a), they can take a quick snapshot before the pages self-destruct (b) and do whatever they wish. (In fact they'd likely use something like 'wget' to fetch the raw page and skip page execution and display altogether, no matter what you do.) Correctly coded server-side solutions, on the other hand, never send the pages to that IP address in the first place, so the only thing spammers can do is try to fool you by pretending to be somebody else.
(There's a pretty standard way to do this sort of thing pretty easily with an 'apache' web server, but I see that's not relevant on spruz.)

Enterprise JavaScript Tips [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm currently writing an article covering tips, tricks, and best practices for working with JavaScript in an enterprise environment. "Enterprise" can be a bit ambiguous, so for the purposes of this article we will define it as: supporting multiple web-based applications within a network that is not necessarily connected to the Internet.
Here are just a few of the thoughts I've had, to get your creative juices flowing:
Ensure all libraries are maintained in a central, web-accessible location and that all applications reference those libraries (rather than maintaining independent copies).
Reference libraries by version, guaranteeing new releases won't break your applications (no jquery-latest, use jquery-#.#.# instead).
Proper namespacing of application code
What tips can you provide to help me out?
Test your JavaScript on the largest DOM size possible. IE6/7/8 will hang based on the number of executed VM statements rather than actual run time. Regexes and regex-based jQuery selectors are particularly bad.
Write less. JavaScript in particular becomes very hard to manage and debug beyond a certain size. Breaking functionality up into different external source files can help, but always consider a better method for what you're doing (for example, a jQuery plugin).
If you're writing a common pattern over and over, STOP. Either create a global method, or, if the method acts on a jQuery selector, consider writing your own jQuery plugin instead (see the sketch after these tips).
Don't make methods take DOM objects or IDs. Pass in the jQuery object itself and operate on that. This way you don't force arbitrary DOM constraints on your method (the object passed in might not even be in the DOM yet, or it might not have an ID).
Don't modify prototypes. This breaks libraries/jQuery. Write a plugin or a new datatype if you have to.
Don't modify libraries; this breaks upgradability. You can often achieve a similar effect by wrapping the jQuery library with your own plugin and forwarding/intercepting calls, somewhat like AOP.
Don't have code execute while the DOM is still loading. This leads to race conditions that you'll only catch on the machines where the breakage occurs, and even then it won't be consistent.
Don't style the page with jQuery. It's tempting, but FOUC gets worse as the DOM grows. Build .first-child, .last-child, etc. into your server-rendered pages instead of hacking them in with jQuery.
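As a concrete illustration of the plugin advice above, a minimal jQuery plugin sketch (the plugin name and behaviour are hypothetical):

(function ($) {
  // Highlight empty required fields; operates on whatever jQuery set it is called on.
  $.fn.highlightRequired = function (options) {
    var settings = $.extend({ color: "#fee" }, options);
    return this.each(function () {
      var $el = $(this);
      if (!$.trim($el.val())) {
        $el.css("background-color", settings.color);
      }
    });
  };
}(jQuery));

// Usage: $("form :input").highlightRequired({ color: "#fdd" });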
Maybe I'll come back and add more, but for now I have just a few things in mind:
1) Caching strategies. Enterprise servers are heavily loaded; to serve HTTP requests it is important to know how you can deal with that. For example, JS can be cached on the client side, but you should know how to 'tell' a client that a new version is available.
2) There are libraries that reduce the number of requests for JS files by concatenating them (based on configuration). For Java, one example is Jawr. It's better to load one to three scripts (read: files) instead of 100 (a number that is becoming normal today, in the era of RIAs). Another nice trick Jawr does is create gzipped bundles, so when a client asks for a script the server does not need to zip it on the fly.
3) Your business logic can be processed by an application server (JBoss, GlassFish, etc., when we talk about Java), but JavaScript is static, so it can be served by an HTTP server (such as Apache, or better yet lighttpd or nginx). Again, this reduces load on the application server (critical for the enterprise).
4) Libraries like jQuery can be loaded from the Google CDN (or any other reliable source); see the fallback sketch after this list.
5) Use YSlow, PageSpeed, or dynaTrace AJAX to check performance and get ideas for improvements.
6) Try mod_pagespeed; it can replace Jawr or complement it nicely.
7) Another technique in use today is on-demand JavaScript loading.
8) Offline storage.
Well, although you've specified the topics you are interested in, the area still looks unlimited...
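A common pattern for point 4 is to load jQuery from the Google CDN with a local fallback (the version number and local path are examples):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
  // If the CDN request failed, fall back to a locally hosted copy.
  window.jQuery || document.write('<script src="/js/jquery-1.12.4.min.js"><\/script>');
</script>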
