Getting a React SPA spidered without using Node.js

I'm building a React-based website that others will be deploying. It takes the form of a single-page app with URL routing /#like=this, and the final websites will be content-rich. All of the content needs to be visible to search engine bots. Is there a way to do this (even a hacky one) that doesn't require isomorphic server-side rendering? In particular, I can't expect the end users to be able to serve pages with Node/Express.

Is there a way to do this (even a hacky one) that doesn't require isomorphic server-side rendering?
No
In particular, I can't expect the end users to be able to serve pages with Node/Express.
Node.js has nothing to do with this. The client does not know where the content is rendered, so any server-side stack can do the rendering.

Related

Will a separate backend server with Next.js frontend affect the benefits of Next.js (SSR, SEO friendly, etc)?

Currently I have a backend Express server that is used by our mobile apps (iOS and Android). I want to use Next.js for the web app version of the same application, for the benefits it's been praised for (SSR, SEO, etc.). I don't want to rewrite our current backend API into the Next.js server; I just want to use Next.js as the frontend while it sends requests to our current server (ideally I want to keep the backend code in a single server). I've done a bit of digging and it's entirely possible, but will this render the benefits of Next.js useless, so that I might as well just use CRA?
I want an application that is as SEO-friendly and efficient for users as possible, and I'm wondering which option is best in this case.
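For context, a minimal sketch of how a Next.js page can keep SSR while pulling its data from a separate backend; the route, API URL, and data shape here are assumptions for illustration, not from the question:

```js
// pages/posts/[id].js — sketch assuming the Next.js pages router and an
// existing Express API at api.example.com (hypothetical URL).
export async function getServerSideProps({ params }) {
  // Runs on the Next.js server on every request; the data still lives on
  // the separate backend, so the SSR/SEO benefits are preserved.
  const res = await fetch(`https://api.example.com/posts/${params.id}`);
  const post = await res.json();
  return { props: { post } };
}

export default function Post({ post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
    </article>
  );
}
```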

Server-side rendering with Next.js vs traditional SSR

I am very used to the approach where SSR meant that the page got a full refresh and received full HTML from the server, where it was rendered with Razor/Pug/other depending on the backend stack. So every time the user clicked a navigation link, a request would be sent to the server and the whole page would refresh, receiving new HTML. That is the traditional SSR which I understand.
With a SPA, however, we have for example React or Angular, where we receive almost empty HTML at the beginning and then the JS, so that the whole app gets initialized on the client side. We can then use some REST API to get JSON data and render views on the frontend (client-side routing and rendering) without any page refresh. We don't even really need a server.
Now, what I have a problem understanding is how SSR (such as Next.js) works with React.
From what I am reading, the first request returns full HTML+CSS (which helps with SEO etc., I get that), but what happens later? What happens after that first/initial request? Does the whole React app initialize in the browser and then just behave EXACTLY as if it were a normal SPA (meaning we have client-side routing and rendering from then on, without any need to make requests to that server)? In other words, does Next.js still make any server requests after the initial one, or does it act like a typical SPA built with CRA from then on?
I spent lots of time reading, but all the articles mainly focus on the initial request, SEO, time to first byte, first paint, etc., and I am simply trying to understand why it's called SSR, since it seems to work differently from the traditional SSR I described at the beginning.
does Next.js still make any server requests after the initial one, or does it act like a typical SPA built with CRA from then on?
You got it right. The first (initial) request is handled by the server and after that the frontend handles the routing (at least in the case of Next.js).
If you want to see an example OpenCollective is built with Next.js. Try playing around with it and see the Network tab in the DevTools.
I am simply trying to understand why it's called SSR, since it seems to work differently from the traditional SSR I described at the beginning.
It is called SSR because the app is effectively being rendered on the server. The fact that frontend routing takes over after the initial render doesn't change the fact that the server did the work of rendering the app, as opposed to the user's machine.
That's not all that happens with Next.js, though: in Next.js you can build something called hybrid apps.
In traditional SSR, all of your client requests are handled by the server. Every request goes to the server and gets a response.
In classic CSR with something like React, as you said, everything happens in the browser via client-side JavaScript.
But in Next.js you can choose between three different approaches (mainly two, according to the docs) for how pages get delivered.
Based on the app's needs and requirements, you can serve some pages in pure traditional SSR mode, some in classic CSR mode, and some in SSR mode with dynamic data that is fetched and rendered into the pages on the fly.
These features bring a lot of flexibility for designing a web app that behaves well in every scenario it has to handle (see the sketch below).
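To make the three modes concrete, here is a minimal sketch of per-page rendering in the Next.js pages router; the three snippets are three separate files shown in one block, and the file names, URLs, and data shapes are illustrative assumptions:

```js
// pages/about.js — static generation (SSG): HTML produced once at build time
export async function getStaticProps() {
  return { props: { builtAt: new Date().toISOString() } };
}
export default function About({ builtAt }) {
  return <p>Built at {builtAt}</p>;
}

// pages/news.js — traditional SSR: HTML produced on every request
export async function getServerSideProps() {
  const res = await fetch('https://api.example.com/news'); // hypothetical API
  return { props: { items: await res.json() } };
}
export default function News({ items }) {
  return <ul>{items.map(i => <li key={i.id}>{i.title}</li>)}</ul>;
}

// pages/dashboard.js — classic CSR: data fetched in the browser after hydration
import { useEffect, useState } from 'react';
export default function Dashboard() {
  const [stats, setStats] = useState(null);
  useEffect(() => {
    fetch('/api/stats').then(r => r.json()).then(setStats);
  }, []);
  return stats ? <pre>{JSON.stringify(stats)}</pre> : <p>Loading...</p>;
}
```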

Control of dynamically loaded scripts in Meteor.js

Would there be any mechanism (at least a theoretical one) which would allow controlling which scripts are provided to a client? I have split the code into dynamically loadable parts using import('dynamically_loadable_file'), but whenever it is called on a client, the file is served. I'd like to perform a security check on whether the user has permission to load the file. I thought of middleware, but that only applies to HTTP, and executable scripts are served through WebSockets.
Also, if it were possible, I would like to control the content of the provided scripts. E.g. I'd like to add or "hide" some functions or variables in the script based on the user loading it. I guess something like dynamic compilation using an AST would be required, or maybe there is/would be something else available. I guess that's another level, but if there is any material available on such ideas I'd be thankful.
Maybe it is not possible with Meteor at all, so if this is possible anywhere in the JavaScript (Node.js) world, that would help too.
Thanks for ideas and explanations.
Most client-side protection mechanisms can be circumvented with enough knowledge and the right tools.
The most viable solution to your problem would be to use a server-side rendering (SSR) library for your current front-end engine.
With SSR you would address every one of these points:
"control which scripts are provided to a client"
"perform a security check on whether the user has permission to load the file"
"scripts are served through WebSockets"
"control the content of the provided scripts"
"add or 'hide' some functions or variables in the script based on the user loading it"
This is because all your templates are rendered on the server and only the resulting HTML is returned to the client.
Some ssr packages for Meteor:
Generic: https://docs.meteor.com/packages/server-render.html
React: https://www.chrisvisser.io/meteor/how-to-set-up-meteor-react-with-ssr (guide with link to a boilerplate repo)
Vue: https://github.com/meteor-vue/vue-meteor/tree/master/packages/vue-ssr
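For the generic package, a minimal sketch of what server rendering looks like with React; the App component path and the 'app' element id are assumptions:

```js
// server/main.js — sketch using Meteor's server-render package with React.
import React from 'react';
import { renderToString } from 'react-dom/server';
import { onPageLoad } from 'meteor/server-render';
import { App } from '/imports/ui/App'; // hypothetical component

onPageLoad(sink => {
  // The HTML is produced on the server; the client only receives markup.
  sink.renderIntoElementById('app', renderToString(<App />));
});
```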
The native Meteor Way
Besides this, I would like to emphasize that you can achieve most data handling through Publications and Methods.
Showing / hiding HTML elements on the client does not add any security if your data and logic are not secured on the server.
If you only publish the right data to the right user (for example using alanning:roles) then it does not matter which scripts you load.
The same goes for Methods: if you are very strict about who can call a Method (again using alanning:roles), then it does not matter if a user disables the Router and sees all of the "hidden" areas on the client, because all invalid actions are rejected server-side.
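To illustrate both patterns, a minimal sketch with alanning:roles; the SecretDocs collection and the 'editor' role are made-up names for this example:

```js
// server/secretDocs.js — sketch: data and actions guarded on the server.
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Roles } from 'meteor/alanning:roles';
import { SecretDocs } from '/imports/api/secretDocs'; // hypothetical collection

// Only users with the 'editor' role ever receive these documents.
Meteor.publish('secretDocs', function () {
  if (!this.userId || !Roles.userIsInRole(this.userId, 'editor')) {
    return this.ready(); // publish nothing instead of throwing
  }
  return SecretDocs.find({});
});

// The Method rejects invalid callers no matter what the client UI shows.
Meteor.methods({
  'secretDocs.remove'(docId) {
    check(docId, String);
    if (!Roles.userIsInRole(this.userId, 'editor')) {
      throw new Meteor.Error('not-authorized');
    }
    SecretDocs.remove(docId);
  },
});
```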

Single Page Application Web crawlers and SEO

I have created my blog as a single-page application using the Mithril framework on the front end. To make queries I've used a REST API with Django on the backend. Since everything is rendered using JavaScript, when crawlers hit my blog all they see is an empty page. To add to that, whenever I share a post on social media, all Facebook sees is just an empty page and not the post content and title.
I was thinking of looking at the user agents, and whenever the USER-AGENT is from a crawler I would feed it the rendered version of the pages, but I'm having problems implementing the method described above.
What is the best practice to make a single-page app that uses a REST API with Django on the backend SEO-friendly for web crawlers?
I'm doing this on a project right now, and I would really recommend doing it with Node instead of Python, like this:
https://isomorphic-mithril.mvlabs.it/en/
You might want to look into server-side rendering of the pages that crawlers visit.
Here is a good article on Client Side vs Server Side
I haven't heard of Mithril before, but you might find some plugins that do this for you.
https://github.com/MithrilJS/mithril-node-render
This might help you: https://github.com/sharjeel619/SPA-SEO
The above example is made with Node/Express, but you can use the same logic with your Django server.
Logic
1. A browser requests your single-page application from the server, which is loaded from a single index.html file.
2. You program some intermediary server code which intercepts the client request and differentiates whether the request came from a browser or some social crawler bot.
3. If the request came from a crawler bot, make an API call to your back-end server, gather the data you need, fill that data into HTML meta tags, and return those tags in string format back to the client.
4. If the request didn't come from a crawler bot, simply return the index.html file from the build or dist folder of your single-page application.
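A minimal Express sketch of that logic; the bot regex, API URL, and route are illustrative assumptions rather than code from the linked repo:

```js
// server.js — serve rendered meta tags to crawlers, the SPA shell to browsers.
// Assumes Node 18+ for the global fetch.
const express = require('express');
const path = require('path');
const app = express();

const BOT_UA = /googlebot|bingbot|facebookexternalhit|twitterbot|linkedinbot/i;

app.get('/post/:slug', async (req, res) => {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    // Crawler: pull the post from the backend API (hypothetical URL)
    const apiRes = await fetch(`https://api.example.com/posts/${req.params.slug}`);
    const post = await apiRes.json();
    res.send(`<!doctype html><html><head>
      <title>${post.title}</title>
      <meta property="og:title" content="${post.title}" />
      <meta property="og:description" content="${post.summary}" />
      </head><body>${post.body}</body></html>`);
  } else {
    // Browser: serve the SPA shell and let client-side JS render the page
    res.sendFile(path.join(__dirname, 'dist', 'index.html'));
  }
});

app.listen(3000);
```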

Are there any role-based declarative authorization frameworks for rich browser-based JavaScript web apps?

Before I say anything else: yes, I know authorization needs to be done on the server-side. Even so, the client-side app still has to hide the GUI elements that don't apply to the logged-in user.
Having said that, here's the question: are there any role-based declarative authorization frameworks for rich browser-based JavaScript web apps?
Background: with browser-based JavaScript web apps (GWT, ExtJS, etc) there is generally no need for dynamic server-side view generation since the entire app can be downloaded as static files and all view transitions are made in the browser. Implementing role-based security therefore either requires dynamic GUI customization after the user logs in and the app loads, or dynamic generation of the app's files on the server-side after login, before anything is downloaded to the client (could be built and cached too). This SO question talks about these approaches: How to use Ext JS for role based application
Thanks!
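For illustration only, a minimal sketch of the first approach (dynamic GUI customization after login) in React; the component, user shape, and role names are hypothetical, and as noted above the server must still enforce the real authorization:

```js
// RequireRole.js — declaratively hide UI elements a user's roles don't allow.
import React from 'react';

// Renders children only when the user holds at least one required role.
export function RequireRole({ user, roles, children }) {
  return roles.some(r => user.roles.includes(r)) ? children : null;
}

// Usage: the admin button never mounts for non-admins.
export function Toolbar({ user }) {
  return (
    <nav>
      <button>Home</button>
      <RequireRole user={user} roles={['admin']}>
        <button>Admin panel</button>
      </RequireRole>
    </nav>
  );
}
```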
