React app says 404 Not Found on reload/refresh - javascript

I'm facing an issue where, when I reload/refresh my app, it shows a 404 error (note that this happens only in production, not in development mode).
I tried it with Vite and ReactJS.
How do I resolve this React 404 error?

This will vary depending on the hosting service; Nginx or Apache must be configured. If you have access to your server's configuration, add the following to your domain's config file:
location / {
    try_files $uri /index.html;
}
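For Apache, a .htaccess file in the site root gives the same fallback; a minimal sketch, assuming mod_rewrite is enabled:
RewriteEngine On
RewriteBase /
RewriteRule ^index\.html$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.html [L]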
If you are using Netlify, add a netlify.toml and paste in the code below:
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200
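Alternatively, Netlify also picks up a plain _redirects file in the publish directory with the equivalent rule (a sketch of the same fallback):
/*  /index.html  200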


Nodejs: How to reverse proxy + TLS certificates with Caddy (Cloudflare)?

This is my first time deploying Node.js from localhost to a live server. I am using aaPanel for my live server.
Here is the relevant code in node server.js file:
const hostname = 'localhost';
// const hostname = 'www.thespacebar.io';

// set port, listen for requests
const PORT = process.env.PORT || 8080;
app.listen(PORT, hostname, () => {
  console.log(`Server is running on port ${PORT}.`);
});
Here is my pm2 settings:
I am unable to open my nodejs app with GET https://www.thespacebar.io:8080, but it works for GET http://www.thespacebar.io:8080
GET https://www.thespacebar.io:8080 does not work with error:
This site can’t provide a secure connection
ERR_SSL_PROTOCOL_ERROR
Anyone know what I did wrong?
EDIT: I have installed Caddy and set up the Caddyfile in /etc/caddy like this:
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
import ./thespacebar.io
:80 {
    # Set this path to your site's directory.
    root * /usr/share/caddy
    # Enable the static file server.
    file_server
    # Another common task is to set up a reverse proxy:
    # reverse_proxy localhost:8080
    # Or serve a PHP site through php-fpm:
    # php_fastcgi localhost:9000
}
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
and created the adjacent file thespacebar.io:
thespacebar.io {
reverse_proxy localhost:8080
}
but when I visit https://thespacebar.io/, I end up at index.html instead of the JSON { message: "Welcome to bezkoder application." }
and POST http://www.thespacebar.io/api/verification/callback with body param verify_token:abcde is supposed to show the JSON:
{
"message": "Callback called successfully."
}
instead of 404 Not Found
EDIT 2: I have removed the portion:
# :80 {
# Set this path to your site's directory.
# root * /usr/share/caddy
# Enable the static file server.
# file_server
# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080
# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000
# }
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
from /etc/caddy/Caddyfile, but when I run caddy run Caddyfile and caddy reload Caddyfile, I get these errors:
[root@vultrguest caddy]# caddy run Caddyfile
2022/12/02 08:11:44.132 INFO using adjacent Caddyfile
2022/12/02 08:11:44.132 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12}
2022/12/02 08:11:44.133 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"]}
2022/12/02 08:11:44.133 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2022/12/02 08:11:44.133 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/12/02 08:11:44.133 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000151030"}
2022/12/02 08:11:44.133 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc000151030"}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use
[root@vultrguest caddy]# caddy reload Caddyfile
2022/12/02 08:11:49.875 INFO using adjacent Caddyfile
2022/12/02 08:11:49.876 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12}
Error: sending configuration to instance: performing request: Post "http://localhost:2019/load": dial tcp [::1]:2019: connect: connection refused
[root@vultrguest caddy]#
If I run GET http://www.thespacebar.io:8080 I get:
Web server is down Error code 521
Visit cloudflare.com for more information.
2022-12-02 08:22:13 UTC
EDIT 3: The site I am trying to set up the reverse proxy for is using Cloudflare, so I have modified my Caddyfile to:
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
# import ./thespacebar.io
# cloudflare
(cf) {
    tls {
        resolvers 1.1.1.1
        dns cloudflare [cf-token-goes-here]
    }
}
and:
thespacebar.io {
    import cf
    reverse_proxy localhost:8080
}
but when I cd to /etc/caddy and run `caddy run Caddyfile`, I get this error:
`Error: adapting config using caddyfile: parsing caddyfile tokens for 'tls': Caddyfile:17 - Error during parsing: getting module named 'dns.providers.cloudflare': module not registered: dns.providers.cloudflare`
All the tutorials for adding this module (dns.providers.cloudflare) are for xcaddy and not caddy. How do I add this module to caddy?
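For what it's worth, one common way to get that module is to build a caddy binary that includes it; a minimal sketch, assuming Go and xcaddy are installed:
xcaddy build --with github.com/caddy-dns/cloudflare
# replace the packaged binary with the custom build (path may vary)
sudo mv ./caddy /usr/bin/caddy
caddy list-modules | grep cloudflare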
Caddy is simple to set up as a reverse proxy, and it gets Let's Encrypt SSL certs for you with minimal fuss:
{
    email example@email.com
}
thespacebar.io {
    reverse_proxy localhost:8080
}
I see you've posted an update - the one thing I would remove is
:80 {
.....
}
If you read the text you posted, it does say to replace :80 with your domain (but don't add :80, or Caddy won't get the certificate for the domain).
I also see you haven't set up a global section with an email address - I'm fairly sure that needs to be there (don't quote me on that) for Let's Encrypt to work - at least it used to when I first started using Caddy.
Here is some pseudo-code for a generic Caddyfile for Caddy v2.
This config adds basic security headers and CORS to the response, and proxies through to a process on localhost port 9883.
If you have a DNS record for your server, it will set up the Let's Encrypt certs for you and renew them when required.
See Caddy snippets: https://caddyserver.com/docs/caddyfile/concepts#snippets
# begin common code block snippet to be imported into the server block,
# for example here we set common security headers
(common) {
    header /* {
        -Server
        -X-Powered-By
        +X-Permitted-Cross-Domain-Policies none
        +X-Frame-Options DENY
        +X-Content-Type-Options nosniff
        +Strict-Transport-Security "max-age=63072000 includeSubDomains preload"
        +Referrer-Policy no-referrer
    }
}
# cors snippet
(cors) {
    @cors_preflight method OPTIONS
    # "{args.0}" is an input value used when calling the snippet
    @cors header Origin "{args.0}"
    handle @cors_preflight {
        header Access-Control-Allow-Origin "{args.0}"
        header Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE"
        header Access-Control-Allow-Headers "Content-Type"
        header Access-Control-Max-Age "3600"
        respond "" 204
    }
}
# main server block
# dns record for server is myserver.edu
myserver.edu {
    # import common sec headers snippet
    import common
    # import cors snippet passing the server name parameter (wildcard cors is poor security)
    import cors myserver.edu
    # proxy redirect, see the handle_path directive:
    # https://caddyserver.com/docs/caddyfile/directives/handle_path
    handle_path /somepath/* {
        reverse_proxy localhost:9883 {
            header_up X-Real-IP {remote_host}
            # caddy adds X-Forwarded-For for you, so this one is not needed
            #header_up X-Forwarded-For {remote_host}
            header_down Content-Security-Policy "media-src blob:"
        }
    }
}
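If it helps, the Caddyfile can be checked and applied in place with Caddy's own subcommands; a sketch, assuming the file lives at /etc/caddy/Caddyfile and Caddy is already running as a service:
caddy fmt --overwrite /etc/caddy/Caddyfile
caddy validate --config /etc/caddy/Caddyfile
caddy reload --config /etc/caddy/Caddyfile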

Django Compressor made the CACHE directory under uwsgi, not nginx

I am using Django Compressor with nginx & uwsgi.
I have a separate Docker container for each of nginx and uwsgi.
I copied the static folder to nginx:/static and everything else to uwsgi:/myapp/ in advance.
However, the compressor generates the compressed file dynamically and puts it under uwsgi:/myapp/static/CACHE in the uwsgi container:
<script src="/static/CACHE/js/output.b6723c2174c0.js">
Consequently this file returns 404 Not Found, because the request is routed to nginx:/static, not uwsgi:/myapp/static.
How does anyone solve this problem?
My nginx config is below:
server {
    listen 8000;
    server_name 127.0.0.1;
    charset utf-8;

    location /static {
        alias /static;
    }

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://uwsgi:8001/;
        include /etc/nginx/uwsgi_params;
    }
}
It's about troubleshooting. There can be different issues.
A similar issue is reported here: django-compressor and nginx: 404 with compressed files.
Having the same issue, I found that in settings.py COMPRESS_ROOT was set to a path that is not served by the Nginx configuration.
Removing COMPRESS_ROOT fixed it. Refer to: Django Compressor Quickstart
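A minimal sketch of the settings involved (paths are illustrative; django-compressor's COMPRESS_ROOT defaults to STATIC_ROOT when it is not set):
# settings.py (sketch)
STATIC_URL = "/static/"
STATIC_ROOT = "/static"  # the directory nginx serves via `alias /static;`

# COMPRESS_ROOT defaults to STATIC_ROOT, so leaving it unset keeps the
# generated CACHE/ output under the directory nginx already serves.
# A value pointing somewhere nginx does not serve is what produced the 404s:
# COMPRESS_ROOT = "/myapp/static"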

Nginx not loading the second angular app

Being new to Angular 4, Node.js and Nginx, I'm having trouble configuring Nginx to serve two different Angular apps. My problem is the following:
I generated two Angular 4 apps with the Angular CLI and then modified them. Once done, I ran ng build on both of them to get the built files in the dist folder. The first app is the login page, which does JWT authentication against a Node.js backend. Once the auth is done it should redirect the user to the second app, which is the actual website.
I use window.location.href='http://localhost:8080/app' in the login app to redirect to the second one.
Now since the second app is supposed to check the Token, it is stored in the browser's localStorage.
In order to keep this token, both the apps must be on the same address and port. Therefore I want to use NginX as a reverse proxy.
I installed Nginx and put the files of both apps in /var/www/html/angular/login/ and /var/www/html/angular/app/ respectively.
My Nginx sites-available/default file is the following:
# Default server configuration
server {
    listen 8080 default_server;
    #listen [::]:8080 default_server;
    index index.html;
    server_name localhost:8080/;

    location / {
        alias /var/www/html/angular/login/;
        try_files $uri $uri/ /index.html;
    }

    location /app {
        alias /var/www/html/angular/app;
        try_files $uri $uri/ /index.html;
    }
}
Now what's happening is that when I go to localhost:8080 I get my login app just as I should. When I authenticate, the URL changes to localhost:8080/app as it should and the name of the app on the tab changes, but the content of the app isn't loaded. It stays on the login page even though it seems to fetch the second app's files...
It looks as if I had put the files of the login app in the /var/www/html/angular/app/ folder, even though I didn't.
Any idea where it comes from?
The apps were tested before using ng serve on 2 different ports but I couldn't keep the token since localStorage is attached to a domain and port.

Azure web app throwing error when referring to non-existent path

Is there a reason why my web app inside an Azure web app is returning a 404 error saying:
Failed to load resource: the server responded with a status of 404 ()
The path for the file is: https://WEB-APP-NAME-HERE.azurewebsites.net/resources/demos/style.css
If I look into my project files and folders, I do not have a resources folder, nor a demos folder, and certainly no style.css file.
Can this message be turned off inside portal.azure.net? Or can this path be deleted so that Azure / the web app doesn't check it anymore?
Thanks in advance.
Have you searched your actual code for a reference to that file?
Or would it previously have been there and a bot is trying to re-read it?

Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH

What does this error message mean and how do I resolve it? It is from the console of Google Chrome v33.0 on Windows 7.
Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH http://and.img.url/here.png
I'm trying to change the images' src attribute using jQuery. For example like this (simplified):
$('.image-prld').attr('src', someDynamicValue);
There are about 30 images on the page, and the above error happens for random images every time I reload the page. Sometimes it works fine for all the images, without any error.
When this error happens, the affected image is displayed as broken.
However, when I open the link next to the error message in a new tab, the image loads, which logically tells me that the image is valid and exists.
Docker + NGINX
In my situation, the problem was disk space in the nginx Docker container. I had 10 GB of logs, and once I reduced that it worked.
Step by step (for rookies/newbies):
1. Enter your container: docker exec -it <container_id> bash
2. Go to your logs, for example: cd /var/log/nginx.
3. [optional] Show file sizes: ls -lh for individual file sizes or du -h for folder sizes.
4. Empty the file(s) with > file_name.
It works!
For advanced developers/sysadmins
Empty your nginx log with > file_name or similar.
Hope it helps
This error is a definite mismatch between the data that is advertised in the HTTP headers and the data transferred over the wire.
It could come from the following:
Server: a server may have a bug where certain modules change the content but don't update the Content-Length header, or it just doesn't work properly. That was the case for the Node HTTP Proxy at some point (see here).
Proxy: any proxy between you and your server could be modifying the request and not updating the Content-Length header.
As far as I know, I haven't seen this problem with IIS, but mostly with custom-written code.
Let me know if that helps.
It could even be caused by your ad blocker.
Try disabling it, or adding an exception for the domain the images come from.
This can be caused by a full disk (Ubuntu/Nginx).
My situation: this error occurred in Chrome with Nginx serving a static file: ".../static/js/vendor.c4ed7962fb4a63ad3c3b.js net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)"
The root disk was full; after cleaning tmp files the error disappeared.
To prevent it: make sure your disk remains clean (a script such as this could help: https://crunchify.com/how-to-automatically-delete-tmp-folders-in-linux-automatic-disk-log-cleanup-bash-script/ )
In my case I was miscalculating the Content-Length that I advertised in the header. I was serving range requests for files, and I mistakenly published the file size in Content-Length.
I fixed the problem by setting Content-Length to the length of the actual range that I was sending back to the browser.
So when answering a normal request I set Content-Length to the file size, and when answering a range request I set Content-Length to the actual length of the requested range, as sketched below.
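A minimal Node.js sketch of that idea, using the plain http module (the file path, content type and port are illustrative; suffix ranges like "bytes=-500" are not handled here):
// Serve a file with a Content-Length that matches what is actually sent.
const fs = require('fs');
const http = require('http');

const FILE = './video.mp4'; // illustrative path

http.createServer((req, res) => {
  const size = fs.statSync(FILE).size;
  const range = req.headers.range; // e.g. "bytes=0-1023"

  if (range) {
    const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1;
    res.writeHead(206, {
      'Content-Range': `bytes ${start}-${end}/${size}`,
      'Accept-Ranges': 'bytes',
      'Content-Length': end - start + 1, // length of the range, not of the whole file
      'Content-Type': 'video/mp4',
    });
    fs.createReadStream(FILE, { start, end }).pipe(res);
  } else {
    res.writeHead(200, {
      'Content-Length': size, // whole file for a normal request
      'Content-Type': 'video/mp4',
    });
    fs.createReadStream(FILE).pipe(res);
  }
}).listen(8080);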
In my case I was using Fiddler to modify the request and append a header to an HTTPS request, but I had not configured it to decrypt HTTPS traffic. You can export a manually created certificate from Fiddler so you can trust/import the certificate in your browsers. See the link above for details; some steps include:
Click Tools > Fiddler Options.
Click the HTTPS tab. Ensure the Decrypt HTTPS traffic checkbox is checked.
Click the Export Fiddler Root Certificate to Desktop button.
This is what worked for me.
proxy_buffer_size 1M;
proxy_buffers 4 1M;
I increased the size of the above parameters in the nginx proxy.conf file.
Here, nginx is working as a proxy for my microservice-based applications.
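For context, those two directives belong in the http, server, or location block of the nginx configuration; a sketch with illustrative placement:
http {
    proxy_buffer_size 1M;
    proxy_buffers 4 1M;
}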
It definitely has something to do with disk space on your server; clearing the log folder worked for me.
Follow these steps:
1. Go to the nginx log directory: cd /var/log/nginx
2. Delete all the older logs: rm *.gz
3. Empty the error log: truncate -s 0 error.log.1
4. Empty the access log: truncate -s 0 access.log.1
In my case it was a proxy issue (requests proxied from nginx to a Varnish cache). I needed to add the following to my proxy definition:
proxy_set_header Connection keep-alive;
I found the answer here: https://stackoverflow.com/a/55341260/1062129
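For illustration, in an nginx proxy block that might look like this (the upstream address is illustrative):
location / {
    proxy_pass http://varnish:6081;
    proxy_set_header Connection keep-alive;
}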
If this is related to Docker, try stopping the erroneous container and starting a new container from the same image using the docker run command.
If anyone struggles with this problem using Docker + nginx, it could be permissions.
The nginx logs showed this error:
2019/12/16 08:54:58 [crit] 6#6: *23 open() "/var/tmp/nginx/fastcgi/4/00/0000000004" failed (13: Permission denied) while reading upstream, client: 172.24.0.2, server: test.loc, request: "GET /login HTTP/1.1", upstream: "fastcgi://172.28.0.2:9001", host: "test.loc"
Run this inside the nginx container (the path might vary):
chown -R www-data:www-data /var/tmp/nginx/
Running docker system prune -a did the trick for me. I did not have any luck rebuilding my containers or following @mrroot5's answer, although those would seem to achieve similar things.
In my case, I had to deactivate "All-in-One WP Migration" WordPress plugin.
