I'm trying to publish a website in a subfolder (example.com/sitename) using Traefik.
The site is built with Next.js.
What happens when I deploy is that all script links in the built site disregard the folder (sitename). For example, a JS script named generatedfile.js is requested at example.com/generatedfile.js, when the correct URL would be example.com/sitename/generatedfile.js.
My traefik args:
-l traefik.frontend.rule="Host:example.com; PathPrefixStrip:/sitename" -l traefik.frontend.entryPoints="http, https" -l traefik.frontend.headers.SSLRedirect="true"
I tried adding basePath to my next.config.js, but when I do that, I can only access the site at example.com/sitename/sitename.
next.config.js:
// withFonts is assumed to come from the fonts plugin already in use, e.g. next-fonts
const withFonts = require('next-fonts');

module.exports = withFonts({
  basePath: '/sitename'
});
I'm using docker to deploy in AWS.
I've been trying to solve this all day, and I don't even know what else to try.
Sorry for my English; it's not my first language.
PathPrefixStrip matches the path and strips the matched prefix before forwarding the request to your application. Use PathPrefix instead.
Looks like you're using v1.x of Traefik. Here's the documentation explaining the difference better: https://doc.traefik.io/traefik/v1.7/basics/
It's worth mentioning that if you have multiple routing rules, Traefik sorts them by string length in descending order and goes through them to match the incoming request. In other words, /api is matched before /.
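Putting that together with the basePath you already have, the labels could look something like this (a sketch only; domain, prefix, and entry points are copied from the question, so adjust to your setup):
-l traefik.frontend.rule="Host:example.com; PathPrefix:/sitename" -l traefik.frontend.entryPoints="http, https" -l traefik.frontend.headers.SSLRedirect="true"
With the prefix no longer stripped, Next.js receives requests under /sitename, so basePath: '/sitename' lines up and the generated assets resolve at example.com/sitename/generatedfile.js.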
Kibana is not starting up properly. When I open the console, it appears to be a JavaScript resource issue: when I open the JS files directly (by clicking their links in the console), they appear to be incomplete and abruptly cut off. I'm not sure if this is a browser file limit or if my files have somehow been truncated. Please see the images below for what I'm seeing.
File as seen in Chrome. This is the very bottom of the file as Chrome loads it.
I have restarted Kibana to see if that would resolve it; no luck.
I think browsers may have a maximum line limit for JS files. I am not sure why Kibana hasn't minified the JS files; has it started up in some dev mode?
Question summary
I think I have discovered that the reason Kibana is not loading is that the JS is not fully loading, which changes my question to: how can I get all of my JavaScript to load?
Update
I have located the JS files in the Kibana bundles folder and found that the files are fully intact on disk, so it is indeed an issue with the browser not loading the complete file. I'm confused about why those files are suddenly too long for the browser to load; it was working fine a fortnight ago. I'm still trying to work out how to get Chrome to load the files.
As suggested by #asettouf, I have removed (backed up) the bundles folder in the /opt/kibana/optimize directory and started Kibana up again. This did regenerate the bundles folder, but the files are identical, meaning I still have the same issue. Why is Kibana not minifying the JS when it bundles the files for caching?
My kibana.yml. I think it is cleaner to paste a link to it:
http://www.heypasteit.com/clip/O8HUN
I went back and turned on verbose logging; this is the output from deleting the optimize folder and restarting. Nothing stands out as an error message to me.
/var/log/kibana/kibana.log
I replaced the hostname with localhost for privacy and security reasons.
http://www.heypasteit.com/clip/OA4OR
I think this is an error with the webpack module not compiling the JS correctly; however, I don't know enough about the module to debug it.
The files in question in the optimize folder are:
commons.bundle.js which is 65723 lines
kibana.bundle.js at 108950 lines
These files are far from optimized, and the content inside them is not minified.
Result of curl -v localhost:5601
http://www.heypasteit.com/clip/OEKEX
CURL REQUEST DIRECTLY TO KIBANA JS RESOURCES
I can confirm that curl -ukibanaadmin -v http://localhost/bundles/commons.bundle.js returns the full 108950-line JS file, and curl -ukibanaadmin -v http://actual_kibana_hostname/bundles/commons.bundle.js returns the same truncated content (same number of lines) as my browser.
With that information I can assume this is not a Kibana issue at all. As mentioned by #val, it is probably a setting in nginx that is preventing the entire file from being served unless it is accessed via localhost.
I think I can rule out Cloudflare as I still get the issue when I hit my server directly from the browser.
Thanks to everyone's suggestions and help so far. Getting closer and closer to solving my issue. I will do some more research on Nginx and its settings.
NGINX UPDATE
Nginx appears to be serving only the first 72 KB of my JavaScript files. I've searched all around my nginx config files and cannot see where this limit could come from. I've added things like
location / {
...
proxy_max_temp_file_size 1m;
...
}
and
location / {
...
sendfile on;
sendfile_max_chunk 1m;
...
}
But I'm still unable to override whatever setting is allowing only the first part of each file to be served.
The connection uses nginx proxy_pass to forward port 80 to Kibana's port 5601. I feel like there could be a setting that limits file transfers over the proxy; I'm just not sure where to find it.
The proxy_pass configuration looks like:
server {
listen 80;
server_name logs.mydomain.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
And my default nginx settings are posted here.
/etc/nginx/nginx.conf
http://www.heypasteit.com/clip/OEKIR
I have tried adding sendfile_max_chunk 512k and changing worker_processes from 4 to 2. Everything else in the config was already there; I was not the one who initially set up the ELK stack.
And after the mentioned changes it looks like this:
/etc/nginx/nginx.conf
http://www.heypasteit.com/clip/OEKF0
ERROR RETURNED - DISK SPACE UPDATE
This issue has come back. When I checked the VM's health I noticed that the disk drive was full. Elasticsearch was logging a couple of GBs' worth of text into its error logs each day. I still have not fully identified why Elasticsearch is flooding the error logs.
But I think the excessive disk usage contributed to this error: with proxy buffering enabled, nginx spills large proxied responses to temporary files on disk, and with the disk full it could only deliver what fit in its in-memory buffers, which would explain why only the first ~72 KB of each file was served.
When I cleared out the excessive logs, I stopped getting the incomplete JS error without needing to restart nginx or Kibana.
Since you're accessing Kibana behind a proxy, you'd need to bypass the proxy and see if the problem persists, i.e. compare the load times of
# through the proxy
curl -v kibana_host/bundles/commons.bundle.js
curl -v kibana_host/bundles/kibana.bundle.js
# bypassing the proxy
curl -v localhost:5601/bundles/commons.bundle.js
curl -v localhost:5601/bundles/kibana.bundle.js
If the load time is lower when you bypass the proxy, then you know it's not a Kibana issue but rather something to do with the way your proxy is configured.
UPDATE
Since we've narrowed this down to a proxy issue, try updating your nginx config with this:
sendfile_max_chunk 512k;
worker_processes 2;
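For orientation, a rough sketch of where those directives sit in nginx.conf (the surrounding blocks are assumed from a stock install, not copied from your file):
worker_processes 2;

http {
    sendfile on;
    sendfile_max_chunk 512k;
    # ... rest of the existing http block ...
}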
UPDATE 2
Try another modification of your nginx configuration:
proxy_buffering off;
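proxy_buffering can be set at the http, server, or location level; based on the server block you posted earlier, it would sit next to the proxy_pass, roughly like this (a sketch, not your full config):
location / {
    proxy_pass http://localhost:5601;
    proxy_buffering off;
    # ... existing proxy_set_header lines ...
}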
I used Tomcat 7 to deploy my WAR file. It contains various images, CSS, and JS files.
I configured Tomcat on port 8080. Everything else works fine, but I don't know why this content is not served by Tomcat; it gives a 404 Not Found error.
Directory Structure
And I hit the URL below:
http://localhost:8080/images/jackson2.png
When I move these images out of the ROOT folder and into webapps/examples/, it works fine.
Now, what is wrong with it? Is there anything I missed?
I cannot understand the issue. Please help.
Perhaps check that Tomcat and/or the image files have the correct permissions set. I'm not too sure how to go about it on Windows; depending on which version you're running, it could be a UAC issue. You can try looking here: https://www.mulesoft.com/tcat/tomcat-windows. You can also try stopping Tomcat, removing the work and temp directories, and then starting Tomcat again.
I used express generator to create a project, which automatically includes the public folder in app.js ...
I am trying to display an image in my index.ejs but it is not working ...
What would be the correct URL to use?
From what I found online, the src should be something like img src="localhost:8000/images/1.png".
However, I am using Docker Toolbox, so I'm using the Docker Quickstart Terminal, and I got it to work by using the Docker IP,
so it's like img src="127.432.343:8000/images/1.png".
But that IP is specific to my computer, and I want my co-workers to be able to see the image without having to refactor the code.
The name of my container is web, so I also tried src="web:8000/images/1.png".
But this doesn't work, though I feel like it should. Any tips, please? I've been trying for hours here.
Also, if this were to go to production on a server, what would be the best way to do that? I don't want to have to change the code if I end up uploading it to AWS.
Any chance you can switch to Docker for Mac? That would allow you to see it on localhost.
Static content in Express must be in the /public folder, with images under the public/images folder. If your image is in that folder, then it should be accessible at the URL you mention.
To show it from an EJS file, you only need to point to images/1.png (use a relative route):
<img src="images/1.png" />
This way, you won't have any trouble uploading it to any production server.
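For reference, a minimal sketch of what express-generator typically wires up in app.js (your generated file may differ slightly; the port is assumed from the question), which is what makes public/images/1.png reachable at /images/1.png:
var express = require('express');
var path = require('path');

var app = express();

// everything under ./public is served from the site root,
// so public/images/1.png is reachable at <host>/images/1.png
app.use(express.static(path.join(__dirname, 'public')));

app.listen(8000); // port assumed from the question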
For other computers on your network to test your project, you need to check your computer's local network IP (ipconfig / ifconfig) and test with that IP address on port 8000. Not every internal network has its DNS fully configured.
Also check your computer's firewall to allow TCP traffic on port 8000. Hope it helps.
What does this error message mean and how do I resolve it? It is from the console of Google Chrome v33.0, on Windows 7.
Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH http://and.img.url/here.png
I'm trying to change the images' src attribute using jQuery. For example, like this (simplified):
$('.image-prld').attr('src', someDynamicValue);
There are about 30 images on the page, and the above error happens for random images every time I reload the page. But sometimes it works fine for all the images, without any error.
When this error happens, the particular image displays like this:
However, when I open the link next to the error message in a new tab, the image loads, which logically tells me that the image is valid and exists.
Docker + NGINX
In my situation, the problem was disk space inside the nginx Docker container. I had 10 GB of logs, and once I reduced that amount it worked.
Step by step (for rookies/newbies)
Enter your container: docker exec -it <container_id> bash
Go to your logs, for example: cd /var/log/nginx.
[optional] Show file sizes: ls -lh for individual files or du -h for the folder.
Empty the file(s) with > file_name.
It works!
For advanced developers/sysadmins
Empty your nginx log with > file_name or similar.
Hope it helps
This error is definitely a mismatch between the length advertised in the HTTP headers (Content-Length) and the data actually transferred over the wire.
It could come from the following:
Server: a server may have a bug where certain modules change the content but don't update the Content-Length header, or the module just doesn't work properly. This was the case for the Node HTTP proxy at some point (see here).
Proxy: any proxy between you and your server could be modifying the response without updating the Content-Length header.
As far as I know, I haven't seen this problem with IIS, but mostly with custom-written code.
Let me know if that helps.
It could even be caused by your ad blocker.
Try disabling it or adding an exception for the domain the images come from.
This can be caused by a full disk (Ubuntu/Nginx).
My situation:
this error occured in Chrome with Nginx serving a static file: ".../static/js/vendor.c4ed7962fb4a63ad3c3b.js net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)"
The root disk was full; after cleaning up tmp files, the error disappeared.
To prevent it: make sure your disk stays clean (a script such as this could help: https://crunchify.com/how-to-automatically-delete-tmp-folders-in-linux-automatic-disk-log-cleanup-bash-script/).
In my case I was miscalculating the Content-Length that I advertised in the header. I was serving range requests for files and mistakenly published the full file size in Content-Length.
I fixed the problem by setting Content-Length to the actual length of the range I was sending back to the browser.
So when answering a normal request, I set Content-Length to the file size; when answering a range request, I set Content-Length to the actual length of the requested range.
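As an illustration only (a Node-style sketch with placeholder names, not the exact code from my server), the fix boiled down to:
// Sketch only: file stat and Range header parsing are omitted.
// res is a Node http.ServerResponse; range = { start, end } with an inclusive end.
function setLength(res, fileSize, range) {
  if (range) {
    res.statusCode = 206;
    res.setHeader('Content-Range', 'bytes ' + range.start + '-' + range.end + '/' + fileSize);
    res.setHeader('Content-Length', range.end - range.start + 1); // length of the slice, not the file
  } else {
    res.setHeader('Content-Length', fileSize); // whole file
  }
}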
In my case I was using Fiddler to modify an HTTPS request and append a header, but I had not configured it to decrypt HTTPS traffic. You can export a manually created certificate from Fiddler, so your browsers can trust/import the certificate. See the above link for details; some steps include:
Click Tools > Fiddler Options.
Click the HTTPS tab. Ensure the Decrypt HTTPS traffic checkbox is checked.
Click the Export Fiddler Root Certificate to Desktop button.
This is what worked for me.
proxy_buffer_size 1M;
proxy_buffers 4 1M;
I increased the size of the above parameters in the nginx proxy.conf file.
Here, nginx is working as a proxy for my microservice-based applications.
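For context, those directives sit next to the proxy_pass (or in the http block of proxy.conf), roughly like this; the upstream address is just a placeholder:
location / {
    proxy_pass http://my-microservice:8080;   # placeholder upstream
    proxy_buffer_size 1M;
    proxy_buffers 4 1M;
}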
It definitely has something to do with disk space on your server.
Clearing the log folder worked for me.
Follow these steps:
1. Go to the nginx log directory:
cd /var/log/nginx
2. Delete all the older rotated logs:
rm *.gz
3. Empty the error log:
truncate -s 0 error.log.1
4. Empty the access log:
truncate -s 0 access.log.1
In my case it was a proxy issue (requests proxied from nginx to a Varnish cache). I needed to add the following to my proxy definition:
proxy_set_header Connection keep-alive;
I found the answer here: https://stackoverflow.com/a/55341260/1062129
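For context, the relevant part of the proxy definition would look roughly like this (the Varnish address and port are just examples, not from my actual config):
location / {
    proxy_pass http://127.0.0.1:6081;          # Varnish, example address/port
    proxy_http_version 1.1;                    # nginx defaults to HTTP/1.0 upstream; 1.1 pairs better with keep-alive
    proxy_set_header Connection keep-alive;
}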
If this is related to Docker, try stopping the erroneous container and starting a new container from the same image with the docker run command.
If anyone is struggling with this problem using Docker + nginx, it could be permissions.
The nginx logs showed this error:
2019/12/16 08:54:58 [crit] 6#6: *23 open() "/var/tmp/nginx/fastcgi/4/00/0000000004" failed (13: Permission denied) while reading upstream, client: 172.24.0.2, server: test.loc, request: "GET /login HTTP/1.1", upstream: "fastcgi://172.28.0.2:9001", host: "test.loc"
Run this inside the nginx container (the path might vary):
chown -R www-data:www-data /var/tmp/nginx/
Running docker system prune -a did the trick for me. I did not have any luck rebuilding my containers or following #mrroot5's answer, although those would seem to achieve similar things.
In my case, I had to deactivate "All-in-One WP Migration" WordPress plugin.