The problem
My website fails to load random images at random times.
Intermittent failure to load image with the following error in console:
"GET example.com/image.jpg net::ERR_CONTENT_LENGTH_MISMATCH"
The image either doesn't load at all and shows the broken-image icon with its alt text, or it loads halfway and the rest is corrupted (e.g. colors all screwed up, or half the image greyed out).
Setup
LiteSpeed server; PHP/MySQL website with HTML, CSS, JavaScript, and jQuery.
Important Notes
Problem occurs on all major web browsers - intermittently and with various images.
I am forcing UTF-8 encoding and HTTPS on all pages via .htaccess.
Hosting provider states that all permissions are set correctly.
In my access log, when an image fails to load, it gives a '200 OK' response for the image and lists the bytes transferred as '0' (zero).
It is almost always images that fail to load, but maybe 5% of the time it will be a CSS or JavaScript file.
The problem occurred immediately after moving from Apache to LiteSpeed and has persisted for several weeks.
Gzip and caching enabled.
This error indicates a definite mismatch between the length advertised in the HTTP headers and the data actually transferred over the wire.
It could come from the following:
Server: a bug in certain server modules may change the content without updating the Content-Length header, or the module may simply misbehave.
Proxy: any proxy between you and your server could be modifying the response without updating the Content-Length header.
This can also happen when the wrong Content-Type is set.
As far as I know, I haven't seen this problem with IIS/Apache/Tomcat themselves, but mostly with custom-written code (writing the image to the response stream yourself).
It could even be caused by your ad blocker.
Try disabling it, or add an exception for the domain the images come from.
I suggest fetching the image as a discrete URL using cURL, e.g.
php testCurlimg > image.log 2>&1
to see exactly what the server returns. Then move up one level and test the web page:
php testCurlpg > page.log 2>&1
to see the context for the mixed data.
I just ran into this same ERR_CONTENT_LENGTH_MISMATCH error. I optimized the image and that fixed it. I did the image optimization using ImageOptim but I'm guessing that any image optimization tool would work.
Had this problem today retrieving images from Apache 2.4 through a proxy I wrote in PHP to provide a JWT auth gateway for a CouchDB backend. The proxy uses PHP fsockopen, and the fread() buffer was set relatively low (30 bytes) because I had seen that value used in other people's work and never thought to change it. In all my failing JPG (JFIF) images, the discrepancy between the original and the served image was a series of CRLFs whose length matched the fread buffer size. Increasing the buffer's byte length made the problem go away.
In short, if a chunk of the fread buffer streaming the image consists entirely of carriage returns and line feeds, the data gets truncated. This may also relate to Collin Krawll's post about why image optimization resolved that problem.
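To illustrate the mechanism (this is a JavaScript sketch of the failure mode, not the original PHP proxy): if the relay loop treats a read buffer consisting entirely of CRLF as a terminator, any binary payload containing such a run is truncated, and a larger buffer makes the pattern unlikely to fill a whole chunk:

```javascript
// Buggy relay loop: a chunk of nothing but "\r\n" looks like an
// HTTP terminator, so everything after it is silently dropped.
function relay(payload, bufSize) {
  const out = [];
  for (let i = 0; i < payload.length; i += bufSize) {
    const chunk = payload.slice(i, i + bufSize);
    if (/^(\r\n)+$/.test(chunk)) break; // the bug: CRLF-only chunk treated as end-of-stream
    out.push(chunk);
  }
  return out.join('');
}

// With a 30-byte buffer, a 30-byte CRLF run inside the payload
// truncates it; with a 64-byte buffer the same payload survives.
const payload = 'A'.repeat(30) + '\r\n'.repeat(15) + 'B'.repeat(30);
console.log(relay(payload, 30).length); // 30  (truncated)
console.log(relay(payload, 64).length); // 90  (intact)
```

This also shows why the failure is intermittent: it only bites when the payload's bytes happen to line up with the buffer size.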
Related
I have a Three.JS powered WebGL example found at
http://jsfiddle.net/dja1/7xwrqnen/
material.map = THREE.ImageUtils.loadTexture('/images/earthmap2k.jpg');
If you use Chrome, you will see that when the large image is requested the caching for this image is correctly set to 'max-age=1296000'. If you 'Play' the JSFiddle again the image is not downloaded again - as expected.
However, if you repeat this in IE 11, the image is always downloaded; it appears to completely ignore the caching.
For a large file this can be a real problem: when you click a hyperlink that leads to a different page displaying the same kind of animation, the image has to be downloaded again, making for a poor user experience.
Does WebGL just ignore image caching in IE 11? What would a workaround be?
Thanks in advance.
Dave A
Looking through the source code, you'd end up at https://github.com/mrdoob/three.js/blob/master/src/loaders/ImageLoader.js where you can see it does:
var image = document.createElement( 'img' );
Using image elements like that doesn't offer any control over caching. In that file you can also see that it does cache internally, but that doesn't help across reloads. So, in short, what you're seeing here will be some IE11 specific behaviour where it decides to reload the image each time.
I have now researched this topic and can provide some insights.
In order to get caching to happen as described, you need to have three things in place:
The server needs to send a Cache-Control max-age (or similar) in the response to the request for the web page.
HTTP/1.1 200 OK
Cache-Control: max-age=1296000
Content-Type: text/html
The server needs to send a Cache-Control max-age (or similar) in the response to the request for the image
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: max-age=1296000
Content-Type: image/jpeg
This line must be commented out of the JavaScript:
THREE.ImageUtils.crossOrigin = 'anonymous';
This last line is what tells Three.js (and WebGL) to allow the use of images from other web servers. In my case I wanted to use a CDN to serve the large image. This effectively precludes using a CDN with Three.js; you can use one, it's just that the image will be re-downloaded every time the page is requested, which defeats the purpose of the caching.
The difficulty in demonstrating this solution with jsFiddle is that jsFiddle does not issue the Cache-Control header when its page is requested (and rightly so), so the image will always be re-downloaded when running there.
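For reference, the server-side Cache-Control responses described above could be produced on Apache with something like this (assumes mod_headers is enabled; the file pattern is illustrative):

```apache
# Send a 15-day (1296000 s) cache lifetime with every JPEG/PNG response
<FilesMatch "\.(jpe?g|png)$">
  Header set Cache-Control "max-age=1296000"
</FilesMatch>
```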
I used to use a web host with cPanel, and there was no problem with drag-and-drop image uploading (every file less than 2 MB).
The uploading method is like
<img src="data:image/jpeg;base64,xxxxxxx...">
and I POST it to a URL where PHP decodes it into an image file.
After I moved my website to another web host, problems appeared with drag-and-drop uploading.
If any file is larger than 730 KB, the upload fails.
I have googled a lot, including modifying php.ini directives like post_max_size and upload_max_filesize, and even setting ini_set('memory_limit', '256M') and ini_set('post_max_size', '8M') in the PHP file; nothing works.
If your new hosting is not administered by you, they may have restricted the ability to set ini configs from PHP scripts and lowered the limit on file uploads.
Also, memory_limit is not the directive you need; upload_max_filesize and post_max_size are.
memory_limit limits the amount of RAM PHP can consume before raising a fatal error.
I have found where the problem is.
Since my upload method is drag-and-drop images, I tried counting the POST length using alert(encode.length), and I found that if any file's encoded length exceeds 1,000,000 characters it throws an error.
So I searched for the value 1,000,000 in phpinfo(); it is
suhosin.post.max_value_length
So in
/etc/php5/conf.d/suhosin.ini
I changed 1,000,000 to 10,000,000, uncommented the line, and restarted Apache. It works fine now.
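As a sanity check on those two numbers: base64 encodes every 3 input bytes as 4 output characters, so the observed ~730 KB failure threshold lines up with the 1,000,000-character suhosin limit:

```javascript
// Base64 produces 4 output characters for every 3 input bytes
// (ignoring the "data:image/jpeg;base64," prefix, which is small).
const fileBytes = 730 * 1024;                      // the observed failure point
const base64Chars = Math.ceil(fileBytes / 3) * 4;
console.log(base64Chars);                          // 996696 — just under 1,000,000
```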
I have created an image in my object so I can draw it to my canvas. I did it like this:
item[id].img = new Image();
item[id].img.src = './image_folder/'+data[i][j].image;
Then my canvas draws on this line:
canvas[2].ctx.drawImage(item[theID].img, px, py);
It works fine but in Chrome console it says:
Resource interpreted as Image but transferred with MIME type text/html
I'm curious what this actually means and how to correct it?
When you request things from servers, they will send "headers" with whatever is being sent.
It's how browsers can figure out how to use video or music, or know what to do with JS or CSS.
Modern browsers are pretty intelligent about dealing with these things, but if you tried to send an .mp3 to a browser that doesn't know how to use .mp3s, it might try loading the file as text, and you'd get a lot of funny characters.
MIME types can avoid that, mostly. If you ask to download an .mp3, the server might send a header like "Content-Type: audio/mpeg".
A regular web-page, in comparison, would be sent as "Content-Type: text/html", while a .png image would be sent as "Content-Type: image/png".
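As a sketch of what the server does internally when choosing that header (the mapping table here is illustrative, not exhaustive):

```javascript
// Map a filename's extension to the Content-Type header a server
// would send; unknown extensions fall back to a generic binary type.
const MIME_TYPES = {
  '.html': 'text/html',
  '.png':  'image/png',
  '.jpg':  'image/jpeg',
  '.mp3':  'audio/mpeg',
};

function contentTypeFor(filename) {
  const dot = filename.lastIndexOf('.');
  const ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
  return MIME_TYPES[ext] || 'application/octet-stream';
}

console.log(contentTypeFor('photo.PNG'));   // "image/png"
console.log(contentTypeFor('data.sprite')); // "application/octet-stream"
```

An extension missing from the server's table is exactly how a .png can end up served as text/html.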
If you're playing around on a test-server that you installed using a WAMP installer, or EasyPHP or whatever, your server probably doesn't know about serving .png files with the "image/png" MIME-type.
Intelligent browsers will read the contents of the file and try to figure out what they're supposed to be, if they're given the wrong MIME-type for the file (which is why your images work in the first place).
This particular error probably isn't going to hurt anyone (because browsers that can't figure out you've got a .png file are probably browsers that don't have <canvas>).
But to fix it in other cases (like .ogg files for <audio> and <video> support, which IS important), you should figure out what kind of server you're running (my money's on Apache), and figure out how to add mime-type and file-type declarations.
You could find that through a Google search like "add mime-types to apache".
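For example, on Apache such declarations might look like this (the extensions shown are illustrative; they go in httpd.conf or an .htaccess file):

```apache
AddType image/png .png
AddType audio/ogg .ogg
AddType video/ogg .ogv
```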
If this is a server that's live on the internet and you're paying for hosting, then you'll need to set it through your hosting provider's configuration.
I have a bit of JavaScript using jQuery that loads data with a quick $.get(url, function(response){ /* ... */}); The data is a straight up text file that is then handled by the JavaScript in that response function.
This has worked for me quite nicely, but I just ran into this problem on my machine: Using the same code, I now get an error saying:
XML Parsing Error: not well-formed
Location: moz-nullprincipal:{74091275-3d54-4959-9613-5005459421ce}
Line Number 1, Column 16: image:tiles.png;
---------------^
If I load this from another server, it works perfectly. It's only when I host it on my own PC that I get this error (note that it previously worked perfectly on my own PC as well, which is running Ubuntu and serving the page with Apache). After much headbanging, I found that if I change the extension on the filename I'm loading, it works fine. The file was previously named "test.sprite", and that is when I got the error. If I renamed it to "test.txt" it loads fine.
This error ~seems~ to coincide with a recent upgrade on my system. I upgraded Ubuntu 10.something to 12.04. I'm assuming there was some sort of update in the Apache config that I didn't notice which is causing it to send different headers depending on the extension of the file (the two named here are identical - the .txt is actually just a symlink to the .sprite).
So I have a solution to my immediate problem, but I'd rather not bow to the system's idiosyncrasies. Any idea how I can fix this without renaming the file?
Please note that I'm not an Apache expert, but I'll have a crack at pointing you in the right direction.
If dataType is undefined, the jQuery AJAX functions will infer the content type from whatever Content-Type header Apache sends back. You can quite simply see the response by running your code in Chrome, opening the developer tools (Ctrl + Shift + J) and choosing "Network". After clicking on the relevant request you will see the response headers, including the Content-Type.
In your Apache configuration the content-type for the sprite is probably not defined. You can add this with the following line:
AddType 'text/plain; charset=UTF-8' .sprite
This should be in a configuration file parsed by Apache - depending on your version this could be apache.conf, httpd.conf, or another file.
I hope this helps or at least points you in the right direction. Remember to configtest before restarting Apache!
Check the Content-Type in the response headers; make sure the response from the remote server and the one from your local machine have the same Content-Type, i.e. the same file type and the same encoding, something like "Content-Type: text/html; charset=UTF-8".
I'm attempting to upload a file and the server-side language used is Perl. The CGI module version is 3.15
For some weird reason I can upload any file below 32 KB, but beyond that file size I receive the following error:
CGI.pm: Server closed socket during multipart read (client aborted?).
I tried setting the following parameters for CGI
use CGI ':standard';
$CGI::POST_MAX=-1;
$CGI::DISABLE_UPLOADS=0; # Allow file upload
but I still receive the error.
I hear that this problem is fixed in newer versions of CGI, but I cannot upgrade the CGI module. Is there an alternative?
Any ideas are welcome!
I see you are setting $CGI::POST_MAX = -1; a negative value disables the size ceiling entirely, so POST_MAX itself is not what's cutting you off.
Still, try setting an explicit limit, e.g. 1024 * 100.
From the docs:
$CGI::POST_MAX
If set to a non-negative integer, this variable puts a ceiling on the size of POSTings, in bytes. If CGI.pm detects a POST that is greater than the ceiling, it will immediately exit with an error message. This value will affect both ordinary POSTs and multipart POSTs, meaning that it limits the maximum size of file uploads as well. You should set this to a reasonably high value, such as 1 megabyte.
The key to your problem seems to be here:
CGI.pm: Server closed socket during multipart read (client aborted?).
The error message seems to indicate that your server is closing the connection. If you are using Apache, nginx, whatever, check their config settings to increase maximum post request body size.
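For instance (values are hypothetical; use whichever form applies to the server in front of the CGI script):

```apache
# Apache (httpd.conf or .htaccess): cap request bodies at 10 MB
LimitRequestBody 10485760
```

On nginx the equivalent directive would be client_max_body_size 10m;.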
Read the whole documentation for CGI.pm, even if it's long. Somewhere there is a variable that sets a security maximum for file uploads; this is intended to make your server more DoS-resistant. Just set a value you prefer. This would solve the 32 KB part of the problem...
(I'm sorry that this wild guess didn't solve anything. Consider this answer deleted.)