I have a React web application that allows image uploads.
After performing a fetch POST request for multiple images (in this case 6) to my API, the browser refreshes itself and reloads the current page. It is worth noting that this application allows images to be cropped, so for every image the user uploads there is a second (cropped) image to upload. The 6 images above therefore result in 12 POST requests.
The refresh behavior is INCONSISTENT and difficult to reproduce. I have inserted breakpoints within the function where this behavior occurs. Using the Chrome debugger tools I have stepped through the flow and found that the refresh occurs after this call.
this.ws.onmessage = function(e) {
debug('message event', e.data);
self.emit('message', e.data);
};
It is located inside the file websocket.js within the library node_modules/react-dev-tools/node_modules/socketjs-client/lib/transport/websocket.js
I have narrowed it down to this file and ruled out any issues in my project codebase.
My theory is that the behavior of my application is triggering an external listener/case which causes a full browser refresh.
I see that the file in question is inside react-dev-tools and thought that removing this module could solve the problem; however, the refresh also occurs in my production environment, so I worry that removing it could break the build.
Any thoughts on how to improve my investigation, or potential solutions, would be helpful.
I'm not sure how you're running your environments, but maybe this will help...
I ran into this exact issue and it took me 3 days (too long) to narrow it down to nodemon (in development) and pm2 (in production). I'm not sure how you're serving your application/images, but for me, any time a new file was added, the service was intermittently restarted (sometimes the upload completed, sometimes it was cut off).
For development, I added a nodemon.json config file at the application root (nodemon app.js --config nodemon.json):
{
"ignore": ["uploads"]
}
For production, I created a prod.json at the application root and ran pm2 start prod.json:
{
"apps" : [{
"name" : "app-name",
"ignore_watch" : ["uploads"],
"script" : "./app.js",
"env": {
"NODE_ENV": "production"
}
}]
}
If you're using neither of the above packages, then I'd suggest looking into the possibility of reconfiguring how you're storing and serving images to the application (as a last resort).
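If you do end up moving things around, a minimal sketch of the idea (assuming an Express server; the /var/data/uploads path is made up) is to keep the upload directory outside the watched project tree and serve it statically:
const express = require('express');

const app = express();

// Store uploads outside the project tree (path is hypothetical) so
// nodemon/pm2 file watchers never see new files and never restart.
app.use('/uploads', express.static('/var/data/uploads'));

app.listen(3000);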
I integrated Sentry with my website a few days ago and I noticed that sometimes users receive this error in their console:
ChunkLoadError: Loading chunk <CHUNK_NAME> failed.
(error: <WEBSITE_PATH>/<CHUNK_NAME>-<CHUNK_HASH>.js)
So I investigated the issue around the web and discovered some similar cases, but they related to missing chunks caused by release updates during a session, or to caching issues.
The main difference between those cases and mine is that the failed chunks are actually reachable from the browser, so the loading error does not depend on the post-release refresh of the chunk hashes but (I guess) on some network-related issue.
This assumption is reinforced by this stat: around 90% of the devices involved are mobile.
Finally, I come to the question: should I manage the issue in some way (e.g. retrying the chunk loading if it fails), or is it better to simply ignore it and let the user refresh manually?
2021.09.28 edit:
A month later, the issue is still occurring but I have not received any report from users; I'm also constantly recording user sessions with Hotjar, but nothing relevant has been noticed so far.
I recently had a chat with Sentry support that helped me exclude the network-related hypothesis:
Our React SDK does not have an offline cache by default; when an error is captured it will be sent at that point. If the app is not able to connect to Sentry to send the event, it will be discarded and the SDK will not try to send it again.
Rodolfo from Sentry
I can confirm that the issue is quite unusual. I'll share another interesting stat: the users affected since the first occurrence are 882 out of 332,227 unique visitors (~0.26%), but I noticed that 90% of the occurrences are from iOS (not generic mobile devices, as I noted a month ago), so if I calculate the same proportion against iOS users (794, i.e. 90% of 882, out of 128,444) we are near 0.62%. Still small, but definitely more relevant on iOS.
This is most likely happening because the browser is caching your app's main HTML file, e.g. index.html, which serves the webpack bundles and manifest.
First I would ensure your web server is sending the correct HTTP response headers so that the app's index.html file (let's assume it is called that) is not cached. If you are using NGINX, you can set the appropriate headers like this:
location ~* ^.+\.html$ {
    add_header Cache-Control "no-store, max-age=0";
}
This file should be relatively small for a SPA, so it is OK not to cache it as long as you are caching all of the other assets the app needs, like the JS and CSS. You should be using content hashes on your JS bundles to support cache busting on those. With this in place, visits to your site should always include the latest version of index.html with the latest assets, including the latest webpack manifest, which records the chunk names.
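For reference, a minimal sketch of such an output configuration (assuming webpack 4+; adapt the names to your build):
// webpack.config.js (sketch): content hashes in bundle file names give
// cache busting on every release while keeping long-term caching.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].chunk.js',
  },
};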
If you want to handle the Chunk Load Errors you could set up something like this:
import { ErrorBoundary } from '@sentry/react'

const App = ({ children }) => (
  <ErrorBoundary
    fallback={({ error, resetError }) => {
      if (/ChunkLoadError/.test(error.name)) {
        // Option 1: if this happens during a release, show a new version alert
        return <NewVersionAlert />
        // Option 2: if you are certain the chunk is on your web server or CDN,
        // you can try reloading the page instead - but guard against recursion
        // in case the chunk really is not available:
        // if (!localStorage.getItem('chunkErrorPageReloaded')) {
        //   localStorage.setItem('chunkErrorPageReloaded', 'true')
        //   window.location.reload()
        // }
      }
      return <ExceptionRedirect resetError={resetError} />
    }}>
    {children}
  </ErrorBoundary>
)
If you do decide to reload the page I would present a message to the user beforehand.
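A minimal sketch of that (the wording is made up):
// Ask before the hard reload so the user is not surprised.
if (window.confirm('A new version of the app is available. Reload now?')) {
  window.location.reload()
}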
That the chunk is reachable doesn't mean the user's browser can parse it. For example, the user's browser may be old while the chunk contains new syntax.
Webpack loads the chunk via JSONP: it inserts a <script> tag into <head>. If the JS chunk file is downloaded but cannot be parsed, a ChunkLoadError will be thrown.
You can reproduce it by following these steps: write some newer syntax (here, a logical nullish assignment) without transpiling it, and make sure it is output to a chunk:
const obj = {};
obj.sub ??= {};
Open your app in Chrome 79 or Safari 13.0. The full error message looks like this:
SyntaxError: Unexpected token '?' // 13.js:2
MAX RELOADS REACHED // chunk-load-handler.js:24
ChunkLoadError: Loading chunk 13 failed. // trackConsoleError.js:25
(missing: http://example.com/13.js)
See the error on my website here
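If an untranspiled-syntax mismatch like this is the cause, one fix is to make sure your build transpiles newer syntax for the browsers you still support, e.g. with a preset-env setup along these lines (a sketch; the target versions are just examples):
// babel.config.js (sketch): compile newer syntax such as ??= down to
// what the listed browsers can parse.
module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { chrome: '79', safari: '13' } }],
  ],
};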
I have embedded a Blazor app in my Jekyll site. It runs perfectly locally, but when I publish it to GitHub Pages, I get this error:
Failed to find a valid digest in the 'integrity' attribute for resource 'https://chrisevans9629.github.io/blazor/xt/_framework/wasm/dotnet.3.2.0-rc1.20222.2.js' with computed SHA-256 integrity 'yVt8FYsTQDifOGsifIkmEXwe+7ML0jZ1dMi2xluiDXQ='. The resource has been blocked.
This is something that I think Blazor generates when the page is run. This is what my page that starts Blazor looks like:
<script src="js/index.js"></script>
<app>Loading...</app>
Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script>
This is what the page looks like on GitHub Pages:
<script src="js/index.js"></script>
<app>Loading...</app>
<p>Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script></p>
<script type="text/javascript">var Module; window.__wasmmodulecallback__(); delete window.__wasmmodulecallback__;</script><script src="_framework/wasm/dotnet.3.2.0-rc1.20222.2.js" defer="" integrity="sha256-iZCHkFXJWYNxCUFwhj+4oqR4fkEJc5YGjfTTvdIuX84=" crossorigin="anonymous"></script></body>
Why is this error happening and how can I fix it? I've thought about creating a script that would remove the integrity attribute, but I don't think that would be a good solution.
I found an answer here
Cause
Because I am using GitHub Pages to host my Blazor app, it's using git to push up the code. Git by default will try to normalize line endings when committing code, which was causing the integrity check of the Blazor app to fail because the files changed.
Solution
To fix this, I added a .gitattributes file to my blazor folder with * binary as the contents.
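The whole .gitattributes file is just that one line:
* binary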
This tells git to treat all files as binary and therefore not to normalize the line endings. After I did that, I had to delete the _framework folder in my Blazor app and rebuild it. After doing this, the Blazor app worked.
In case someone else ends up here with the issue I had today...
I also got this error on a Blazor WASM app locally after a simple modification, and it still appeared after reverting the changes.
The solution for me was to do a clean and rebuild.
In my case, it was a wrong target runtime in the publish profile - I should not have selected win-x64.
I'm not sure of the exact reason, but the server interferes in some way with the response based on the target runtime. Just select browser-wasm and redeploy; it should be fine.
I spent too much time on this issue; Clean and Rebuild did not work for me.
What worked was deleting the bin and obj folders from the Client (Blazor WASM) project.
Environment
.Net 5 and 6
Visual Studio 2019 and 2022
Just to leave a note here on something I came across while trying to figure out what was going on.
If for some reason you removed the service worker from your app while the resources were cached in the common HTTP cache, there is a possibility that once you re-enable the service worker you will get this error, because the service worker will pick up the HTTP-cached version and not the server's.
What I did was to add cache: "no-cache" to the Request's init.
So my onInstall now looks something like this:
async function onInstall(event) {
console.info('Service worker: Install');
// Fetch and cache all matching items from the assets manifest
const assetsRequests = self.assetsManifest.assets
.filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
.filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
.map(asset => new Request(asset.url, { integrity: asset.hash, cache: "no-cache" }));
// Also cache authentication configuration
assetsRequests.push(new Request('_configuration/TestApp.Client'));
await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
It looks like the hashes generated in ServiceWorkerAssetsManifest and the ones computed on the client side don't match. It seems ServiceWorkerAssetsManifest does not regenerate the hash when a file is modified, especially for static files.
Had the same problem today; in my case the error came from a CSS file.
The problem was that I had two versions of my application deployed to local folders.
At first I started the old version, closed it, and then opened up the new version.
It seems that the old CSS file was cached in the browser, which caused the error to appear.
The fix was simply pressing CTRL + U to open up the index.html file, clicking on the CSS file which caused the error, and pressing F5 to reload the file. This solved the error for me.
A better solution!
Open service-worker.js and change
.map(asset => new Request(asset.url, { integrity: asset.hash }));
to:
.map(asset => new Request(asset.url));
Now it works! (Note that this simply drops the integrity check for those assets.)
I had this same issue and none of these solutions worked for me, but they set me on the right path. I am deploying mine to my local machine and using IIS for testing purposes, and I found that in the publish profile I had created in Visual Studio 2022, the "Remove additional files at destination" checkbox was not checked; as soon as I checked it and republished, everything worked fine. I must have removed a file that was being published in a previous build, and it was still there since it wasn't being deleted by any subsequent builds/publishes. This solved it for me; it might help you too.
I'm using the react/react-router/webpack stack with dynamic routing between the different pages of the app, which means every page loads asynchronously on demand. Everything works great, but when I deploy a new version, current active users who haven't fetched the JS files for all the pages will get stuck once they try to navigate to a page they haven't visited yet.
EXAMPLE
Let's say I have a code-split app with the following generated .js files with md5 hashes (for cache busting):
main.123.js
profile.456.js
User visits my main page and gets only main.123.js.
In the meantime I deploy a new version with different md5 hashes (I made changes to both profile.js and main.js):
main.789.js
profile.891.js
User tries to navigate to the profile page; the app tries to fetch profile.456.js but gets an error, since the profile.js file has been swapped and there is no way for the app to know the new file name.
I came up with 2 solutions but none of them really solves the problem
SOLUTION 1?
Always keep 2 versions available in production. But it's a slippery slope, since 2 versions sometimes won't be enough; e.g. a user can leave their tab open for days, so several deployments can take place before they decide to use the app again.
SOLUTION 2?
Catch the JS loading exception and show the user a message to reload the app. This solution can work, but it's an annoying UX if you ask me.
Has anyone experienced this kind of issue? Any suggestions?
EDIT - SOLUTION:
I have decided to go with the second approach (solution 2). In order to catch the loading error I had to upgrade my webpack to webpack 2, since the require.ensure implementation doesn't support catching a failed chunk request. With webpack 2 I can use System.import, which supports error handling. So basically what I did is:
dynamically load my component using System.import:
function getMyComponent(nextState, cb, path) {
return System.import('components/myComponent').then(module => {
cb(null, module);
});
}
And catching the error:
function getAsyncComponent(getComponent) {
return function (nextState, cb, path) {
getComponent(nextState, cb, path, store).catch(err => {
store.dispatch(notificationSend({message: <span>Uh oh.. Something went wrong.</span>}));
cb(err);
});
}
}
<Route path="foo" getComponent={getAsyncComponent(getMyComponent)}/>
I would go for a variant of #2 where, upon detecting that a route file no longer exists on the server, you do a refresh for the user using window.location.reload(true). That should do a hard refresh which will load the new files.
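A minimal sketch of that variant (the storage key is made up; the guard stops an infinite reload loop if the file really is gone):
// On a chunk load failure, hard-reload at most once per session.
function handleChunkLoadError() {
  if (!sessionStorage.getItem('chunkErrorReloaded')) {
    sessionStorage.setItem('chunkErrorReloaded', '1');
    // the true argument requested a cache-bypassing reload in older browsers
    window.location.reload(true);
  }
}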
I have a pdf that is rendered from a server-side html file in my Meteor application using webshot. This pdf is displayed in the browser and also attached to an email sent to various users. Since migrating to Meteor's Galaxy platform, I am unable to render the images in the html file, and the email attachment doesn't work correctly. My setup worked perfectly on Digital Ocean with Ubuntu 14.04, and also on localhost. It still works perfectly in both of those environments, but not on Galaxy. (It's worth noting I don't know much about programming email attachments, but I used Meteor's email package, which is based on mailcomposer.)
The pdf renders, so I know phantomjs is working, and webshot is taking a screenshot and displaying it as a pdf, so I know webshot is working. However, the images won't render, and when attached to an email the file is corrupted/doesn't send correctly. I have logged the html to ensure that the URLs to the image files are all correct, and they are when deployed to Galaxy, but they just won't render with phantomjs/webshot. I am using the meteorhacks:ssr package to render the html file on the server before reading it with phantomjs.
I've tried contacting Galaxy support about this but haven't had much assistance. Has anyone else experienced this? I'm struggling even to pinpoint the package causing the issue so I can submit a pull request if I need to. Thanks!
So I figured out my problem, which I'll share with others, but I'll also share a few pointers on debugging webshot in an app running on Galaxy's servers.
First, webshot doesn't pipe errors to Galaxy's logs by default, as it's running in a spawned node.js process, so you need to change this line in your 'project_path/.meteor/local/isopacks/npm-container/npm/node_modules/webshot/lib/webshot.js' file (note: I'm still on Meteor 1.2, so this is wherever your npm webshot package is located):
// webshot.js line 201 - add , {stdio: "inherit"} to spawn method
var phantomProc = crossSpawn.spawn(options.phantomPath, phantomArgs, {stdio: "inherit"});
This passes all logs from the spawned process to your console. In addition to this, comment out the following code in the same file:
// comment out lines 234-239
// phantomProc.stderr.on('data', function(data) {
// if (options.errorIfJSException) {
// calledCallback = true;
// clearTimeout(timeoutID);
// cb(new Error('' + data))
// }
// });
Doing these two modifications will print logs from the phantomjs process to your Galaxy container. In addition to that, you will want to modify the webshot.phantom.js script that is located in the same directory to print to the console in order to debug. This is the script you will want to modify however you see fit to find your issue, but the phantomjs docs recommend using phantom callbacks to debug errors from the web page being loaded, such as:
page.onResourceError = function(resourceError) {
console.log('Unable to load resource (#' + resourceError.id + ' URL: ' + resourceError.url + ')');
console.log('Error code: ' + resourceError.errorCode + '. Description: ' + resourceError.errorString);
};
For my particular issue, I was getting an SSL handshake issue:
Error code: 6. Description: SSL handshake failed
To fix this, I had to add the following code to my webshot options object:
phantomConfig: {
"ignore-ssl-errors": "true",
"ssl-protocol": "any"
},
This fixed the issue with loading the static images in my pdf over https (note: this worked correctly on Digital Ocean without the code above, I'm not sure what is different in the SSL configuration on Galaxy's containers).
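For context, a sketch of where that options object sits in a full webshot call (the URL and file name are made up; webshot's usual signature is webshot(url, file, options, callback)):
var webshot = require('webshot');

var options = {
  phantomConfig: {
    'ignore-ssl-errors': 'true',
    'ssl-protocol': 'any'
  }
};

// Render the page to a pdf, tolerating SSL handshake failures.
webshot('https://myapp.example.com/report', 'report.pdf', options, function (err) {
  if (err) console.error('webshot failed:', err);
});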
In addition, I was having issues with attaching the pdf correctly to an email my app sent. This turned out to be an issue with rendering the URL for the attachment using Meteor.absoluteUrl() in the mailcomposer attachments filePath object. I don't know why Meteor.absoluteUrl() did not render my app's URL correctly in an email attachment on Galaxy, as it works in other places in my app, and it worked on Digital Ocean, but it didn't work here. When I switched the attachment object over to a hard-coded URL, it worked fine, so that might be worth checking if you are having issues.
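As an illustration, a sketch of the attachment object with a hard-coded URL (all names here are hypothetical; Meteor's email package passes attachments through to mailcomposer):
Email.send({
  to: 'user@example.com',
  from: 'app@example.com',
  subject: 'Your report',
  text: 'The pdf is attached.',
  attachments: [{
    fileName: 'report.pdf',
    // hard-coded instead of Meteor.absoluteUrl('pdfs/report.pdf')
    filePath: 'https://myapp.example.com/pdfs/report.pdf'
  }]
});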
I know quite a few Meteor developers have used webshot to create pdf's in their app, and I'm sure some will be migrating to Galaxy in the future, so hopefully this is helpful to others who decide to switch to Galaxy. Good luck!
I'm facing problems when the client's internet connection is unstable. If disconnections occur during the loading process, Google Maps services won't work even when the connection comes back.
The difficulty with PhoneGap compared to a normal browser is that there is no "reload page" button the user can hit if the page didn't load properly. We thus have to ensure a 100% safe load.
If you follow the instructions provided by Google to implement the Google Maps JavaScript API in your PhoneGap app, the loading process will crash at three different steps if the client connection is unstable.
Each crash is independent and a bit complex to explain, so I created sub-questions: first crash, second crash, third crash.
I also created that file so that anyone can reproduce the crashes.
I suspect that the problem partly comes from Google's script, but there is probably a workaround I didn't see.
Not sure at all that this could help, but I faced an issue regarding multidex when I tried to use the Google Maps API. Adding this code to the build-extras.gradle file solved my Google Maps crash:
android {
defaultConfig {
minSdkVersion 15
targetSdkVersion 23
multiDexEnabled true
}
dexOptions {
javaMaxHeapSize "2g"
}
packagingOptions {
exclude 'META-INF/LICENSE'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/license.txt'
exclude 'META-INF/NOTICE'
exclude 'META-INF/NOTICE.txt'
exclude 'META-INF/notice.txt'
}
}
dependencies {
compile 'com.android.support:multidex:1.0.0'
}
What I understood is that with a single dex file you can only reference 65,536 methods, and the Google APIs add a lot of them.
Building Apps with Over 65K Methods
I'm not sure whether this helps or not; let me know if I should delete this answer.