Rendering images in a webshot pdf on Meteor's Galaxy - javascript

I have a PDF that is rendered from a server-side HTML file in my Meteor application using webshot. This PDF is displayed in the browser, and also attached to an email that is sent to various users. Since migrating over to Meteor's Galaxy platform, I am unable to render the images in the HTML file, and the email attachment doesn't work correctly. My setup worked perfectly on Digital Ocean with Ubuntu 14.04, and also on my localhost. It still works perfectly in both of those environments, but not on Galaxy. (It's worth noting that I don't know much about programming email attachments, but I used Meteor's email package, which is based on mailcomposer.)
The PDF renders, so I know phantomjs is working, and webshot is taking a screenshot and outputting it as a PDF, so I know webshot is working. However, the images won't render, and when the PDF is attached to an email, the file is corrupted/doesn't send correctly. I have logged the HTML to ensure that the URLs to the image files are all correct, and they are when deployed to Galaxy, but they just won't render with phantomjs/webshot. I am using the meteorhacks:ssr package to render the HTML file on the server before reading it with phantomjs.
I've tried contacting Galaxy support about this, but haven't had much assistance. Has anyone else experienced this? I'm struggling even to pinpoint the package causing the issue so I can submit a pull request if I need to. Thanks!

So I figured out my problem, which I'll share here, along with a few pointers on debugging webshot in an app running on Galaxy's servers.
First, webshot doesn't pipe errors to Galaxy's logs by default, since phantomjs runs in a spawned child process, so you need to change this line in your 'project_path/.meteor/local/isopacks/npm-container/npm/node_modules/webshot/lib/webshot.js' file (note: I'm still on Meteor 1.2, so adjust to wherever your npm webshot package is located):
// webshot.js line 201 - add , {stdio: "inherit"} to spawn method
var phantomProc = crossSpawn.spawn(options.phantomPath, phantomArgs, {stdio: "inherit"});
This passes all logs from the spawned process to your console. In addition to this, comment out the following code in the same file:
// comment out lines 234-239
// phantomProc.stderr.on('data', function(data) {
//   if (options.errorIfJSException) {
//     calledCallback = true;
//     clearTimeout(timeoutID);
//     cb(new Error('' + data))
//   }
// });
These two modifications will print logs from the phantomjs process to your Galaxy container. In addition, you will want to modify the webshot.phantom.js script located in the same directory to print to the console for debugging. This is the script you can modify however you see fit to find your issue, but the phantomjs docs recommend using phantom callbacks to surface errors from the web page being loaded, such as:
page.onResourceError = function(resourceError) {
  console.log('Unable to load resource (#' + resourceError.id + ' URL: ' + resourceError.url + ')');
  console.log('Error code: ' + resourceError.errorCode + '. Description: ' + resourceError.errorString);
};
In my particular case, I was getting an SSL handshake failure:
Error code: 6. Description: SSL handshake failed
To fix this, I had to add the following code to my webshot options object:
phantomConfig: {
  "ignore-ssl-errors": "true",
  "ssl-protocol": "any"
},
This fixed the issue with loading the static images in my PDF over https. (Note: this worked on Digital Ocean without the options above; I'm not sure what is different in the SSL configuration on Galaxy's containers.)
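For context, here is a minimal sketch of how these options fit into a full webshot call; the URL, output path, and paperSize values are hypothetical, not from my app:

var webshot = require('webshot');

var options = {
  siteType: 'url',
  paperSize: { format: 'A4', orientation: 'portrait' }, // hypothetical page sizing
  phantomConfig: {
    "ignore-ssl-errors": "true",
    "ssl-protocol": "any"
  }
};

// webshot(source, destination, options, callback)
webshot('https://myapp.example.com/report', '/tmp/report.pdf', options, function (err) {
  if (err) console.error('webshot failed:', err);
});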
In addition, I was having issues attaching the PDF correctly to an email my app sent. This turned out to be an issue with how the URL was rendered by Meteor.absoluteUrl() in the mailcomposer attachments filePath object. I don't know why Meteor.absoluteUrl() did not render my app's URL correctly in an email attachment on Galaxy; it works in other places in my app, and it worked on Digital Ocean, but it didn't work here. When I switched the attachment object over to a hard-coded URL, it worked fine, so that might be worth checking if you are having issues.
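As a rough sketch, the working attachment looked something like this; the domain, path, and file name are placeholders, not my real values:

Email.send({
  to: user.email,
  from: 'noreply@myapp.example.com',
  subject: 'Your report',
  html: emailHtml,
  attachments: [{
    fileName: 'report.pdf',
    // Hard-coded URL instead of Meteor.absoluteUrl('pdfs/report.pdf'),
    // which produced a broken attachment on Galaxy
    filePath: 'https://myapp.example.com/pdfs/report.pdf'
  }]
});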
I know quite a few Meteor developers have used webshot to create PDFs in their apps, and I'm sure some will be migrating to Galaxy in the future, so hopefully this is helpful to others who decide to switch. Good luck!

Related

Webpack code splitting: ChunkLoadError - Loading chunk X failed, but the chunk exists

I integrated Sentry with my website a few days ago and I noticed that sometimes users receive this error in their console:
ChunkLoadError: Loading chunk <CHUNK_NAME> failed.
(error: <WEBSITE_PATH>/<CHUNK_NAME>-<CHUNK_HASH>.js)
So I investigated the issue around the web and discovered some similar cases, but they related to missing chunks caused by release updates during a session, or to caching issues.
The main difference between those cases and mine is that the failed chunks are actually reachable from the browser, so the loading error does not depend on the post-release refresh of the chunk hashes but (I guess) on some network-related issue.
This assumption is reinforced by this stat: around 90% of the devices involved are mobile.
Finally, I come to the question: should I handle the issue in some way (e.g. retrying the chunk loading when it fails), or is it better to simply ignore it and let the user refresh manually?
2021.09.28 edit:
A month later, the issue is still occurring, but I have not received any reports from users. I'm also constantly recording user sessions with Hotjar, but nothing relevant has been noticed so far.
I recently had a chat with Sentry support that helped me exclude the network-related hypothesis:
Our React SDK does not have an offline cache by default; when an error is captured it will be sent at that point. If the app is not able to connect to Sentry to send the event, it will be discarded and the SDK will not try to send it again.
Rodolfo from Sentry
I can confirm that the issue is quite unusual. I'll share another interesting stat: the users affected since the first occurrence number 882 out of 332,227 unique visitors (~0.26%), but I noticed that 90% of the occurrences are from iOS (not generic mobile devices, as I noted a month ago), so if I calculate the same proportion against iOS users only (794, i.e. 90% of 882, out of 128,444), we are near 0.62%. Still small, but definitely more relevant on iOS.
This is most likely happening because the browser is caching your app's main HTML file, e.g. index.html, which loads the webpack bundles and manifest.
First, I would ensure your web server is sending the correct HTTP response headers so the app's index.html file (let's assume it is called that) is not cached. If you are using NGINX, you can set the appropriate headers like this:
location ~* ^.+\.html$ {
  add_header Cache-Control "no-store, max-age=0";
}
This file should be relatively small for a SPA, so it is fine not to cache it, as long as you are caching all of the other assets the app needs, like the JS and CSS. You should be using content hashes in your JS bundle file names to support cache busting on those. With this in place, visits to your site will always pick up the latest version of index.html, with the latest assets, including the latest webpack manifest that records the chunk names.
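For example, a minimal webpack output configuration along those lines could look like this (names and paths are illustrative):

// webpack.config.js (sketch)
const path = require('path');

module.exports = {
  output: {
    // Content hashes change only when file contents change,
    // so cached bundles are busted exactly when needed
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].chunk.js',
    path: path.resolve(__dirname, 'dist'),
    publicPath: '/'
  }
};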
If you want to handle the Chunk Load Errors you could set up something like this:
import { ErrorBoundary } from '@sentry/react'

const App = ({ children }) => (
  <ErrorBoundary
    fallback={({ error, resetError }) => {
      if (/ChunkLoadError/.test(error.name)) {
        // Option 1: if this happens during a release, show a new version alert
        // return <NewVersionAlert />

        // Option 2: if you are certain the chunk is on your web server or CDN,
        // you can try reloading the page, but be careful of recursion
        // in case the chunk really is not available
        if (!localStorage.getItem('chunkErrorPageReloaded')) {
          localStorage.setItem('chunkErrorPageReloaded', 'true')
          window.location.reload()
        }
      }
      return <ExceptionRedirect resetError={resetError} />
    }}>
    {children}
  </ErrorBoundary>
)
If you do decide to reload the page I would present a message to the user beforehand.
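For instance, a small variant of the reload branch above (a sketch, assuming a plain confirm dialog is acceptable for your UX) could ask first:

if (!localStorage.getItem('chunkErrorPageReloaded')) {
  localStorage.setItem('chunkErrorPageReloaded', 'true')
  // Let the user opt in to the reload instead of surprising them
  if (window.confirm('A new version of this page is available. Reload now?')) {
    window.location.reload()
  }
}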
That a chunk is reachable doesn't mean the user's browser can parse it; for example, the browser may be old while the chunk contains new syntax.
Webpack loads chunks via JSONP: it inserts a <script> tag into <head>. If the JS chunk file is downloaded but cannot be parsed, a ChunkLoadError will be thrown.
You can reproduce it by following these steps: write some newer syntax (e.g. the logical nullish assignment below), don't transpile it, and ensure it is output to a chunk.
const obj = {};
obj.sub ??= {};
Open your app in Chrome 79 or Safari 13.0. The full error message looks like this:
SyntaxError: Unexpected token '?' // 13.js:2
MAX RELOADS REACHED // chunk-load-handler.js:24
ChunkLoadError: Loading chunk 13 failed. // trackConsoleError.js:25
(missing: http://example.com/13.js)
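The corresponding fix is to transpile the newer syntax for the browsers you support. A minimal Babel sketch (the target versions are illustrative, matching the failing engines above):

// babel.config.js (sketch)
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Cover the engines that were failing, e.g. Chrome 79 / Safari 13
      targets: { chrome: '79', safari: '13' }
    }]
  ]
};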

websocket.js causing unexpected refresh of React app

I have a React web application that allows image uploads.
After performing fetch POST requests for multiple images (in this case 6) to my API, the browser refreshes itself and reloads the current page. It is worth noting that this application allows images to be cropped, so for every image the user uploads there is a second (cropped) image to upload. The 6 images above therefore result in 12 POST requests.
The refresh behavior is INCONSISTENT and difficult to reproduce. I have inserted breakpoints within the function where this behavior occurs. Stepping through the flow with the Chrome debugger tools, I found that the refresh occurs after this call:
this.ws.onmessage = function(e) {
  debug('message event', e.data);
  self.emit('message', e.data);
};
It is located inside the file websocket.js at node_modules/react-dev-tools/node_modules/sockjs-client/lib/transport/websocket.js.
I have narrowed it down to this file and ruled out any issues in my project codebase.
My theory is that the behavior of my application is triggering an external listener which causes a full browser refresh.
I see that the file in question is inside react-dev-tools and thought that removing this module could solve the problem; however, this also occurs in my production environment, so I feel removing it could break the build.
Any thoughts on improving my investigation, or potential solutions, would be helpful.
I'm not sure how you're running your environments, but maybe this will help...
I ran into this exact issue and it took me 3 days (too long) to narrow it down to nodemon (in development) and pm2 (in production). I'm not sure how you're serving your application/images, but for me, any time a new file was added the service was intermittently restarted (sometimes the upload finished, sometimes it was cut off).
For development, I added a nodemon.json config file at the application root (nodemon app.js --config nodemon.json):
{
  "ignore": ["uploads"]
}
For production, I created a prod.json at the application root and ran pm2 start prod.json:
{
  "apps": [{
    "name": "app-name",
    "ignore_watch": ["uploads"],
    "script": "./app.js",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}
If you're using neither of the above packages, then I'd suggest looking into reconfiguring how you're storing and serving images in the application (as a last resort); see the sketch below.
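As one sketch of that last resort, assuming an Express server (the uploads directory here is hypothetical), you could keep uploads outside the watched project tree and serve them statically:

const path = require('path');
const express = require('express');

const app = express();

// Store uploads outside the directory nodemon/pm2 watches,
// so new files never trigger a restart
const UPLOADS_DIR = path.join('/var', 'data', 'uploads');
app.use('/uploads', express.static(UPLOADS_DIR));

app.listen(3000);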

AWS S3 Direct Upload Javascript failure

I manage a site written in Ruby on Rails, JavaScript, and C that's been up for a few years; it allows users to upload data files that we process. I use the s3_direct_upload gem to bypass our web server and process the file asynchronously, providing progress updates to the client.
Recently we have a user who is no longer able to upload files, though he had been able to in the past. No other users are experiencing the issue. He is running Chrome on Windows, and thankfully was willing to capture a screen print of the JavaScript console when the error occurs (screenshot of the console errors not shown here).
The JavaScript code in question is as follows. The s3_upload_failed handler fires; content.filename has the correct filename, and content.error_thrown is blank:
$(document).ready(function() {
  var upload_names = [];
  $('#new_ride_menu, #new_ride_button').S3Uploader(
    {
      remove_completed_progress_bar: false,
      progress_bar_target: $('.ride-upload-progress'),
      before_add: validateFile,
      expiration: null
    }
  );
  $('#new_ride_menu, #new_ride_button').bind('s3_uploads_start', function(e, content) {
    $('#ride-upload-modal').appendTo('body').modal('show');
  });
  // one upload is complete
  $('#new_ride_menu, #new_ride_button').bind('s3_upload_complete', function(e, content) {
    upload_names.push(content.file);
  });
  // all uploads are complete
  $('#new_ride_menu, #new_ride_button').bind('s3_uploads_complete', function(e, content) {
    // HANDLE SERVER COMMS AND PROCESSING
  });
  $('#new_ride_menu, #new_ride_button').bind('s3_upload_failed', function(e, content) {
    $.post('/logs', {level: 'warn', message: 'Error uploading file: ' + content.filename});
    return alert(content.filename + ' failed to upload');
  });
});
I'm attempting to replicate this error with no success. I've tried:
Disabling JavaScript
Disabling cookies
Blocking third-party cookies
Turning off unsandboxed plugin access
Turning off clipboard access
Does this set of errors look familiar to anyone? Is this a client-side configuration issue? Any direction would be greatly appreciated.
After being unable to diagnose it with the extra information, I got on the phone with the customer and asked them to bring up the site in IE. That worked, which proved something was amiss with Chrome.
It turns out they were using two Chrome instances, one for work and one for home. The home version played nicely with my site; however, the work version had some very restrictive options. In the end, the solution was to make sure the customer was using their home version of Chrome.

PDF.js not working when deploying to a different server in IE

I have a local IIS site where I developed some code with PDF.js. There it worked fine to load a specific PDF and read the text contents from it.
Then I copied everything to a library on a SharePoint server (that's the only difference, IIS vs SharePoint) and changed all references. The code does not throw any errors; with debugging level info it just prints
Info: Cannot use postMessage Transfers
to the console. Adding a console.log line into the PDF.js catch block of the promise did not yield any new information. It doesn't even get to the first logging inside the then:
var pdfobj = PDFJS.getDocument(docPath);
pdfobj.then(function (pdf) {
  console.log(pdf);
});
Any ideas?
EDITS: Updated from PDF.js 1.1 to 1.2.
There are not many error logs in PDF.js. I accidentally hardcoded a wrong URL where even the server is non-existent... and there was no error log; not even the then(...).catch(...) was called.
It is working now in Firefox but not in IE, and I cannot see any reason for this. The info message about Cannot use postMessage Transfers is also only displayed in IE (using IE 11).
It does work now. I am not sure what I did to fix it, but I will update this answer when I know. I think it has something to do with the directory structure of the PDF.js files. Previously I just uploaded all the JS files (there were no errors, though).
Still, there is no exception handling when the PDF does not exist; see the sketch below.
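As a hedged sketch, an explicit rejection handler would at least surface that case (assuming the promise returned by getDocument supports catch, as the then(...).catch(...) mentioned above suggests):

var pdfobj = PDFJS.getDocument(docPath);
pdfobj.then(function (pdf) {
  console.log('Loaded PDF with ' + pdf.numPages + ' pages');
}).catch(function (err) {
  console.error('Could not load ' + docPath + ':', err);
});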

django views - 502 bad gateway

I'm testing my project on the production server, where I'm getting several errors in various features of my web application that work perfectly on my computer.
Please go to http://qlimp.com and log in using this username/password: nirmal/karurkarur. Then go to http://qlimp.com/cover. You'll find a palette where you can upload images and do something similar to flavors.me. I'm having several problems here (images, text, and other information are not getting stored in the database).
I think there is no problem with the setup. The problem is that it is not even entering the Django views properly, though it works without any problem on my computer. Has anyone experienced the same problem? I'm wondering why it is not working.
You can also check http://qlimp.com/signup/, where you can see the problem of the data not getting stored.
There are many problems which I can't ask in one question (not Stack Overflow culture), so I'm asking this one.
When I upload an image, the 'Network' tab in the Chrome inspector shows 502 Bad Gateway.
Here is my Django views.py: https://gist.github.com/2778242
jQuery Code for the ajax image upload:
$('#id_tmpbg').live('change', function() {
  $("#ajax-loader").show();
  $("#uploadform").ajaxForm({success: showResponse}).submit();
});

function showResponse(responseText, statusText, xhr, $form) {
  $.backstretch(responseText);
  $("#ajax-loader").hide();
}
I have also checked that it actually enters the request.is_ajax() branch but not the form.is_valid() branch in my views. Why is that? I'm uploading the right format.
Could anyone identify the mistake I've made? I also need an answer on why the code works on the development server but not on the production server (this would help me solve the rest of the problems).
Development server: Ubuntu 11.10/Python 2.7/Django 1.3.1
Production server: Ubuntu 12.04/Python 2.7/Django 1.3.1
UPDATE
There is a problem with everyone signing in with the same username/password, so please register your own account. Registration shows [Errno 111] Connection refused, but it doesn't matter; you can log in afterwards.
UPDATE-2
Actually the problem is with form.is_valid(), so I removed it and checked, but now I'm getting this error:
Exception Type: ValueError
Exception Value: The BackgroundModel could not be created because the data didn't validate.
Exception Location: /home/nirmal/project/local/lib/python2.7/site-packages/django/forms/models.py in save_instance, line 73
I'm always uploading the right image format, and I don't know why it is not validating.
UPDATE-3
I'm getting 304 Not Modified for all the static files on http://qlimp.com/cover. Could this be a reason for it not working?
It's Nginx that gives the 502 error when gunicorn is not available.
gunicorn_django --bind=127.0.0.1:8001 only launches one synchronous worker process, and it may be busy responding to other requests.
You may want to spawn more workers (-w 2). If you need to handle big data transfers, consider using an asynchronous worker flavor (e.g. -k gevent; you need gevent to be installed).
More info on choosing the worker class and the number of workers is in the Gunicorn FAQ.
I've found the problem that had me stuck for the past 3 days. I forgot to run sudo apt-get install libjpeg62 libjpeg62-dev zlib1g-dev before installing PIL, which is why the image was not validating.
The next issue was that I had given a relative path for MEDIA_ROOT in my settings.py file, which led to 404 NOT FOUND responses; I changed it to an absolute path.
So these were simple mistakes that led to some mysterious errors. Thanks to everyone for the help.
