I manage a site written in Ruby on Rails, JavaScript, and C that's been up for a few years and allows users to upload data files that we process. I use the s3_direct_upload gem to bypass our web server and process the file asynchronously, providing progress updates to the client.
Recently a user has become unable to upload files, though he had been able to in the past. No other users are experiencing the issue. He is running Chrome on Windows, and thankfully was willing to get a screenshot of the JavaScript console when the error occurs. It is below.
The JavaScript code in question is as follows. The failure handler fires on s3_upload_failed; content.filename has the correct filename, and content.error_thrown is blank:
$(document).ready(function() {
  var upload_names = [];

  $('#new_ride_menu, #new_ride_button').S3Uploader(
    {
      remove_completed_progress_bar: false,
      progress_bar_target: $('.ride-upload-progress'),
      before_add: validateFile,
      expiration: null
    }
  );

  $('#new_ride_menu, #new_ride_button').bind('s3_uploads_start', function(e, content) {
    $('#ride-upload-modal').appendTo('body').modal('show');
  });

  // one upload is complete
  $('#new_ride_menu, #new_ride_button').bind('s3_upload_complete', function(e, content) {
    upload_names.push(content.file);
  });

  // all uploads are complete
  $('#new_ride_menu, #new_ride_button').bind('s3_uploads_complete', function(e, content) {
    // HANDLE SERVER COMMS AND PROCESSING
  });

  $('#new_ride_menu, #new_ride_button').bind('s3_upload_failed', function(e, content) {
    $.post('/logs', {level: 'warn', message: 'Error uploading file: ' + content.filename});
    return alert(content.filename + ' failed to upload');
  });
});
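In hindsight, a more verbose failure handler might have made remote diagnosis easier. This is only a sketch that reuses the /logs endpoint above; the extra error_thrown and userAgent fields are additions, not part of the original handler:
// Sketch: a more verbose failure handler reusing the existing /logs endpoint.
// The extra error_thrown and userAgent fields are additions, not original code.
$('#new_ride_menu, #new_ride_button').bind('s3_upload_failed', function(e, content) {
  $.post('/logs', {
    level: 'warn',
    message: 'Error uploading file: ' + content.filename +
      ' | error_thrown: ' + content.error_thrown +
      ' | userAgent: ' + navigator.userAgent
  });
  return alert(content.filename + ' failed to upload');
});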
I'm attempting to replicate this error with no success. I've tried:
Disabling JavaScript
Disabling cookies
Blocking third-party cookies
Turning off unsandboxed plugin access
Turning off clipboard access
Does this set of errors look familiar to anyone? Is this a client side configuration issue? Any direction would be greatly appreciated.
After being unable to diagnose the issue even with more information, I got on the phone with the customer and asked them to bring up the site in IE. That worked, which proved something was amiss with Chrome.
It turns out they were using two Chrome instances, one for work and one for home. The home version played nicely with my site; however, the work version had some very restrictive options. In the end, the solution was to make sure the customer was using the home version of Chrome.
I have a PDF that is rendered from a server-side HTML file in my Meteor application using webshot. This PDF is displayed in the browser and also attached to an email sent to various users. Since migrating over to Meteor's Galaxy platform, I am unable to render the images in the HTML file, and the email attachment doesn't work correctly. My setup worked perfectly on Digital Ocean with Ubuntu 14.04, and also on my localhost; it still works perfectly in both of those environments, but doesn't work on Galaxy. (It's worth noting that I don't know much about programming email attachments, but I used Meteor's email package, which is based on mailcomposer.)
The PDF renders, so I know phantomjs is working, and webshot is taking a screenshot and outputting it as a PDF, so I know webshot is working. However, the images won't render, and when the PDF is attached to an email, the file is corrupted or doesn't send correctly. I have tried logging the HTML to ensure that the URLs to the image files are all correct, and they are when deployed to Galaxy, but they just won't render with phantomjs/webshot. I am using the meteorhacks:ssr package to render the HTML file on the server before reading it with phantomjs.
I've tried contacting Galaxy support about this, but haven't had much assistance. Has anyone else experienced this? I'm struggling to even pinpoint the package causing the issue to submit a pull request if I need to. Thanks!
So I figured out my problem, which I'll share with others, but I'll also share a few pointers on debugging webshot in an app running on Galaxy's servers.
First, webshot doesn't pipe errors to Galaxy's logs by default, since it runs in a spawned node.js process, so you need to change this line in your 'project_path/.meteor/local/isopacks/npm-container/npm/node_modules/webshot/lib/webshot.js' file (note: I'm still on Meteor 1.2, so this is wherever your npm webshot package is located):
// webshot.js line 201 - add , {stdio: "inherit"} to spawn method
var phantomProc = crossSpawn.spawn(options.phantomPath, phantomArgs, {stdio: "inherit"});
This passes all logs from the spawned process to your console. In addition to this, comment out the following code in the same file:
// comment out lines 234-239
// phantomProc.stderr.on('data', function(data) {
// if (options.errorIfJSException) {
// calledCallback = true;
// clearTimeout(timeoutID);
// cb(new Error('' + data))
// }
// });
Doing these two modifications will print logs from the phantomjs process to your Galaxy container. In addition, you will want to modify the webshot.phantom.js script located in the same directory to print to the console for debugging. Modify that script however you see fit to find your issue; the phantomjs docs recommend using phantom callbacks to debug errors from the web page being loaded, such as:
page.onResourceError = function(resourceError) {
  console.log('Unable to load resource (#' + resourceError.id + ' URL: ' + resourceError.url + ')');
  console.log('Error code: ' + resourceError.errorCode + '. Description: ' + resourceError.errorString);
};
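Another standard PhantomJS callback that can go in the same webshot.phantom.js script (this one is an addition, not something webshot ships with) surfaces JavaScript exceptions thrown inside the page being rendered:
// Additional PhantomJS callback (not part of the original webshot script):
// logs uncaught JS exceptions from the rendered page, with a stack trace if available.
page.onError = function(msg, trace) {
  console.log('Page error: ' + msg);
  (trace || []).forEach(function(t) {
    console.log('  at ' + t.file + ':' + t.line + (t.function ? ' (in ' + t.function + ')' : ''));
  });
};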
For my particular issue, I was getting an SSL handshake error:
Error code: 6. Description: SSL handshake failed
To fix this, I had to add the following code to my webshot options object:
phantomConfig: {
"ignore-ssl-errors": "true",
"ssl-protocol": "any"
},
This fixed the issue with loading the static images in my PDF over https (note: this worked correctly on Digital Ocean without the code above; I'm not sure what is different in the SSL configuration on Galaxy's containers).
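For context, a rough sketch of where that phantomConfig object sits in a webshot call; the require call, URL, and output path here are placeholders for whatever your app actually uses:
// Rough sketch: passing phantomConfig through webshot's options object.
// The require call, URL, and output path are placeholders.
var webshot = Meteor.npmRequire('webshot'); // or however webshot is loaded in your setup

var options = {
  phantomConfig: {
    "ignore-ssl-errors": "true",
    "ssl-protocol": "any"
  }
};

webshot('https://example.com/report', '/tmp/report.pdf', options, function (err) {
  if (err) {
    console.log('webshot failed: ' + err);
  }
});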
In addition, I was having issues with attaching the PDF correctly to an email my app sent. This turned out to be a problem with the URL generated by Meteor.absoluteUrl() in the mailcomposer attachments filePath field. I don't know why Meteor.absoluteUrl() did not render my app's URL correctly in an email attachment on Galaxy; it works in other places in my app, and it worked on Digital Ocean, but it didn't work here. When I switched the attachment over to a hard-coded URL, it worked fine, so that might be something worth checking if you are having issues.
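As an illustration only, a sketch of the hard-coded-URL workaround; the addresses, subject, html, and file name are placeholders, not the values from my app:
// Sketch of the hard-coded-URL attachment workaround (placeholder values throughout).
Email.send({
  to: 'user@example.com',
  from: 'reports@example.com',
  subject: 'Your PDF report',
  html: emailHtml, // placeholder for the rendered HTML
  attachments: [{
    fileName: 'report.pdf',
    // was: filePath: Meteor.absoluteUrl('pdfs/report.pdf')
    filePath: 'https://myapp.example.com/pdfs/report.pdf'
  }]
});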
I know quite a few Meteor developers have used webshot to create PDFs in their apps, and I'm sure some will be migrating to Galaxy in the future, so hopefully this is helpful to others who decide to switch. Good luck!
I am using the following dirty workaround code to simulate an ajax file upload. This works fine, but when I set maxAllowedContentLength in web.config, my iframe loads 'normally' but with an error message as content:
dataAccess.submitAjaxPostFileRequest = function (completeFunction) {
  $("#userProfileForm").get(0).setAttribute("action", $.acme.resource.links.editProfilePictureUrl);
  var hasUploaded = false;

  function uploadImageComplete() {
    if (hasUploaded === true) {
      return;
    }
    var responseObject = JSON.parse($("#upload_iframe").contents().find("pre")[0].innerText);
    completeFunction(responseObject);
    hasUploaded = true;
  }

  $("#upload_iframe").load(function() {
    uploadImageComplete();
  });

  $("#userProfileForm")[0].submit();
};
In my Chrome console, I can see
POST http://acmeHost:57810/Profile/UploadProfilePicture/ 404 (Not Found)
I would much prefer to detect this error response in my code over the risky business of parsing the iframe content and guessing there was an error. For 'closer-to-home' errors, I have code that sends a JSON response, but for maxAllowedContentLength, IIS sends a 404.13 long before my code is ever hit.
There is not much you can do if you have no control over the error. If the submission target is in the same domain as the submitter and you are not limited by the SOP, you can try to access the content of the iframe and figure out whether it is showing a success message or an error message. However, this is a very bad strategy.
Why an IFRAME? It is a pain.
If you want to upload files without the page flickering or transitioning, you can use the JS File API: File API File Upload - Read XMLHttpRequest in ASP.NET MVC
The support is very good: http://caniuse.com/#feat=filereader
For old browsers that do not support the File API, just provide a normal form POST. Not pretty... but OK for old browsers.
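A minimal sketch of that approach, assuming the upload endpoint from the question and a placeholder file input ID; the point is that the HTTP status of an IIS rejection is available directly in code:
// Minimal File API + XMLHttpRequest upload sketch (placeholder input ID,
// endpoint taken from the question). The HTTP status is available directly,
// so IIS rejections such as the maxAllowedContentLength 404.13 can be detected.
document.getElementById('profilePictureInput').addEventListener('change', function () {
  var file = this.files[0];
  if (!file) return;

  var formData = new FormData();
  formData.append('profilePicture', file);

  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/Profile/UploadProfilePicture/');
  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      var responseObject = JSON.parse(xhr.responseText);
      // handle success, e.g. completeFunction(responseObject);
    } else {
      // IIS rejections (e.g. the 404.13 from maxAllowedContentLength) land here
      console.log('Upload failed with status ' + xhr.status);
    }
  };
  xhr.onerror = function () {
    console.log('Network error during upload');
  };
  xhr.send(formData);
});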
UPDATE
Since there is no chance for you to use that API... Years ago I was in the same situation and the outcome was not straightforward. Basically, I created an upload ticket system where, to upload a file, you had to do the following (a rough sketch of the polling side follows the list):
create a ticket via POST /newupload/, which would return a GUID.
create an iframe to /newupload/dialog/<guid> that would show the file submission form pointing to POST /newupload/<guid>/file
serve the upload status at GET /newupload/<guid>/status
poll the status of the upload from the submitter (the iframe's outer container) every 500ms.
when the upload starts, hide the iframe or show something fancy like an endless progress bar.
when the upload operation has completed or faulted, remove the iframe and let the user know how it went.
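A rough sketch of that polling side, as seen from the outer page, using the endpoints named above; the status values and the jQuery usage are assumptions:
// Polling sketch for the ticket-based iframe upload described above.
// Endpoint paths follow the list; the status values are assumptions.
function startTicketUpload() {
  $.post('/newupload/', function (guid) {
    var $frame = $('<iframe>', { src: '/newupload/dialog/' + guid }).appendTo('body');

    var poll = setInterval(function () {
      $.get('/newupload/' + guid + '/status', function (status) {
        if (status === 'completed' || status === 'faulted') {
          clearInterval(poll);
          $frame.remove();
          // let the user know how it went
        }
      });
    }, 500);
  });
}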
When I moved to the FileReader API... it was a good day.
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/dropbox.js/0.9.0/dropbox.min.js"></script>
Hi, I am using the above file to access the Dropbox functions; it contains all of them. I included it in my application and
used the code below to upload a file to Dropbox with the writeFile function.
This works fine in Chrome and Firefox, but in IE I get an error.
The error is: "Microsoft JScript runtime error: Access is denied".
Can anyone please help me resolve this IE error and tell me why it occurs only in IE?
var UploadToDropbox = new Dropbox.Client({ key: consumerKey, secret: consumerSecret, token: accessToken, tokenSecret: accessTokenSecret, dropbox: true });

UploadToDropbox.authenticate(function (error, UploadToDropbox) {
  if (error) {
    alert('Something wrong here.');
  }
  else {
    UploadToDropbox.writeFile("HelloWorld.txt", "Hello, world!\n", function (error, stat) {
      if (error) {
        return showError(error); // Something went wrong.
      }
      alert("File saved to your dropbox successfully. ");
    });
  }
});
Hi, thank you for your reply to my question. I tried that, but the same error still occurs.
OK, what should I do now to resolve this error?
I also tried this:
<script type="text/javascript">
// Hack to make dropbox.js works in IE8, IE9.
if (!window.btoa) window.btoa = base64.encode;
if (!window.atob) window.atob = base64.decode;
</script>
but I get the same error.
Open IE -> Tools -> Internet Options.
In the Security tab, select the Internet zone, click the Custom Level button, and under Miscellaneous set "Access data sources across domains" to Enable.
It seems that IE does not play well with JavaScript events that trigger a DOM control, so try to remove such event actions if they are present.
This usually means that you are attempting to update a property or access content that is not permitted under your current security settings.
Sometimes it also happens due to the use of a deprecated method.
The hack in your question is not necessary. dropbox.js packages its own implementation of atob / btoa, which is used on IE <= 9. You can try it out by accessing Dropbox.Util.atob and Dropbox.Util.btoa in the IE Developer Tools console.
base64 code: https://github.com/dropbox/dropbox-js/blob/master/src/base64.coffee
First, please run the checkbox.js sample code to check your IE settings. If the sample works (you can log in, add tasks, mark them as done and remove them) then your IE settings are OK, and the problem is elsewhere.
checkbox.js: https://dl-web.dropbox.com/spa/pjlfdak1tmznswp/checkbox.js/public/index.html
Second, make sure that you're serving your HTML page using https://. The Dropbox API server uses https, and IE <= 9 doesn't allow cross-domain requests from http pages to https servers.
Third, you shouldn't need the token and tokenSecret parameters in the authorize call.
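For reference, a sketch of that simplified setup based on the code in the question, with token and tokenSecret dropped:
// Sketch based on the question's code, with token/tokenSecret removed;
// dropbox.js manages the OAuth tokens itself once authenticate() succeeds.
var client = new Dropbox.Client({ key: consumerKey, secret: consumerSecret });

client.authenticate(function (error, client) {
  if (error) {
    return showError(error);
  }
  client.writeFile("HelloWorld.txt", "Hello, world!\n", function (error, stat) {
    if (error) {
      return showError(error); // Something went wrong.
    }
    alert("File saved to your Dropbox successfully.");
  });
});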
If you still get the JScript runtime error, can you please point to the line of code that causes it? Also, consider opening an issue on the dropbox.js GitHub page. This will get faster responses.
I have the following code, which is supposed to be a simple example of using the Google API JavaScript client, and simply displays the long-form URL for a hard-coded shortened URL:
<script>
  function appendResults(text) {
    var results = document.getElementById('results');
    results.appendChild(document.createElement('P'));
    results.appendChild(document.createTextNode(text));
  }

  function makeRequest() {
    console.log('Inside makeRequest');
    var request = gapi.client.urlshortener.url.get({
      'shortUrl': 'http://goo.gl/fbsS'
    });
    request.execute(function(response) {
      appendResults(response.longUrl);
    });
  }

  function load() {
    gapi.client.setApiKey('API_KEY');
    console.log('After attempting to set API key');
    gapi.client.load('urlshortener', 'v1', makeRequest);
    console.log('After attempting to load urlshortener');
  }
</script>
<script src="https://apis.google.com/js/client.js?onload=load"></script>
except with an actual API key instead of the text 'API_KEY'.
The console output is simply:
After attempting to set API key
After attempting to load urlshortener
but I never see 'Inside makeRequest', which is logged inside the makeRequest function, the callback for the call to gapi.client.load. This leads me to believe that the load is not working (or failing to complete).
Can anyone shed some light on why this might be so and how to fix it?
Thanks in advance.
After spending hours googling the problem, I found out that it occurred because I was running this file on the local machine and not on a server.
When you run the above code in Chrome, you get this error in the developer console: "Unable to post message to file://. Recipient has origin null."
For some reason the JavaScript loads only when running on an actual server or something like XAMPP or WAMP.
If any expert can shed some light on why this happens, it would be really great to learn.
Hope this helps other noobies like me out there :D
Short answer (http://code.google.com/p/google-api-javascript-client/issues/detail?id=46):
The JS Client does not currently support making requests from a file:// origin.
Long answer (http://en.wikipedia.org/wiki/Same_origin_policy):
The behavior of same-origin checks and related mechanisms is not well-defined
in a number of corner cases, such as for protocols that do not have a clearly
defined host name or port associated with their URLs (file:, data:, etc.).
This historically caused a fair number of security problems, such as the
generally undesirable ability of any locally stored HTML file to access all
other files on the disk, or communicate with any site on the Internet.
I'm testing my project on the production server, where I'm getting several errors in various features of my web application that works perfectly on my computer.
Please go to http://qlimp.com and log in using this username/password: nirmal/karurkarur. Then go to http://qlimp.com/cover. You'll find a palette where you can upload images and do something similar to flavors.me. I'm having several problems here (images, text, and other information are not getting stored in the database).
I think there is no problem with the setup. The problem is that it is not even entering the Django views properly, though it works without any problem on my computer. Has anyone experienced the same problem? I'm wondering why it is not working.
You can also check http://qlimp.com/signup/, where you'll find the same problem of data not getting stored.
There are many problems that I can't ask about in one question (not Stack Overflow culture), so I'm asking this one.
When I upload an image, the Chrome inspector's Network tab shows a 502 Bad Gateway.
Here is my Django views.py: https://gist.github.com/2778242
jQuery Code for the ajax image upload:
$('#id_tmpbg').live('change', function() {
  $("#ajax-loader").show();
  $("#uploadform").ajaxForm({success: showResponse}).submit();
});

function showResponse(responseText, statusText, xhr, $form) {
  $.backstretch(responseText);
  $("#ajax-loader").hide();
}
I also checked that it actually enters the request.is_ajax() branch but not form.is_valid() in my views. Why is that? I'm uploading the right format.
Could anyone identify the mistake I've made? I also need to understand why the code is not working on the production server when it works on the development server (this would help me solve the rest of the problems).
Development server: Ubuntu 11.10/Python 2.7/Django 1.3.1
Production server: Ubuntu 12.04/Python 2.7/Django 1.3.1
UPDATE
There is a problem with everyone signing in with the same username/password, so please register instead. Registration shows [Errno 111] Connection refused, but that doesn't matter; you can log in afterwards.
UPDATE-2
Actually the problem is with form.is_valid(), so I removed it and checked, but now I'm getting this error:
Exception Type: ValueError
Exception Value: The BackgroundModel could not be created because the data didn't validate.
Exception Location: /home/nirmal/project/local/lib/python2.7/site-packages/django/forms/models.py in save_instance, line 73
I'm always uploading the right image format and I don't know why it is not validating.
UPDATE-3
I'm getting 304 Not Modified for all the static files at http://qlimp.com/cover. Could this be part of the problem?
It's Nginx that gives the 502 error when gunicorn is not available.
gunicorn_django --bind=127.0.0.1:8001 only launches one synchronous worker process, and it may be busy responding to other requests.
You may want to spawn more workers (-w2). If you need to handle big data transfers, consider using an asynchronous worker flavor (e.g. -k gevent; gevent needs to be installed).
More info on choosing the worker class and the number of workers is in the Gunicorn FAQ.
I've found the problem that had been stumping me for the past 3 days. I had forgotten to run sudo apt-get install libjpeg62 libjpeg62-dev zlib1g-dev before installing PIL, which is why the image was not getting validated.
The next issue was that I had given a relative path for MEDIA_ROOT in my settings.py file, which led to a 404 NOT FOUND; I changed it to an absolute path.
So these were simple mistakes that led to some mysterious errors. Thanks to everyone for the help.