How to fix Imgur API status 400 and 417 errors - javascript

While uploading pictures using the Imgur API,
some pictures fail to upload and
the API returns status 400 or 417 errors.
{
  status: 400,
  success: false,
  data: {
    error: "We don't support that file type!",
    request: '/3/upload'
  }
}
{
  status: 417,
  success: false,
  data: {
    error: 'Internal expectation failed',
    request: '/3/upload',
    method: 'POST'
  }
}
Restarting the console fixes the error temporarily, but I have to restart it every time I upload a picture. How can I prevent this from happening?

The 417 error indicates that the Imgur CDN was expecting a supported file type such as .png, .mp4, or .gif. You may view the supported file types here.
The 400 indicates an improper (bad) request: when making a POST request to the API you must construct it correctly; you may refer to the proper method here.
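For illustration, here is a minimal sketch of a well-formed upload request using the Fetch API and FormData (available in browsers and Node 18+). The endpoint path and Client-ID header scheme follow Imgur's documented anonymous-upload flow; CLIENT_ID and the image blob are placeholders you would supply:

```javascript
// Sketch: build a well-formed POST to Imgur's /3/upload endpoint.
// CLIENT_ID is a placeholder for your registered application's ID.
function buildImgurUploadRequest(clientId, imageBlob) {
  const form = new FormData();
  // Let FormData generate the multipart boundary and encoding;
  // hand-rolling the body is where most 400/417 errors come from.
  form.append('image', imageBlob, 'upload.png');
  return {
    url: 'https://api.imgur.com/3/upload',
    options: {
      method: 'POST',
      headers: { Authorization: `Client-ID ${clientId}` },
      body: form,
    },
  };
}

// Usage (network call not shown):
// const { url, options } = buildImgurUploadRequest(CLIENT_ID, blob);
// fetch(url, options).then(r => r.json()).then(console.log);
```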

Note: Your "Account" and your "Application" are distinct entities.
If you try to upload to an album on your account, thinking your application has access rights to it because they are tied to the same email, you'll probably get the 417 error.
Also, it seems that Imgur validates the .PNG file before validating access rights.
I have concluded that, as you troubleshoot, this is a rough gauge of "how close to success" you are:
1: VERY CLOSE :
"Internal Expectation Failed"
Your .PNG payload within the form-data is probably correct.
2: COLDER :
"We Don't Support That File Type"
You probably corrupted your .PNG binary file when attempting to concat it into the "form-data" payload.
3: COLDEST :
"bad request"
Your request is malformed at a fundamental level.
If you are rolling your own multipart/form-data like I did, one of the posters here has a good low-level example that does NOT use 3rd-party libraries:
NodeJS Request how to send multipart/form-data POST request
Your payload is constructed something like this:
payload = Buffer.concat([
  Buffer.from(formdata_string_top, "utf8"),
  Buffer.from(png_binary_file, "binary"),
  Buffer.from(final_formdata_boundary, "utf8")
]);
You might be tempted to do the following because it is readable in your logs, but it WILL CORRUPT YOUR BINARY FILE:
payload = Buffer.concat([
  Buffer.from(formdata_string_top, "utf8"),
  Buffer.from(png_binary_file, "binary"),
  Buffer.from(final_formdata_boundary, "utf8")
]).toString("utf8");
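Put together, a correct assembly helper might look like this; it is a sketch, with the boundary, field name, and filename as illustrative placeholders:

```javascript
// Sketch: assemble a multipart/form-data payload by hand, keeping the
// file's bytes raw. All names here are illustrative placeholders.
function buildMultipartPayload(boundary, fieldName, filename, fileBuffer) {
  const top = Buffer.from(
    `--${boundary}\r\n` +
    `Content-Disposition: form-data; name="${fieldName}"; filename="${filename}"\r\n` +
    `Content-Type: application/octet-stream\r\n\r\n`,
    'utf8'
  );
  const bottom = Buffer.from(`\r\n--${boundary}--\r\n`, 'utf8');
  // Return the Buffer as-is: calling .toString() here would re-encode
  // the binary file bytes as UTF-8 and corrupt them.
  return Buffer.concat([top, fileBuffer, bottom]);
}
```

Send the result as the raw request body, with the header `Content-Type: multipart/form-data; boundary=<your boundary>`.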

Related

Angular cannot get file download from express using res.download

In my application I create a file on the backend, and I then wish to deliver it to the user via a browser download. I have tried numerous approaches without success.
Here is the express backend:
app.get('/download', (req, res) => {
  res.download('filename.txt', function (err) {
    console.log(err);
  });
});
The console call isn't returning, so presumably no error. Here is what I am doing in the front-end, following advice here and here:
window.open('localhost:3000/download');
The result is that I get a blank window popping up, but no download. I have also tried this:
const filePath = 'localhost:3000/download/';
const link = document.createElement('a');
link.href = filePath;
link.download = filePath.substr(filePath.lastIndexOf('/') + 1);
link.click();
But in this case, nothing happens.
What am I doing wrong? I am at a loss as to how to debug this any further.
Update 1
Thanks @jal_a, I've made progress. I realise now that if I manually create a window and enter the url (as suggested), the download works. However, when the window is launched from the application using window.open(url), where url is the same, the window opens but the download doesn't initiate. If I then go to the created window, click in the url bar and press return ... voila! It works! How can I make the download initiate from the application-launched window?
Update 2
Thanks @IceMetalPunk, I tried that and I'm getting the following error in the console. The file I'm trying to download is GPS data in GPX format, which I can see in the response, but the client seems to be expecting JSON. Do I need to do any pre-processing to send a file?
HttpErrorResponse {headers: HttpHeaders, status: 200, statusText: "OK",
url: "http://localhost:3000/download/", ok: false, …}
error:
error: SyntaxError: Unexpected token < in JSON at position 0 at JSON.parse
...
text: "<?xml version="1.0" encoding="UTF-8"?>
↵<gpx version="1.1" xmlns="http://www.topografix.com/GPX/1/0">
↵ <rte>
↵ <name>undefined</name>
↵ <rtept lat="50.70373" lon="-3.07241" />
↵ <rtept lat="50.70348" lon="-3.07237" />
↵ </rte>
↵</gpx>"
__proto__: Object
headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit:
ƒ}
message: "Http failure during parsing for http://localhost:3000/download/"
name: "HttpErrorResponse"
ok: false
status: 200
statusText: "OK"
url: "http://localhost:3000/download/"
__proto__: HttpResponseBase
The issue was that my download link did not include the http:// element of the url, as this answer describes.
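The fix can be sketched as a small guard that makes the URL absolute before opening it; this is a minimal illustration, not the poster's exact code:

```javascript
// Sketch: without a scheme, 'localhost:3000/download' is parsed as a
// relative URL (or 'localhost:' as an unknown scheme), so the popup
// shows a blank page instead of triggering the download.
function absoluteUrl(url) {
  return /^https?:\/\//.test(url) ? url : 'http://' + url;
}

// window.open(absoluteUrl('localhost:3000/download'));
```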

How to programmatically get network errors in AngularJS

I have the following code in say abcd.js:
$http({
  url: 'some url',
  method: 'POST',
  data: { /* ... */ },
  headers: {}
}).then(function (response) {
  // .............
}, function error(response) {
  // ..............
});
In the case of an error, response.status is -1 and
response.statusText is "". Basically no useful info. However, in the Chrome debugger console output I see:
POST: Some URL/analysis.php net::ERR_NETWORK_IO_SUSPENDED
The Chrome debugger extracts the real error from the analysis.php network packet and displays it.
Q1: Why is it that status and statusText don't have useful info?
Q2: Is it possible to get the network error programmatically? In the above failure example I would get it as ERR_NETWORK_IO_SUSPENDED.
Q3: Programmatically, is there any other way to get the network error when $http() fails?
Q1: Why is it that status and statusText don't have useful info?
Because the server cannot be reached at all, you won't get a useful status code from it.
Q2: Is it possible to get the network error programmatically?
You can set a timeout on $http to catch network problems manually.
Q3: Programmatically, is there any other way to get the network error when $http() fails?
Take advantage of the error callback of $http (I'm not sure about the details).
After much Googling I found out that whenever I get status = -1, the server was not reached at all for some unknown reason. All the unknown reasons are bundled under the one error string ERR_NETWORK_IO_SUSPENDED. Since I do get -1, I have simply hard-coded the error string as ERR_NETWORK_IO_SUSPENDED. For other error codes such as 400 and 404, status does get the number and statusText has the correct string, so this confirmed the $http() call is fine. With that said, we can close this case. Thanks to all who helped arrive at this conclusion.
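A sketch of an error callback that applies this conclusion (the hard-coded label follows the poster's approach; browsers do not expose the low-level network error string to scripts):

```javascript
// Sketch: distinguish a network-level failure (status <= 0, server
// never reached) from a real HTTP error status in an AngularJS
// $http error callback.
function describeHttpError(response) {
  if (response.status <= 0) {
    // The browser hides the underlying cause; hard-code a label as
    // the poster did. ERR_NETWORK_IO_SUSPENDED is only one possibility.
    return 'Network error (e.g. ERR_NETWORK_IO_SUSPENDED): server not reached';
  }
  return 'HTTP ' + response.status + ' ' + (response.statusText || '');
}

// $http(config).then(onSuccess, function (response) {
//   console.log(describeHttpError(response));
// });
```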

cordova-plugin-file-transfer: How do you upload a file to S3 using a signed URL?

I am able to upload to S3 using a file picker and regular XMLHttpRequest (which I was using to test the S3 setup), but cannot figure out how to do it successfully using the cordova file transfer plugin.
I believe it is either to do with the plugin not constructing the correct signable request, or not liking the local file uri given. I have tried playing with every single parameter from headers to uri types, but the docs aren't much help, and the plugin source is bolognese.
The string the request signature needs to match is like:
PUT
1391784394
x-amz-acl:public-read
/the-app/317fdf654f9e3299f238d97d39f10fb1
Any ideas, or possibly a working code example?
A bit late, but I just spent a couple of days struggling with this, so in case anybody else is having problems, this is how I managed to upload an image using the JavaScript version of the AWS SDK to create the presigned URL.
The key to solving the problem is in the StringToSign element of the XML SignatureDoesNotMatch error that comes back from Amazon. In my case it looked something like this:
<StringToSign>
PUT\n\nmultipart/form-data; boundary=+++++org.apache.cordova.formBoundary\n1481366396\n/bucketName/fileName.jpg
</StringToSign>
When you use the aws-sdk to generate a presigned URL for upload to S3, internally it builds a string based on various elements of the request you want to make, then creates an HMAC-SHA1 digest of it using your AWS secret. This digest is the signature that gets appended to the URL as a parameter, and it is what doesn't match when you get the SignatureDoesNotMatch error.
So you've created your presigned URL, and passed it to cordova-plugin-file-transfer to make your HTTP request to upload a file. When that request hits Amazon's server, the server will itself build a string based on the request headers etc, hash it and compare that hash to the signature on the URL. If the hashes don't match then it returns the dreaded...
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The contents of the StringToSign element I mentioned above is the string that the server builds and hashes to compare against the signature on the presigned URL. So to avoid getting the error, you need to make sure that the string built by the aws-sdk is the same as the one built by the server.
After some digging about, I eventually found the code responsible for creating the string to hash in the aws-sdk. It is located (as of version 2.7.12) in:
node_modules/aws-sdk/lib/signers/s3.js
Down the bottom, at line 168, there is a sign method:
sign: function sign(secret, string) {
  return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}
If you put a console.log in there, string is what you're after. Once you make the string that gets passed into this method the same as the contents of StringToSign in the error message coming back from Amazon, the heavens will open and your files will flow effortlessly into your bucket.
On my server running node.js, I originally created my presigned URL like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: 'https://s3-eu-west-1.amazonaws.com',
  accessKeyId: "ACCESS_KEY",
  secretAccessKey: "SECRET_KEY"
});
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60
};
var signedUrl = s3.getSignedUrl('putObject', params);
// return signedUrl
This produced a signing string like this, similar to the OP's:
PUT
1481366396
/bucketName/fileName.jpg
On the client side, I used this presigned URL with cordova-plugin-file-transfer like so (I'm using Ionic 2 so the plugin is wrapped in their native wrapper):
let success = (result: any): void => {
  console.log("upload success");
};

let failed = (err: any): void => {
  let code = err.code;
  alert("upload error - " + code);
};

let ft = new Transfer();
var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
};

ft.upload(localDataURI, presignedUrlFromServer, options, false)
  .then((result: any) => {
    success(result);
  }).catch((error: any) => {
    failed(error);
  });
Running the code produced the SignatureDoesNotMatch error, and the string in the <StringToSign> element looked like this:
PUT
multipart/form-data; boundary=+++++org.apache.cordova.formBoundary
1481366396
/bucketName/fileName.jpg
So we can see that cordova-plugin-file-transfer has added its own Content-Type header, which has caused a discrepancy between the signing strings. In the docs relating to the options object that gets passed into the upload method, it says:
headers: A map of header name/header values. Use an array to specify more than one value. On iOS, FireOS, and Android, if a header named Content-Type is present, multipart form data will NOT be used. (Object)
So basically, if no Content-Type header is set, the plugin defaults to multipart form data.
OK, so now we know the cause of the problem, it's a pretty simple fix. On the server side I added a ContentType to the params object passed to the S3 getSignedUrl method:
var params = {
  Bucket: 'bucketName',
  Key: imageName,
  Expires: 60,
  ContentType: 'image/jpeg' // <---- content type added here
};
and on the client added a headers object to the options passed to cordova-plugin-file-transfer's upload method:
var options = {
  fileName: filename,
  mimeType: 'image/jpeg',
  chunkedMode: false,
  httpMethod: 'PUT',
  encodeURI: false,
  headers: { // <----- headers object added here
    'Content-Type': 'image/jpeg',
  }
};
and hey presto! The uploads now work as expected.
I ran into similar issues with this plugin.
The only working way I found to upload a file with a signature is the method of Christophe Coenraets : http://coenraets.org/blog/2013/09/how-to-upload-pictures-from-a-phonegap-app-to-amazon-s3/
With this method you will be able to upload your files using the cordova-plugin-file-transfer
First, I wanted to use the aws-sdk on my server to sign with getSignedUrl()
It returns the signed link and you only have to upload to it.
But using the plugin, it always ends with 403: signatures don't match.
It may be related to the content-length parameter, but so far I haven't found a working solution with aws-sdk and the plugin.

google app engine PHP - 500 error on upload

I have a PHP application running and I'm uploading a PDF for processing. I have not overridden any PHP ini values, so I'm expecting the post to be able to handle 32MB of data and a timeout of 60 seconds.
When I upload a large document (7.7MB in this case), the app fails in 2 very different ways.
1. The back end times out and returns successfully, having apparently been passed duff data.
2. The back end does not time out (returning in under a minute) but has an internal server error.
The timeout seems to manifest as the back end PHP page getting no data, i.e. whatever is doing the transport times out rather than my page timing out. I can handle this scenario because of the duff data passed in and pass back a useful error message. I don't seem to be able to reproduce this on my local development machine.
The second issue is perplexing as it happens almost immediately on my dev machine and also if the page response is under 1 minute on GAE. I can upload a document of 4.1MB and all is good. The 7.7MB doc causes this every time. The headers look fine and the form data has all the data it needs, although I haven't tried to decode the form data.
Is there another PHP setting causing this? Is there a way to alter it? What could possibly be going on? Could this be a JavaScript thing, as I am using the JavaScript FileReader in an HTML5 dropzone? Here are the appropriate portions of the handler code:
function loadSpecdoc(file) {
  var acceptedTypes = {
    'application/pdf': true
  };
  if (acceptedTypes[file.type] === true) {
    var reader = new FileReader();
    reader.onload = function (event) {
      $.ajax({
        url: "my_handler.php",
        type: 'POST',
        dataType: "json",
        data: {
          spec_id: $('#list_spec_items_spec_id').val(),
          file_content: event.target.result
        },
        success: function (response) {
          // Handle Success
        },
        error: function (XMLHttpRequest, textStatus, exception) {
          // Handle Failure
        },
        async: true
      });
    };
    reader.readAsDataURL(file);
  } else {
    alert("Unsupported file");
    console.log(file);
  }
}
I have heard of the Blob store, but I'm on a bit of a deadline and can't find any documentation on how to make it work from JavaScript, and I can't see how to get the file to my PHP application to use it.
Any help much appreciated.
EDIT: I have just checked in the Chrome network console and there are 2 sizes in the "Size/Content" column, 79B and 0B. I'm guessing these are the upload (79B) and the response (0B). This being the case, it seems to be a JavaScript issue?
EDIT 2: Just found the log for my application and it says there is a post content limit of about 8MB... not quite the 32MB I was expecting:
06:26:03.476 PHP Warning: Unknown: POST Content-Length of 10612153 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
EDIT 3: Just found this on limits:
https://cloud.google.com/appengine/docs/php/requests#PHP_Quotas_and_limits
I was expecting 32MB of POST data, assuming I can upload it quickly enough :) How do I fix this?
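One pragmatic mitigation, given the 8 MB cap in the log line above, is to reject oversized files client-side before POSTing. This sketch accounts for the roughly 4/3 size inflation that readAsDataURL's base64 encoding adds; the limit constant comes from the PHP warning above, not from any officially verified setting:

```javascript
// Sketch: check whether a file will fit under the POST body limit
// once base64-encoded by FileReader.readAsDataURL.
var POST_LIMIT_BYTES = 8388608; // 8 MiB, from the PHP warning above

function fitsPostLimit(fileSizeBytes) {
  // base64 encodes every 3 input bytes as 4 output characters.
  var base64Size = Math.ceil(fileSizeBytes / 3) * 4;
  return base64Size < POST_LIMIT_BYTES;
}

// In loadSpecdoc, before reader.readAsDataURL(file):
// if (!fitsPostLimit(file.size)) { alert('File too large'); return; }
```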

Can't get node.js to return a valid signed PUT URL

I'm tearing my hair out trying to get an S3 direct client-side PUT operation to work.
We have a version of the backend code working in Python without any issues (so we know the frontend works just fine) and we are trying to port the backend to Node.js.
I have an endpoint set up that returns a signed PUT URL; here is the code:
var objectKey = req.query.s3_object_name;
var objectType = req.query.s3_object_type;
var params = {
  Bucket: s3Bucket,
  Key: objectKey,
  // ContentType: objectType, // (I have tried with and without this)
  Expires: 60
};
s3.getSignedUrl('putObject', params, function (err, signedUrl) {
  if (err) {
    res.send(400);
  } else {
    res.end(JSON.stringify({
      signed_request: signedUrl,
      url: "http://" + s3Bucket + ".s3.amazonaws.com/" + objectKey
    }));
  }
});
Unfortunately Amazon always returns the following error:
SignatureDoesNotMatch - The request signature we calculated does not match the signature you provided. Check your key and signing method.
Has anybody successfully got the JavaScript aws-sdk to perform this task? Any pointers? I have double- and triple-checked my AWS key and secret.
Regards:
John Chipps-Harding
S3 seems to be giving you the answer in the error message, though you may not be recognizing it as such.
<StringToSign>PUT
image/jpeg
1390733729
x-amz-acl:public-read
/arenaupload/vV61536.jpg</StringToSign>
For any given request, there is only exactly one possible valid "string to sign," and if you don't (or the SDK doesn't) start with that string, then, of course, it won't work.
The error response from S3 isn't giving you the string you actually tried to sign, because it doesn't know that information. It's giving you the canonical version of the string you should have tried to sign, based on the request it is rejecting.
Your workaround is working because you've added x-amz-acl:public-read to the string-to-sign... but your code, in the original question, isn't specifying that information anywhere. I don't know whether the format the JS SDK expects is ACL: "public-read" or exactly what it will want to see, presumably in params, to make this work, but it seems apparent enough that you aren't asking the SDK to sign a request that precisely matches the actual upload you're subsequently attempting.
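Based on that reasoning, a likely fix is to declare in params everything the eventual PUT will actually send, so the SDK signs the same string S3 reconstructs. This is a sketch using aws-sdk v2's getSignedUrl parameter names; whether you need ACL depends on whether your upload sets x-amz-acl:

```javascript
// Sketch: build getSignedUrl params that mirror the upload request.
// Values echo the error message in this thread and are illustrative.
function presignParams(bucket, key, contentType) {
  return {
    Bucket: bucket,
    Key: key,
    ContentType: contentType, // the client must send this exact header
    ACL: 'public-read',       // adds the x-amz-acl line to the string-to-sign
    Expires: 60
  };
}

// s3.getSignedUrl('putObject', presignParams('arenaupload', 'vV61536.jpg', 'image/jpeg'), cb);
```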
