File upload Web API is getting null file from JavaScript client - javascript

I am using Chrome and trying to upload a file from the browser client. My API action is given below.
[HttpPost("attachment/{confirmId:long}")]
public async Task<IActionResult> UploadAttachmentAsync(long confirmId, IFormFile filePayload)
{
    // ...
}
This works in Swagger, but when I call it from the JavaScript client, filePayload is null. The confirmId param comes through fine.
I used HTTP Toolkit and verified that the content of the file is in the request.
Here is my client code anyway:
async uploadAttachmentAsync(file, confirmId) {
    // buildUrl is my own utility, creates url correctly
    const url = buildUrl(`paperConfirm/attachment/${confirmId}`);
    const body = new FormData();
    body.append('file', file);
    // XH is a 3rd party library. I am confident there is no problem there
    return await XH.fetchJson({
        url,
        body,
        method: 'POST',
        // Note: We must explicitly *not* set Content-Type headers to allow the browser to set the boundary.
        // See https://stanko.github.io/uploading-files-using-fetch-multipart-form-data/ for further explanation.
        headers: {'Content-Type': null}
    });
}
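One thing worth checking, assuming ASP.NET Core binds IFormFile parameters by the multipart field name: the client appends the file under the name 'file', while the action parameter is called filePayload. A minimal sketch of the client with a matching field name (the server-side alternative would be a [FromForm(Name = "file")] attribute on the parameter):

async uploadAttachmentAsync(file, confirmId) {
    const url = buildUrl(`paperConfirm/attachment/${confirmId}`);
    const body = new FormData();
    // Field name matches the action parameter name "filePayload" so model binding can find it.
    body.append('filePayload', file);
    return await XH.fetchJson({url, body, method: 'POST', headers: {'Content-Type': null}});
}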

Related

Cannot Upload File to FastAPI backend using Fetch API in the frontend

I'm trying to figure out how to send an image to my API, and also verify a generated token that is in the header of the request.
So far this is where I'm at:
@app.post("/endreProfilbilde")
async def endreProfilbilde(request: Request, file: UploadFile = File(...)):
    token = request.headers.get('token')
    print(token)
    print(file.filename)
Another function triggers the change listener and calls the upload function, passing the parameter bildeFila:
function lastOpp(bildeFila) {
    var myHeaders = new Headers();
    let data = new FormData();
    data.append('file', bildeFila)
    myHeaders.append('token', 'SOMEDATAHERE');
    myHeaders.append('Content-Type', 'image/*');
    let myInit = {
        method: 'POST',
        headers: myHeaders,
        cache: 'default',
        body: data,
    };
    var myRequest = new Request('http://127.0.0.1:8000/endreProfilbilde', myInit);
    fetch(myRequest) // more stuff here, but it's irrelevant for the Q
}
The Problem:
This prints the filename of the uploaded file, but the token isn't passed and is printed as None. I suspect this may be due to the Content-Type, or that I'm trying to force FastAPI to do something it is not meant to do.
As per the documentation:
Warning: When using FormData to submit POST requests using XMLHttpRequest or the Fetch API with the multipart/form-data Content-Type (e.g. when uploading Files and Blobs to the server), do not explicitly set the Content-Type header on the request. Doing so will prevent the browser from being able to set the Content-Type header with the boundary expression it will use to delimit form fields in the request body.
Hence, you should remove the Content-Type header from your code. The same applies to sending requests through Python Requests, as described here and here. Read more about the boundary in multipart/form-data.
Working examples on how to upload file(s) using FastAPI in the backend and Fetch API in the frontend can be found here, here, as well as here and here.
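Applied to the code in the question, that advice means dropping only the Content-Type line and leaving the rest of the request as it was; a minimal sketch of the adjusted function:

function lastOpp(bildeFila) {
    // No Content-Type header here: the browser sets multipart/form-data
    // together with the boundary it generates for the FormData body.
    const myHeaders = new Headers();
    myHeaders.append('token', 'SOMEDATAHERE');
    const data = new FormData();
    data.append('file', bildeFila);
    fetch('http://127.0.0.1:8000/endreProfilbilde', {
        method: 'POST',
        headers: myHeaders,
        body: data,
    });
}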
So I figured this one out thanks to a helpful lad in Python's Discord server.
function lastOpp(bildeFila) {
    let data = new FormData();
    data.append('file', bildeFila)
    data.append('token', 'SOMETOKENINFO')
}
@app.post("/endreProfilbilde")
async def endreProfilbilde(token: str = Form(...), file: UploadFile = File(...)):
    print(file.filename)
    print(token)
Sending the string value as part of the formData rather than as a header lets me grab the parameter.
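For completeness, a minimal sketch of the full request for this approach (the snippet above omits the fetch call); the endpoint is the same one shown in the question:

function lastOpp(bildeFila) {
    const data = new FormData();
    data.append('file', bildeFila);
    data.append('token', 'SOMETOKENINFO');
    // No explicit headers at all: the token travels in the form body and the
    // browser sets the multipart Content-Type with its boundary.
    return fetch('http://127.0.0.1:8000/endreProfilbilde', {
        method: 'POST',
        body: data,
    });
}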

Node.js Express send huge data to client vanilla JS

In my application I read a huge amount of image data and send all of it to the client:
const imagesPaths = await getFolderImagesRecursive(req.body.rootPath);
const dataToReturn = await Promise.all(imagesPaths.map((imagePath) => new Promise(async (resolve, reject) => {
    try {
        const imageB64 = await fs.readFile(imagePath, 'base64');
        return resolve({
            filename: imagePath,
            imageData: imageB64,
        });
    } catch {
        return reject();
    }
})));
return res.status(200).send({
    success: true,
    message: 'Successfully retrieved folder images data',
    data: dataToReturn,
});
Here is the client side:
const getFolderImages = (rootPath) => {
    return fetch('api/getFolderImages', {
        method: 'POST',
        headers: { 'Content-type': 'application/json' },
        body: JSON.stringify({ rootPath }),
    });
};
const getFolderImagesServerResponse = await getFolderImages(rootPath);
const getFolderImagesServerData = await getFolderImagesServerResponse.json();
When I send the data, the request fails because the payload is so large. Sending the data with a plain res.send(<data>) is impossible. So how can I get around this limitation, and how should I receive the data on the client side with the new approach?
The answer to your problem requires some reading:
Link to the solution
One thing you probably haven't taken full advantage of before is that a web server's HTTP response is a stream by default.
Frameworks just make it easier for you to pass in synchronous data, which is split into chunks under the hood and sent as HTTP packets.
We are talking about huge files here; naturally, we don't want to hold them in memory, at least not the whole blob. The perfect solution for this dilemma is a stream.
We create a read stream with the help of the built-in Node package 'fs', then pass it to the stream-compatible response.send parameter.
const readStream = fs.createReadStream('example.png');
return response.headers({
    'Content-Type': 'image/png',
    'Content-Disposition': 'attachment; filename="example.png"',
}).send(readStream);
I used the Fastify web server here, but it should work similarly with Koa or Express.
There are two more configuration points here: the 'Content-Type' and 'Content-Disposition' headers.
The first one indicates the type of blob we are sending chunk by chunk, so the frontend will automatically give it the right extension.
The latter tells the browser that we are sending an attachment, not something renderable, like an HTML page or a script. This will trigger the browser’s download functionality, which is widely supported. The filename parameter is the download name of the content.
Here we are; we accomplished minimal memory stress, minimal coding, and minimal error opportunities.
One thing we haven’t mentioned yet is authentication.
Because the frontend won't send an Ajax request, we can't expect an auth JWT header to be present on the request.
Here we will take the good old cookie auth approach. Cookies are set automatically on every request header that matches the criteria, based on the cookie options. More info about this in the frontend implementation part.
By default, cookies arrive as semicolon-separated key-value pairs in a single string. To ease the parsing, we will use Fastify's cookie parser plugin.
await fastifyServer.register(cookieParser);
Later in the handler method, we simply get the cookie that we are interested in and compare it to the expected value. Here I used only strings as auth-tokens; this should be replaced with some sort of hashing and comparing algorithm.
const cookies = request.cookies;
if (cookies['auth'] !== 'authenticated') {
    throw new APIError(400, 'Unauthorized');
}
That’s it. We have authentication on top of the file streaming endpoint, and everything is ready to be connected by the frontend.
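On the frontend, the point above is that no Ajax request is needed at all; a minimal sketch under those assumptions (the endpoint path and element id are hypothetical):

// A plain navigation triggers the browser's download UI; the auth cookie is
// attached automatically, so no JWT header is involved.
const downloadButton = document.querySelector('#download');
downloadButton.addEventListener('click', () => {
    // Content-Disposition: attachment on the streamed response makes the
    // browser download the file instead of rendering it.
    window.location.href = '/api/streamImage';
});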

POST image to django rest API always returns 'No file was submitted'

I have been trying to get this working for days now -.-
Using a simple NodeJS Express server, I want to upload an image to a Django instance through a POST request, but I just can't figure out how to prepare the request and embed the file.
Later I would like to post an image created from a canvas on the client side,
but for testing I am just trying to upload an existing image from the NodeJS server.
app.post('/images', function(req, res) {
    const filename = "Download.png"; // existing local file on server
    // using FormData to create a multipart/form-data content type
    let formData = new FormData();
    let buffer = fs.readFileSync(filename);
    formData.append("data", buffer); // appending the file as a buffer; alternatively could read as a utf-8 string and append as text
    formData.append('name', 'var name here'); // someone told me I need to specify a name
    const config = {
        headers: { 'content-type': 'multipart/form-data' }
    };
    axios.post("http://django:8000/images/", formData, config)
        .then(response => {
            console.log("success!"); // never happens :(
        })
        .catch(error => {
            console.log(error.response.data); // no file was submitted
        });
});
What am I doing wrong or did I just miss something?
EDIT
I just found a nice snippet with a slightly different approach at the very bottom of the npm form-data page (npmjs.com/package/form-data):
const filename = "Download.png"; // existing local file on server
let formData = new FormData();
let stream = fs.createReadStream(filename);
formData.append('data', stream);
let formHeaders = formData.getHeaders();
axios.post('http://django:8000/images/', formData, {
    headers: {
        ...formHeaders,
    },
})
    .then(response => {
        console.log("success!"); // never happens :(
    })
    .catch(error => {
        console.log(error.response.data); // no file was submitted
    });
Sadly, this doesn't change anything :( I still receive only Bad Request: No file was submitted.
I don't really have much Django code, just a basic setup using the rest_framework with an image model:
class Image(models.Model):
    data = models.ImageField(upload_to='images/')

    def __str__(self):
        return "Image Resource"
which is also registered in admin.py, and a serializer:
from rest_framework import serializers
from .models import Image

class ImageSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Image
        fields = ('id', 'data')
using automatic URL routing.
I wrote a simple test script and put the same image on the Django server to verify that the image upload works, and it does:
import requests
url = "http://127.0.0.1:8000/images/"
file = {'data': open('Download.png', 'rb')}
response = requests.post(url, files=file)
print(response.status_code) # 201
I had a similar problem: I used the same Django endpoint to upload a file using axios 1) from the client side and 2) from the server side. From the client side it worked without any problem, but from the server side, the request body was always empty.
My solution was to use the following code:
// FormData here is the Node 'form-data' package (browser FormData has no getHeaders()).
const fileBuffer = await readFile(file.filepath)
const formData = new FormData()
formData.append('file', fileBuffer, file.originalFilename)
const response = await fetch(
    urlJoin(BACKEND_URL),
    {
        method: 'POST',
        body: formData,
        headers: {
            ...formData.getHeaders(),
        },
    }
)
A few relevant references that I found useful:
This blog post; even though the author seems to manage to send form data from the server side using axios, I did not manage to reproduce it in my case.
This issue report in the axios repository, where one comment suggests using fetch.
In your Node.js Express server, instead of adding the image to the form data, try sending the stream directly in the POST request.
const filename = "Download.png"; // existing local file on server
//let formData = new FormData();
let stream = fs.createReadStream(filename);
//formData.append('data', stream)
//let formHeaders = formData.getHeaders() // not needed once the form data is dropped
axios.post('http://django:8000/images/', stream)
    .then(response => {
        console.log("success!"); // never happens :(
    })
    .catch(error => {
        console.log(error.response.data); // no file was submitted
    });
I still didn't manage to get this working with axios, so I tried another package for sending files as POST requests, namely unirest, which worked out of the box for me.
It is also smaller, requires less code, and does everything I needed:
const filename = "Download.png"; // existing local file on server
unirest
    .post(url)
    .attach('data', filename) // reads directly from local file
    //.attach('data', fs.createReadStream(filename)) // creates a read stream
    //.attach('data', fs.readFileSync(filename)) // 400 - The submitted data was not a file. Check the encoding type on the form. -> maybe check encoding?
    .then(function (response) {
        console.log(response.body) // 201
    })
    .catch((error) => console.log(error.response.data));
If I have some spare time in the future I may look into what was wrong with my axios implementation; if someone knows a solution, please let me know :D

How to extract the Content-Type of a file sent via multipart/form-data

I receive a Request in my Cloudflare Worker and want to upload the data to Google Cloud Storage. My problem is that I can't extract the content type from the multipart/form-data I receive, in order to upload it with the correct content type to GCS.
When I read the request with await req.formData() I can get('file') from the formData, and it returns the raw file data that I need for GCS, but I can't find the file's content type anywhere (I can see it only when looking at the raw Request body).
Here is my (stripped down) code:
event.respondWith((async () => {
    const req = event.request
    const formData = await req.formData()
    const file = formData.get('file')
    const filename = formData.get('filename')
    const oauth = await getGoogleOAuth()
    const gcsOptions = {
        method: 'PUT',
        headers: {
            Authorization: oauth.token_type + ' ' + oauth.access_token,
            'Content-Type': 'application/octet-stream' // this should be `'Content-Type': file.type`
        },
        body: file,
    }
    const gcsRes = await fetch(
        `https://storage.googleapis.com/***-media/${filename}`,
        gcsOptions,
    )
    if (gcsRes.status === 200) {
        return new Response(JSON.stringify({filename}), gcsRes)
    } else {
        return new Response('Internal Server Error', {status: 500, statusText: 'Internal Server Error'})
    }
})())
Reminder: this code is part of our Cloudflare Worker.
It seems to me that determining the type of a file extracted from multipart/form-data should be straightforward.
Am I missing something?
Unfortunately, as of this writing, the Cloudflare Workers implementation of FormData is incomplete and does not permit extracting the Content-Type. In fact, it appears our implementation currently interprets all entries as text and returns strings, which means binary content will be corrupted. This is a bug which will require care to fix, since we don't want to break already-deployed scripts that might rely on the buggy behavior.
Thanks Kenton for your response.
What I ended up doing:
As Cloudflare Workers don't support multipart/form-data entries as Blobs or any type other than String, I ended up reading the raw bytes as an ArrayBuffer. After converting it to a Uint8Array I parsed it byte by byte to determine the file type and the start and end indexes of the file data. Once I found the start and end of the transferred file, I was able to create an array of the file data, add it to the request, and send it to GCS as I showed above.
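A minimal sketch of that manual approach (the helper name and the latin1 trick for byte-accurate searching are my own; a real implementation would need to handle quoted boundaries and multiple parts):

async function extractFilePart(request) {
    // The multipart boundary is declared in the request's own Content-Type header.
    const contentType = request.headers.get('Content-Type') || '';
    const boundary = '--' + contentType.split('boundary=')[1];
    const raw = new Uint8Array(await request.arrayBuffer());
    // latin1 maps every byte to one character, so string indexes equal byte offsets.
    const text = new TextDecoder('latin1').decode(raw);

    // Locate the part that carries the uploaded file (its headers contain filename=).
    const nameIdx = text.indexOf('filename=');
    const headerEnd = text.indexOf('\r\n\r\n', nameIdx) + 4;     // blank line ends the part headers
    const partEnd = text.indexOf('\r\n' + boundary, headerEnd);  // next boundary ends the file data

    // The part's Content-Type sits in its header block, before the blank line.
    const headerBlock = text.slice(text.lastIndexOf(boundary, nameIdx), headerEnd);
    const typeMatch = headerBlock.match(/Content-Type:\s*([^\r\n]+)/i);

    return {
        type: typeMatch ? typeMatch[1].trim() : 'application/octet-stream',
        data: raw.slice(headerEnd, partEnd), // Uint8Array with the raw file bytes
    };
}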

cordova-plugin-file-transfer: How do you upload a file to S3 using a signed URL?

I am able to upload to S3 using a file picker and regular XMLHttpRequest (which I was using to test the S3 setup), but cannot figure out how to do it successfully using the cordova file transfer plugin.
I believe it is either because the plugin is not constructing the correct signable request, or because it does not like the local file URI given. I have tried playing with every single parameter from headers to URI types, but the docs aren't much help, and the plugin source is bolognese.
The string the request's signature needs to match is like:
PUT
1391784394
x-amz-acl:public-read
/the-app/317fdf654f9e3299f238d97d39f10fb1
Any ideas, or possibly a working code example?
A bit late, but I just spent a couple of days struggling with this, so in case anybody else is having problems, this is how I managed to upload an image using the JavaScript version of the AWS SDK to create the presigned URL.
The key to solving the problem is in the StringToSign element of the XML SignatureDoesNotMatch error that comes back from Amazon. In my case it looked something like this:
<StringToSign>
PUT\n\nmultipart/form-data; boundary=+++++org.apache.cordova.formBoundary\n1481366396\n/bucketName/fileName.jpg
</StringToSign>
When you use the aws-sdk to generate a presigned URL for upload to S3, internally it will build a string based on various elements of the request you want to make, then create an HMAC-SHA1 hash of it using your AWS secret. This hash is the signature that gets appended to the URL as a parameter, and it is what doesn't match when you get the SignatureDoesNotMatch error.
So you've created your presigned URL, and passed it to cordova-plugin-file-transfer to make your HTTP request to upload a file. When that request hits Amazon's server, the server will itself build a string based on the request headers etc, hash it and compare that hash to the signature on the URL. If the hashes don't match then it returns the dreaded...
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The contents of the StringToSign element I mentioned above is the string that the server builds and hashes to compare against the signature on the presigned URL. So to avoid getting the error, you need to make sure that the string built by the aws-sdk is the same as the one built by the server.
After some digging about, I eventually found the code responsible for creating the string to hash in the aws-sdk. It is located (as of version 2.7.12) in:
node_modules/aws-sdk/lib/signers/s3.js
Down the bottom at line 168 there is a sign method:
sign: function sign(secret, string) {
    return AWS.util.crypto.hmac(secret, string, 'base64', 'sha1');
}
If you put a console.log in there, string is what you're after. Once you make the string that gets passed into this method the same as the contents of StringToSign in the error message coming back from Amazon, the heavens will open and your files will flow effortlessly into your bucket.
On my server running node.js, I originally created my presigned URL like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3(options = {
    endpoint: 'https://s3-eu-west-1.amazonaws.com',
    accessKeyId: "ACCESS_KEY",
    secretAccessKey: "SECRET_KEY"
});
var params = {
    Bucket: 'bucketName',
    Key: imageName,
    Expires: 60
};
var signedUrl = s3.getSignedUrl('putObject', params);
//return signedUrl
This produced a signing string like this, similar to the OP's:
PUT
1481366396
/bucketName/fileName.jpg
On the client side, I used this presigned URL with cordova-plugin-file-transfer like so (I'm using Ionic 2 so the plugin is wrapped in their native wrapper):
let success = (result: any): void => {
    console.log("upload success");
}
let failed = (err: any): void => {
    let code = err.code;
    alert("upload error - " + code);
}
let ft = new Transfer();
var options = {
    fileName: filename,
    mimeType: 'image/jpeg',
    chunkedMode: false,
    httpMethod: 'PUT',
    encodeURI: false,
};
ft.upload(localDataURI, presignedUrlFromServer, options, false)
    .then((result: any) => {
        success(result);
    }).catch((error: any) => {
        failed(error);
    });
Running the code produced the SignatureDoesNotMatch error, and the string in the <StringToSign> element looked like this:
PUT
multipart/form-data; boundary=+++++org.apache.cordova.formBoundary
1481366396
/bucketName/fileName.jpg
So we can see that cordova-plugin-file-transfer has added its own Content-Type header, which has caused a discrepancy in the signing strings. In the docs relating to the options object that gets passed into the upload method it says:
headers: A map of header name/header values. Use an array to specify more than one value. On iOS, FireOS, and Android, if a header named Content-Type is present, multipart form data will NOT be used. (Object)
So basically, if no Content-Type header is set, it defaults to multipart form data.
OK, so now that we know the cause of the problem, it's a pretty simple fix. On the server side I added a ContentType to the params object passed to the S3 getSignedUrl method:
var params = {
    Bucket: 'bucketName',
    Key: imageName,
    Expires: 60,
    ContentType: 'image/jpeg' // <---- content type added here
};
and on the client added a headers object to the options passed to cordova-plugin-file-transfer's upload method:
var options = {
    fileName: filename,
    mimeType: 'image/jpeg',
    chunkedMode: false,
    httpMethod: 'PUT',
    encodeURI: false,
    headers: { // <----- headers object added here
        'Content-Type': 'image/jpeg',
    }
};
and hey presto! The uploads now work as expected.
I ran into such issues with this plugin.
The only working way I found to upload a file with a signature is the method by Christophe Coenraets: http://coenraets.org/blog/2013/09/how-to-upload-pictures-from-a-phonegap-app-to-amazon-s3/
With this method you will be able to upload your files using the cordova-plugin-file-transfer
First, I wanted to use the aws-sdk on my server to sign with getSignedUrl()
It returns the signed link and you only have to upload to it.
But using the plugin it always ends with 403: signatures don't match.
It may be related to the content-length parameter, but I haven't found a working solution with aws-sdk and the plugin so far.
