I am writing a very dumb application using React Native / Flask but can't seem to be able to construct the FormData. The formData.append function gives me this error: Argument of type '{ uri: any; type: string; name: string; }' is not assignable to parameter of type 'string | Blob'. Object literal may only specify known properties, and 'uri' does not exist in type 'Blob'.
This is how I'm fetching. The problem seems to be in the way the FormData is being created, but I have tried pretty much everything and it still doesn't work, so I thought of asking here.
const ImagePickerScreen = () => {
  const [image, setImage] = useState(null);

  const pickImage = async () => {
    // No permissions request is necessary for launching the image library
    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.All,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 1,
    });
    console.log(result);
    if (!result.canceled) {
      setImage(result.assets[0].uri);
    }
  };
  const handleIsHotdog = async () => {
    if (!image) {
      alert("No image selected");
      return;
    }
    try {
      const formData = new FormData();
      formData.append('image', {
        uri: image,
        type: 'image/jpeg',
        name: 'selectedImage.jpg'
      });
      const response = await fetch('http://my-endpoint-here', {
        method: 'POST',
        body: formData,
        headers: {
          'Content-Type': 'multipart/form-data',
        },
      });
      const responseJson = await response.text();
      Alert.alert(
        'Result',
        responseJson,
        [
          { text: 'OK', onPress: () => console.log('OK Pressed') },
        ],
        { cancelable: false },
      );
    } catch (error) {
      console.error(error);
    }
  };
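For context on the error itself: React Native's FormData accepts a `{ uri, type, name }` object for file parts at runtime, but the DOM lib typings only allow `string | Blob`, so TypeScript rejects the literal. A minimal sketch of the usual workaround, a one-place cast (the helper name `makeFilePart` is mine, not from the thread):

```javascript
// React Native's FormData accepts a { uri, type, name } object for file
// parts, but the DOM typings only allow string | Blob, so TypeScript
// rejects the object literal. Building the part in one helper keeps the
// cast in a single place. (Helper name is illustrative.)
function makeFilePart(uri, type = 'image/jpeg', name = 'selectedImage.jpg') {
  return { uri, type, name };
}

// In TypeScript the append call becomes:
//   formData.append('image', makeFilePart(image) as any);
// Casting to `any` (or `as unknown as Blob`) silences the compile error
// without changing the runtime behavior on React Native.
```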
And this is on the server-side:
@app.route("/process_image", methods=["POST"])
def classify_image():
    image = request.files["image"]
    image = PIL.Image.open(image)
    # Load the model
    model = tf.keras.models.load_model('hotdog_model.h5')
    # Preprocess the image
    target_size = (224, 224)
    image = image.resize(target_size)
    image = tf.keras.preprocessing.image.img_to_array(image)
    image = np.expand_dims(image, axis=0)
    # Use the model to make a prediction
    prediction = model.predict(image)
    # Return the result
    if prediction[0][0] > 0.5:
        return "Hotdog"
    else:
        return "Not Hotdog"
The server works when I make a Postman request (using form-data).
I use the Firestore REST API in Next.js getServerSideProps to fetch a Firestore doc. It works as expected, but every 5:30 minutes getServerSideProps gets re-triggered without reloading or navigating (is this only in the dev environment?), and then the result of the REST API is simply
[ { readTime: '2022-10-28T14:24:01.348248Z' } ]
The document key is missing and no data is present, which breaks the server function (the app misbehaves without showing an error).
The fetching function looks like this:
const { GoogleToken } = require('gtoken');
const { documentToJson } = require('./helpers');
const getConfig = require('next/config').default;
const FIRESTORE = getConfig().serverRuntimeConfig.firestore;
export async function fetchWebsitePropsByPath(path: string) {
  const body = JSON.stringify({
    structuredQuery: {
      from: [{ collectionId: 'websites' }],
      where: {
        compositeFilter: {
          op: 'AND',
          filters: [
            {
              fieldFilter: {
                field: {
                  fieldPath: 'path',
                },
                op: 'ARRAY_CONTAINS',
                value: {
                  stringValue: path,
                },
              },
            },
          ],
        },
      },
      limit: 1,
    },
  });

  // Authenticate with Google
  const gtoken = new GoogleToken({
    key: FIRESTORE.key,
    email: FIRESTORE.email,
    scope: ['https://www.googleapis.com/auth/datastore'], // or space-delimited string of scopes
    eagerRefreshThresholdMillis: 5 * 60 * 1000,
  });
  const getToken = () =>
    new Promise((resolve, reject) => {
      gtoken.getToken((err, token) => {
        if (err) {
          return reject(err);
        }
        resolve(token);
      });
    });
  const token: any = await getToken();

  let headers = new Headers();
  headers.append('Authorization', 'Bearer ' + token.access_token);
  const res = await fetch(`${FIRESTORE.api}:runQuery`, {
    method: 'POST',
    headers,
    body: body,
  });
  const rawData = await res.json();
  const id = rawData[0].document.name.split('/').pop();
  const docData = documentToJson(rawData[0].document.fields);
  docData.id = id;
  return docData;
}
I would like to know whether I can prevent the refetching every 5:30 minutes (if it is not dev-environment specific), and why the REST API returns nothing here.
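One observation on the crash itself: a runQuery response row that matched no document carries only a `readTime` (exactly the shape shown above), so `rawData[0].document.name` throws. A defensive sketch of the extraction step (the helper name is mine):

```javascript
// Each element of a Firestore runQuery response either carries a matched
// `document` or (when nothing matched) just a `readTime`. Guarding for
// that keeps the server function from crashing on empty results.
// (Helper name is illustrative.)
function extractDoc(rawData) {
  const row = Array.isArray(rawData) ? rawData.find((r) => r && r.document) : null;
  if (!row) return null; // no document matched the query
  return {
    id: row.document.name.split('/').pop(),
    fields: row.document.fields,
  };
}
```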
I am making an expense tracker and accounting app for my client. I have added a file input, upload those files to Cloudinary, and store the public_id and secure_url in the backend database. I am using Next.js for the frontend and backend. I am getting the following errors.
I have added an unsigned upload_preset; here is the image of my Cloudinary settings page.
One more thing: sometimes this code works and doesn't show this error; instead it shows an error that I have not provided public_id or secure_url to the backend database.
here is the code:
Frontend:
const handleSubmit = async (e) => {
  e.preventDefault();
  const form = e.currentTarget;

  const miningInput = Array.from(form.elements).find(
    ({ name }) => name === "miningPapers"
  );
  const miningFormData = new FormData();
  for (const file of miningInput.files) {
    miningFormData.append("file", file);
  }
  miningFormData.append("upload_preset", "mining-papers");
  const miningRes = await axios.post(
    "https://api.cloudinary.com/v1_1/cloudName/image/upload",
    miningFormData
  );

  const insuranceInput = Array.from(form.elements).find(
    ({ name }) => name === "insurancePapers"
  );
  const insuranceFormData = new FormData();
  for (const file of insuranceInput.files) {
    insuranceFormData.append("file", file);
  }
  insuranceFormData.append("upload_preset", "insurance-papers");
  const insuranceRes = await axios.post(
    "https://api.cloudinary.com/v1_1/cloudName/image/upload",
    insuranceFormData
  );

  const insurancePubId = insuranceRes.data.public_id;
  const insuranceUrl = insuranceRes.data.secure_url;
  const miningPapersPubId = miningRes.data.public_id;
  const miningUrl = miningRes.data.secure_url;

  dispatch(
    create(
      //Other Data
      //ERROR here \/
      {
        insurancePapers: {
          public_id: insuranceRes.data.public_id,
          url: insuranceRes.data.secure_url,
        },
      },
      //ERROR here \/
      {
        miningPapers: {
          public_id: miningRes.data.public_id,
          url: miningRes.data.secure_url,
        },
      },
    )
  )
}
Dispatch function:
export const create = (insurancePapers, miningPapers) => async (dispatch) => {
  dispatch({ type: ADD_DISPATCH_REQUEST });
  const { data } = await axiosInstance.post(`/api/dispatch/create`, {
    insurancePapers,
    miningPapers,
  });
  dispatch({
    type: ADD_DISPATCH_SUCCESS,
    payload: data,
  });
};
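One way to surface the intermittent "missing public_id or secure_url" failure early is to validate each Cloudinary response on the client before dispatching, instead of letting the Mongoose `required` validators catch it later. A sketch (the helper name is mine; the response fields are the ones used above):

```javascript
// Validate a Cloudinary upload response before it reaches the backend,
// so a failed or incomplete upload fails loudly on the client instead of
// tripping the Mongoose `required` validators in the schema.
// (Helper name is illustrative.)
function toPaperRef(data) {
  if (!data || !data.public_id || !data.secure_url) {
    throw new Error("Cloudinary upload did not return public_id/secure_url");
  }
  return { public_id: data.public_id, url: data.secure_url };
}

// Usage: dispatch(create({ insurancePapers: toPaperRef(insuranceRes.data) }, ...));
```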
Backend:
const handler = async (req, res) => {
  if (req.method === "POST") {
    let reciept = await DispatchReceipt.findOne({
      ICDN_No: req.body.ICDN_No,
    });
    if (reciept) {
      return res
        .status(409)
        .json("Dispatch Receipt With this ICDN Number already exists");
    }
    if (req.body.differenceInQty) {
      req.body.differenceInQty =
        req.body.loadedMaterialQty - req.body.unloadedMaterialQty;
    } else {
      req.body.differenceInQty = 0;
    }
    //Creating Dispatch Receipt
    reciept = await DispatchReceipt.create(req.body);
    //Adding receipt ID to customer
    const customer = await Customer.findOne({ name: req.body.party });
    if (!customer) {
      res.status(404).json("Customer Not Found");
    }
    if (customer) {
      customer.dispatchReceipts.push(reciept._id);
      customer.save();
    }
    //response
    res.status(200).json(reciept);
  } else {
    return res.status(500).send("INVALID REQUEST");
  }
};
Backend MongoDB Models :
const DispatchReceiptSchema = new mongoose.Schema({
  //Other data
  //ERROR here \/
  insurancePapers: {
    public_id: { type: String, required: [true, "Please add public ID"] },
    url: { type: String, required: [true, "Please add url"] },
  },
  //ERROR here \/
  miningPapers: {
    public_id: {
      type: String,
      required: [true, "Please Upload Mining Papers"],
    },
    url: { type: String, required: [true, "Please Upload Mining Papers"] },
  },
});

mongoose.models = {};
const DispatchReceipt = mongoose.model(
  "DispatchReceipt",
  DispatchReceiptSchema
);
export default DispatchReceipt;
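As an aside on the model file: wiping `mongoose.models = {}` forces every model to recompile on each import; the common Next.js pattern is to reuse an already-compiled model instead. A registry-style sketch of that reuse logic (the function name is mine; `models` and `compile` stand in for `mongoose.models` and `mongoose.model`):

```javascript
// Reuse an already-compiled model instead of wiping the registry, which
// avoids both OverwriteModelError on hot reload and needless recompiles.
// `models` stands in for mongoose.models; `compile` stands in for
// mongoose.model(name, schema). (Names are illustrative.)
function getOrCompile(models, name, compile) {
  if (!models[name]) {
    models[name] = compile(name);
  }
  return models[name];
}

// With mongoose this is the usual one-liner:
//   const DispatchReceipt =
//     mongoose.models.DispatchReceipt ||
//     mongoose.model("DispatchReceipt", DispatchReceiptSchema);
```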
I am using react-dropzone to upload mp3 files and a metadata npm package to grab all the contents of each file. When sending it to axios.post(), I am getting the error "Body Exceeded 1mb limit".
This is where I'm sending the new data:
async function insertArtists(uploadedArtists, doneCallback) {
  // console.log(uploadedArtists); // [5]
  // console.log(artistsData); // []
  const originalArtists = [...artistsData];
  const newArtists = [...uploadedArtists, ...artistsData];
  try {
    setArtistsData(newArtists);
    const results = await axios.post(`${restUrl}/multi`, newArtists);
    console.log(results);
    if (doneCallback) {
      doneCallback();
    }
  } catch (error) {
    console.log("error thrown inside insertArtists", error);
    setArtistsData(originalArtists);
  }
}
I found this doc here: https://github.com/vercel/next.js/issues/19684. But it didn't explain how to add other params.
my Dropzone:
function Dropzone() {
  const { insertArtists } = useReqRest();
  const { getRootProps, getInputProps } = useDropzone({
    accept: "audio/*",
    onDrop: useCallback(async (acceptedFiles) => {
      const filesData = await Promise.all(
        acceptedFiles.map(async (file) => {
          let fileContents = {
            _id: uuidv4(),
            path: file.path,
            fileName: file.name,
            fileSize: file.size,
            fileType: file.type,
          };
          const meta = await musicMetadata.parseBlob(file);
          return { ...fileContents, ...meta.common };
        })
      );
      const fullDb = [...artists, ...filesData];
      insertArtists(fullDb);
    }),
  });
}
If your issue is just "Body Exceeded 1mb limit", you can add this at the end of your API file, raising the limit past the 1mb default, and it will work:
export const config = {
  api: {
    bodyParser: {
      sizeLimit: '4mb', // default is 1mb; raise as needed
    },
  },
}
If you want to include all the details of the files, consider using FormData(), appending the files, and sending it as the body of the request.
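A sketch of that FormData approach, sending the raw files as multipart parts and the extracted metadata as a JSON field (the function and field names are mine):

```javascript
// Send the raw files as multipart parts and the extracted metadata as a
// JSON string field, instead of inlining everything in a JSON body that
// can blow past the body-parser limit. (Function and field names are
// illustrative.)
function buildUploadForm(files, meta) {
  const fd = new FormData();
  for (const file of files) {
    fd.append("files", file);
  }
  fd.append("meta", JSON.stringify(meta));
  return fd;
}

// Usage: await axios.post(`${restUrl}/multi`, buildUploadForm(acceptedFiles, filesData));
```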
You need to set a custom config in order to send more than 1mb of data:
export const config = {
  api: {
    bodyParser: {
      sizeLimit: '1mb', // change this
    },
  },
}
For more info check this out: https://nextjs.org/docs/api-routes/api-middlewares#custom-config
I am trying to create a process that uploads an image, previews it once, and then uploads it to Imgur if the image is OK.
The code is as follows.
const [img, setImg] = useState([])

const previewImg = ({ target: { files } }) => {
  if (img.length > 5) return
  const reader = new FileReader()
  reader.onload = ({ target: { result } }) => {
    setImg((img) => [...img, { id: generateID(), src: result }])
  }
  reader.readAsDataURL(files[0])
}
const uploadImgur = async (e) => {
  e.preventDefault();
  const base64 = img[0].src.toString().replace(/data:.*\/.*;base64,/, '');
  const res = await fetch('/api/upload/', {
    method: 'POST',
    body: base64,
  });
  console.log(await res.json());
}

return (
  <>
    <input type="file" onChange={previewImg} />
    {img.length > 0 && img.map((item) => {
      return <img key={item.id} src={item.src} />
    })}
    <button onClick={uploadImgur}>Upload Imgur</button>
  </>
)
The following is the API route for next.js.
Imgur API
const uploadImugrAPI = async (req: NextApiRequest, res: NextApiResponse) => {
  const formData = new FormData();
  formData.append('image', req.body);
  const resImgur = await fetch("https://api.imgur.com/3/upload", {
    method: 'POST',
    headers: {
      Authorization: 'Client-ID MY-CLIENT-ID',
    },
    body: formData,
  })
  res.status(200).json(await resImgur.json());
};
export default uploadImugrAPI;
When the above API is executed, the following error message will be displayed.
POST http://localhost:3000/api/upload 500 (Internal Server Error)
Uncaught (in promise) SyntaxError: Unexpected token I in JSON at position 0
I'm new to Next.js and external APIs, so I'm not sure what keywords to search on Google for to solve this problem.
Please help me.
Thank you.
Update:
When I tried with Postman, I was able to upload images to Imgur by passing a binary file.
Therefore, I changed the code as follows to pass a binary file instead of base64 and tried it.
const [imgArray, setImgArray] = useState([])
+ const [srcArray, setSrcArray] = useState([])

const uploadImg = ({ target: { files } }) => {
  if (imgArray.length > 5) return
+ setImgArray((imgArray) => [...imgArray, files[0]])
  const reader = new FileReader()
  reader.onload = ({ target: { result } }) => {
    const uploadImgSrc = result.toString()
    setSrcArray((srcArray) => [
      ...srcArray,
      { id: generateID(), src: uploadImgSrc.toString() },
    ])
    formRef.current.inputImg.value = ''
  }
  reader.readAsDataURL(files[0])
}

const uploadImgur = async (e) => {
  e.preventDefault();
+ const formData = new FormData();
+ formData.append("image", imgArray[0])
  const res = await fetch('/api/upload/', {
    method: 'POST',
    body: formData,
  });
  console.log(await res.json());
}
The result was that the following error was displayed in the console.
POST http://localhost:3000/api/upload 500 (Internal Server Error)
Request failed with status code 500
After 2 days of frustration, I've patched together a solution based on several answers I stumbled upon: convert the file to base64 client-side and send that as JSON to the API.
//client.tsx
async function submit(e: React.FormEvent<HTMLFormElement>) {
  e.preventDefault();
  if (!file) return;
  let base64Img = await getBase64(file);
  if (typeof base64Img == 'string') {
    base64Img = base64Img.replace(/^data:.+base64,/, '')
  }
  const result = await fetch('/api/upload', {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ image: base64Img }),
  })
  const response = await result.json() // response.data is an object containing the image URL
}

function getBase64(file: File): Promise<string | ArrayBuffer | null> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.readAsDataURL(file)
    reader.onload = () => resolve(reader.result)
    reader.onerror = error => reject(error)
  })
}
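The prefix-stripping step above is easy to get subtly wrong, so it can be isolated and sanity-checked on its own. A sketch (the function name is mine):

```javascript
// Strip the "data:<mime>;base64," prefix that a FileReader data URL
// carries, leaving only the base64 payload Imgur expects when uploading
// with type=base64. Strings without the prefix pass through unchanged.
// (Function name is illustrative.)
function stripDataUrlPrefix(dataUrl) {
  return dataUrl.replace(/^data:[^,]*;base64,/, '');
}
```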
//upload.ts
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const fd = new FormData();
  fd.append('image', req.body.image)
  fd.append('type', 'base64')
  const response = await fetch('https://api.imgur.com/3/image', {
    method: "POST",
    headers: {
      Authorization: `Client-ID ${process.env.IMGUR_ID}`,
    },
    body: fd,
    redirect: 'follow',
  })
  const data = await response.json();
  return res.json(data)
}
Also, I found using https://api.imgur.com/3/image instead of https://api.imgur.com/3/upload better, as the errors were more helpful.
So I am writing a Lambda that will take in some form data via a straight POST through API Gateway (testing with Postman for now) and then send that image to S3 for storage. Every time I run it, the image uploaded to S3 is corrupted and won't open properly. I have seen people having to decode/encode the incoming data, but I feel like I have tried everything using Buffer.from. I am only looking to store either .png or .jpg. The code below does not reflect my attempts at Base64 encoding/decoding, seeing as they all failed. Here is what I have so far:
Sample request in Postman
{
  image: (uploaded .jpg/.png),
  metadata: { tag: 'iPhone' }
}
Lambda
const AWS = require('aws-sdk')
const multipart = require('aws-lambda-multipart-parser')
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const form = multipart.parse(event, false)
  const s3_response = await upload_s3(form)
  return {
    statusCode: '200',
    body: JSON.stringify({ data: s3_response })
  }
};
const upload_s3 = async (form) => {
  const uniqueId = Math.random().toString(36).substr(2, 9);
  const key = `${uniqueId}_${form.image.filename}`
  const request = {
    Bucket: 'bucket-name',
    Key: key,
    Body: form.image.content,
    ContentType: form.image.contentType,
  }
  try {
    const data = await s3.putObject(request).promise()
    return data
  } catch (e) {
    console.log('Error uploading to S3: ', e)
    return e
  }
}
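For context on the fix mentioned at the end of this thread: once API Gateway treats multipart/form-data as a binary media type, the event body arrives base64-encoded with `isBase64Encoded` set, so decoding has to branch on that flag or the bytes get mangled. A sketch (the function name is mine):

```javascript
// API Gateway delivers binary payloads base64-encoded (with
// event.isBase64Encoded set) once the content type is registered as a
// binary media type; otherwise the body arrives as a raw string.
// Decoding must branch on the flag, or the image bytes get corrupted.
// (Function name is illustrative.)
function decodeBody(event) {
  return event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')
    : Buffer.from(event.body, 'binary');
}
```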
EDIT:
I am now attempting to save the image into the /tmp directory and then use a read stream to upload to S3. Here is some code for that.
s3 upload function
const AWS = require('aws-sdk')
const fs = require('fs')
const s3 = new AWS.S3()

module.exports = {
  upload: (file) => {
    return new Promise((resolve, reject) => {
      const key = `${Date.now()}.${file.extension}`
      const bodyStream = fs.createReadStream(file.path)
      const params = {
        Bucket: process.env.S3_BucketName,
        Key: key,
        Body: bodyStream,
        ContentType: file.type
      }
      s3.upload(params, (err, data) => {
        if (err) {
          return reject(err)
        }
        return resolve(data)
      })
    })
  }
}
form parser function
const busboy = require('busboy')

module.exports = {
  parse: (req, temp) => {
    const ctype = req.headers['Content-Type'] || req.headers['content-type']
    let parsed_file = {}
    return new Promise((resolve) => {
      try {
        const bb = new busboy({
          headers: { 'content-type': ctype },
          limits: {
            fileSize: 31457280,
            files: 1,
          }
        })
        bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
          const stream = temp.createWriteStream()
          const ext = filename.split('.')[1]
          console.log('parser -- ext ', ext)
          parsed_file = { name: filename, path: stream.path, f: file, type: mimetype, extension: ext }
          file.pipe(stream)
        }).on('finish', () => {
          resolve(parsed_file)
        }).on('error', err => {
          console.error(err)
          resolve({ err: 'Form data is invalid: parsing error' })
        })
        if (req.end) {
          req.pipe(bb)
        } else {
          bb.write(req.body, req.isBase64Encoded ? 'base64' : 'binary')
        }
        return bb.end()
      } catch (e) {
        console.error(e)
        return resolve({ err: 'Form data is invalid: parsing error' })
      }
    })
  }
}
handler
const form_parser = require('./form-parser').parse
const s3_upload = require('./s3-upload').upload
const temp = require('temp')

exports.handler = async (event, context) => {
  temp.track()
  const parsed_file = await form_parser(event, temp)
  console.log('index -- parsed form', parsed_file)
  const result = await s3_upload(parsed_file)
  console.log('index -- s3 result', result)
  temp.cleanup()
  return {
    statusCode: '200',
    body: JSON.stringify(result)
  }
}
The edited code above is a combination of other code and a GitHub repo I found that is trying to achieve the same result. Even with this solution the file is still corrupted.
Figured out this issue. The code works perfectly fine; it was an issue with API Gateway. You need to go into the API Gateway settings, set the Binary Media Type to multipart/form-data, and then re-deploy the API. Hope this helps someone else who is banging their head against the wall trying to figure out how to send images via form data to a Lambda.