Linode Object Storage with Node.js - javascript

I am new to Linode. I see that Linode provides object storage similar to AWS S3, and I want to use it with my Node.js app. I cannot find any SDK for it like the S3 one. Is there any solution? Please help me.
Can anybody tell me how to upload a file from Node.js to Linode Object Storage in JavaScript?

New to Linode too. I got the free $100, 2-month trial and figured I'd try the bucket feature.
I have used AWS S3 in the past, and this is pretty much identical as far as the SDK goes. The only hurdle was configuring the endpoint: with AWS S3 you specify the region, with Linode you specify the endpoint instead. The list of endpoints is here:
https://www.linode.com/docs/products/storage/object-storage/guides/urls/#cluster-url-s3-endpoint
Since you didn't mention whether you wanted an example for the server (Node.js) or the browser, I'll go with the one I've got: Node.js (server side).
Steps
I used node stable (currently 18.7). I set up package.json to start the index.js script (e.g. "scripts": {"start": "node index.js"}).
Install aws-sdk
npm i aws-sdk
Code for index.js
const S3 = require('aws-sdk/clients/s3')
const fs = require('fs')

const config = {
  endpoint: 'https://us-southeast-1.linodeobjects.com/',
  accessKeyId: 'BLEEPBLEEPBLEEP',
  secretAccessKey: 'BLOOPBLOOPBLOOP',
}

var s3 = new S3(config)

function listObjects() {
  console.debug("List objects")
  const bucketParams = {
    Bucket: 'vol1'
  }
  s3.listObjects(bucketParams, (err, data) => {
    if (err) {
      console.error("Error ", err)
    } else {
      console.info("Objects vol1 ", data)
    }
  })
}

function uploadFile() {
  const fileStream = fs.createReadStream('./testfile.txt')
  var params = {Bucket: 'vol1', Key: 'testfile', Body: fileStream}
  s3.upload(params, function(err, data) {
    if (err) {
      console.error("Error uploading test file", err)
    } else {
      console.info("Test file uploaded ", data)
      listObjects()
    }
  })
}

// Start
uploadFile()
Run "npm start".
Output I get:
Test file uploaded {
ETag: '"0ea76c859582d95d2c2c0caf28e6d747"',
Location: 'https://vol1.us-southeast-1.linodeobjects.com/testfile',
key: 'testfile',
Key: 'testfile',
Bucket: 'vol1'
}
List objects
Objects vol1 {
IsTruncated: false,
Marker: '',
Contents: [
{
Key: 'Inflation isnt transitory.mp4',
LastModified: 2023-01-10T15:38:42.045Z,
ETag: '"4a77d408defc08c15fe42ad4e63fefbd"',
ChecksumAlgorithm: [],
Size: 58355708,
StorageClass: 'STANDARD',
Owner: [Object]
},
{
Key: 'testfile',
LastModified: 2023-02-13T20:28:01.178Z,
ETag: '"0ea76c859582d95d2c2c0caf28e6d747"',
ChecksumAlgorithm: [],
Size: 18,
StorageClass: 'STANDARD',
Owner: [Object]
}
],
Name: 'vol1',
Prefix: '',
MaxKeys: 1000,
CommonPrefixes: []
}
Adjust the config with your own creds/data center. Hope this helps.
Note: if you want to upload files larger than 1 GB, you'll want to use the multipart upload feature. It's a bit more complex, but this should get you started. Any AWS S3 code example should do; there are plenty out there.
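For what it's worth, the v2 aws-sdk's s3.upload() already performs a multipart upload under the hood for large bodies, and its part size and concurrency are configurable through a second options argument. A minimal sketch under that assumption (the file name and the tuning values below are illustrative, not from the original post):

const fs = require('fs')

// Sketch: s3.upload() switches to multipart automatically for large bodies.
// partSize/queueSize are illustrative tuning values.
const bigStream = fs.createReadStream('./bigfile.bin') // hypothetical large file
s3.upload(
  { Bucket: 'vol1', Key: 'bigfile.bin', Body: bigStream },
  { partSize: 10 * 1024 * 1024, queueSize: 4 }, // 10 MB parts, 4 uploaded in parallel
  (err, data) => {
    if (err) {
      console.error('Multipart upload failed', err)
    } else {
      console.info('Multipart upload complete', data.Location)
    }
  }
)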

Related

How to upload a file to a directory in a MinIO bucket

Hello everyone. I have a bucket on a MinIO server named 'geoxing', and 'geoxing' has a directory img/site. I want to upload a picture to the site directory using Node.js. Below is my code, and I am getting the error Invalid bucket name: geoxing/img/site. How can I solve this error? Thanks.
savefile() {
  const filePath = 'D://repositories//uploads//geoxing//site//b57e46b4bcf879839b7074782sitePic.jpg';
  const bucketname = 'geoxing/img/site';
  var metaData = {
    'Content-Type': 'image/jpg',
    'Content-Language': 123,
    'X-Amz-Meta-Testing': 1234,
    example: 5678,
  };
  this.minioClient.fPutObject(
    bucketname,
    'b57e46b4bcf879839b7074782sitePic.jpg',
    filePath,
    metaData,
    function (err, objInfo) {
      if (err) {
        return console.log(err);
      }
      return console.log('Success', objInfo.etag);
    },
  );
}
In Amazon S3 and MinIO:
Bucket should just be the name of the bucket (e.g. geoxing)
Key should include the full path as well as the filename (e.g. img/site/b57e46b4bcf879839b7074782sitePic.jpg)
Amazon S3 and Minio do not have 'folders' or 'directories' but they emulate directories by including the path name in the Key. Folders do not need to be created prior to uploading to a folder -- they just magically appear when files are stored in that 'path'.
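Applied to the code in the question, that would look roughly like this (a sketch only: it keeps the original file path, drops the extra metadata for brevity, and assumes the callback form of MinIO's fPutObject(bucket, objectName, filePath, metaData, callback)):

savefile() {
  const filePath = 'D://repositories//uploads//geoxing//site//b57e46b4bcf879839b7074782sitePic.jpg';
  const bucketname = 'geoxing'; // bucket name only, no path
  // The 'directory' goes into the object key instead.
  const objectName = 'img/site/b57e46b4bcf879839b7074782sitePic.jpg';
  const metaData = { 'Content-Type': 'image/jpg' };
  this.minioClient.fPutObject(bucketname, objectName, filePath, metaData, function (err, objInfo) {
    if (err) {
      return console.log(err);
    }
    return console.log('Success', objInfo.etag);
  });
}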

Upload an image to an AWS S3 bucket using the express-fileupload library

Can anyone please help me? Using the express-fileupload library, I want to upload my file and image to an AWS S3 bucket through a RESTful API.
Hope you found the answer.
In case you haven't, here is what I just found out:
const uploadSingleImage = async (file, s3, fileName) => {
  const bucketName = process.env.S3_BUCKET_NAME;
  if (!file.mimetype.startsWith('image')) {
    return { status: false, message: 'File uploaded is not an image' };
  }
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: file.data,
    ACL: 'public-read',
    ContentType: file.mimetype,
  };
  return s3.upload(params).promise();
};
s3.upload(params).promise() will return an object that contains the Location of the file you uploaded. You could construct that URL yourself, but that would not cover the case where an error occurs during the upload, so I think what I posted here is the better solution.
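For context, here is a rough sketch of how that helper could be wired into an Express route with express-fileupload; the route path and the 'image' form-field name are assumptions, not part of the original answer:

const express = require('express');
const fileUpload = require('express-fileupload');
const S3 = require('aws-sdk/clients/s3');

const app = express();
app.use(fileUpload()); // exposes uploaded files on req.files

const s3 = new S3(); // credentials and region come from the environment

app.post('/images', async (req, res) => {
  const file = req.files && req.files.image; // 'image' is the assumed field name
  if (!file) {
    return res.status(400).json({ message: 'No file uploaded' });
  }
  const result = await uploadSingleImage(file, s3, file.name);
  if (result.status === false) {
    return res.status(400).json(result); // rejected: not an image
  }
  return res.json({ url: result.Location });
});

app.listen(3000);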

S3 node module, attempting to upload file gives ENETUNREACH error

I'm trying to upload a file to my Amazon S3 bucket, but I'm getting an ENETUNREACH error. I do have permissions to upload/delete files for my buckets, and I also edited the CORS configuration to allow POST/GET requests from all origins. I'm thinking it might be faulty keys that I received from someone. What is a good way to test whether the keys I have are valid, if that happens to be the issue?
Code below:
var s3 = require('s3');

/* Create a client for uploading or deleting files */
var client = s3.createClient({
  maxAsyncS3: 20,     // this is the default
  s3RetryCount: 3,    // this is the default
  s3RetryDelay: 1000, // this is the default
  multipartUploadThreshold: 20971520, // this is the default (20 MB)
  multipartUploadSize: 15728640,      // this is the default (15 MB)
  s3Options: {
    accessKeyId: 'xxxxxxxx',
    secretAccesskey: 'xxxxxxxx',
    region: 'xxxxxxxx'
  },
});

exports.uploadFile = function(fileName, bucket){
  console.log('Uploading File: ' + fileName + '\nBucket: ' + bucket);
  var params = {
    localFile: fileName,
    s3Params: {
      Bucket: bucket,
      Key: 'testfile',
    },
  };
  var uploader = client.uploadFile(params);
  uploader.on('error', function(err) {
    console.error("unable to upload:", err.stack);
  });
  uploader.on('progress', function() {
    console.log("progress", uploader.progressMd5Amount, uploader.progressAmount, uploader.progressTotal);
  });
  uploader.on('end', function() {
    console.log("done uploading");
  });
};
Console log when trying to upload a small txt file:
(screenshot: Console Log)
Disabled IIS services to fix my error.
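As for the other part of the question, a quick way to check whether a key pair is valid at all (not what fixed the issue above, just a common sanity check) is to call STS GetCallerIdentity with the official aws-sdk; it fails fast on bad credentials and needs no special permissions:

var AWS = require('aws-sdk');

var sts = new AWS.STS({
  accessKeyId: 'xxxxxxxx',      // the keys you want to verify
  secretAccessKey: 'xxxxxxxx',
  region: 'us-east-1'
});

// Succeeds only if the key pair is valid; otherwise it returns an auth error.
sts.getCallerIdentity({}, function(err, data) {
  if (err) {
    console.error('Credentials look invalid:', err.code);
  } else {
    console.log('Credentials belong to account', data.Account);
  }
});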

Node.js script works once, then fails subsequently

I need a Node.js script that does the following:
1 - Triggers when an image is added to a specified S3 bucket.
2 - Creates a thumbnail of that image (360x203 pixels).
3 - Saves a copy of that thumbnail inside a separate S3 folder.
4 - Uploads the thumbnail to a specified FTP server, SIX (6) times, using a "FILENAME-X" naming convention.
The code works just as expected at first: the sample event pulls the image, creates a thumbnail, saves it to the other S3 bucket, then uploads it to the FTP server.
The problem: It works for the test file HappyFace.jpg once, but then each subsequent test fails. Also, I tried doing it with a different file, but was unsuccessful.
Also: If I could get some help writing a loop to name the different files that get uploaded, it would be very much appreciated. I usually code in PHP, so it'd probably take me longer than I hope to write.
Note: I removed my FTP credentials for privacy.
Problem Code Snippet:
function upload(contentType, data, next) {
  // Upload test file to FTP server
  c.append(data, 'testing.jpg', function(err) {
    console.log("CONNECTION SUCCESS!");
    if (err) throw err;
    c.end();
  });
  // Connect to ftp
  c.connect({
    host: "",
    port: 21, // defaults to 21
    user: "", // defaults to "anonymous"
    password: "" // defaults to "#anonymous"
  });
  // S3 Bucket Upload Function Goes Here
}
Full Code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var util = require('util');
var Client = require('ftp');
var fs = require('fs');
var gm = require('gm')
  .subClass({ imageMagick: true }); // Enable ImageMagick integration.

// get reference to FTP client
var c = new Client();

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context) {
  // Read options from the event.
  console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));

  // Get source bucket
  var srcBucket = event.Records[0].s3.bucket.name;

  // Get source object key
  // Object key may have spaces or unicode non-ASCII characters.
  var srcKey =
    decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  var url = 'http://' + srcBucket + ".s3.amazonaws.com/" + srcKey;

  // Set destination bucket
  var dstBucket = srcBucket + "-thumbs";

  // Set destination object key
  var dstKey = "resized-" + srcKey;

  // Infer the image type.
  var typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.error('unable to infer image type for key ' + srcKey);
    return;
  }
  var imageType = typeMatch[1];
  if (imageType != "jpg" && imageType != "png") {
    console.log('skipping non-image ' + srcKey);
    return;
  }

  // Download the image from S3, transform, and upload to a different S3 bucket.
  async.waterfall([
    function download(next) {
      // Download the image from S3 into a buffer.
      s3.getObject({
        Bucket: srcBucket,
        Key: srcKey
      },
      next);
    },
    function transform(response, next) {
      gm(response.Body).size(function(err, size) {
        // Transform the image buffer in memory.
        this.toBuffer(imageType, function(err, buffer) {
          if (err) {
            next(err);
          } else {
            next(null, response.ContentType, buffer);
          }
        });
      });
    },
    function upload(contentType, data, next) {
      // Upload test file to FTP server
      c.append(data, 'testing.jpg', function(err) {
        console.log("CONNECTION SUCCESS!");
        if (err) throw err;
        c.end();
      });
      // Connect to ftp
      c.connect({
        host: "",
        port: 21, // defaults to 21
        user: "", // defaults to "anonymous"
        password: "" // defaults to "#anonymous"
      });
      // Stream the thumb image to a different S3 bucket.
      s3.putObject({
        Bucket: dstBucket,
        Key: dstKey,
        Body: data,
        ContentType: contentType
      },
      next);
    }
  ], function (err) {
    if (err) {
      console.error(
        'Unable to resize ' + srcBucket + '/' + srcKey +
        ' and upload to ' + dstBucket + '/' + dstKey +
        ' due to an error: ' + err
      );
    } else {
      console.log(
        'Successfully resized ' + srcBucket + '/' + srcKey +
        ' and uploaded to ' + dstBucket + '/' + dstKey
      );
    }
    // context.done();
  });
};
The logs:
START RequestId: edc808c1-712b-11e5-aa8a-ed7c188ee86c Version: $LATEST
2015-10-12T21:55:20.481Z edc808c1-712b-11e5-aa8a-ed7c188ee86c Reading options from event: { Records: [ { eventVersion: '2.0', eventTime: '1970-01-01T00:00:00.000Z', requestParameters: { sourceIPAddress: '127.0.0.1' }, s3: { configurationId: 'testConfigRule', object: { eTag: '0123456789abcdef0123456789abcdef', sequencer: '0A1B2C3D4E5F678901', key: 'HappyFace.jpg', size: 1024 }, bucket: { arn: 'arn:aws:s3:::images', name: 'images', ownerIdentity: { principalId: 'EXAMPLE' } }, s3SchemaVersion: '1.0' }, responseElements: { 'x-amz-id-2': 'EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH', 'x-amz-request-id': 'EXAMPLE123456789' }, awsRegion: 'us-east-1', eventName: 'ObjectCreated:Put', userIdentity: { principalId: 'EXAMPLE' }, eventSource: 'aws:s3' } ] }
2015-10-12T21:55:22.411Z edc808c1-712b-11e5-aa8a-ed7c188ee86c Successfully resized images/HappyFace.jpg and uploaded to images-thumbs/resized-HappyFace.jpg
2015-10-12T21:55:23.432Z edc808c1-712b-11e5-aa8a-ed7c188ee86c CONNECTION SUCCESS!
END RequestId: edc808c1-712b-11e5-aa8a-ed7c188ee86c
REPORT RequestId: edc808c1-712b-11e5-aa8a-ed7c188ee86c Duration: 3003.76 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 43 MB
Task timed out after 3.00 seconds
START RequestId: d347e7e3-712d-11e5-bfdf-05baa36d50fd Version: $LATEST
2015-10-12T22:08:55.910Z d347e7e3-712d-11e5-bfdf-05baa36d50fd Reading options from event: { Records: [ { eventVersion: '2.0', eventTime: '1970-01-01T00:00:00.000Z', requestParameters: { sourceIPAddress: '127.0.0.1' }, s3: { configurationId: 'testConfigRule', object: { eTag: '0123456789abcdef0123456789abcdef', sequencer: '0A1B2C3D4E5F678901', key: 'HappyFace.jpg', size: 1024 }, bucket: { arn: 'arn:aws:s3:::images', name: 'images', ownerIdentity: { principalId: 'EXAMPLE' } }, s3SchemaVersion: '1.0' }, responseElements: { 'x-amz-id-2': 'EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH', 'x-amz-request-id': 'EXAMPLE123456789' }, awsRegion: 'us-east-1', eventName: 'ObjectCreated:Put', userIdentity: { principalId: 'EXAMPLE' }, eventSource: 'aws:s3' } ] }
END RequestId: d347e7e3-712d-11e5-bfdf-05baa36d50fd
REPORT RequestId: d347e7e3-712d-11e5-bfdf-05baa36d50fd Duration: 3003.33 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 17 MB
Task timed out after 3.00 seconds
The line:
var c = new Client();
is only going to get executed once; all calls to your handler() function will use the same instance of your FTP client.
If there could be multiple overlapping calls to handler() (and in an async world that sure seems likely), then the calls to the FTP client, including c.connect(…) and c.end(), will be invoked multiple times against the same FTP client, which may already have an upload in progress, leading to a scenario like this:
Call to handler(). Upload begins.
Call to handler(). Second upload begins.
First upload completes and calls c.end().
Second upload is canceled.
The solution is to create a new FTP client instance for each upload or, if your FTP server has a problem with that (limits the number of client connections), you’ll need to serialize your uploads somehow. One way to do that, since you’re using the async library, would be to use async.queue.
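A rough sketch of the first option, replacing the upload step in the waterfall with one that creates a fresh FTP client per invocation (illustrative only; it assumes it sits inside handler() where s3, dstBucket and dstKey are in scope, and the FTP credentials are still left blank):

function upload(contentType, data, next) {
  // New FTP client per invocation, so overlapping handler() calls don't share state.
  var ftp = new Client();
  var finished = false;
  function done(err) {
    if (finished) return;  // guard against calling next twice
    finished = true;
    if (err) return next(err);
    // Stream the thumb image to a different S3 bucket, as before.
    s3.putObject({
      Bucket: dstBucket,
      Key: dstKey,
      Body: data,
      ContentType: contentType
    }, next);
  }
  ftp.on('ready', function() {
    ftp.append(data, 'testing.jpg', function(err) {
      ftp.end();
      done(err);
    });
  });
  ftp.on('error', done);
  ftp.connect({ host: "", port: 21, user: "", password: "" }); // fill in your FTP details
}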

Node.js 301 Permanent Redirect when connecting to AWS-SDK

Good afternoon,
I'm trying to set up a connection to my AWS product API; however, I keep getting a 301 Permanent Redirect error as follows:
{ [PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.]
message: 'The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.',
code: 'PermanentRedirect',
name: 'PermanentRedirect',
statusCode: 301,
retryable: false }
The code I am using to connect to the API is as follows:
var aws = require('aws-sdk');

// Setting up the AWS API
aws.config.update({
  accessKeyId: 'KEY',
  secretAccessKey: 'SECRET',
  region: 'eu-west-1'
})

var s3 = new aws.S3();

s3.createBucket({Bucket: 'myBucket'}, function() {
  var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
  s3.putObject(params, function(err, data) {
    if (err)
      console.log(err)
    else
      console.log("Successfully uploaded data to myBucket/myKey");
  });
});
If I try using different regions, like us-west-1, I just get the same error.
What am I doing wrong?
Thank you very much in advance!
I have fixed this issue:
You have to make sure that you already have created a bucket with the same name; in this case, the name of the bucket would be 'myBucket'.
s3.createBucket({Bucket: 'myBucket'}, function() {
  var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
Once you have created the bucket, go to its properties and see which region it is using, then add that region here:
aws.config.update({
  accessKeyId: 'KEY',
  secretAccessKey: 'SECRET',
  region: 'eu-west-1'
})
Now it should work! Best wishes
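If you would rather look the region up programmatically instead of in the console, here is a small sketch (not part of the original answer) using the SDK's getBucketLocation call:

var aws = require('aws-sdk');
var s3 = new aws.S3();

// Returns the bucket's LocationConstraint; an empty value means us-east-1.
s3.getBucketLocation({Bucket: 'myBucket'}, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    var region = data.LocationConstraint || 'us-east-1';
    console.log('Bucket lives in', region);
    aws.config.update({region: region}); // point the SDK at the right region
  }
});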
I've just come across this issue and I believe the example at http://aws.amazon.com/sdkfornodejs/ is incorrect.
The issue with the demo code is that Bucket names should be in lowercase - http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Had the demo actually output the err argument of the s3.createBucket callback, this would have been immediately obvious.
Also, bucket names need to be unique in the region you're creating them in.
s3.createBucket({Bucket: 'mylowercaseuniquelynamedtestbucket'}, function(err) {
  if (err) {
    console.info(err);
  } else {
    var params = {Bucket: 'mylowercaseuniquelynamedtestbucket', Key: 'testKey', Body: 'Hello!'};
    s3.putObject(params, function(err, data) {
      if (err)
        console.log(err)
      else
        console.log("Successfully uploaded data to testBucket/testKey");
    });
  }
})
Was stuck on this for a while...
Neither setting the region name nor leaving it blank in my configuration file fixed the issue. I came across a gist discussing the solution, and it was quite simple:
Given:
var AWS = require('aws-sdk');
Prior to instantiating your S3 object, do the following:
AWS.config.update({"region": "us-west-2"}) // replace with your region
I was then able to call the following with no problem:
var storage = new AWS.S3();
var params = {
  Bucket: config.aws.s3_bucket,
  Key: name,
  ACL: 'authenticated-read',
  Body: data
}
storage.putObject(params, function(storage_err, data){...})
