I am trying to send a PDF file using the URL of the file and the "sendDocument" method. The problem is that I can't access the file directly because of the server where it's stored. I tried to use the answer provided in this post:
readFileSync from an URL for Twitter media - node.js
It works, but the file is sent as "file.doc". If I change the extension to .pdf, it is the correct file. Is there any extra step I need to take to send the file with the correct name and extension, or is there another way I can achieve what I need?
EDIT: The code I am using to get the PDF looks exactly like the code in the answer of the post I linked:
function getImage(url, callback) {
    https.get(url, res => {
        // Initialise an array
        const bufs = [];
        // Add the data to the buffer collection
        res.on('data', function (chunk) {
            bufs.push(chunk);
        });
        // This signifies the end of a request
        res.on('end', function () {
            // We can join all of the 'chunks' of the image together
            const data = Buffer.concat(bufs);
            // Then we can call our callback.
            callback(null, data);
        });
    })
    // Inform the callback of the error.
    .on('error', callback);
}
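The chunk-buffering part of this pattern can be exercised without a network call; Buffer.concat preserves byte order across however many 'data' events arrive:

```javascript
// Simulate the chunks a 'data' event handler would collect:
const bufs = [];
for (const piece of ['%PDF', '-1.4', '\n']) {
  bufs.push(Buffer.from(piece)); // one push per 'data' event
}

// Joining them reproduces the original byte stream intact.
const data = Buffer.concat(bufs);
console.log(data.toString()); // "%PDF-1.4\n"
```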
To send the file I use something like this:
getImage(url, function(err, data) {
    if (err) {
        throw new Error(err);
    }
    bot.sendDocument(
        msg.chat.id,
        data,
    );
});
Found the solution. I am using the telebot API (sorry for not mentioning that detail, but I did not know it; I did not make the project).
I used the following line to send the file:
bot.sendDocument(chat_id, data, {fileName: 'file.pdf'});
You can specify the file name and file type by using this code:
const fileOptions = {
    // Explicitly specify the file name.
    filename: 'mypdf.pdf',
    // Explicitly specify the MIME type.
    contentType: 'application/pdf',
};
Full function:
getImage("https://your.url/yourfile.pdf", function(err, data) {
    if (err) {
        throw new Error(err);
    }
    const fileOptions = {
        // Explicitly specify the file name.
        filename: 'mypdf.pdf',
        // Explicitly specify the MIME type.
        contentType: 'application/pdf',
    };
    bot.sendDocument(msg.chat.id, data, {}, fileOptions);
});
NOTE: You MUST provide an empty object ({}) in place of the additional Telegram query options if you have no query options to specify. For example:
// WRONG!
// 'fileOptions' will be taken as additional Telegram query options!!!
bot.sendAudio(chatId, data, fileOptions);
// RIGHT!
bot.sendAudio(chatId, data, {}, fileOptions);
More information here:
https://github.com/yagop/node-telegram-bot-api/blob/master/doc/usage.md#sending-files
Related
How to properly POST data to the server using the Sapper JS library?
Saying: I have a page 'board-editor' where I can select/unselect tiles from a hexagonal grid written in SVG, and add/subtract hex coordinates in a store array.
Then the user fills in a form with the board's name, author, and version... Clicking on the save button would POST the form data plus the array in the store. The server's job is to store the board definition in a 'static/boards/repository/[name].json' file.
Today, there are few details on the net on how to correctly use Sapper/Svelte for POSTing data.
How to proceed? Thanks for replies!
EDIT:
to avoid reposting the whole page, which implies losing the app state, I am considering using an IFRAME with a form inside... but how do I init a copy of Sapper inside the IFRAME to ensure I can use the this.fetch() method in it?
I use Sapper + Svelte for a website, and it's really amazing! In my contact form component, data is sent to the server. This is how it's done without an iframe. The data sent and received is in JSON format.
On the client side (component):
var data = { ...my contact JSON data... }
var url = '/process/contact' // associated script = /src/routes/process/contact.js
fetch(url, {
    method: 'POST',
    body: JSON.stringify(data),
    headers: {
        'Content-Type': 'application/json'
    }
})
.then(r => {
    r.json()
    .then(function(result) {
        // The data is posted: do something with the result...
    })
})
.catch(err => {
    // POST error: do something...
    console.log('POST error', err.message)
})
On the server side:
script = /src/routes/process/contact.js
export async function post(req, res, next) {
    /* Initializes */
    res.setHeader('Content-Type', 'application/json')
    /* Retrieves the data */
    var data = req.body
    // Do something with the data...
    /* Returns the result */
    return res.end(JSON.stringify({ success: true }))
}
Hope it helps!
Together with the solution above, you might get undefined when you try to read the posted data on the server side.
If you are using the standard degit of Sapper, you are using Polka. To enable body parsing in Polka, you can do the following.
npm install body-parser
In server.js, add the following
const { json } = require('body-parser');
And under polka(), add the imported middleware:
.use(json())
So that in the end it says something like:
...
const { json } = require('body-parser');
polka() // You can also use Express
    .use(json())
    .use(
        compression({ threshold: 0 }),
        sirv('static', { dev }),
        sapper.middleware()
    )
    .listen(PORT, err => {
        if (err) console.log('error', err);
    });
I am trying to create a POST call which takes a file (e.g. an img or pdf file) and uploads it to Object Storage on Bluemix. I was able to authenticate, get the token, and create the auth URL. I just need to pass the file we upload along with the URL, but I am out of ideas on how to get the file uploaded from Postman passed to that URL within the POST call. Below is my code:
app.post('/uploadfile', function(req, res) {
    getAuthToken().then(function(token) {
        if (!token) {
            console.log("error");
        } else {
            var fileName = req.body.file;
            console.log("data", fileName);
            console.log(SOFTLAYER_ID_V3_AUTH_URL, "url");
            var apiUrl = SOFTLAYER_ID_V3_AUTH_URL + config.projectId + '/' + containerName + fileName;
            request({
                url: apiUrl,
                method: 'PUT',
                headers: {
                    'X-Auth-Token': token
                }
            }, function(error, response, body) {
                if (!error && response.statusCode == 201) {
                    res.send(response.headers);
                } else {
                    console.log(error, body);
                    res.send(body);
                }
            });
        }
    });
});
Can someone help here?
Since you're using Express, you should use something like:
https://www.npmjs.com/package/express-fileupload
https://github.com/mscdex/connect-busboy
https://github.com/expressjs/multer
https://github.com/andrewrk/connect-multiparty
https://github.com/mscdex/reformed
Without a body parser that handles file uploads you will not be able to get the uploaded file in the Express request handler.
Then, you need to pass the uploaded file to the request that you're making.
For that you should use this module:
https://www.npmjs.com/package/bluemix-object-storage
There is no need to reinvent the wheel when there are tested and easy to use solutions available. Especially when you're dealing with sensitive information like API keys and secrets, I would not advise you to implement your own solution from scratch unless you really know what you're doing. And if you really know what you're doing, then you don't need to seek advice for things like that.
Here is the official Object Storage SDK for Node.js:
https://github.com/ibm-bluemix-mobile-services/bluemix-objectstorage-serversdk-nodejs
Connect to Object Storage:
var credentials = {
    projectId: 'project-id',
    userId: 'user-id',
    password: 'password',
    region: ObjectStorage.Region.DALLAS
};
var objStorage = new ObjectStorage(credentials);
Create a container:
objStorage.createContainer('container-name')
    .then(function(container) {
        // container - the ObjectStorageContainer that was created
    })
    .catch(function(err) {
        // AuthTokenError if there was a problem refreshing authentication token
        // ServerError if any unexpected status codes were returned from the request
    });
Create a new object or update an existing one:
container.createObject('object-name', data)
    .then(function(object) {
        // object - the ObjectStorageObject that was created
    })
    .catch(function(err) {
        // TimeoutError if the request timed out
        // AuthTokenError if there was a problem refreshing authentication token
        // ServerError if any unexpected status codes were returned from the request
    });
I've tried all sorts of things to get this to work. I'm trying to request a PDF from an API in Node, then send it back to the client that called it to begin with.
For the minute I just want to successfully save and view the PDF on the node server.
The issue is the PDF file is always empty when I open it (Even though it has a size of 30kb).
The basic flow is like this (I've removed a few bits, but the code below works and returns the PDF fine):
// We pass through session IDs, request dates etc. in the body
app.post("/getPayslipURL", function(client_request, res) {
    // create request, which will simply pass on the data to the database
    // (in order to get the NI number we need for the pay API)
    const NI_NUMBER_REQUEST = db_api.createRequestTemplate({
        body: JSON.stringify(client_request.body)
    });
    // Create a chain of HTTPS requests, starting with our call to the DB
    requestPromise(NI_NUMBER_REQUEST)
        .then((db_response) => {
            const PAY_API_OPTIONS = /* Code to generate options based on further DB info (includes dates etc.) */
            return requestPromise(PAY_API_OPTIONS); // Call pay API
        })
        .then((pay_pdf_data) => {
            console.log(typeof pay_pdf_data); // It's a string
            // At this point I can log pay_pdf_data, but if I try to save it to file
            // it's always empty, no matter how I encode it etc.
            fs.writeFile("./test.pdf", pay_pdf_data, 'binary', function(err) {
                if (err) {
                    return console.log(err);
                }
                console.log("The file was saved!");
            });
        })
        .catch(err => console.log('Error caught:', err)); // Catch any errors in our request chain
});
I've tried saving with/without the binary flag as suggested in other posts, both in the file save as well as within the requests themselves. Various decoding methods have also been tried; I always get an empty PDF saved.
My return data looks like this (it's much bigger; when saved as test.pdf I get a 30kb file, as mentioned before):
%PDF-1.4
%����
1 0 obj
0 obj
<
I've found a post which talks about piping the data all the way through. I have a feeling my pdf_data is corrupted when it's converted to a string.
Any ideas how would I go about doing this with the current setup?
Edit: request-promise is a library; I could also use the standard request library if that's easier:
https://github.com/request/request-promise -
https://github.com/request/request
Thanks!
Your code doesn't work because the underlying request library (used by request-promise) requires the option encoding set to null for binary data - see https://github.com/request/request#requestoptions-callback.
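The corruption can be reproduced in isolation: decoding raw PDF bytes as UTF-8 (request's default string decoding) replaces invalid byte sequences with replacement characters, so the original bytes can never be recovered, while the 'binary' (latin1) encoding maps every byte 1:1. The byte values below are arbitrary stand-ins for a PDF's binary section:

```javascript
// "%PDF" followed by four non-UTF-8 bytes (the same kind of bytes that
// showed up mangled as %???? in the question's sample output):
const original = Buffer.from([0x25, 0x50, 0x44, 0x46, 0xe2, 0xe3, 0xcf, 0xd3]);

// Decoding as UTF-8 turns each invalid byte into U+FFFD - data destroyed:
const viaUtf8 = Buffer.from(original.toString('utf8'), 'utf8');
console.log(original.equals(viaUtf8)); // false

// 'binary' (latin1) round-trips every byte losslessly:
const viaBinary = Buffer.from(original.toString('binary'), 'binary');
console.log(original.equals(viaBinary)); // true
```

Setting encoding: null avoids the string conversion entirely, so the response arrives as a Buffer.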
Here's how you download binary data using that module -
app.post("/getPayslipURL", function(client_request, res) {
    const NI_NUMBER_REQUEST = db_api.createRequestTemplate({
        body: JSON.stringify(client_request.body),
        encoding: null
    });
    requestPromise(NI_NUMBER_REQUEST)
        .then((db_response) => {
            const PAY_API_OPTIONS = /* Code to generate options based on further DB info (includes dates etc.) - set encoding: null here too, since this request returns the binary PDF */
            return requestPromise(PAY_API_OPTIONS); // Call pay API
        })
        .then((pay_pdf_data) => {
            fs.writeFile("./test.pdf", pay_pdf_data, 'binary', (err) => {
                if (err) {
                    return console.log(err);
                }
                console.log("The file was saved!");
            });
        })
        .catch(err => console.log('Error caught:', err)); // Catch any errors in our request chain
});
I tried to add an additional attachment to a document in PouchDB in my Electron application. However, I can only add the last attachment; the old one is overwritten.
The following data is not amended in a way which adds the new file:
"_attachments": {"someFile.jpg": {"content_type": "image/jpeg", "revpos": 5, "length": 38718, "digest": "md5-X+MOUwdHmNeORSl6xdtZUg=="}}
Should I read the document first and recreate it, adding the additional file, using multiple attachments with the following method:
db.put({
    _id: 'mydoc',
    _attachments: {
        'myattachment1.txt': {
            content_type: 'text/plain',
            data: blob1
        },
        'myattachment2.txt': {
            content_type: 'text/plain',
            data: blob2
        },
        'myattachment3.txt': {
            content_type: 'text/plain',
            data: blob3
        },
        // etc.
    }
});
?
Below you can see part of the code I try to run to check if I can add two attachments to one document (actually I try to use the same file twice to simplify the test):
pdb.putAttachment(id, name, rev, file, type).then(function (result) {
    console.log("att saved:");
    console.log(result);
}).catch(function (err) {
    console.log(err);
});

var newFileName = "new" + name;
pdb.putAttachment(id, newFileName, rev, file, type).then(function (result) {
    console.log("att saved 2:");
    console.log(result);
}).catch(function (err) {
    console.log(err);
});
The outcome is:
Object {ok: true, id: "1489351796004", rev: "28-a4c41eff6fbdde8a722a920c9d5a1390"}
CustomPouchError {status: 409, name: "conflict", message: "Document update conflict", error: true, id: "1489351796004"}
It looks like I don't understand something, or I do not know how to use putAttachment properly.
I would also add how the data looks in SQLite (by-sequence table, json row):
{...,"_attachments":{"testPicture.jpg":{"content_type":"image/jpeg","revpos":34,"length":357677,"digest":"md5-Bjqd6RHsvlCsDkBKe0r7bg=="}}}
The problem here is how to add another attachment to this structure. Somehow I cannot achieve that via putAttachment.
Your question and especially the code are quite hard to read, so the error was not so easy to spot: you didn't wait for the promise to be resolved. When you update a document with revision 1, you have to wait for the result, read the new revision from it, and only then write the second attachment. This would be my (untested) take on your code:
pdb.putAttachment(id, name, rev, file, type)
    .then(function (result) {
        // Use the new revision here:
        return pdb.putAttachment(id, newFileName, result.rev, file, type);
    }).then(function (result) {
        console.log(result);
    }).catch(function (err) {
        console.log(err);
    });
Adding two attachments at once is possible if you encode them correctly, but you're on your own with it. I'd recommend against it – better to use the abstractions that PouchDB provides.
Also, don't analyze the underlying data structures too much, because the storage format might differ drastically depending on the storage adapter used. It's quite interesting how different adapters store their data, but never rely on anything you find out – data formats might change.
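The conflict can be modeled with a toy, synchronous stand-in for putAttachment (not PouchDB's real promise-based API): every successful write bumps the revision, and a write that presents a stale revision fails with a 409, just like the CustomPouchError in the question.

```javascript
// Toy model of PouchDB's revision check - synchronous for clarity:
function makeDb() {
  let rev = 1;
  return {
    putAttachment(name, expectedRev) {
      if (expectedRev !== rev) {
        const err = new Error('Document update conflict');
        err.status = 409; // same failure mode as the real conflict
        throw err;
      }
      rev += 1;
      return { ok: true, rev };
    }
  };
}

const db = makeDb();
const first = db.putAttachment('a.txt', 1); // ok - revision is now 2

// Reusing the old revision, as the two parallel calls did, conflicts:
let status;
try {
  db.putAttachment('b.txt', 1);
} catch (e) {
  status = e.status; // 409
}

// Waiting for the first result and reusing its revision succeeds:
const second = db.putAttachment('b.txt', first.rev);
console.log(status, second.rev); // 409 3
```

The chained-promise version above does exactly this: it reads result.rev from the first write before issuing the second.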
put replaces the document. If you want to add an attachment to an existing document without overwriting its contents, you should use putAttachment.
I'm trying to download a tar file (non-compressed) over HTTP and pipe its response to the tar-stream parser for further processing. This works perfectly when executed on the terminal, without any errors. To use the same thing in the browser, a bundle.js file is generated using browserify and included in the HTML.
The tar stream contains 3 files. The browserified code, when executed in the browser, parses 2 entries successfully but raises the following error for the third one:
Error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped?
Whereas with the same HTTP download and parsing code, the tar file is downloaded and parsed completely, without errors, on the terminal. Why is this happening?!
Code snippet is along these lines:
. . . .
var req = http.request(url, function(res) {
    res.pipe(tar.extract())
        .on('entry', function(header, stream, callback) {
            console.log("File found " + header.name);
            stream.on('end', function() {
                console.log("<<EOF>>");
                callback();
            });
            stream.resume();
        })
        .on('finish', function() {
            console.log("All files parsed");
        })
        .on('error', function(error) {
            console.log(error); // Raises the above-mentioned error here
        });
});
. . . .
Any suggestions? Headers?
The problem here (and its solution) are tucked away in the http-browserify documentation. First, you need to understand a few things about browserify:
The browser environment is not the same as the node.js environment
Browserify does its best to provide node.js APIs that don't exist in the browser when the code you are browserifying needs them
The replacements don't behave exactly the same as in node.js, and are subject to caveats in the browser
With that in mind, you're using at least three node-specific APIs that have browserify reimplementations/shims: network connections, buffers, and streams. Network connections by necessity are replaced in the browser by XHR calls, which have their own semantics surrounding binary data that don't exist within Node [Node has Buffers]. If you look here, you'll notice an option called responseType; this sets the response type of the XHR call, which must be done to ensure you get binary data back instead of string data. Substack suggested using ArrayBuffer; since this must be set on the options object of http.request, you need to use the long-form request format instead of the string-url format:
http.request({
    method: 'GET',
    hostname: 'www.site.com',
    path: '/path/to/request',
    responseType: 'arraybuffer' // note: lowercase
}, function (res) {
    // ...
});
See the xhr spec for valid values for responseType. http-browserify passes it along as-is. In Node, this key will simply be ignored.
When you set the response type to 'arraybuffer', http-browserify will emit chunks as Uint8Array. Once you're getting a Uint8Array back from http.request, another problem presents itself: the Stream API only accepts string and Buffer for input, so when you pipe the response to the tar extractor stream, you'll receive TypeError: Invalid non-string/buffer chunk. This seems to me to be an oversight in stream-browserify, which should accept Uint8Array values to go along nicely with the other parts of the browserified Node API. You can fairly simply work around it yourself, though. The Buffer shim in the browser accepts a typed array in the constructor, so you can pipe the data yourself, converting each chunk to a Buffer manually:
http.request(opts, function (res) {
    var tarExtractor = tar.extract();
    res.on('data', function (chunk) {
        tarExtractor.write(new Buffer(chunk));
    });
    res.on('end', function () {
        tarExtractor.end();
    });
    res.on('error', function (err) {
        // do something with your error
        // and clean up the tarExtractor instance if necessary
    });
});
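The Uint8Array-to-Buffer conversion at the heart of that workaround can be checked in isolation (Buffer.from is the non-deprecated equivalent of new Buffer for this case):

```javascript
// An XHR chunk arrives as a Uint8Array in the browserify environment:
const chunk = new Uint8Array([0x25, 0x50, 0x44, 0x46]); // the bytes of "%PDF"

// Wrapping it in a Buffer copies the bytes, producing a value the
// stream API accepts as a write() chunk:
const buf = Buffer.from(chunk);
console.log(Buffer.isBuffer(buf), buf.toString('utf8')); // true "%PDF"
```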
Your code, then, should look something like this:
var req = http.request({
    method: 'GET',
    // Add your request hostname, path, etc. here
    responseType: 'arraybuffer'
}, function(res) {
    var tarExtractor = tar.extract();
    res.on('data', function (chunk) {
        tarExtractor.write(new Buffer(chunk));
    });
    res.on('end', tarExtractor.end.bind(tarExtractor));
    res.on('error', function (error) {
        console.log(error);
    });
    tarExtractor.on('entry', function(header, stream, callback) {
        console.log("File found " + header.name);
        stream.on('end', function() {
            console.log("<<EOF>>");
            callback();
        });
        stream.resume(); // This won't be necessary once you do something with the data
    })
    .on('finish', function() {
        console.log("All files parsed");
    });
});