Waiting for an API request to end to call another function - javascript

In my Vue.js app I process user uploaded pictures with Google Vision API.
axios.post(`https://vision.googleapis.com/v1/images:annotate?key=${this.apiKey}`, axiosConfig)
  .then(response => {
    let slicedLabelArray = response.data.responses[0].labelAnnotations.slice(0, 5)
    slicedLabelArray.forEach(function (label) {
      let labelToAdd = label.description
      store.addLabels(labelToAdd)
    })
  })
I get the data I need, but I'm having trouble setting up another function to be called right after my API request completes.
imageConversion () {
  console.log('converting')
  var byteString = atob(this.image.dataUrl.split(',')[1]);
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  var blob = new Blob([ia], {
    type: 'image/jpeg'
  });
  var file = new File([blob], "image.jpg");
  store.setFile(file)
}
As soon as I get the first five labels of my base64 image, I need it to be converted. I know there is something called a promise in JavaScript that could help me, but in this case I can't find a proper way to use it.
My issue is that imageConversion() is executed right after the Google Vision API request is sent (before the data is returned), and after that a store.state value is changed so that a div with a button to upload the picture is displayed.
The purpose of the Google Vision API call is to choose the endpoint the picture is uploaded to, depending on its labels. But if I don't wait for the response of the Google Vision API, I have no endpoint to upload my picture to. So I want the Google Vision API call to return the labels, and only when they are returned should the other functions be called.
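One way to sequence this is to return the axios promise (or use async/await) and call imageConversion() only after the labels have resolved. A minimal sketch of the pattern, where getLabels() and its timeout are a hypothetical mock standing in for the real axios.post call to the Vision API:

```javascript
const order = [];

// Stand-in for the axios.post(...) Vision request (hypothetical mock).
function getLabels() {
  return new Promise(resolve =>
    setTimeout(() => resolve(['Dog', 'Pet', 'Mammal', 'Canidae', 'Carnivore']), 10)
  );
}

// Stand-in for imageConversion(): must run only after the labels arrive.
function imageConversion() {
  order.push('conversion');
}

async function handleUpload() {
  const labels = await getLabels();  // wait for the Vision response
  labels.slice(0, 5).forEach(label => order.push('label:' + label));
  imageConversion();                 // the endpoint is now known, safe to convert
  return order;
}
```

Because of the `await`, the conversion entry is always logged after all five labels, which is exactly the ordering the question asks for.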

Related

Use separate channels or streams for emit in socket.io

I have a multi-client chat application where clients can share both text and images.
But I'm facing an issue: when a user sends an image that is quite large and then sends a text right after it, the other users have to wait until the image is fully received.
Is there a way to emit and receive the text and image data separately? For example, the text is received while the image is still being received.
Currently I'm using one emitter for both the text and the image.
socket.emit('message', data, type, uId, msgId);
And if I have to use another protocol like UDP or WebRTC, which one would be best? As far as I know, UDP cannot be used in browser scripts.
So, what I've figured out is:
Split the large image data into many (e.g. 100) parts, then use socket.emit():
let image = new Image();
image.src = selectedImage;
image.onload = async function () {
  let resized = resizeImage(image, image.mimetype); // resize it before sending
  // send the image in 100 parts
  let partSize = Math.ceil(resized.length / 100);
  let partArray = [];
  socket.emit('fileUploadStart', 'image', resized.length, id);
  for (let i = 0; i < resized.length; i += partSize) {
    partArray.push(resized.substring(i, i + partSize));
    socket.emit('fileUploadStream', resized.substring(i, i + partSize), id);
    await sleep(100); // wait a bit so other events can be sent while the image uploads
  }
  socket.emit('fileUploadEnd', id);
};
Then finally gather the image parts back together.
I've used Map() on the server side and it's pretty fast.
Server:
const imageDataMap = new Map();

socket.on('fileUploadStart', (...) => {
  imageDataMap.set(id, {metaData});
});

socket.on('fileUploadStream', (...) => {
  imageDataMap.get(id).data += chunk;
});

socket.on('fileUploadEnd', (...) => {
  // emit the data to other clients
});
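Stripped of socket.io, the split/gather round trip above reduces to string slicing and joining. A sketch, where sleep() is the setTimeout-based helper the upload loop awaits (its implementation is assumed, not shown in the snippets above):

```javascript
// The sleep() helper the upload loop awaits between chunks.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Split a string payload into roughly `count` parts.
function splitIntoParts(data, count) {
  const partSize = Math.ceil(data.length / count);
  const parts = [];
  for (let i = 0; i < data.length; i += partSize) {
    parts.push(data.substring(i, i + partSize));
  }
  return parts;
}

// What the server does across fileUploadStream events: concatenate chunks.
function reassemble(parts) {
  return parts.join('');
}
```

Since each chunk arrives in its own event, small text messages can be emitted and delivered between chunks, which is the interleaving the question is after.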

Trying to upload multiple images to firestore with Nodejs (ClientSide firebase SDK)

I am trying to upload multiple images to firestore in my nodeJS server-side code.
I initially implemented it with the firestore bucket API
admin.storage.bucket().upload()
I am placing the above code in a for loop.
for (let x = 0; images.length > x; x++) {
  admin.storage.bucket().upload(filepath, {options}).then(val => {
    // get image download URL and add to a list
    imageUrls.push(url);
    if (images.length == x + 1) {
      // break out of loop and add the imageUrls list to firestore
    }
  })
}
but what happens is that the code sometimes doesn't add all the image URLs to the imageUrls list, and I'll have only 1 or 2 image URLs saved to Firestore, while in Firebase Storage I can see that all the images were uploaded.
I understand that uploading takes some time and would like to know the best way to implement this, as I assumed the .then() method is an async approach and would take care of any await instances.
Your response would be highly appreciated.
It's not clear how you get the values of filepath in the for block, but let's imagine that images is an array of fully qualified paths to the images you wish to upload to the bucket.
The following should do the trick (untested):
const signedURLs = [];
const promises1 = [];
images.forEach(path => {
  promises1.push(admin.storage.bucket().upload(path, {options}))
})
Promise.all(promises1)
  .then(uploadResponsesArray => {
    const promises2 = [];
    const config = { // See https://googleapis.dev/nodejs/storage/latest/File.html#getSignedUrl
      action: 'read',
      expires: '03-17-2025',
      //...
    };
    uploadResponsesArray.forEach(uploadResponse => {
      const file = uploadResponse[0];
      promises2.push(file.getSignedUrl(config))
    })
    return Promise.all(promises2);
  })
  .then(getSignedUrlResponsesArray => {
    getSignedUrlResponsesArray.forEach(resp => signedURLs.push(resp[0]));
    // Do whatever you want with the signedURLs array here (it is only
    // populated once this last then() runs)
  });
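The same two-stage flow reads more clearly with async/await. In this sketch, fakeUpload() and fakeGetSignedUrl() are mocks standing in for admin.storage.bucket().upload() and file.getSignedUrl(); both real calls resolve with single-element arrays, which is why the destructuring takes element [0]:

```javascript
// Mock of admin.storage.bucket().upload(path): resolves with [File].
function fakeUpload(path) {
  return Promise.resolve([{ name: path }]);
}

// Mock of file.getSignedUrl(config): resolves with [url].
function fakeGetSignedUrl(file) {
  return Promise.resolve(['https://signed.example/' + file.name]);
}

async function uploadAll(paths) {
  // Stage 1: upload everything in parallel.
  const uploads = await Promise.all(paths.map(fakeUpload));
  // Stage 2: request a signed URL for each uploaded file, also in parallel.
  const responses = await Promise.all(
    uploads.map(([file]) => fakeGetSignedUrl(file))
  );
  return responses.map(([url]) => url);
}
```

The key point either way: collect every promise and wait on all of them, instead of counting loop iterations, which is what made the original code miss URLs.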

Google Apps Script: MailApp.sendEmail() can't send anything but TEXT and HTML files, nothing else?

I’m trying to attach a file to an email I send in Google Apps Script with MailApp.sendEmail(). In the browser JavaScript I read in the file manually with this code because I’ve already processed the equivalent of the submit button in my HTML form, and it works:
var file = document.getElementById('myfile');
var fileInfo = [];
if (file.files.length) // if there is at least 1 file
{
  if (file.files[0].size < maxEmailAttachmentSize) // and its size is < 25M
  {
    var reader = new FileReader();
    reader.onload = function(e)
    {
      fileInfo[0] = e.target.result;
    };
    reader.readAsBinaryString(file.files[0]);
    fileInfo[1] = file.files[0].name;
    fileInfo[2] = file.files[0].type;
    fileInfo[3] = file.files[0].size;
  }
  console.log(fileInfo); // here I see the full file and info. All looks correct.
}
Then I send it up to the server.
google.script.run.withSuccessHandler(emailSent).sendAnEmail(fileInfo);
On the server I pull out the fields and send the email like so:
var fname = fileInfo[1];
var mimeType = fileInfo[2];
var fblob = Utilities.newBlob(fileInfo[0], mimeType, fname);
// all looks right in the Logger at this point
try {
  GmailApp.sendEmail(emaiRecipient, emailSubject, emailBody,
    {
      name: 'Email Sender', // email sender
      attachments: [fblob]
    }
  );
} catch …
This works fine when the file is a text or HTML file, but not when the file is anything else. The file is sent, but it's empty and apparently corrupt. Can anyone see what's wrong with this code? (It doesn't work with MailApp.sendEmail() either.) I did see in another Stack Overflow post that the document has to be saved once, but that is something I definitely don't want to do. Isn't there another way? What am I missing?
Modification points:
FileReader works asynchronously. This has already been mentioned in Rubén's comment.
At the current stage, when binary data is sent to the Google Apps Script side, it seems that it must be converted to a string or a byte array. This has already been mentioned in TheMaster's comment.
In order to use your Google Apps Script unmodified, I think that converting the file content to a byte array (an Int8Array) is suitable in this case. For this, I used readAsArrayBuffer instead of readAsBinaryString.
When the above points are reflected in your script, it becomes as follows.
Modified script:
HTML&Javascript side:
// Added
function getFile(file) {
  return new Promise((resolve, reject) => {
    var reader = new FileReader();
    reader.onload = (e) => resolve([...new Int8Array(e.target.result)]);
    reader.onerror = (err) => reject(err);
    reader.readAsArrayBuffer(file);
  });
}

async function main() { // <--- Modified
  var file = document.getElementById('myfile');
  var fileInfo = [];
  if (file.files.length) {
    if (file.files[0].size < maxEmailAttachmentSize) {
      fileInfo[0] = await getFile(file.files[0]); // <--- Modified
      fileInfo[1] = file.files[0].name;
      fileInfo[2] = file.files[0].type;
      fileInfo[3] = file.files[0].size;
    }
    console.log(fileInfo); // here I see the full file and info. All looks correct.
    google.script.run.withSuccessHandler(emailSent).sendAnEmail(fileInfo);
  }
}
Although I'm not sure about your whole script, the modified script supposes that main() is run. When main() is run, the file is converted to a byte array and put into fileInfo[0].
On the Google Apps Script side, var fblob = Utilities.newBlob(fileInfo[0], mimeType, fname); then builds the correct blob from fileInfo.
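Stripped of the FileReader plumbing, the round trip can be sketched in plain JavaScript. Int8Array values are signed, which is why bytes ≥ 128 come through as negative numbers; the function names here are illustrative, not from the answer's code:

```javascript
// Convert an ArrayBuffer into a plain array of signed bytes, which is what
// survives google.script.run's JSON serialization.
function toByteArray(arrayBuffer) {
  var bytes = new Int8Array(arrayBuffer);
  var ar = [];
  for (var i = 0; i < bytes.length; i++) {
    ar.push(bytes[i]);
  }
  return ar;
}

// What the server side effectively receives and hands to Utilities.newBlob():
function fromByteArray(ar) {
  return new Int8Array(ar);
}
```

The plain numeric array is lossless for binary data, unlike readAsBinaryString, whose result gets mangled when serialized as a UTF-8 string.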
Google Apps Script side:
In this modification, your Google Apps Script is not modified.
References:
FileReader
FileReader.readAsArrayBuffer()
Added:
This code looks good but we can't use it because we are running on the Rhino JavaScript engine, not V8. We don't have support for newer JavaScript syntax. Could you give us an example of how it's done with older syntax? Ref
Based on your comment above, I modified the script as follows.
Modified script:
HTML&Javascript side:
function main() {
  var file = document.getElementById('myfile');
  var fileInfo = [];
  if (file.files.length) {
    if (file.files[0].size < maxEmailAttachmentSize) {
      var reader = new FileReader();
      reader.onload = function(e) {
        var bytes = new Int8Array(e.target.result);
        var ar = [];
        for (var i = 0; i < bytes.length; i++) {
          ar.push(bytes[i]);
        }
        fileInfo[0] = ar;
        fileInfo[1] = file.files[0].name;
        fileInfo[2] = file.files[0].type;
        fileInfo[3] = file.files[0].size;
        console.log(fileInfo); // here I see the full file and info. All looks correct.
        google.script.run.withSuccessHandler(emailSent).sendAnEmail(fileInfo);
      };
      reader.onerror = function(err) {
        console.error(err); // no enclosing Promise here, so just report the error
      };
      reader.readAsArrayBuffer(file.files[0]);
    }
  }
}
Google Apps Script side:
In this modification, your Google Apps Script is not modified.

Render an image byte stream on angular client side app

I have a NodeJS / Express RESTful API that proxies requests to an Active Directory LDAP server. I do this because LDAP queries tend to be slow, so I use the RESTful API to cache and refresh data intermittently. I recently attempted to add the thumbnail photo. From my research, it appears that the library I am using, ldapjs, converts the native LDAP byte array to a string.
Example of what this looks like:
\ufffd\ufffd\ufffd\ufffd\u0000\u0010JFIF\u0000\u0001\u0000\u0001\u0000x\u0000x\u0000\u0000\ufffd\ufffd\u0000\u001fLEAD
Technologies Inc.
V1.01\u0000\ufffd\ufffd\u0000\ufffd\u0000\u0005\u0005\u0005\b\
Because of this, the image does not render correctly in the Angular client app. Based on my research, here are some attempts at correcting the problem:
Convert the string to a byte array using different methods (see the code examples).
Modify the ldapjs library to return the data as a byte array in the RESTful API, as in the following, then bind the byte stream to the Angular page:
https://github.com/joyent/node-ldapjs/issues/137
https://csjdpw.atlassian.net/wiki/spaces/~Genhan.Chen/pages/235044890/Display+LDAP+thumbnail+photos
html binding:
<div>
  <img *ngIf="userImage" [src]="userImage" alt="{{dataSource.sAMAccountName}}">
</div>
controller:
public get userImage() {
  let value = null;
  if (this.dataSource.thumbnailPhoto) {
    const byteArray = this.string2Bin(this.dataSource.thumbnailPhoto);
    const image = `data:image/jpeg;base64,${Buffer.from(byteArray).toString('base64')}`;
    value = this.domSanitizer.bypassSecurityTrustUrl(image);
  }
  return value;
}

private string2Bin(str) {
  var result = [];
  for (var i = 0; i < str.length; i++) {
    result.push(str.charCodeAt(i));
  }
  return result;
}
and
alternate version of controller:
public get userImage() {
  let value = null;
  if (this.dataSource.thumbnailPhoto) {
    const byteArray = new TextEncoder().encode(this.dataSource.thumbnailPhoto);
    const image = `data:image/jpeg;base64,${Buffer.from(byteArray).toString('base64')}`;
    value = this.domSanitizer.bypassSecurityTrustUrl(image);
  }
  return value;
}
another alternate version of the controller:
public get userImage() {
  let value = null;
  if (this.dataSource.thumbnailPhoto) {
    const blob = new Blob([Buffer.from(this.dataSource.thumbnailPhoto).toString('base64')], { type: 'image/jpeg' });
    value = window.URL.createObjectURL(blob);
  }
  return value;
}
I expected a rendered image on the angular page but all I get is the non-rendered placeholder.
Here are the versions of the libraries I am using
Angular - 8.0.3
NodeJS - 10.15.0
ldapjs - 1.0.2
I am sure I am missing something, I am just not sure what it is. Any assistance would be appreciated.
So, after some guidance provided by @Aritra Chakraborty, I checked the RESTful API source code. It appears to be a problem with the ldapjs library: when using the entry object conversion, it does something strange with the data that makes it unusable. I then realized I had access to the entry's raw format, which is the byte array. Instead of trying to convert to base64 on the client, I moved that to the API, then just mapped it back in the client binding, and bang, it worked.
Here is some example code:
RESTFul api
_client.search(this._search_dn, opts, (error, res) => {
  res.on("searchEntry", (entry) => {
    let result = {};
    result.id = string_service.formatGUID(JSON.parse(JSON.stringify(entry.raw)).objectGUID);
    result = Object.assign({}, result, entry.object);
    if (entry.raw.thumbnailPhoto) {
      result.thumbnailPhoto = entry.raw.thumbnailPhoto.toString('base64');
    }
    // The previous 3 lines did not exist previously
On the Angular 8 client I simplified the binding:
public get userImage() {
  let value = null;
  if (this.dataSource.thumbnailPhoto) {
    const image = `data:image/jpeg;base64,${this.dataSource.thumbnailPhoto}`;
    value = this.domSanitizer.bypassSecurityTrustUrl(image);
  }
  return value;
}
I hope someone finds some value out of this.
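The heart of that fix fits in two small functions. The names encodeThumbnail() and toDataUrl() are illustrative, but entry.raw.thumbnailPhoto really is a Node Buffer, so toString('base64') is all the server needs:

```javascript
// Server side: send the raw attribute bytes as base64 instead of the
// mangled string produced by entry.object.
function encodeThumbnail(rawBuffer) {
  return rawBuffer.toString('base64');
}

// Client side: wrap the base64 string in a data URL for the <img> binding.
function toDataUrl(base64) {
  return 'data:image/jpeg;base64,' + base64;
}
```

A JPEG thumbnail starts with the bytes FF D8 FF, which is why valid encoded photos begin with the familiar "/9j/" prefix; seeing that prefix is a quick sanity check that the bytes survived intact.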

Missing attachment from Outlook JS Api

I'm creating an Outlook add-in and I would like to get each attachment from a mail using JavaScript.
So far, it worked fine with this:
var attch = Office.context.mailbox.item.attachments;
for (i = 0; i < attch.length; i++) {
  // Logic here
}
But today I found that an .msi file was missing from the attch variable. I think that's because this file is an executable and is therefore considered dangerous.
As a workaround, I know that I can make an AJAX request to my ASP.NET web server and consume the Exchange API to get the full attachment list:
var exService = new ExchangeService
{
    Url = new Uri(data.EwsURL),
    Credentials = new WebCredentials(data.Login, data.Password)
};
var message = EmailMessage.Bind(exService, new ItemId(data.mailId));
var propertySet = new PropertySet(ItemSchema.Attachments);
message.Load(propertySet);
if (message.Attachments.Count == 0 || data.Type == "text" || data.Type == "full") return;
foreach (var attch in message.Attachments.OfType<FileAttachment>().Select(attachment => attachment))
{
    // Logic here: returns the attachments info to the webpage
}
Is there a better way to get the full attachments list, using the Office.JS API?
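On the client, you can at least detect which attachments are of a type mail clients commonly block, so you know when to fall back to the server-side Exchange call. A small sketch: the attachment objects mirror the shape of Office.context.mailbox.item.attachments entries, and the extension list is purely illustrative:

```javascript
// Hypothetical attachment metadata, shaped like the entries of
// Office.context.mailbox.item.attachments.
const attachments = [
  { name: 'report.pdf', size: 12345 },
  { name: 'setup.msi', size: 99999 }
];

// Extensions commonly blocked or hidden by mail clients (illustrative list).
const riskyExtensions = ['.msi', '.exe', '.bat'];

// Return the attachments whose names end with a risky extension.
function findRiskyAttachments(list) {
  return list.filter(a =>
    riskyExtensions.some(ext => a.name.toLowerCase().endsWith(ext))
  );
}
```

When this returns a non-empty list, the add-in could switch to the EWS-backed endpoint described above instead of relying on the Office.js attachment collection alone.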
