XMPP - How do I delete all messages between two jids, but only for one user? - javascript

Problem:
I want to delete all the messages (and the thread) on one side of a conversation between two users, A and B. I have no idea if this is even possible and, if so, how.
I have:
the jid of each user
an XMPP library in JS (custom) that allows me to send an IQ or any other type of stanza.
For example, this is how I get my friends (roster) list:
async getFriends() {
  const requestId = this.sendStanza(
    'iq',
    { type: 'get' },
    (stanza) => stanza.c('query', { xmlns: 'jabber:iq:roster' }),
  );
  const result = await this.once('*', requestId);
  const requests = result.children[0].children.map(child => child.attrs.jid);
  return requests;
}
Hopefully this is enough for someone to advise me. Thanks.

If you have full access to the client logic, you can implement your own: for instance, send an IQ stanza with a specific namespace (xmlns) along with some elements/attributes. When the receiving side gets that IQ, it can perform whatever logic you want (delete messages, the thread, etc.).
Check this out:
https://xmpp.org/extensions/xep-0424.html
It is an extension for deleting (retracting) a single message.
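If you go the custom-namespace route, a minimal sketch reusing the sendStanza/once helpers from your getFriends example might look like the following. The element name, the namespace, and the clearLocalHistory helper are all made up for illustration; both clients have to agree on them and implement the actual deletion themselves:

async deleteThread(withJid) {
  // Ask the other side's client to retract the whole thread (custom, agreed-upon namespace).
  const requestId = this.sendStanza(
    'iq',
    { type: 'set', to: withJid },
    (stanza) => stanza.c('retract-thread', { xmlns: 'urn:example:retract-thread:0' }),
  );
  // Wait for the result IQ acknowledging the request.
  await this.once('*', requestId);
  // Then clear your own side of the conversation locally.
  this.clearLocalHistory(withJid); // hypothetical helper in your client
}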

Related

How to make safe PUT DELETE requests?

I am making the backend for my project and I have a question about safety.
As an example, my task is to handle different "/notes" requests.
/notes => get all notes of authorized user
/notes => create new note
/notes => delete note
So... Receiving data is safe. No one can get these notes from another URL because of CORS.
But if we use GET params to create or delete notes, e.g.
/notes?action=delete&note_id=7
a malicious person can send that link to an authorized user, and he will lose his data by accident.
So the next step is to use POST requests.
Everything is much better, but there is still a little hole: if someone builds a POST form with hidden input params, it can still be dangerous.
So the last thing I'd add is sending an extra param that only the authorized user knows:
User ID
Temporary hash
or something like that.
Are there any other solutions?
/notes?action=delete&note_id=7
You need to know the id of the note's owner, something like this:
$note_id = (int)$_GET['note_id'];
// fetch the note row, e.g. via your DB layer:
$note = $db->query("SELECT * FROM `notes` WHERE `id` = $note_id")->fetch();
if ($user['id'] == $note['user_id']) {
    // delete code
} else {
    exit('Bye bye');
}
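For the "extra param that only the authorized user knows" idea, the usual name is a CSRF token. Below is a rough sketch of how that could look on a Node/Express backend (purely an assumption, since the question doesn't name a stack; db.getNote, db.deleteNote, and the session setup are placeholders):

const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({ extended: false }));
// assumes a session middleware such as express-session is already configured

app.post('/notes/:id/delete', async (req, res) => {
  // 1. CSRF check: the token was generated server-side and embedded in the form.
  if (!req.body.csrfToken || req.body.csrfToken !== req.session.csrfToken) {
    return res.status(403).send('Bad CSRF token');
  }
  // 2. Ownership check: same idea as the SQL example above.
  const note = await db.getNote(Number(req.params.id)); // hypothetical data-access helper
  if (!note || note.user_id !== req.session.userId) {
    return res.status(403).send('Bye bye');
  }
  await db.deleteNote(note.id); // hypothetical
  res.sendStatus(204);
});

// When rendering the form, generate and store a token for this session:
// req.session.csrfToken = crypto.randomBytes(32).toString('hex');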

Expo notification click to action

I can't seem to find a solution for handling a click on a push notification so that it redirects me to the screen (chat, etc.) linked to that push notification.
I would also like to add a square image to the side, and could not find an answer for that either.
The push notifications are sent from a Node.js server. I looked at the docs and searched the internet, and I did not find anything of interest:
https://docs.expo.dev/versions/latest/sdk/notifications/#managing-notification-categories-interactive-notifications
https://github.com/expo/expo-server-sdk-node
Thank you in advance for your answers ❤️
I'm not quite sure about the square image, but in order to handle redirects you can look at this documentation from expo: https://docs.expo.dev/push-notifications/receiving-notifications/.
You can then pass the data you need for your redirect (i.e. notification_type, relevant id etc) via the data property on your message (this will need to be done wherever the message is created, which from your question is from the node api):
messages.push({
  to: pushToken,
  body: 'This is a test notification',
  data: { notification_type: 'something', id: 'something_else' },
});
It is then up to you to decide how to handle that message based on the extra data you have provided.
For example, taking the code provided in the link above as an example, you could have a handle function as follows:
_handleNotification = response => {
  const data = response.notification.request.content.data;
  if (data.notification_type === "new_message") {
    // navigate to MessageScreen with data.id as param
  } else {
    // do something else based on the type or...
  }
};
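For completeness, here is a sketch of how such a handler could be registered with expo-notifications. The screen name and the navigation object are assumptions; check the Expo docs linked above for the exact setup in your SDK version:

import { useEffect } from 'react';
import * as Notifications from 'expo-notifications';

export function useNotificationRedirect(navigation) {
  useEffect(() => {
    // Fired when the user taps a notification.
    const subscription = Notifications.addNotificationResponseReceivedListener(response => {
      const data = response.notification.request.content.data;
      if (data.notification_type === 'new_message') {
        navigation.navigate('MessageScreen', { id: data.id }); // hypothetical screen name
      }
    });
    return () => subscription.remove();
  }, [navigation]);
}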

Ajax GET: multiple data-specific calls, or fewer less specific calls?

I'm developing a web app using a Node.js/express backend and MongoDB as a database.
The below example is for an admin dashboard page where I will display cards with different information relating to the users on the site. I might want to show, on the same page, for example:
The number of each type of user
The most common location for each user type
How many signups there are by month
Most popular job titles
I could do this all in one route, where I have a controller that performs all of these tasks and bundles the results as one object at a URL that I can then pull data from using ajax. Or I could split each task into its own route/controller, with a separate ajax call to each. What I'm trying to decide is what the best practices are around making multiple ajax calls on a single page.
Example:
I am building a page where I will make an interactive table using DataTables for different types of user (I currently have two: mentors and mentees). This example requires just two data requests (one for each user type), but my final page will be more like 10.
For each user type, I make an ajax GET call and build the table from the returned data:
User type 1 - Mentees
$.get('/admin/' + id + '/mentees')
  .done(data => {
    $('#menteeTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "status" }
      ]
    });
  })
User type 2 - Mentors
$.get('/admin/' + id + '/mentors')
  .done(data => {
    $('#mentorTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "position" }
      ]
    });
  })
This then requires two routes in my Node.js backend:
router.get("/admin/:id/mentors", getMentors);
router.get("/admin/:id/mentees", getMentees);
And two controllers that are structured identically (but filter for different user types):
getMentees(req, res, next){
  console.log("Controller: getMentees");
  let query = { accountType: 'mentee', isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err)
    })
}
This works great. However, as I need to make multiple data requests I want to make sure that I'm building this the right way. I can see several options:
Make individual ajax calls for each data type, and do any heavy lifting on the backend (e.g. tally user types and return) - as above
Make individual ajax calls for each data type, but do the heavy lifting on the frontend. In the above example I could have just as easily filtered out isAdmin users on the data returned from my ajax call
Make fewer ajax calls that request less refined data. In the above example I could have made one call (requiring only one route/controller) for all users, and then filtered data on the frontend to build two tables
I would love some advice on which strategy is most efficient in terms of time spent sourcing data
UPDATE
To clarify the question, I could have achieved the same result as above using a controller setup something like this:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
And then make one ajax call, and split the data accordingly. My question is: which is the preferred option?
TL;DR: Option 1
IMO I wouldn't serve unprocessed data to the front-end: things can go wrong, you can reveal too much, and it can be a lot for an unspecified client machine to process (it could be a low-power device with limited bandwidth and battery, for example). You want a smooth user experience, and JavaScript on the client churning through a mass of data would detract from that. I use the back-end for the processing (prepare the information exactly how you need it), JS for retrieving and placing the information on the page (AJAX) and for things like switching element states, and CSS for anything moving around (animations, transitions, etc.) as much as possible before resorting to JS.
Also, for the routes, my approach would be that each distinct package of information (DataTable) has its own route, so you're not overloading one method with too many purposes; keep it simple and maintainable. You can always abstract away anything that's identical and repeated often.
So to answer your question, I'd go with Option 1.
You could also offer a single 'page-load' endpoint; then, if anything changes, update the individual tables later using their distinct endpoints. This initial 'page-load' call could collate the information from the endpoints on the backend and serve it as one package of data to populate all the tables initially. One initial request with one lot of well-defined data, then the ability to update an individual table if the user requests it (or there is a push, if you get into that).
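As a rough sketch of that 'page-load' idea, assuming Express/Mongoose as in the question and reusing the same queries (the route name is mine):

router.get('/admin/:id/dashboard', async (req, res, next) => {
  try {
    // Run the per-table queries in parallel and return one payload for the initial render.
    const [mentees, mentors] = await Promise.all([
      Profile.find({ accountType: 'mentee', isAdmin: false }).lean(),
      Profile.find({ accountType: 'mentor', isAdmin: false }).lean(),
    ]);
    res.json({ mentees, mentors });
  } catch (err) {
    next(err);
  }
});
// The existing /admin/:id/mentees and /admin/:id/mentors routes can stay as-is
// for refreshing a single table later.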
It is a really good question. First of all, you should consider how your application will work with the received data. If it is a large amount of data that does not change on the frontend but is needed in full by several different views (like user-settings data: the application reads it constantly but it rarely changes), it can be cached on the frontend, and you could go with your second option. If instead the frontend only works with a small part of a large amount of database data (like log data for a specific user), it is preferable to preprocess (filter) on the server side, i.e. your first and third options. As for me, the second option is really only preferable for caching unchanged data on the frontend.
After your clarification of the question: you could group the results in your query using the lodash library:
const _ = require('lodash');

Profile.find(query)
  .lean()
  .then(users => {
    const result = _(users)
      .groupBy((elem) => elem.accountType)
      .map((vals, key) => ({ accountType: key, users: vals }))
      .value();
    return res.json(result);
  });
You can of course map the data however you find comfortable. This approach returns all account types (not only 'mentee' and 'mentor').
Usually there are 3 things in such architectures:
1. Client
2. API Gateway
3. Micro services (Servers)
In your case :
1. Client is JS application code
2. API Gateway + Server is Nodejs/express (Dual responsibility)
Point 1 to be noted
Servers only provide core APIs. So the API for the server should be just a user API, like:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
i.e. anyone can ask for all data or for filtered data using the type query string.
Point 2 to be noted
Since web pages are composed of multiple data points, and making a call to the same server for every data point increases the round trips and makes the UX worse, API gateways exist. So in this case the JS would not communicate directly with the core server; it communicates with the API Gateway through APIs like:
/home
The above API internally calls the API below and aggregates the data into a single JSON object with the mentor and mentee lists:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
This API simply passes the call through to the core server with the query attributes.
Now, since in your code the API gateway and the core server are merged into a single layer, this is how you could set up your code:
async getHome(req, res, next){
  console.log("Controller: /home");
  let queryMentees = { accountType: 'mentee', isAdmin: false };
  let queryMentors = { accountType: 'mentor', isAdmin: true };
  const mentees = await getProfileData(queryMentees);
  const mentors = await getProfileData(queryMentors);
  return res.json({ mentees, mentors });
}

async getUsers(req, res, next){
  console.log("Controller: /users");
  let query = { accountType: req.query.type, isAdmin: req.query.isAdmin };
  return res.json(await getProfileData(query));
}
And a common ProfileService.js class with a function like:
getProfileData(query){
  return Profile.find(query)
    .lean()
    .then(users => {
      return users;
    })
    .catch(err => {
      console.log(err)
    })
}
More info about API Gateway Pattern here
If you can't estimate how many user types your app will need, then you should use parameters.
If I were writing this application, I wouldn't write multiple functions for calling ajax, and I wouldn't write multiple routes and controllers.
The client side would look like this:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      let dataTable = null;
      switch (userType) {
        case "mentee":
          dataTable = $('#menteeTable');
          break;
        case "mentor":
          dataTable = $('#mentorTable');
          break;
        // You can add more selectors for datatables, but I wouldn't prefer this way:
        // you can generate the "columns" property on the server, just like "data",
        // meaning you can use a single datatable object on the client side.
      }
      dataTable.DataTable({
        data: data,
        "columns": [
          { "data": "username" },
          { "data": "status" }
        ]
      });
    })
}
My preference for the client side:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      $('#dataTable').DataTable({
        data: data.rows,
        "columns": data.columns
      });
    })
}
The server response should then have the shape { rows: [{}, ...], columns: [{}, ...] } in this scenario, as in the DataTables examples.
The server side would look like this.
Just one route:
router.get("/admin/:id/userType/:userType", getQueryFromDB);
Controller
getQueryFromDB(req, res, next){
  let query = { accountType: req.params.userType, isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err)
    })
}
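If you go with the second client variant, the controller would need to send { rows, columns } instead of a bare array. A sketch (the column lists are my guess, copied from the hard-coded ones in the first variant):

getQueryFromDB(req, res, next){
  // columns are generated server-side so the client can use one generic datatable
  const columnsByType = {
    mentee: [{ data: 'username' }, { data: 'status' }],
    mentor: [{ data: 'username' }, { data: 'position' }],
  };
  const query = { accountType: req.params.userType, isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => res.json({ rows: users, columns: columnsByType[req.params.userType] || [] }))
    .catch(err => console.log(err));
}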
So the main point of your question, for me, is that mentees, mentors, etc. are parameters, just like "id".
Make sure your authentication checks which users have access to which userType data, for both code samples (mine and yours); otherwise someone could reach your data just by changing the route.
Have a nice weekend
From the point of view of performance and smoothness of the UI on the user's device:
It would be better to do one ajax request for all the core data (whatever is important to show as soon as possible), and possibly perform more requests for lower-priority data after a small delay. Or do two requests, one for 'fast' data and another for 'slow' data (if this is applicable), because:
On one hand, many ajax requests can slow down the UI. There is a limit on the number of ajax requests that can run at the same time (it is browser-dependent and can be from 2 to 10), so if, for example, IE has a limit of 2, then with 10 ajax calls there will be a queue of waiting requests.
On the other hand, if there is a lot of data to show, or some data takes longer to prepare, it could result in a long wait for the backend response before anything is shown.
Talking of heavy lifting: it is not good to do such things on the UI side anyway, because:
The user's device may be short on resources and 'slow'.
JavaScript on the page runs on a single thread, so any long loop 'freezes' the UI for as long as it takes to run.
Talking of filtering users:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
This seems to have one problem: the query may have sorting and limits, and if so the final result will be inconsistent; it may end up with only mentees or only mentors. I think you should do two separate queries against the data storage anyway (see the sketch below).
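A sketch of those two separate queries, so that any sort/limit applies to each user type independently (Express/Mongoose assumed as in the question; the sort field and limit are just example values):

Promise.all([
  Profile.find({ accountType: 'mentee', isAdmin: false }).sort({ createdAt: -1 }).limit(50).lean(),
  Profile.find({ accountType: 'mentor', isAdmin: false }).sort({ createdAt: -1 }).limit(50).lean(),
])
  .then(([mentees, mentors]) => res.json({ mentees, mentors }))
  .catch(err => {
    console.log(err);
    res.sendStatus(500);
  });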
From the point of view of project structure, maintainability, flexibility, reusability, and so on, it is of course good to decouple things as much as possible.
So, finally, imagine you end up with:
1. Many microservices, e.g. one backend microservice per widget, plus a layer that aggregates results so the UI only needs 1-2 ajax queries.
2. Many UI modules, each working with its own data, received from a service that makes 1-2 calls to the aggregating backend and distributes the different datasets it received to the frontend modules.
On the back end, just make one dynamic, parametric API. You can pass mentor, mentee, admin, etc. as a role. You should have some kind of user authentication and authorization to check whether user A can see users in role B or not.
Regarding the UI, it's up to the users whether they want one page with a drop-down filter or separate URLs to bookmark:
multiple URLs like /admin, /mentor, etc.,
or one URL with a query string and a dropdown: /user?role=mentor, /user?role=admin.
Based on the URL you have to create the controllers. I generally prefer a drop-down and fetching data (by default, all mentors might be the selection); see the sketch below.
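A sketch of that single parametric endpoint (Express assumed; canSeeRole is a hypothetical authorization helper you would implement):

router.get('/user', async (req, res) => {
  const role = req.query.role; // e.g. /user?role=mentor or /user?role=admin
  // Authorization: can the current user see users in this role?
  if (!canSeeRole(req.user, role)) {
    return res.status(403).json({ error: 'Not allowed to view this role' });
  }
  const users = await Profile.find({ accountType: role, isAdmin: false }).lean();
  res.json(users);
});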

Meteor: Best practice for modifying document data with user data

Thanks for looking at my question. It should be easy for anyone who has used Meteor in production; I am still at the learning stage.
So in my Meteor setup I have a bunch of documents with ownedBy _ids reflecting which user owns each document (https://github.com/rgstephens/base/tree/extendDoc is the full GitHub repo; note that it is the extendDoc branch and not the master branch).
I now want to modify my API so that I can display the real name of each document's owner. On the server side I can access this with Meteor.users.findOne({ownedBy}), but on the client side I have discovered that I cannot do this because of Meteor's security defaults (a user doesn't have access to another user's data).
So I have two options:
somehow modify the result of what I am publishing to include the user's real name on the server side
somehow push the full user data to the clientside and do the mapping of the _id to the real names on the clientside
What is the best practice here? I have tried both, and here are my results so far:
For option 1, I have failed. This is very 'Node' thinking, I know. I can access the user data, but Meteor insists that my publications must return cursors and not JSON objects. How do I transform JSON objects into cursors, or otherwise circumvent this publish restriction? Google is strangely silent on this topic.
Meteor.publish('documents.listAll', function docPub() {
  let documents = Documents.find({}).fetch();
  documents = documents.map((x) => {
    const userobject = Meteor.users.findOne({ _id: x.ownedBy });
    const x2 = x;
    if (userobject) {
      x2.userobject = userobject.profile;
    }
    return x2;
  });
  return documents; // this causes an error due to not being a cursor
});
For option 2, I have succeeded, but I suspect at the cost of a massive security hole. I simply modified my publish to return an array of cursors, as below:
Meteor.publish('documents.listAll', function docPub() {
  return [
    Documents.find({}),
    Meteor.users.find({}),
  ];
});
I would really like to do 1 because I sense there is a big security hole in 2, but please advise on how I should do it? thanks very much.
Yes, you are right not to want to publish full user objects to the client. But you can certainly publish a subset of the full user object, using the "fields" option in the options object, which is the 2nd argument of find(). On my project, I created a "public profile" area on each user; that makes it easy to know which things about a user we can publish to other users.
There are several ways to approach getting this data to the client. You've already found one: returning multiple cursors from a publish.
In the example below, I'm returning all the documents, and a subset of the user objects that own those documents. This example assumes that the user's name, and whatever other info you decide is "public", is in a field called publicInfo that's part of the Meteor.user object:
Meteor.publish('documents.listAll', function() {
  let documentCursor = Documents.find({});
  let ownerIds = documentCursor.map(function(d) {
    return d.ownedBy;
  });
  let uniqueOwnerIds = _.uniq(ownerIds);
  let profileCursor = Meteor.users.find(
    {
      _id: { $in: uniqueOwnerIds }
    },
    {
      fields: { publicInfo: 1 }
    });
  return [documentCursor, profileCursor];
});
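On the client, once both cursors are published and subscribed to, you can join them in a helper. A sketch (the template name and the shape of publicInfo are assumptions):

Template.documentsList.helpers({
  documents() {
    return Documents.find({}).map(doc => {
      // Meteor.users here reads the client-side minimongo cache filled by the publication.
      const owner = Meteor.users.findOne(doc.ownedBy);
      return { ...doc, ownerName: owner && owner.publicInfo && owner.publicInfo.name };
    });
  },
});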
In the MeteorChef slack channel, #distalx responded thusly:
Hi, you are using fetch, and fetch returns all matching documents as an array.
I think if you just use find - without fetch - it will do it.
Meteor.publish('documents.listAll', function docPub() {
  let cursor = Documents.find({});
  let DocsWithUserObject = cursor.filter((doc) => {
    const userobject = Meteor.users.findOne({ _id: doc.ownedBy });
    if (userobject) {
      doc.userobject = userobject.profile;
      return doc;
    }
  });
  return DocsWithUserObject;
});
I am going to try this.

MeteorJS - No user system, how to filter data at the client end?

The title might sound strange, but I have a website that will query some data in a Mongo collection. However, there is no user system (no logins, etc.). Everyone is an anonymous user.
The issue is that I need to query some data in the Mongo collection based on the text boxes the user fills in. Hence I cannot use this.userId to insert a row of specifications so that the server end reads these specifications and sends the data to the client.
Hence:
// Code run on the server
if (Meteor.isServer)
{
  Meteor.publish("comments", function ()
  {
    return comments.find();
  });
}

// Code run on the client
if (Meteor.isClient)
{
  Template.body.helpers
  (
    {
      comments: function ()
      {
        return comments.find()
        // Add code to try to parse out the data that we don't want here
      }
    }
  );
}
It seems possible that on the client side I could filter data based on user input. However, if I use return comments.find(), the server will send a lot of data to the client, and the client would then take on the job of cleaning the data.
By "a lot of data", there shouldn't actually be much (about 10,000 rows), but let's assume there were a million rows: what should I do?
I'm very new to MeteorJS, just completed the tutorial, any advice is appreciated!
My advice is to read the docs, in particular the section on Publish and Subscribe.
By changing the signature of your publish function above to one that takes an argument, you can filter the collection on the server, limiting the data transferred to what is required.
Meteor.publish("comments", function (postId)
{
return comments.find({post_id: postId});
});
Then on the client you will need a subscribe call that passes a value for the argument.
Meteor.subscribe("comments", postId)
Ensure you have removed the autopublish package, or it will ignore this filtering.
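Since your filter comes from text boxes rather than a postId, you can drive the publication argument reactively. A sketch using a ReactiveVar (requires the reactive-var package; the 'text' field, template, and selector names are assumptions):

// client
Template.body.onCreated(function () {
  this.filter = new ReactiveVar('');
  this.autorun(() => {
    // Re-subscribes whenever the filter changes; the server only sends matching rows.
    this.subscribe('comments', this.filter.get());
  });
});

Template.body.events({
  'input #filterBox'(event, instance) {
    instance.filter.set(event.target.value);
  },
});

// server
Meteor.publish('comments', function (filterText) {
  return comments.find({ text: { $regex: filterText, $options: 'i' } });
});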
