I need to use data from a third-party API in my app: poll the server for it at a certain frequency and make it available to the client. The easiest way would be to create a collection, update it, and expose the data to the client via pub/sub. But in this particular case I don't need to store that data or keep track of it, and it updates very frequently, so persisting it to the database would just be unneeded work. I would prefer to keep it in RAM and make it available to the client some other way than a collection (perhaps as the return value of a method call). But I'm not sure how to do that. Could someone suggest a nice approach?
You could use the package meteor-publish-join to fetch data from an external API and publish it to the client periodically (disclaimer: I am the author):
Server:
import { JoinServer } from 'meteor-publish-join';

Meteor.publish('test', function() {
  // Publish a random value from an external API. Plays well with promises; re-runs every 10 seconds.
  JoinServer.publish({
    context: this,
    name: 'withPromise',
    interval: 10000,
    doJoin() {
      const id = parseInt(Math.random() * 100, 10);
      return fetch(`https://jsonplaceholder.typicode.com/posts/${id}`)
        .then(res => res.json())
        .then(data => data.title)
        .catch(err => console.error(err));
    },
  });
});
Client:
import { JoinClient } from 'meteor-publish-join';

Meteor.subscribe('test');

// Get the values published within the `test` publication. All these values are reactive.
JoinClient.get('withPromise');
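If you'd rather not add a package, Meteor's low-level publish API can stream an in-memory value over DDP without ever touching MongoDB. A minimal sketch, assuming a hypothetical pollExternalApi() helper that wraps your third-party fetch:

// Server: poll the API in memory and push changes over DDP; nothing is persisted.
Meteor.publish('live-data', function () {
  let initialized = false;

  const poll = () => {
    pollExternalApi() // hypothetical: returns a Promise resolving to the latest value
      .then((value) => {
        if (!initialized) {
          this.added('liveData', 'current', { value });
          initialized = true;
          this.ready();
        } else {
          this.changed('liveData', 'current', { value });
        }
      })
      .catch((err) => console.error(err));
  };

  poll();
  const handle = Meteor.setInterval(poll, 10000); // re-poll every 10 seconds
  this.onStop(() => Meteor.clearInterval(handle));
});

// Client: a client-side-only collection with the same name receives the document.
const LiveData = new Mongo.Collection('liveData');
Meteor.subscribe('live-data');
// LiveData.findOne('current') is reactive and updates on every poll.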
I made an Axios GET request to get data and display it with React:
export function Wareh() {
  const [wareh, setWareh] = useState([]);

  useEffect(() => {
    axios.get("http://localhost:1515/wareh").then((response) => {
      setWareh(response.data);
    });
  }, []);

  return wareh;
}
The problem here is that if my data is updated, I have to refresh the page to see the change. Here is my question:
how do I make it so that any change in the database is reflected in the GET request?
There are 3 ways to achieve this:
Polling
In this technique, you set an interval and call the same endpoint periodically to refresh the data:
setInterval(() => fetchWareh(), 5000)
Let's assume you shift your Axios call into a function named fetchWareh and call that function every 5 seconds.
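A minimal sketch of that idea applied to the component above; the 5-second interval is arbitrary, and clearing it on unmount avoids leaked timers:

export function Wareh() {
  const [wareh, setWareh] = useState([]);

  useEffect(() => {
    const fetchWareh = () =>
      axios.get("http://localhost:1515/wareh")
        .then((response) => setWareh(response.data));

    fetchWareh(); // initial load
    const timer = setInterval(fetchWareh, 5000); // refresh every 5 seconds
    return () => clearInterval(timer); // cleanup on unmount
  }, []);

  return wareh;
}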
Server-Sent Events
This approach is quite similar to the first one. You can read more about it here:
https://www.w3schools.com/html/html5_serversentevents.asp
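A hedged sketch of what this could look like with an Express backend and the browser's built-in EventSource; the /wareh-stream endpoint and the Wareh Mongoose model are assumptions, not part of the original code:

// Server (Express): one long-lived response that pushes the data every 5 seconds.
app.get("/wareh-stream", async (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  const send = async () => {
    const rows = await Wareh.find().lean(); // assumed Mongoose model
    res.write(`data: ${JSON.stringify(rows)}\n\n`);
  };
  await send();
  const timer = setInterval(send, 5000);
  req.on("close", () => clearInterval(timer));
});

// Client (e.g. inside the component's useEffect):
const source = new EventSource("http://localhost:1515/wareh-stream");
source.onmessage = (event) => setWareh(JSON.parse(event.data));
// call source.close() in the effect's cleanup function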
Implement sockets on server & client
The most recommended approach: fetch real-time data through socket.io.
Socket Documentation Here
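A hedged sketch with socket.io; the "wareh" event name and the emit-after-write placement are illustrative:

// Server (assumes an existing Express `app`):
const httpServer = require("http").createServer(app);
const { Server } = require("socket.io");
const io = new Server(httpServer, { cors: { origin: "*" } });
httpServer.listen(1515);

// Broadcast to all connected clients after any database write that changes the data:
io.emit("wareh", updatedRows); // `updatedRows` is whatever you just saved

// Client (e.g. inside useEffect):
import { io } from "socket.io-client";
const socket = io("http://localhost:1515");
socket.on("wareh", (data) => setWareh(data));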
I am using Firebase JavaScript Modular Web Version 9 SDK with my Vue 3 / TypeScript app.
My understanding is that when using Firestore real-time listeners with offline persistence it should work like this:
When the listener is started the callback fires with data read from the local cache, and then immediately after it also tries to read from the server to make sure the local cache has up to date values. If the server data matches the local cache the callback listener should only fire once with data read from the local cache.
When data changes, the callback listener fires with data read from the server. It uses that data to update the local cache.
When data doesn't change, all subsequent calls to the listener trigger a callback with data read from the local cache.
But I have set up offline persistence, created a listener for my Firestore data, and monitored where the reads were coming from...
And in my app I see an initial read from the local cache (expected), and then a second immediate read from the server (unexpected). And after that all subsequent reads are coming from the server (also unexpected).
During this testing none of my data has changed. So I expected all reads from the callback listener to be coming from the local cache, not the server.
And actually the only time I see a read from the local cache is when the listener is first started, but this was to be expected.
What could be the problem?
P.S. To make those "subsequent calls" I am navigating to a different page of my SPA and then coming back to the page where my component lives to trigger it again.
src/composables/database.ts
export const useLoadWebsite = () => {
  const q = query(
    collection(db, 'websites'),
    where('userId', '==', 'NoLTI3rDlrZtzWCbsZpPVtPgzOE3')
  );
  const firestoreWebsite = ref<DocumentData>();
  onSnapshot(q, { includeMetadataChanges: true }, (querySnapshot) => {
    const source = querySnapshot.metadata.fromCache ? 'local cache' : 'server';
    console.log('Data came from ' + source);
    const colArray: DocumentData[] = [];
    querySnapshot.docs.forEach((doc) => {
      colArray.push({ ...doc.data(), id: doc.id });
    });
    firestoreWebsite.value = colArray[0];
  });
  return firestoreWebsite;
};
src/components/websiteUrl.vue
<template>
  <div v-if="website?.url">{{ website.url }}</div>
</template>
<script setup lang="ts">
import { useLoadWebsite } from '../composables/database';
const website = useLoadWebsite();
</script>
Nothing is wrong. What you're describing is working exactly the way I would expect.
Firestore local persistence is not meant to be a full replacement for the backend. By default, it's meant to be a temporary data source for the case where the backend is not available. If the backend is available, the SDK will prefer to keep the client app fully synchronized with it and serve all updates from the backend for as long as it's available.
If you want to force a query to use only the cache and not the backend, you can programmatically specify the cache as the source for that query.
If you don't want any updates at all from the server for whatever reason, then you can disable network access entirely.
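For reference, a small sketch of both options with the v9 modular SDK, using the same q and db as in the question's code:

import { getDocsFromCache, disableNetwork, enableNetwork } from 'firebase/firestore';

// Option 1: run this one query against the local cache only
// (rejects if no cached results are available). Inside an async function:
const snapshot = await getDocsFromCache(q);

// Option 2: disable network access entirely; all reads are then served
// from the cache until the network is re-enabled:
await disableNetwork(db);
// ...later:
await enableNetwork(db);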
See also:
Firestore clients: To cache, or not to cache? (or both?)
I figured out why I was getting a result different than expected.
The culprit was { includeMetadataChanges: true }.
As explained here in the docs, that option will trigger a listener event for metadata changes.
So the listener callback was also triggering on each metadata change, instead of just data reads and writes, causing me to see strange results.
After removing that it started to work as expected, and I verified it by checking it against the Usage graphs in Firebase console which show the number of reads and snapshot listeners.
Here is the full code with that option removed:
export const useLoadWebsite = () => {
  const q = query(
    collection(db, 'websites'),
    where('userId', '==', 'NoLTI3rDlrZtzWCbsZpPVtPgzOE3')
  );
  const firestoreWebsite = ref<DocumentData>();
  onSnapshot(q, (querySnapshot) => {
    const source = querySnapshot.metadata.fromCache ? 'local cache' : 'server';
    console.log('Data came from ' + source);
    const colArray: DocumentData[] = [];
    querySnapshot.docs.forEach((doc) => {
      colArray.push({ ...doc.data(), id: doc.id });
    });
    firestoreWebsite.value = colArray[0];
  });
  return firestoreWebsite;
};
I'm developing a web app using a Node.js/express backend and MongoDB as a database.
The example below is for an admin dashboard page where I will display cards with different information relating to the users on the site. For example, on that page I might want to show:
The number of each type of user
The most common location for each user type
How many signups there are by month
Most popular job titles
I could do this all in one route, with a single controller that performs all of these tasks and bundles the results into one object at a URL I can pull from with ajax. Or I could split each task into its own route/controller, with a separate ajax call to each. What I'm trying to decide is: what are the best practices around making multiple ajax calls on a single page?
Example:
I am building a page with an interactive table using DataTables for different types of user (currently two: mentors and mentees). This example requires just two data requests (one per user type), but my final page will have more like 10.
For each user type, I make an ajax GET call and build the table from the returned data:
User type 1 - Mentees
$.get('/admin/' + id + '/mentees')
  .done(data => {
    $('#menteeTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "status" }
      ]
    });
  });
User type 2 - Mentors
$.get('/admin/' + id + '/mentors')
  .done(data => {
    $('#mentorTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "position" }
      ]
    });
  });
This then requires two routes in my Node.js backend:
router.get("/admin/:id/mentors", getMentors);
router.get("/admin/:id/mentees", getMentees);
And two controllers that are structured identically (but filter for different user types):
getMentees(req, res, next) {
  console.log("Controller: getMentees");
  let query = { accountType: 'mentee', isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
This works great. However, as I need to make multiple data requests I want to make sure that I'm building this the right way. I can see several options:
Make individual ajax calls for each data type, and do any heavy lifting on the backend (e.g. tally user types and return) - as above
Make individual ajax calls for each data type, but do the heavy lifting on the frontend. In the above example I could have just as easily filtered out isAdmin users on the data returned from my ajax call
Make fewer ajax calls that request less refined data. In the above example I could have made one call (requiring only one route/controller) for all users, and then filtered data on the frontend to build two tables
I would love some advice on which strategy is most efficient in terms of time spent sourcing the data.
UPDATE
To clarify the question, I could have achieved the same result as above using a controller setup something like this:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  });
And then make one ajax call, and split the data accordingly. My question is: which is the preferred option?
TL;DR: Option 1
IMO I wouldn't serve unprocessed data to the front-end: things can go wrong, you can reveal too much, and processing could be a lot to ask of an unspecified client machine (a low-power device with limited bandwidth and battery, for example). You want a smooth user experience, and JavaScript on the client churning through a mass of data would detract from that. I use the back-end for processing (prepare the information exactly how you need it), JS for retrieving and placing the information on the page (AJAX) and for things like switching element states, and CSS for anything moving around (animations, transitions, etc.) as much as possible before resorting to JS.
Also, for the routes, my approach would be that each distinct package of information (each DataTable) has its own route, so you're not overloading a method with too many purposes; keep it simple and maintainable. You can always abstract away anything that's identical and repeated often.
So to answer your question, I'd go with Option 1.
You could also offer a single 'page-load' endpoint, then if anything changes update the individual tables later using their distinct endpoints. This initial 'page-load' call could collate the information from the endpoints on the backend and serve it as one package of data to populate all tables initially: one initial request with one lot of well-defined data, then the ability to update an individual table if the user requests it (or a push, if you get into that).
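A sketch of such a 'page-load' endpoint; the route name and the two queries are illustrative:

router.get("/admin/:id/dashboard", async (req, res, next) => {
  try {
    // Run the per-table queries in parallel and return one payload:
    const [mentors, mentees] = await Promise.all([
      Profile.find({ accountType: "mentor", isAdmin: false }).lean(),
      Profile.find({ accountType: "mentee", isAdmin: false }).lean(),
    ]);
    res.json({ mentors, mentees });
  } catch (err) {
    next(err);
  }
});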
It is a really good question. First of all, you should consider how your application will use the received data. If it is a large amount of data that doesn't change on the frontend but is needed by several views, it might be cached on the frontend (like user-settings data: the application reads it constantly but it rarely changes); then you could go with your second option. If instead the frontend works with only a small part of a huge amount of database data (like log data for a specific user), it is preferable to preprocess (filter) on the server side, as in your first and third options. In short, the second option is preferable only for caching rarely-changing data on the frontend, as far as I'm concerned.
After your clarification of the question: you could group your results using the lodash library:
const _ = require('lodash');

Profile.find(query)
  .lean()
  .then(users => {
    const result = _(users)
      .groupBy(elem => elem.accountType)
      .map((vals, key) => ({ accountType: key, users: vals }))
      .value();
    return res.json(result);
  });
Certainly you could map your data however is comfortable. This approach returns all account types (not only 'mentee' and 'mentor').
Usually there are 3 things in such architectures:
1. Client
2. API Gateway
3. Microservices (servers)
In your case:
1. The client is your JS application code
2. The API gateway + server is Node.js/Express (dual responsibility)
Point 1 to be noted
Servers only provide core APIs. So the API for a server should be only a user API, like:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
i.e. anyone can ask for all data, or filtered data, using the type query string.
Point 2 to be noted
Since web pages are composed of multiple data points, and making a call to the same server for every data point increases round trips and worsens the UX, API gateways exist. So in this case JS would not communicate directly with the core server; it communicates with the API gateway through APIs like:
/home
The above API internally calls the API below and aggregates the data into a single JSON with the mentor and mentee lists:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
This API simply passes the call through to the core server with the query attributes.
Now, since in your code the API gateway and core server are merged into a single layer, this is how you could set up your code:
async getHome(req, res, next) {
  console.log("Controller: /home");
  const queryMentees = { accountType: 'mentee', isAdmin: false };
  const queryMentors = { accountType: 'mentor', isAdmin: true };
  // Run both queries in parallel and aggregate into a single payload:
  const [mentees, mentors] = await Promise.all([
    getProfileData(queryMentees),
    getProfileData(queryMentors),
  ]);
  return res.json({ mentees, mentors });
}

async getUsers(req, res, next) {
  console.log("Controller: /users");
  const query = { accountType: req.query.type, isAdmin: req.query.isAdmin };
  return res.json(await getProfileData(query));
}
And a common ProfileService.js class with a function like:
getProfileData(query) {
  // Return the promise so callers can await the result:
  return Profile.find(query)
    .lean()
    .catch(err => {
      console.log(err);
    });
}
More info about API Gateway Pattern here
If you can't estimate how many user types your app will need, use parameters.
If I were writing this application, I wouldn't write multiple ajax functions, and I wouldn't write multiple routes and controllers either.
Client side, something like this:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      let dataTable = null;
      switch (userType) {
        case "mentee":
          dataTable = $('#menteeTable');
          break;
        case "mentor":
          dataTable = $('#mentorTable');
          break;
        // You could add more selectors for more DataTables, but I wouldn't prefer
        // this way: you can generate the "columns" property on the server (just
        // like "data"), meaning you can use a single DataTable element on the client.
      }
      dataTable.DataTable({
        data: data,
        "columns": [
          { "data": "username" },
          { "data": "status" }
        ]
      });
    });
}
What I'd prefer on the client side:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      $('#dataTable').DataTable({
        data: data.rows,
        "columns": data.columns
      });
    });
}
The server response should have the shape {data: [{}...], columns: [{}...]} in this scenario; see the DataTables examples.
Server side, like this.
Just one router:
router.get("/admin/:id/userType/:userType", getQueryFromDB);
Controller
getQueryFromDB(req, res, next) {
  let query = { accountType: req.params.userType, isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
So the main point of your question, for me, is that mentees, mentors, etc. are parameters, just like "id".
Make sure your authentication checks which users have access to which userType data (for both code samples, mine and yours); otherwise someone could reach your data just by changing the route.
Have a nice weekend
From the standpoint of performance and smoothness of the UI on the user's device:
It would be better to do one ajax request for all the core data (whatever is important to show as soon as possible), and possibly perform more requests for lower-priority data with some tiny delay. Or do two requests: one for 'fast' data and another for 'slow' data (if applicable), because:
On one hand, many ajax requests can slow the UI down. There is a limit on how many ajax requests run at the same time (it is browser-dependent and can be from 2 to 10), so if, for example, IE's limit is 2, then with 10 ajax calls there will be a queue of waiting requests.
On the other hand, if there is a lot of data to show, or some data takes longer to prepare, a single request could mean a long wait for the backend response before anything is shown.
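A sketch of that two-phase load; the endpoints and render helpers are hypothetical:

// Fetch the high-priority data first, then the rest once it has rendered:
$.get('/admin/' + id + '/dashboard/core')
  .done(coreData => {
    renderCoreWidgets(coreData); // hypothetical render helper
    // Lower-priority data is requested only after the core widgets are on screen:
    $.get('/admin/' + id + '/dashboard/extras')
      .done(extraData => renderExtraWidgets(extraData));
  });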
Talking of heavy lifting: it is not good to do such things on the UI side anyway, because:
The user's device may be low on resources and 'slow'.
JavaScript runs on a single thread, so any long loop freezes the UI for however long the loop takes to run.
Talking of filtering users:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  });
This seems to have one problem: the query will possibly have sorting and limits applied, and if so the final result will be inconsistent; it could end up containing only mentees or only mentors. I think you should do two separate queries against the data store anyway.
From the standpoint of project structure, maintainability, flexibility, reusability, and so on, it is of course good to decouple things as much as possible.
So, finally, imagine you made:
1. Many microservices, e.g. one backend microservice per widget, plus a layer that aggregates their results to optimize traffic from the UI into 1-2 ajax queries.
2. Many UI modules, each working with its own data, received from some service that makes 1-2 calls to the aggregating backend and distributes the different datasets it receives to the frontend modules.
At the back end, just make one dynamic, parametric API method. You can pass mentor, mentee, admin, etc. as the role. You should have some kind of user authentication and authorization to check whether user A may see users in role B.
Regarding the UI, it's up to the users whether they want one page with a drop-down filter or separate URLs to bookmark.
Like multiple URLs (/admin, /mentor, etc.), or one URL with a query string and a dropdown (/user?role=mentor, /user?role=admin).
Based on the URL you have to make the controllers. I generally prefer a drop-down that fetches data (by default, all mentors might be the selection).
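A minimal sketch of such a parametric endpoint; requireAuth and canViewRole are placeholders for whatever authentication/authorization scheme you use:

router.get("/users", requireAuth, (req, res) => {
  const role = req.query.role; // e.g. /users?role=mentor
  if (!req.user.canViewRole(role)) { // hypothetical authorization check
    return res.status(403).json({ error: "Forbidden" });
  }
  Profile.find({ accountType: role })
    .lean()
    .then(users => res.json(users))
    .catch(err => res.status(500).json({ error: err.message }));
});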
I have this fetch statement that returns 19 building names, but I only want 10. The following is what I attempted, but I still get 19 building names:
fetchBuildings(energyProgramId) {
  fetch(`http://localhost:1001/api/energyprograms/${energyProgramId}/buildings/?results=10`)
    .then(res => res.json())
    .then(json => {
      this.setState({
        isLoaded: true,
        buildings: json,
      });
    });
}
Is there something extra I need to add?
Here is an example:
fetch('http://jsonplaceholder.typicode.com/posts/')
The URL above returns an array of 100 objects, because the underlying resource is an array of 100 elements.
fetch('http://jsonplaceholder.typicode.com/posts/?_limit=10')
This URL returns an array of 10 objects.
Notice the difference? All I did was add ?_limit=10. Put any number in place of 10 and, provided the API supports the parameter, you will get the desired results.
As the other answer already points out, the best/most common solution is to change how the backend API returns data. Typically REST APIs support query parameters such as limit and start, or page and resultsPerPage.
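If the localhost API here is your own Express/Mongoose backend, a hedged sketch of honoring the ?results= parameter server-side (the route and the Building model are assumptions):

router.get('/api/energyprograms/:id/buildings', (req, res) => {
  // To Mongoose's .limit(), 0 means "no limit":
  const limit = parseInt(req.query.results, 10) || 0;
  Building.find({ energyProgramId: req.params.id }) // assumed Building model
    .limit(limit)
    .lean()
    .then(buildings => res.json(buildings))
    .catch(err => res.status(500).json({ error: err.message }));
});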
If this is not available - e.g. when you're fetching an external resource - an alternative, often supported by static file servers and sometimes by APIs, is the Range header, which lets you retrieve only a specific byte range of the resource. (Note that when an API supports this, it will typically still load the entire resource on the server; it just won't transmit all of it.) An example with fetch would look like:
fetch('', { headers: { range: 'bytes=0-1000'} })
When doing this with XML or JSON resources it can be somewhat awkward to work with, but for CSV files, for example, it's ideal.
This is no different with fetch than with XHR or axios; in fact, it is no different in React than in Angular or Vue.
This is an API the backend developers wrote, based on REST, so whether you call it with GET or POST you simply receive the JSON the backend developers designed. BUT
there is a technology called GraphQL: you call the API and fetch exactly the JSON you want. It must also be implemented on the backend, but it is possible.
It's not closely bound up with React. If you need less data, you must reduce the data before setting the state:
fetch(`http://localhost:1001/api/energyprograms/${energyProgramId}/buildings/?results=10`)
  .then(res => res.json())
  .then(json => {
    this.setState({
      isLoaded: true,
      buildings: json.slice(0, 10), // keep only the first 10 buildings
    });
  });
This post isn't really a question anymore; I just want to post this to help other people avoid lost time in the future.
Goal: retrieve the client IP address and set some specific values based on a certain octet in the IP.
I was developing a React web app for my company and needed to support three facilities. The three locations of course existed in different geographical regions and had slightly different IP schemas.
I needed to set some session identifier based on an octet value from the client IP. To do so, I took the following steps:
Setup express route for user to hit on initial visit of app.
Get client IP and store in const/var.
Explode IP string by ".".
Perform If/Then or Switch to determine value of desired octet.
Set some session/logic within matching condition.
Thanks to Express, the req object contains an ip key whose value is the request's IP address. We can use this (or some other third-party library) to get the needed info. Of course there are better/more secure ways to do this, but this is a simple method I researched and set up. Definite thanks to the community for helping me resolve my issue with this.
apiRouter.route('/test')
  .get((req, res) => {
    const request_ip = req.ip; // Returns a string like ::ffff:192.168.0.1
    const ip_array = request_ip.split('.'); // The string above split on ".": ["::ffff:192", "168", "0", "1"]
    // The switch checks index 2 of the array above; here that is "0".
    switch (ip_array[2]) {
      case ('0'):
        res.json({ 'request-ip': ip_array, 'location': 'Location A' });
        break;
      case ('1'):
        res.json({ 'request-ip': ip_array, 'location': 'Location B' });
        break;
      case ('2'):
        res.json({ 'request-ip': ip_array, 'location': 'Location C' });
        break;
      default:
        res.json({ 'request-ip': ip_array, 'location': 'Default Location' });
    }
  });
One of my main issues was that I was developing on my local laptop, where my Node server was running Express, and I was also making the request from that same machine. The request IP kept coming back as "::1" (the IPv6 loopback address), which baffled me until research revealed it to be an obvious PEBKAC issue. Thanks to nikoss in this post, it made all the sense in the world.
You can get this information by fetching it from an open IP API:
https://api.ipdata.co/
fetch("https://api.ipdata.co")
.then(response => {
return response.json();
}, "jsonp")
.then(res => {
console.log(res.ip)
})
.catch(err => console.log(err))
This works!
async componentDidMount() {
  const response = await fetch('https://geolocation-db.com/json/');
  const data = await response.json();
  // setState is asynchronous, so read the new value in its callback:
  this.setState({ ip: data.IPv4 }, () => alert(this.state.ip));
}
Use it in JSX as:
{this.state.ip}
It seems like https://api.ipdata.co doesn't work anymore, even when specifying a key. I ended up using Ipify (TypeScript):
private getMyIp() {
  fetch('https://api.ipify.org?format=json')
    .then(response => response.json())
    .then((res: any) => {
      this.myIp = _.get(res, 'ip'); // _ is lodash
    })
    .catch((err: any) => console.error('Problem fetching my IP', err));
}
This is a good reference for alternative IP retrieval services: https://ourcodeworld.com/articles/read/257/how-to-get-the-client-ip-address-with-javascript-only
If https://api.ipdata.co doesn't work, you can use geolocation-db.com/json. An advantage of geolocation-db is that it also gives you other useful values, like latitude, longitude, country, state, and zip:
fetch(`https://geolocation-db.com/json/`)
  .then(res => res.json())
You can console.log(...) the parsed result to view the JSON values.
You can use this one as well:
fetch('https://get-ip-only.herokuapp.com/')
  .then(r => r.json())
  .then(resp => console.log(resp.ip));
https://get-ip-only.herokuapp.com/
This API provides the IP only.