Struggling with typing an object with computed property keys - typescript

I'm struggling to correctly type the endpointMeta object in my project.
It contains computed object keys that are derived from two other objects:
services - contains the names of the services currently known. The number of services may increase later.
endpoints - defines the endpoints available in each service that are being called in the project.
enum services {
  serviceNameOne = 'service1',
  serviceNameTwo = 'service2',
  serviceNameThree = 'service3',
}

const endpoints = {
  [services.serviceNameOne]: {
    getUser: '/getUser',
    verifyDetails: '/s/verifyDetails',
  },
  [services.serviceNameTwo]: {
    fetchCustomersv1: '/v1/fetchCustomers',
    fetchCustomersv2: '/v2/fetchCustomers',
  },
  [services.serviceNameThree]: {
    getAllProjects: '/g/a/p',
    getProject: '/g/p',
    updateProject: '/u/p',
  },
}
endpointMeta - an object containing some meta info about these endpoints:
const endpointMeta = {
  [services.serviceNameOne]: {
    [endpoints[services.serviceNameOne].getUser]: {
      description: 'something about getUser',
      version: 1
    },
    [endpoints[services.serviceNameOne].verifyDetails]: {
      description: 'something about verifyDetails',
      version: 3
    },
  },
  [services.serviceNameTwo]: {
    [endpoints[services.serviceNameTwo].fetchCustomersv1]: {
      description: 'something about fetchCustomersv1',
      version: 1
    },
    [endpoints[services.serviceNameTwo].fetchCustomersv1]: { // duplicate object :(
      description: 'something about fetchCustomersv1',
      version: 1
    },
  }
}
Edit: What I ideally want is that when other devs introduce new services or endpoints, they are forced to add a matching entry to the endpointMeta object.
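For reference, a mapped type can derive the required shape of endpointMeta from endpoints itself, so that a missing or misspelled entry becomes a compile error. A minimal sketch, trimmed to two services for brevity, assuming endpoints is declared as const so its paths are string literal types (the EndpointMeta and EndpointMetaMap names are made up):

enum services {
  serviceNameOne = 'service1',
  serviceNameTwo = 'service2',
}

const endpoints = {
  [services.serviceNameOne]: {
    getUser: '/getUser',
    verifyDetails: '/s/verifyDetails',
  },
  [services.serviceNameTwo]: {
    fetchCustomersv1: '/v1/fetchCustomers',
    fetchCustomersv2: '/v2/fetchCustomers',
  },
} as const;

interface EndpointMeta {
  description: string;
  version: number;
}

// For every service key in `endpoints`, require exactly one meta entry per
// endpoint path; a new service or endpoint without a matching entry here
// no longer type-checks, and a duplicated path is a duplicate-key error.
type EndpointMetaMap = {
  [S in keyof typeof endpoints]: {
    [P in (typeof endpoints)[S][keyof (typeof endpoints)[S]]]: EndpointMeta;
  };
};

const endpointMeta: EndpointMetaMap = {
  [services.serviceNameOne]: {
    '/getUser': { description: 'something about getUser', version: 1 },
    '/s/verifyDetails': { description: 'something about verifyDetails', version: 3 },
  },
  [services.serviceNameTwo]: {
    '/v1/fetchCustomers': { description: 'something about fetchCustomersv1', version: 1 },
    '/v2/fetchCustomers': { description: 'something about fetchCustomersv2', version: 1 },
  },
};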

Related

How to get the cluster node IPs using Pulumi

Here is an example of creating a managed Kubernetes cluster on DigitalOcean.
import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";

const foo = new digitalocean.KubernetesCluster("foo", {
    region: "nyc1",
    version: "1.20.2-do.0",
    nodePool: {
        name: "front-end-pool",
        size: "s-2vcpu-2gb",
        nodeCount: 3,
    },
});
Example code taken from the Pulumi DigitalOcean package documentation.
How do I retrieve the droplet node IPv4 addresses for use in, say, creating DnsRecord resources?
const _default = new digitalocean.Domain("default", {name: "example.com"});

// This code doesn't work because foo.nodePool is just the inputs.
const dnsRecords = foo.nodePool.nodes(node => new digitalocean.DnsRecord("www", {
    domain: _default.name,
    type: "A",
    value: node.ipv4Address,
}));
DigitalOcean doesn't return a list of node IP addresses from the Kubernetes cluster you created. You can retrieve these values using the getDroplet function.
However, you'll need to do this iteration inside an apply() like so:
const addresses = foo.nodePool.nodes.apply(
    nodes => nodes.forEach(
        (node) => {
            // look up the droplet that backs this cluster node
            let n = digitalocean.getDropletOutput({
                name: node.name,
            });
            // one record per node; the Pulumi resource name must be unique
            new digitalocean.DnsRecord(`www-${node.name}`, {
                domain: _default.name,
                type: "A",
                value: n.ipv4Address,
            });
        }
    )
);
Using an apply here lets us wait until foo.nodePool.nodes has been created by the API. We can then iterate over it like a normal array, look up the droplet for each node, and create a DNS record from its IP address.
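If you only need the raw addresses, for example to export them from the stack, the same apply pattern works without creating any resources. A minimal sketch, assuming the droplet lookup result exposes ipv4Address:

// Sketch: collect the node IPv4 addresses as a single stack output.
export const nodeIps = foo.nodePool.nodes.apply(nodes =>
    Promise.all(nodes.map(node =>
        // resolve each cluster node to its backing droplet
        digitalocean.getDroplet({ name: node.name }).then(d => d.ipv4Address)
    ))
);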

How can I limit the objects from a group in a query in Gatsby?

I have this query in my code, which allows me to build a tag cloud for the blog front page:
tagCloud: allContentfulBlogPost {
  group(field: tags, limit: 8) {
    fieldValue
  }
}
It passes data that I map over in my component using {data.tagCloud.group.map(tag => (...))}. The code works nicely, but the result isn't limited by the limit: 8 argument I'm passing to group(field: tags, limit: 8) in the query; it renders all the tags, not only the first eight.
I've also tried the skip argument, unsuccessfully, just to see whether it works.
Is this the proper way to limit the count to my mapping component in Gatsby?
The Contentful source plugin doesn't define arguments on any of the nodes it creates, unfortunately. Instead you would need to create these yourself. The easiest way to do that is through the createResolvers API.
Here's a similar example from a project of mine:
// in gatsby-node.js
exports.createResolvers = ({ createResolvers }) => {
  createResolvers({
    SourceArticleCollection: {
      // Add articles from the selected section(s)
      articles: {
        type: ["SourceArticle"],
        args: {
          // here's where the `limit` argument is added
          limit: {
            type: "Int",
          },
        },
        resolve: async (source, args, context, info) => {
          // this function just needs to return the data for the field;
          // in this case, I'm able to fetch a list of the top-level
          // entries that match a particular condition, but in your case
          // you might want to instead use the existing data in your
          // `source` and just slice it in JS.
          const articles = await context.nodeModel.runQuery({
            query: {
              filter: {
                category: {
                  section: {
                    id: {
                      in: source.sections.map((s) => s._ref),
                    },
                  },
                },
              },
            },
            type: "SourceArticle",
          })
          return (articles || []).slice(0, args.limit || source.limit || 20)
        },
      },
    },
  })
}
Because resolvers run as part of the data-fetching routines that support the GraphQL API, this will run server-side at build-time and only the truncated/prepared data will be sent down to the client at request time.
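Applied to the tag cloud above, the same API could expose a root-level field that does the truncation itself. A rough sketch, assuming each ContentfulBlogPost node carries a tags array (the tagCloud field and its wiring here are assumptions, not the plugin's API):

// in gatsby-node.js -- sketch only
exports.createResolvers = ({ createResolvers }) => {
  createResolvers({
    Query: {
      tagCloud: {
        type: ["String"],
        args: {
          limit: { type: "Int" },
        },
        resolve: async (source, args, context) => {
          // gather the distinct tags across all posts, then truncate
          const posts = await context.nodeModel.getAllNodes({
            type: "ContentfulBlogPost",
          })
          const tags = new Set()
          posts.forEach((p) => (p.tags || []).forEach((t) => tags.add(t)))
          return [...tags].slice(0, args.limit || 8)
        },
      },
    },
  })
}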

Construct objects for client in Firebase Cloud Functions

I'm working on a simple registration system using Firebase as a backend. I am successfully authenticating users and writing to the database. I have an index of courses and users with the following structure:
{
  courses: { // index all the courses available
    key1: {
      title: "Course 1",
      desc: "This is a description string.",
      date: "2018-01-01T12:00:00Z",
      members: {
        user1: true,
        ...
      }
    },
    key2: { ... },
  },
  users: { // track individual user registrations
    user1: {
      key1: true,
      ...
    },
    user2: { ... }
  }
}
I have a cloud function that watches for the user adding a course; it builds an array of the corresponding course IDs, which is then used to look up the appropriate items under the courses node.
exports.listenForUserClasses = functions.database.ref('users/{userId}')
  .onWrite(event => {
    var userCourses = [];
    var ref = functions.database.ref('users/{userId}');
    for (var i = 0; i < ref.length; i++) {
      userCourses.push(ref[i]);
    }
    console.log(userCourses); // an array of ids under the user's node
  });
So, my question has two parts:
How can I build the updated object when the page is loaded?
How do I return the function to the client script?
Question 1: From the client side, get a reference to the database path, then listen for the child_added event. Keep the results in memory; the callback fires whenever a course is added, so you can update your UI as the data arrives.
var ref = db.ref("path/to/courses");
ref.on("child_added", function(snapshot, prevChildKey) {
  var newClass = snapshot.val();
});
If you are completely refreshing the page, you can always grab the data again from the database path by using the value event and calling once.
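A minimal sketch of that one-time read, using the same path as above:

// fetch the current set of courses once, e.g. on page load
db.ref("path/to/courses").once("value").then(function(snapshot) {
  var courses = snapshot.val(); // object keyed by course id, or null
  // render the course list here
});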
Question 2: You don't. This is an asynchronous background function. If you want a response from a function, set up an HTTP trigger and wait for the response from that function instead.
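For illustration, a minimal sketch of such an HTTP trigger (the getUserCourses name and uid query parameter are assumptions; a real endpoint would also verify the caller):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// The client calls this URL and waits for the JSON response.
exports.getUserCourses = functions.https.onRequest((req, res) => {
  const uid = req.query.uid; // hypothetical query parameter
  admin.database().ref('users/' + uid).once('value')
    .then(snapshot => res.json(snapshot.val() || {}));
});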

Querying multiple collections in Firebase

I've been working with Firebase for a little while, and like their suggestion to keep data denormalized. My only problem is figuring out the best way to query across multiple collections. For example, I have an Identity object which holds info on my users:
identities: {
  $identity: {
    name: string,
    studio: $studioID
  }
}
That corresponds to a Studio object:
studios: {
  $studio: {
    name: string,
    owner: $identityID,
    location: $locationID
  }
}
This object references the owner and also their location. The location object references classes, which reference students... on and on. Right now, in order to fetch a referenced object, I'm doing something like this:
Auth.loginUser(email, password, (success) => {
  const identityRef = firebase.database().ref('/identities');
  identityRef.child(success.uid).on("value", function(identitySnapshot) {
    const identity = identitySnapshot.val();
    const studioRef = firebase.database().ref('/studios');
    dispatch({type: 'UPDATE_IDENTITY', identity});
    studioRef.child(identity.studio).on("value", function(studioSnapshot) {
      const studio = studioSnapshot.val();
      dispatch({type: 'UPDATE_STUDIO', studio});
    });
  });
});
I would continue nesting my calls for Location, Classes, Students, etc. Is there a better way to do this?
Consider the following structures:
identities: {
  $identityKey: {
    name: string
  }
}

identities_studios: {
  $identityKey: {
    $studioKey: {
      name: string
    }
  }
}

identities_studios_locations: {
  $identityKey: {
    $studioKey: {
      $locationKey: {
        lat: string,
        lng: string
      }
    }
  }
}
The first, identities, only stores info about the identities as usual.
The second, identities_studios, only stores info about the studios, but the studios are grouped by $identityKey.
The third, identities_studios_locations, only stores info about the locations, but they are grouped first by $identityKey and then by $studioKey.
Now you can do this:
const db = firebase.database()
Auth.loginUser(email, password, success => {
db.ref(`/identities/${success.uid}`).on("value", snap => { ... }
db.ref(`/identities_studios/${success.uid}`).on("value", snap => { ... }
db.ref(`/identities_studios_locations/${success.uid}`).on("value", snap => { ... }
}
Instead of making multiple requests one after the other, we get them to run simultaneously.
If you want, after getting all this info from the database, you can transform the data structure to whatever you want: one array for identities, another for studios, another for locations, etc.; or a single array of identities with nested studios which in turn have nested locations etc.
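A rough sketch of the single nested shape, assuming the three value snapshots above are captured as identitySnap, studiosSnap, and locationsSnap (the names are assumptions):

const identity = identitySnap.val();
const studios = studiosSnap.val() || {};
const locations = locationsSnap.val() || {};

// one identity object with nested studios, each carrying its locations
const nested = Object.assign({}, identity, {
  studios: Object.keys(studios).map(studioKey => Object.assign({}, studios[studioKey], {
    id: studioKey,
    locations: Object.values(locations[studioKey] || {}),
  })),
});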

Emberjs - Calling a nested resource's URL directly

I have merged these two problems (How to pass model in Nested routes - emberjs and Embedded data from RestApi) into a JsBin example: http://jsbin.com/OxIDiVU/544
It works fine if you navigate customers -> info -> contact, but it breaks if you open a customer's contact URL directly, e.g. http://jsbin.com/OxIDiVU/544#/customers/3/contact
Error while loading route: customer.contact Cannot set property 'store' of undefined TypeError: Cannot set property 'store' of undefined
When you do a request for a single record, it uses a different serializer endpoint and expects the data in a different format. The format it expects is:
{
  customer: {
    id: 1,
    currency: 1
  },
  currencies: [
    {
      id: 1,
      prop: 'foo'
    }
  ]
}
And the endpoint in the serializer is extractSingle. Feel free to extract out the portions of extractArray that are similar and share those.
Pretending your payload is:
{
  customer: {
    id: 3,
    name: "Joue",
    currency: {
      id: 5,
      iso_code: "BDT"
    }
  }
}
Your extractSingle would be:
extractSingle: function(store, type, payload, id) {
  var customer = payload.customer,
      currencies = [];
  var currency = customer.currency;
  delete customer.currency;
  if (currency) {
    currencies.push(currency);
    customer.currency = currency.id;
  }
  payload = { customer: customer, currencies: currencies };
  return this._super(store, type, payload, id);
}
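If extractArray needs the same sideloading, the shared portion could live in a small helper along these lines (a sketch; the helper name is made up):

// Moves the embedded currency into the sideloaded `currencies` array
// and replaces it on the customer with its id.
function extractCurrency(customer, currencies) {
  var currency = customer.currency;
  delete customer.currency;
  if (currency) {
    currencies.push(currency);
    customer.currency = currency.id;
  }
  return customer;
}

Both extractSingle and extractArray can then call it per record before handing the payload to this._super.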
Here's the example, with a response for customer 3
http://jsbin.com/OxIDiVU/545#/customers/3/contact
Your property name should match the one inside the model, and the root name (currencies here) should be the plural version of the record type:
{
  customer: {
    id: 1,
    default_currency: 1
  },
  currencies: [
    {
      id: 1,
      prop: 'foo'
    }
  ]
}
