My application is rather large, so to have a more organized translation file I want to use nested namespaces. Example:
{
  "contract": {
    "index": {
      "pageTitle": "Contract"
    }
  }
}
The problem with this is when I'm accessing it. With the help of this question I found out I can access the keys inside index by using it as below:
const { t, i18n } = useTranslation('contract', { useSuspense: false });
...
t('index.pageTitle')
The problem is that it seems rather unnecessary to prefix index. to every key I want to access. What I would like to do is import the namespace index instead of contract, and use it as below:
const { t, i18n } = useTranslation('contract:index', { useSuspense: false });
...
t('pageTitle')
Which doesn't work. I tried contract.index as well. In the official documentation I found nothing about nesting. Is it possible to accomplish what I'm trying to do, or will I have to stick with prefixing every key?
Nested namespaces are not supported.
You can decorate the useTranslation hook to provide this extended functionality for pages in the namespace.
import { useTranslation as useTranslationBase } from "react-i18next";

const useTranslation = (ns, page, props = {}) => {
  const trans = useTranslationBase(ns, props);
  return {
    ...trans,
    t: (keys, options) => {
      let _keys = keys;
      if (!Array.isArray(keys)) _keys = [String(keys)];
      _keys = _keys.map(key => `${page}.${key}`);
      return trans.t(_keys, options);
    }
  };
};
Usage
export default function () {
  const { t } = useTranslation('contract', 'index');
  return <div>{t(["pageTitle"])}-{t("pageTitle")}</div>;
}
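As an aside, newer versions of react-i18next (v11.12+, if memory serves) ship a built-in keyPrefix option that covers this exact case without a custom decorator:

// keyPrefix is prepended to every key passed to this t instance
const { t } = useTranslation('contract', { keyPrefix: 'index' });
t('pageTitle') // resolves contract:index.pageTitle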
Related
I have this JavaScript data file (src/test/test.js):
module.exports = {
  "title": "...",
  "Number": "number1",
  "Number2": ({ number1 }) => number1 / 2,
}
I want to pass this file verbatim (functions preserved) to a page, so that the page can use that data to build itself. I already have the page template and everything else sorted out; I just need to find a way to pass this data into the page.
The first approach I tried is requiring this file in gatsby-node.js and then passing it as pageContext.
gatsby-node.js
const path = require('path');
exports.createPages = ({ actions, graphql }) => {
  const { createPage } = actions;
  return graphql(`
    query loadQuery {
      allFile(filter: { sourceInstanceName: { eq: "test" } }) {
        edges {
          node {
            relativePath
            absolutePath
          }
        }
      }
    }
  `).then(result => {
    if (result.errors) {
      throw result.errors;
    }
    for (const node of result.data.allFile.edges.map(e => e.node)) {
      const data = require(node.absolutePath);
      createPage({
        path: node.relativePath,
        component: path.resolve('./src/templates/test.js'),
        context: data,
      });
    }
  });
};
gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `test`,
        path: `${__dirname}/src/test/`,
      },
    },
  ],
}
src/templates/test.js
import React from 'react';

const index = ({ pageContext }) => (
  <p>{pageContext.Number2()}</p>
);

export default index;
However, I get this warning when running the dev server:
warn Error persisting state: ({ number1 }) => number1 / 2 could not be cloned.
If I ignore it and try to use the function anyway, Gatsby crashes with this error:
WebpackError: TypeError: pageContext.Number2 is not a function
After searching for a while, I found this:
The pageContext was always serialized so it never worked to pass a function and hence this isn't a bug. We might have not failed before though.
- Gatsby#23675
which told me this approach wouldn't work.
How could I pass this data into a page? I've considered JSON instead, however, JSON can't contain functions.
I've also tried finding a way to register a JSX object directly, however I couldn't find a way.
Regarding the main topic: as you spotted, it can't be done that way because the data is serialized.
How could I pass this data into a page? I've considered JSON instead,
however, JSON can't contain functions.
Well, this is partially true. You can always do something like:
{"function":{"arguments":"a,b,c","body":"return a*b+c;"}}
And then, after parsing the JSON (note that function is a reserved word, so the parsed object needs another name):
const parsed = JSON.parse(json);
let func = new Function(parsed.function.arguments, parsed.function.body);
In this case, you are (de)serializing the function as JSON and then constructing a real function from the stored parameters and body. This approach may work in your scenario.
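For illustration, here is a minimal end-to-end sketch of that idea applied to the question's Number2 (the plain parameter instead of the destructured one is my simplification; the rest follows the question):

// gatsby-node.js: pass the function as serializable parts
const context = {
  number1: 10,
  Number2: { arguments: "number1", body: "return number1 / 2;" },
};

// src/templates/test.js: rebuild and call it
const Number2 = new Function(context.Number2.arguments, context.Number2.body);
console.log(Number2(context.number1)); // 5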
Regarding the JSX, I guess you can try something like:
for (const node of result.data.allFile.edges.map(e => e.node)) {
  const data = require(node.absolutePath);
  createPage({
    path: node.relativePath,
    component: path.resolve('./src/templates/test.js'),
    context: {
      someComponent: () => <h1>Hi!</h1>
    },
  });
}
And then:
import React from 'react';
const Index = ({ pageContext: { someComponent: SomeComponent } }) => (
  <div><SomeComponent /></div>
);

export default Index;
Note: I don't know if it's a typo from the question but index should be capitalized as Index
In this case, you are aliasing the someComponent as SomeComponent, which is a valid React component.
I'm importing an array called CHART_CARDS in a Vue component. This is meant to provide the initial state for another array, called chartCards, which a user can change.
import { CHART_CARDS } from '~/constants/chartCards'
...
export default {
  data(){
    return {
      chartCards: []
    }
  },
  async created(){
    if (this.$auth.user.settings && this.$auth.user.settings.length) {
      this.chartCards = this.$auth.user.settings
    } else {
      this.chartCards = CHART_CARDS
    }
  }
}
So the data property chartCards is either taken from the imported variable or from a pre-existing database table.
Here's where things get weird: I have a method called reset, which is supposed to restore the chartCards variable to the value of the imported array:
async reset () {
  console.log('going to reset. CHART_CARDS looks like:')
  console.log(CHART_CARDS)
  this.chartCards = CHART_CARDS
  await this.updateCards()
  console.log('chart cards after updating:')
  console.log(this.chartCards)
}
Somehow, CHART_CARDS is also changed when chartCards is updated. The two console logs above print the same array, so the reset doesn't work. CHART_CARDS is changed nowhere else in the code; all references to CHART_CARDS are shown in the above code. How is its value being updated?
As others have mentioned in the comments, your CHART_CARDS array probably contains objects, and you are most likely changing these objects somewhere in your code.
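To see the aliasing at work, here is a minimal sketch (illustrative data, not your actual cards):

const CHART_CARDS = [{ title: 'Sales' }]
const chartCards = CHART_CARDS      // copies the reference, not the objects
chartCards[0].title = 'Changed'
console.log(CHART_CARDS[0].title)   // 'Changed': the "constant" mutated too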
There is an easy way around this with some minor API tweaks.
~/constants/chartCards.js
export function getChartCards () {
  return [
    {...},
    {...},
    ...
  ]
}
App.vue
import { getChartCards } from '~/constants/chartCards'
...
export default {
  data(){
    return {
      chartCards: []
    }
  },
  async created(){
    if (this.$auth.user.settings && this.$auth.user.settings.length) {
      this.chartCards = this.$auth.user.settings
    } else {
      this.chartCards = getChartCards()
    }
  }
}
Since we're always creating a new array with different objects, changes made to one chartCards instance will not reflect in another.
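The reset method from the question then works as intended, since every call hands back fresh objects (a sketch based on the question's code):

async reset () {
  this.chartCards = getChartCards() // fresh objects on every call
  await this.updateCards()
}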
If you absolutely want to stick with your current API, that can potentially be achieved as well. You just need to create a deep copy of your CHART_CARDS array before assigning it.
import { CHART_CARDS } from '~/constants/chartCards'
...
export default {
  data(){
    return {
      chartCards: []
    }
  },
  async created(){
    if (this.$auth.user.settings && this.$auth.user.settings.length) {
      this.chartCards = this.$auth.user.settings
    } else {
      this.chartCards = JSON.parse(JSON.stringify(CHART_CARDS)) // This will not work if your CHART_CARDS has methods
    }
  }
}
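As an aside, if your target environments support it, structuredClone is a cleaner way to deep-copy; it handles more types than a JSON round-trip, though still not functions:

this.chartCards = structuredClone(CHART_CARDS)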
I've managed to test Vuex getters that are isolated from other code. I'm now facing some issues when a getter depends on other getters; see the following example:
getters.js
export const getters = {
  getFoo: (state) => (prefix) => {
    return `${prefix}: ${state.name}`;
  },
  getFancyNames(state, getters) {
    return [
      getters.getFoo('foo'),
      getters.getFoo('bar')
    ]
  }
}
getters.spec.js
import { getters } from './getters';
const state = {
  name: 'stackoverflow'
};

describe('getFoo', () => {
  it('return name with prefix', () => {
    expect(getters.getFoo(state)('name')).toBe('name: stackoverflow');
  });
});

describe('getFancyNames', () => {
  // mock getters
  const _getters = {
    getFoo: getters.getFoo(state)
  }

  it('returns a collection of fancy names', () => {
    expect(getters.getFancyNames(state, _getters)).toEqual([
      'foo: stackoverflow',
      'bar: stackoverflow'
    ]);
  });
});
When the tested getter depends on another getter that takes arguments, I have to reference the original getters.getFoo in the mock, and this breaks the idea of mocking, since the tests start to depend on each other. When the getters grow and the dependency graph has several levels, the tests become complex.
Maybe this is the way to go, just wanted to check that I'm not missing anything...
I agree with you that referencing the actual collaborator in your mock defeats the purpose of a mock. So instead, I would directly return whatever you want your collaborator to return.
In your example, instead of doing something like this:
// mock getters
const _getters = {
  getFoo: getters.getFoo(state)
}
You would simply put in whatever getters.getFoo(state) would return:
const _getters = {
  getFoo: 'foobar'
}
If you have a getter that takes an additional argument you would simply return a function that returns a constant:
const _getters = {
  getFoo: x => 'foobar',
}
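With a stub like that in place, the dependent getter can be tested without ever touching the real getFoo (a sketch reusing the question's state and getters):

it('returns a collection of fancy names', () => {
  const _getters = { getFoo: prefix => `${prefix}: stackoverflow` };
  expect(getters.getFancyNames(state, _getters)).toEqual([
    'foo: stackoverflow',
    'bar: stackoverflow'
  ]);
});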
Since I'm using Jest, there is an option on Jest mock functions that lets you specify the return value when called:
mockReturnValueOnce or mockReturnValue
More information can be found here: https://facebook.github.io/jest/docs/en/mock-functions.html#mock-return-values
Using the same code as in the question this could be solved like this:
const state = {
  name: 'stackoverflow'
}

describe('getFancyNames', () => {
  const getFoo = jest.fn()
  getFoo.mockReturnValueOnce('foo: stackoverflow')
  getFoo.mockReturnValueOnce('bar: stackoverflow')

  it('returns a collection of fancy names', () => {
    expect(getters.getFancyNames(state, { getFoo })).toEqual([
      'foo: stackoverflow',
      'bar: stackoverflow'
    ])
  })
})
A cleaner way that I have found is to create your own mocked getters object. This only works if the getter uses the unaltered state like the question does.
const state = {
  name: 'stackoverflow'
}

describe('getFancyNames', () => {
  const mockedGetters = {
    ...getters, // This can be skipped
    getFoo: getters.getFoo(state), // We only overwrite what is needed
  };

  it('returns a collection of fancy names', () => {
    expect(getters.getFancyNames(state, mockedGetters)).toEqual([
      'foo: stackoverflow',
      'bar: stackoverflow'
    ])
  })
})
Extra
In the case that you do need to call other getter functions, just pass a mocked getters object into another mocked getters object. It sounds worse than it actually is.
getters.js
export const getters = {
  getBar(state) { // new extra hard part!
    return state.bar
  },
  getFoo: (state, getters) => (prefix) => {
    return `${prefix}: ${state.name} with some ${getters.getBar}`;
  },
  getFancyNames(state, getters) {
    return [
      getters.getFoo('foo'),
      getters.getFoo('bar')
    ]
  }
}
const _mockedGetters = {
  ...getters, // This can be skipped
  getBar: getters.getBar(state), // We only overwrite what is needed
};

const mockedGetters = {
  ..._mockedGetters, // Use the mocked object!
  getFoo: getters.getFoo(state, _mockedGetters), // getFoo receives the mocked getBar
};
// continue down the line as needed!
I'm playing around with cyclejs and I'm trying to figure out what the idiomatic way to handle many sources/intents is supposed to be. I have a simple cyclejs program below in TypeScript with comments on the most relevant parts.
Are you supposed to model sources/intents as discrete events like you would in Elm or Redux, or are you supposed to be doing something a bit more clever with stream manipulation? I'm having a hard time seeing how you would avoid this event pattern when the application is large.
If this is the right way, wouldn't it just end up being a JS version of Elm with the added complexity of stream management?
import { div, DOMSource, h1, makeDOMDriver, VNode, input } from '@cycle/dom';
import { run } from '@cycle/xstream-run';
import xs, { Stream } from 'xstream';
import SearchBox, { SearchBoxProps } from './SearchBox';
export interface Sources {
  DOM: DOMSource;
}

export interface Sinks {
  DOM: Stream<VNode>
}

interface Model {
  search: string
  searchPending: {
    [s: string]: boolean
  }
}

interface SearchForUser {
  type: 'SearchForUser'
}

interface SearchBoxUpdated {
  type: 'SearchBoxUpdated',
  value: string
}

type Actions = SearchForUser | SearchBoxUpdated;
/**
 * Should I be mapping these into discrete events like this?
 */
function intent(domSource: DOMSource): Stream<Actions> {
  return xs.merge(
    domSource.select('.search-box')
      .events('input')
      .map((event: Event) => ({
        type: 'SearchBoxUpdated',
        value: ((event.target as any).value as string)
      } as SearchBoxUpdated)),
    domSource.select('.search-box')
      .events('keypress')
      .map(event => event.keyCode === 13)
      .filter(result => result === true)
      .map(e => ({ type: 'SearchForUser' } as SearchForUser))
  )
}
function model(action$: Stream<Actions>): Stream<Model> {
  const initialModel: Model = {
    search: '',
    searchPending: {}
  };

  /*
   * Should I be attempting to handle events like this?
   */
  return action$.fold((model, action) => {
    switch (action.type) {
      case 'SearchForUser':
        return model;
      case 'SearchBoxUpdated':
        return Object.assign({}, model, { search: action.value })
    }
  }, initialModel)
}
function view(model$: Stream<Model>): Stream<VNode> {
  return model$.map(model => {
    return div([
      h1('Github user search'),
      input('.search-box', { value: model.search })
    ])
  })
}

function main(sources: Sources): Sinks {
  const action$ = intent(sources.DOM);
  const state$ = model(action$);
  return {
    DOM: view(state$)
  };
}

run(main, {
  DOM: makeDOMDriver('#main-container')
});
In my opinion you shouldn't be multiplexing intent streams like you do (merging all the intents into a single stream).
Instead, you can try returning multiple streams from your intent function.
Something like:
// Assumed shape of the returned intents (not spelled out in the original answer):
interface SearchBoxIntents {
  updateSearchBox$: Stream<string>;
  searchForUser$: Stream<boolean>;
}

function intent(domSource: DOMSource): SearchBoxIntents {
  const input = domSource.select("...");

  const updateSearchBox$: Stream<string> = input
    .events("input")
    .map(/*...*/)

  const searchForUser$: Stream<boolean> = input
    .events("keypress")
    .filter(isEnterKey)
    .mapTo(true)

  return { updateSearchBox$, searchForUser$ };
}
You can then map those actions to reducers in the model function, merge the reducers, and finally fold them:
function model({ updateSearchBox$, searchForUser$ }: SearchBoxIntents): Stream<Model> {
  const updateSearchBoxReducer$ = updateSearchBox$
    .map((value: string) => model => ({ ...model, search: value }))

  // for the moment this stream doesn't update the model, so you can ignore it
  const searchForUserReducer$ = searchForUser$
    .mapTo(model => model);

  return xs.merge(updateSearchBoxReducer$, searchForUserReducer$)
    .fold((model, reducer) => reducer(model), initialModel);
}
Multiple advantages to this solution:
you can type the arguments of your functions and check that the right streams are passed along;
you don't need a huge switch if the number of actions increases;
you don't need action identifiers.
In my opinion, multiplexing/demultiplexing streams is good when there is a parent/child relationship between two components. This way, the parent can consume only the events it needs to (this is more of an intuition than a general rule; it would need some more thinking :))
I have a mutation like
mutation deleteRecord($id: ID) {
  deleteRecord(id: $id) {
    id
  }
}
and in another location I have a list of elements.
Is there something better I could return from the server, and how should I update the list?
More generally, what is best practice for handling deletes in apollo/graphql?
I am not sure whether it is good practice, but here is how I handle the deletion of an item in react-apollo with updateQueries:
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import update from 'react-addons-update';
import _ from 'underscore';

const SceneCollectionsQuery = gql`
  query SceneCollections {
    myScenes: selectedScenes (excludeOwner: false, first: 24) {
      edges {
        node {
          ...SceneCollectionScene
        }
      }
    }
  }
`;

const DeleteSceneMutation = gql`
  mutation DeleteScene($sceneId: String!) {
    deleteScene(sceneId: $sceneId) {
      ok
      scene {
        id
        active
      }
    }
  }
`;
const SceneModifierWithStateAndData = compose(
  ...,
  graphql(DeleteSceneMutation, {
    props: ({ mutate }) => ({
      deleteScene: (sceneId) => mutate({
        variables: { sceneId },
        updateQueries: {
          SceneCollections: (prev, { mutationResult }) => {
            const myScenesList = prev.myScenes.edges.map((item) => item.node);
            const deleteIndex = _.findIndex(myScenesList, (item) => item.id === sceneId);
            if (deleteIndex < 0) {
              return prev;
            }
            return update(prev, {
              myScenes: {
                edges: {
                  $splice: [[deleteIndex, 1]]
                }
              }
            });
          }
        }
      })
    })
  })
)(SceneModifierWithState);
Here is a similar solution that works without underscore.js. It is tested with react-apollo version 2.1.1 and creates a delete-button component:
import React from "react";
import { Mutation } from "react-apollo";
import gql from "graphql-tag"; // missing in the original snippet, but required for the tags below

const GET_TODOS = gql`
  {
    allTodos {
      id
      name
    }
  }
`;

const DELETE_TODO = gql`
  mutation deleteTodo($id: ID!) {
    deleteTodo(id: $id) {
      id
    }
  }
`;

const DeleteTodo = ({ id }) => {
  return (
    <Mutation
      mutation={DELETE_TODO}
      update={(cache, { data: { deleteTodo } }) => {
        const { allTodos } = cache.readQuery({ query: GET_TODOS });
        cache.writeQuery({
          query: GET_TODOS,
          data: { allTodos: allTodos.filter(e => e.id !== id) }
        });
      }}
    >
      {(deleteTodo, { data }) => (
        <button
          onClick={e => {
            deleteTodo({
              variables: { id }
            });
          }}
        >
          Delete
        </button>
      )}
    </Mutation>
  );
};

export default DeleteTodo;
All those answers assume query-oriented cache management.
What if I remove the user with id 1 and this user is referenced in 20 queries across the entire app? Reading the answers above, I'd have to assume I would have to write code to update the cache for all of them. This would be terrible for the long-term maintainability of the codebase and would make any refactoring a nightmare.
The best solution in my opinion would be something like apolloClient.removeItem({__typeName: "User", id: "1"}) that would:
replace any direct reference to this object in the cache with null
filter out this item in any [User] list in any query
But it doesn't exist (yet)
It might be a great idea, or it could be even worse (e.g. it might break pagination).
There is an interesting discussion about it: https://github.com/apollographql/apollo-client/issues/899
I would be careful with those manual query updates. It looks appetizing at first, but it won't scale as your app grows. At least create a solid abstraction layer on top of it, e.g.:
next to every query you define (e.g. in the same file), define a function that cleans it properly, e.g.
const MY_QUERY = gql``;

// it's a local 'cleaner' - relatively easy to maintain, as you can require proper
// cleaner updates during code review when the query changes
export function removeUserFromMyQuery(apolloClient, userId) {
  // clean here
}
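For illustration, the cleaner's body could look like this (assuming MY_QUERY returns a users list; the field names are hypothetical):

export function removeUserFromMyQuery(apolloClient, userId) {
  // read the cached result, filter the deleted user out, write it back
  const data = apolloClient.readQuery({ query: MY_QUERY });
  apolloClient.writeQuery({
    query: MY_QUERY,
    data: { ...data, users: data.users.filter(u => u.id !== userId) },
  });
}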
and then collect all those updates and call them all in the final update:
function handleUserDeleted(userId, client) {
  removeUserFromMyQuery(client, userId)
  removeUserFromSearchQuery(client, userId)
  removeIdFrom20MoreQueries(client, userId)
}
For Apollo v3 this works for me:
const [deleteExpressHelp] = useDeleteExpressHelpMutation({
  update: (cache, { data }) => {
    cache.evict({
      id: cache.identify({
        __typename: 'express_help',
        id: data?.delete_express_help_by_pk?.id,
      }),
    });
  },
});
From the new docs:
Filtering dangling references out of a cached array field (like the Deity.offspring example above) is so common that Apollo Client performs this filtering automatically for array fields that don't define a read function.
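On a related note, after cache.evict it can be worth running Apollo 3's garbage collector so normalized objects left unreachable get dropped as well (deletedId below is illustrative):

cache.evict({ id: cache.identify({ __typename: 'express_help', id: deletedId }) });
cache.gc(); // removes objects no longer reachable from the cache root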
Personally, I return an int which represents the number of items deleted. Then I use updateQueries to remove the document(s) from the cache.
I have faced the same issue when choosing the appropriate return type for such mutations, where the REST API associated with the mutation could return HTTP 204, 404 or 500.
Defining an arbitrary type and then returning null (types are nullable by default) does not seem right, because you don't know what happened, i.e. whether it was successful or not.
Returning a boolean solves that issue: you know whether the mutation worked or not, but you lack information when it didn't, like a better error message to show on the frontend; for example, if we got a 404 we could return "Not found".
Returning a custom type feels a bit forced, because it is not actually a type of your schema or business logic; it just serves to fix a "communication issue" between REST and GraphQL.
I ended up returning a string: I can return the resource ID/UUID, or simply "ok", in case of success, and an error message in case of failure.
Not sure if this is a good practice or GraphQL-idiomatic.