Pointfree style with template strings in Ramda.js - javascript

I'm having problems writing a pointfree-style function in Ramda.js and wondered if anybody can help me with that. The getEnv function reads an environment variable and logs to the console if it couldn't be found.
Here is my code:
const env = name => R.path(['env', name], process);

const getEnv = name => R.pipe(
  env,
  R.when(R.isNil, () => log(`Missing env "${name}"`))
)(name);
console.log(getEnv('myenv'))
I'd like to drop the name parameter of the getEnv function (and if possible also of the env function), but I don't know how to do this.

The function getEnv does more than it should: it either returns the content of the path or logs a validation message.
Split it into two separate functions. In my example below, I call them findPath and validatePath, which work generically for all paths. I've wrapped validatePath into another function called validateEnvPath, which searches directly under "env".
To get rid of env you can do the following: R.flip(R.curry(R.path)). This curries the function and then flips its arguments, so you can first tell the function where you want to query.
const process = {env: {myenv: ':)'}}

const path = R.flip(R.curry(R.path))

const findPathInProcess = R.pipe(
  path(process),
  R.ifElse(
    R.isNil,
    R.always(undefined),
    R.identity
  )
)

const validatePath = path =>
  validationPathResponse(findPathInProcess(path))(`can't find something under [${path}]`)

const validateEnvPath = path =>
  validatePath(buildPath(['env'])(path))

const buildPath = xs => x =>
  xs.concat(x)

const validationPathResponse = response => errorMessage =>
  response
    ? response
    : errorMessage

console.log(validatePath(['env', 'myenv']))
console.log(validateEnvPath('myenv'))
console.log(validateEnvPath('yourenv'))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>

Consider using the Either monad. Sanctuary has it already implemented and plays nicely with its brother Ramda:
const viewEnv = S.pipe([
  S.flip(R.append)(['env']),
  R.lensPath,
  R.view,
  S.T(process),
  S.toEither('Given variable could not be retrieved')
])
const log = R.tap(console.log)

const eitherSomeVar = viewEnv('someVar')
const eitherWhatever = S.bimap(log)(doSomeOtherStuff)(eitherSomeVar)
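To see how the resulting Either is consumed, here is a minimal sketch with a hand-rolled Either standing in for Sanctuary's (the real S.toEither and S.either have the same shape); viewEnv, the error message, and the fake process come from the answer above, everything else is illustrative:

```javascript
// Hand-rolled Either as a stand-in for Sanctuary's (illustrative only).
const Left  = value => ({ tag: 'Left',  value });
const Right = value => ({ tag: 'Right', value });

// toEither: wrap a possibly-missing value, keeping the error message.
const toEither = msg => x => (x == null ? Left(msg) : Right(x));

// either: fold the two cases into one plain value.
const either = onLeft => onRight => e =>
  e.tag === 'Left' ? onLeft(e.value) : onRight(e.value);

const process = { env: { someVar: ':)' } };

const viewEnv = name =>
  toEither('Given variable could not be retrieved')(process.env[name]);

console.log(either(msg => `error: ${msg}`)(v => `got: ${v}`)(viewEnv('someVar')));
// got: :)
console.log(either(msg => `error: ${msg}`)(v => `got: ${v}`)(viewEnv('other')));
// error: Given variable could not be retrieved
```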

In addition, one could also write the following:
const path = R.flip(R.path) // R.path is already curried
const findPathInProcess = R.pipe(
  path(process),
  R.when(
    R.isNil,
    R.always(undefined)
  )
)

Related

Sinon.stub.resolves returns undefined when awaited within Buffer.from

I have the following code:
describe('run()', () => {
  const ret = ['returnValue'];
  const expectedData = 'data';

  beforeEach(() => {
    sinon.stub(myStubbedService, 'myStubbedMethod').resolves(ret);
  });

  it('should not break', async () => {
    const foo = myStubbedService.myStubbedMethod();
    const bar = await myStubbedService.myStubbedMethod();
    const works = Buffer.from(bar[0], 'hex');
    console.log(works);
    const breaks = Buffer.from(await myStubbedService.myStubbedMethod()[0], 'hex');
    console.log(breaks);
  });
});
Logging works prints the correct Buffer, but logging breaks throws:
TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be of type string or an instance of Buffer, ArrayBuffer, or Array or an Array-like Object. Received undefined
I am pretty sure the code for breaks works just the same as the code for works but the test fails. What am I missing?
Actually, the way you get works is not the same as breaks. The works case is easy to understand: wait for the response of myStubbedMethod, then take the first item of the response.
Take a look at the way you get breaks:
const breaks = Buffer.from(await myStubbedService.myStubbedMethod()[0], 'hex');
As you may know, myStubbedService.myStubbedMethod() returns a Promise; when you read the 0 property of a Promise, you get undefined.
And awaiting a plain value just gives you back that value.
So your code effectively becomes:
const breaks = Buffer.from(await undefined, 'hex');
Just add parentheses:
const breaks = Buffer.from((await myStubbedService.myStubbedMethod())[0], 'hex');
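The precedence issue can be reproduced without sinon at all; a plain async function (a hypothetical stand-in for the stubbed method) behaves the same way:

```javascript
// Plain async function standing in for the stubbed method (illustrative).
const myStubbedMethod = async () => ['deadbeef'];

(async () => {
  // `await myStubbedMethod()[0]` parses as `await (myStubbedMethod()[0])`:
  // indexing the Promise object itself yields undefined,
  // and awaiting undefined is still undefined.
  const breaks = await myStubbedMethod()[0];
  console.log(breaks); // undefined

  // Parentheses first await the Promise, then index the resolved array.
  const works = (await myStubbedMethod())[0];
  console.log(works); // 'deadbeef'
})();
```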

Define template strings as const to evaluate later

Is there a way to accomplish something like this:
const API_URL = `https://api.my-data-provider.com/items/${id}`
// [...]
// and later on or in another file, use it with something like
const result = await fetch(API_URL(id)) // or API_URL.apply(id), API_URL.apply({ id: 23}), etc...
I want to save template literals in a constants/configuration file, to use them later.
Or is there any other standard or well-established way to do this kind of thing?
You could use a function for this:
const generateApiUrl = (id) => `https://api.my-data-provider.com/items/${id}`;
const result = await fetch(generateApiUrl(id))
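If several such templates live in one configuration file, the same idea scales to an object of small functions (the names and the search URL below are illustrative, not from the question):

```javascript
// A hypothetical config module: each "template" is just a function.
const urls = {
  item:   id        => `https://api.my-data-provider.com/items/${id}`,
  search: (q, page) => `https://api.my-data-provider.com/search?q=${encodeURIComponent(q)}&page=${page}`,
};

console.log(urls.item(23));
// https://api.my-data-provider.com/items/23
console.log(urls.search('a b', 2));
// https://api.my-data-provider.com/search?q=a%20b&page=2
```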

How can I minimize duplication of code that sometimes needs a callback?

In my gulpfile, I have a task that processes all of my pages and another task that watches my pages for changes and processes just the changed page. It looks like this:
const buildPages = path => cb => {
  gulp.src(path)
    // some lines of piping
    .pipe(gulp.dest(pages.dist));
  cb();
}

const watchPages = () =>
  gulp.watch(pages.src).on('change', path =>
    gulp.src(path)
      // the same piping as above
      .pipe(gulp.dest(pages.dist))
  );
The .on() method of the chokidar watcher object returned by gulp.watch() is not passed a callback function, while the gulp task above it requires one. So to remove the code duplication, I can do this:
const buildPages = path =>
  gulp.src(path)
    // some lines of piping
    .pipe(gulp.dest(pages.dist));

const buildPagesWithCallback = path => cb => {
  buildPages(path)
  cb();
}

const watchPages = () =>
  gulp.watch(pages.src).on('change', path =>
    buildPages(path)
  );
Is this the right way to go about it, or is there a way to remove duplication without creating an extra function (perhaps by making the watcher receive a callback)?
Not sure what your other requirements/needs are, but given your description, I would usually have a setup like this (assuming you're using Gulp 4):
const src = [ './src/pageA/**/*.js', ...others ];
const dist = './dist';
const buildPages = () => gulp.src(src).pipe(...).pipe(gulp.dest(dist));
const callback = () => { /* do stuff */ };
exports.buildPageWithCallback = gulp.series(buildPages, callback);
exports.watchPages = () => gulp.watch(src, gulp.series(buildPages));
exports.watchPageWithCallback = () => gulp.watch(src, gulp.series(buildPages, callback));
I would use gulp.series to run the callback without explicitly passing a callback to a task-maker function, and pass a task directly to gulp.watch.
If your requirements demand that buildPages take in a path, then I think the way you're doing it is right. Sorry if I misunderstood the question.
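For intuition, here is a gulp-free sketch of what gulp.series does with such task factories; series and the logging tasks below are simplified stand-ins, not the real gulp API:

```javascript
// Simplified stand-in for gulp.series: runs callback-style tasks in order.
const series = (...tasks) => done => {
  const run = i => (i === tasks.length ? done() : tasks[i](() => run(i + 1)));
  run(0);
};

const log = [];

// A task factory: takes a path, returns a task that series can run.
const buildPages = path => done => {
  log.push(`build ${path}`);
  done();
};

const callback = done => {
  log.push('callback');
  done();
};

series(buildPages('src/a.html'), callback)(() => console.log(log));
// [ 'build src/a.html', 'callback' ]
```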

Using buffers & streams to search for a string in multiple text files and find its line number in Node.js

Please help me search for a string across multiple files. I need to print the line number of that string, together with the filename, using the buffer & streams concepts in Node.js.
For example:
There are 5 text files, and the string "hello" appears on lines 10 and 15 of the 3rd file, and on line 50 of the 5th file. I need to print the name of the 3rd file along with the line numbers where "hello" was found, and likewise for the 5th file.
Please help me write this program using the buffer concept in Node.js.
const readline = require("readline");
const fs = require("fs");

// Start methods implementation
const beginSearch = (readStream, filePath, queries) => {
  let lineCount = 0;
  const matches = new Map();
  queries.forEach(query => matches.set(query, []));
  return new Promise((resolve, reject) => {
    readStream.on("line", line => {
      lineCount++;
      for (const query of matches.keys()) {
        if (searchForTerm(line, query))
          matches.set(query, [...matches.get(query), lineCount]);
      }
    });
    readStream.on("close", () => resolve({
      filePath,
      matches
    }));
  });
};

const searchForTerm = (line, query) => line.match(query);

const createLineInterfaces = filePaths =>
  filePaths.map(filePath => {
    const readStream = readline.createInterface({
      input: fs.createReadStream(filePath),
      crlfDelay: Infinity
    });
    return {
      filePath,
      readStream
    };
  });
// End methods implementation

// Start main function
const filesToSearch = ["sample.txt", "sample2.txt"];
const queriesToSearch = ["hello"];

const searchProms = createLineInterfaces(filesToSearch).map(
  ({ readStream, filePath }) =>
    beginSearch(readStream, filePath, queriesToSearch)
);

Promise.all(searchProms).then(searchResults =>
  searchResults.forEach(result => console.log(result))
);
// End main function
A little explanation
I am using the readline module to split each file into lines. Keep in mind the whole implementation uses streams. I then attach a listener to the line event and search each line for a specific query. The search method is a simple regexp match; you could use a fuzzy search method if you want. The matched lines are saved in a Map whose keys are the queries and whose values are the arrays of line numbers where each query was found.
I am assuming that you are familiar with the stream concept and know about ES6 features.

Using Task monads and Reader monads in javascript (DynamoDB and Facebook API)

Here we are trying to make a lot of calls in a functional way with JavaScript. The problem is that we end up with Reader.of(Task.of(Reader.of(Task.of(...)))), so we need to map(map(map(map(...)))) over the values we want to operate on.
The computation needs to go to a DynamoDB table, get a specific property, then go to DynamoDB again and get another property, after that go to the Facebook API to get a collection, and finally store that info in a Dynamo table. Each of these calls uses the AWS JavaScript plugin or the Facebook Graph plugin in Node. So we need to pass the AWS dependency on run, then fork, then pass AWS and fork again, then pass FBGraph on run and fork again; this is kind of tedious when you are building a computation.
Anyway, here is the code:
require('pointfree-fantasy').expose(global)
import Reader from 'fantasy-readers'
import Task from 'data.task'
import _ from 'ramda'
const log = x => {console.log(x); return x}
const ReaderTask = Reader.ReaderT(Task)
// scan :: string -> [a]
const scan = x => Reader.ask.map(env => env.aws.scanAsync({tableName: x}))
// batchGetItem :: [a] -> [b]
const batchGetItem = x => Reader.ask.map(env => env.aws.batchGetItemAsync(x))
// batchWriteItem :: [a] -> [b]
const batchWriteItem = x => Reader.ask.map(env => env.aws.batchWriteItemAsync(x))
// scanCampaigns :: null -> [a]
const scanCampaigns = () => scan('CampaignStats')
const FBBatchGetItem = x => Reader.ask.map(env => env.fb.batchAsync(x))
const getCampaigns = compose(scanCampaigns, Reader.of)
const groupUsers = _.groupBy(x => x.user_fbid)
const batchGetAccessToken = chain(batchGetItem, ReaderTask.of)
const getCampaignsInsights = chain(FBBatchGetItem, ReaderTask.of)
const saveInsights = chain(batchWriteItem, ReaderTask.of)
const updateCampaignStats = chain(batchWriteItem, ReaderTask.of)
const taskOfEvery = (Task, num) => compose(map(Task.of),_.splitEvery(num))
const runTaskWithFn = (fn, task) => fn(task.fork(err => 'err', x => x))
const filterActive = _.filter(x => x.active === 'true')
const getItems = x => x.Items
const groupAndFilter = compose(groupUsers, filterActive, getItems)
// filterByLastFetch :: ([a], string) => [a]
const filterByLastFetch = (x, y) => x.filter(x => x.last_fetch < y)
export {getCampaigns, batchGetAccessToken, getCampaignsInsights, saveInsights,
groupUsers,filterByLastFetch, updateCampaignStats, taskOfEvery,
runTaskWithFn, filterActive, groupAndFilter,
getItems}
The objective is to pass the AWS plugin and the FBGraph plugin only once to the computation, and build an elegant composition like:
const computation = compose(saveIntoDynamo3, fetchFromFacebook,fetchFromDynamo2,fetchFromDynamo)
and then:
computation().run({aws: AWSService, FB: FBGraph})
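For illustration, the "pass the environment once" goal can be sketched with plain functions of the shape env => Promise standing in for ReaderT(Task); the step names and the fake env below are illustrative stand-ins, not the real AWS/FBGraph plugins:

```javascript
// Each step: prevResult => env => Promise(nextResult) — a poor man's ReaderTask.
const fetchFromDynamo   = () => env => Promise.resolve(env.aws.scan('CampaignStats'));
const fetchFromFacebook = items => env => Promise.resolve(env.fb.batch(items));
const saveIntoDynamo    = rows => env => Promise.resolve(env.aws.write(rows));

// Kleisli composition: thread the result, hand the same env to every step.
const composeK = (...steps) => env =>
  steps.reduce((acc, step) => acc.then(x => step(x)(env)), Promise.resolve(undefined));

const computation = composeK(fetchFromDynamo, fetchFromFacebook, saveIntoDynamo);

// Fake environment; the real one would be {aws: AWSService, fb: FBGraph}.
const fakeEnv = {
  aws: { scan: table => [`row-from-${table}`], write: rows => rows.length },
  fb:  { batch: xs => xs.map(x => `fb(${x})`) },
};

computation(fakeEnv).then(console.log);
```

The environment is supplied exactly once, at the end, which is the same shape as computation().run({aws: AWSService, FB: FBGraph}) in the question.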
