Let's say I have two bacon.js streams:
const stream1 = $('#input1').asEventStream('input')
.map(e => e.target.value)
.filter(inp => inp.length > 3)
.log();
const stream2 = $('#input2').asEventStream('input')
.map(e => e.target.value)
.filter(inp => inp.length > 3)
.log();
I would like to extract the common step of filtering by the input length into a variable and apply it to both streams afterwards. Fantasy code:
const my_filter = Bacon.map(e => e.target.value).filter(inp => inp.length > 3)
const stream1 = $('#input1').asEventStream('input')
.apply(my_filter)
.log();
const stream2 = $('#input2').asEventStream('input')
.apply(my_filter)
.log();
Is there an idiomatic way to do that?
EDIT1: To clarify, my_filter is just an example. I would like to be able to refactor an arbitrary chain of combinators and apply them to multiple streams.
EDIT2: As Bless Yahu noticed, I need an additional map to get the value out of events. This demonstrates my problem even better.
To me this looks like a basic software refactoring problem.
Generally, if you want to apply an arbitrary transformation to a stream, you'll probably want to write a function that takes a stream as an argument and returns the modified stream as a result.
In your particular case I'd remove duplication by extracting steps into functions.
const getValue = input => input.target.value
const my_filter = stream => stream.filter(text => text.length > 3)
const filteredInputValue = ($input) => {
return my_filter($input.asEventStream('input').map(getValue))
}
const stream1 = filteredInputValue($('#input1')).log()
const stream2 = filteredInputValue($('#input2')).log()
In this example, you can easily modify my_filter to apply any other transformations as well, like throttling and text trimming.
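For example, a rough sketch of my_filter extended with trimming and throttling (the 300 ms value is just illustrative, and Bacon's throttle operator is assumed here):
const my_filter = stream => stream
.map(text => text.trim())
.filter(text => text.length > 3)
.throttle(300) // drop events that arrive within 300 ms of the last emitted one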
Add a mapping function to get the value out of the event (I think there is a Bacon convenience function for this as well), change your my_filter into a predicate function, and pass it to filter in your stream:
const getValue = inp=>inp.target.value;
const my_filter = inp => inp.length > 3
const stream1 = $('#input1').asEventStream('input')
.map(getValue)
.filter(my_filter)
.log();
const stream2 = $('#input2').asEventStream('input')
.map(getValue)
.filter(my_filter)
.log();
An example of this is here: http://jsbin.com/vahejuv/edit?html,js,console,output
I am querying data from an API which has nested object properties that I need to access.
const players = teamsData && teamsData.team && teamsData.team.players;
I am using path to get this data.
const players = path(['team', 'players'], teamsData);
This works, but when I combine it with filter I get an error.
Ideally I want to use pipe and combine this with Ramda's filter method.
My code looks like this:
const injuredPlayers = pipe(
path(['team', 'players'], teamsData),
filter(player => player.isInjured)
);
From the code you wrote, it does look like the players variable equals
const players = path(['team', 'players'], teamsData);
I'll continue from there if that's not a problem.
First, I would prefer to use pathOr instead, which is going to look like this:
const pathTeamPlayers = R.pathOr([], ['team', 'players']);
const isInjured = player => player.isInjured
const filterInjured = R.filter(isInjured)
const teamDataToInjuredPlayers = R.pipe(
pathTeamPlayers,
filterInjured,
);
/* result */
teamDataToInjuredPlayers(teamsData)
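For instance, with some made-up data just to show the shape (the player objects here are purely illustrative):
const teamsData = {
  team: {
    players: [
      { name: 'A', isInjured: true },
      { name: 'B', isInjured: false }
    ]
  }
};
teamDataToInjuredPlayers(teamsData); //=> [{ name: 'A', isInjured: true }]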
If you just want the list of injured players, you can write
const injuredPlayers =
filter (player => player.isInjured) (path (['team', 'players'], teamsData))
If you want a function that will retrieve that information from teamsData, then you can write
const getInjuredPlayers = pipe (
path (['team', 'players']),
filter (prop ('isInjured'))
)
(or use pathOr with [] to increase reliability) and call it with
const injuredPlayers = getInjuredPlayers (teamsData)
Your code is combining these two distinct styles.
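For reference, the pathOr variant of that pipeline (same shape as above, just defaulting to an empty list) would look like:
const getInjuredPlayers = pipe (
  pathOr ([], ['team', 'players']),
  filter (prop ('isInjured'))
)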
Guys I have a question.
I have an app in which I register data from an external device using BLE.
I have a "time" and an array for "acceleration".
const time = parseInt(
Buffer.from(characteristic.value, "base64")
.readUInt16LE(0)
.toString(16),
16
);
const acc_dx = [2, 4, 6].map(index => {
const hex = Buffer.from(characteristic.value, "base64")
.readInt16LE(index)
.toString(16);
return Number.parseInt(hex, 16);
});
const accUpdate_acc_dx = [...this.state.array_acc_dx, this.state.time, this.state.acc_dx]
this.setState({ array_acc_dx: accUpdate_acc_dx })
the result for array_acc_dx is like:
[1520,[42,-419,-926],1520,[41,-420,-927],1520,[41,-421,-927],1520,[41,-421,-926],1580,[40,-420,-927],1640,[40,-420,-926],1640,[41,-420,-926],1640,[41,-419,-926]]
I would like to obtain this instead:
1520: [42,-419,-926],
1520: [41,-420,-927],
1520: [41,-421,-927],
1580: [40,-420,-927],
How can I get this kind of array?
I've cleaned up a few bits in your code, but I assume the following code does what you want it to do.
I don't think it is necessary to use a separate buffer for each index you want to read.
I've already mentioned the part about converting a number to hex, just to immediately parse it back to a number; that's useless code.
You can parse the values for time and acc_dx in one go. No need to duplicate code.
When updating the state based on a previous state, use this.setState(previousState => newState).
const buf = Buffer.from(characteristic.value, "base64");
const [time, ...acc_dx] = [0,2,4,6].map(index => buf.readInt16LE(index));
this.setState(state => ({
time,
acc_dx,
array_acc_dx: [
...state.array_acc_dx,
[time, acc_dx]
]
}));
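As an aside, a quick check of the hex round-trip point above (a toy buffer, purely for illustration):
const n = Buffer.from([0x2a, 0x00]).readInt16LE(0);      // 42
console.log(Number.parseInt(n.toString(16), 16) === n);  // true, so the hex detour adds nothing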
A user is typing values in a form and an event is emitted every time a user edits a particular field, with the value being the field they edited.
For example a user typing 3 times into the description field, followed by two times in the name field, would look like
"description" => "description" => "description" => "name" => "name" => ...
I want to buffer unique values and emit them as an array when the user stops typing for x amount of seconds. A value may reappear again in a different buffer window.
Essentially this is to track which fields were updated when the user stopped typing and communicate with the server to save the edited values.
I have this so far, which emits every 3000 ms; it also doesn't prevent duplicates while buffering, but instead "deduplicates" the array afterwards.
this.update$
.bufferTime(3000)
.filter(buffer => buffer.length > 0)
.map(buffer => [...new Set(buffer)])
.subscribe(x => console.log(x));
So it should listen until a value is emitted and then buffer unique values until no more values have been emitted for x seconds, then emit the buffer and repeat. How can one achieve this?
This could be an alternate version:
const { Observable } = Rx;
const log = (prefix) => (...args) => { console.log(prefix, ...args); };
const inputs = document.querySelectorAll('input');
const updates$ = Observable
.fromEvent(inputs, 'input')
.pluck('target', 'id')
;
// wait x ms after last update
const flush$ = updates$.debounceTime(3000);
const buffered$ = updates$
// use `distinct` without `keySelector`, but reset with flush$
.distinct(null, flush$)
// flush the buffer using flush$ as `notifier`
.buffer(flush$)
;
buffered$.subscribe(log('buffered$ =>'));
<script src="https://unpkg.com/#reactivex/rxjs#^5/dist/global/Rx.min.js"></script>
<div><input type="text" placeholder="foo" id="input.foo"></div>
<div><input type="text" placeholder="bar" id="input.bar"></div>
<div><input type="text" placeholder="baz" id="input.baz"></div>
Perhaps my question wasn't clear enough; anyhow, I've managed to solve it as follows (in case it helps someone else).
To have the buffer emit only when the stream was silent for 3 seconds, I start a new timer every time a user types something (event emitted on update$), and use switchMap to cancel the previous one.
this.update$
.buffer(this.update$.switchMap(x => Observable.timer(3000)))
.filter(buffer => buffer.length > 0)
.map(buffer => [...new Set(buffer)])
.subscribe(console.log);
Then to get the buffer to be unique itself rather than having to manually deduplicate it, I had to create a custom operator uniqueBuffer.
this.update$
.uniqueBuffer(this.update$.switchMap(x => Observable.timer(3000)))
.filter(buffer => buffer.length > 0)
.subscribe(console.log);
function uniqueBuffer(emitObservable) {
const source = this;
return Observable.create(subscriber => {
const buffer = new Set();
// Collect every incoming value; the Set takes care of uniqueness
const sourceSub = source.subscribe(value => buffer.add(value));
// Each time the notifier fires, emit the buffered values and start over
const emitSub = emitObservable.subscribe(() => {
subscriber.next([...buffer]);
buffer.clear();
});
// Teardown: release both inner subscriptions
return () => {
sourceSub.unsubscribe();
emitSub.unsubscribe();
};
});
}
Observable.prototype.uniqueBuffer = uniqueBuffer;
Task
I need to make an app that reads a CSV file, transforms its data and then outputs it as another CSV file.
The input has the following format:
13:25:37 up 19 days, 21:35, 4 users, load average: 0.02, 0.02, 0.00
13:25:38 up 19 days, 21:35, 4 users, load average: 0.02, 0.02, 0.00
... so on
For those of you who are UNIX fans, you will recognise this as the output from the console command uptime.
The output format I want is the following:
RowNum, Avg Load
1,0.02
2,0.02
Where the first column is the row number in the CSV and the second is the number part of load average: 0.02.
All other columns are to be ignored.
Problem
Trying to do this as functionally as I can, I decided to use ramda.
This has been ... a challenge, to say the least. Right now, my code has several structural issues, but I want to focus on the main function, which is not working. Every time I execute my code I get the error:
index.js:54
.then( () => console.log("Done!") )
^
TypeError: main(...).then is not a function
This is confusing, because both functions I pass to R.ifElse return a Promise.
Code
const fs = require("fs");
const csvReader = require("csvreader");
const R = require("ramda");
const isString = require("lodash.isstring");
const { promisify } = require("util");
const argv = require("minimist")(process.argv.slice(2));
const appendFileAsync = promisify( fs.appendFile );
const createProcessData = () => {
const stringifyArray = array => `${array.toString()}\n`;
const write = str => fs.appendFileSync( argv.outputFile, str );
const getAvg = R.pipe(
R.replace("load average:", ""),
R.trim
);
let elapsedTime = 1;
const transform = list => [ elapsedTime++, getAvg ( R.nth( 3, list ) ) ];
return R.pipe(
transform,
stringifyArray,
write
);
};
const printHelp = () => {
console.log(`
=== MAN HELP ===
Usage: npm start -- --inputFile input.csv --outputFile output.csv
--inputFile: location of an input file in CSV format
--outputFile: location of an output file to append the new information to.
If this file does not exist, it will be created.
`);
return Promise.resolve();
};
const execute = () => appendFileAsync( argv.outputFile, "Time, Avg Load\n" )
.then( () => csvReader.read( argv.inputFile, createProcessData() ) );
const main = () => {
const isInvalidFileName = R.anyPass( [ R.isNil, R.isEmpty, R.pipe ( isString, R.not ) ] );
const hasInvalidArgs = R.either( isInvalidFileName( argv.inputFile ), isInvalidFileName( argv.outputFile ) );
return R.ifElse(
hasInvalidArgs,
printHelp,
execute
);
};
main()
.then( () => console.log("Done!") )
.catch( console.error );
Question
What is wrong with my code?
This is how to think of ifElse:
const ifElse = (predicate, consequent, alternative) =>
(...val) => predicate(...val) ? consequent(...val) : alternative(...val);
So
const comp = ifElse(
(a, b) => a < b,
(a, b) => `${a} is smaller than ${b}`,
(a, b) => `${a} is at least as large as ${b}`
)
comp(12, 7) //=> "12 is at least as large as 7"
The main point is that the first argument to ifElse is a function. But you pass it the result of this:
R.either( isInvalidFileName( argv.inputFile ), isInvalidFileName( argv.outputFile ) )
Now normally, either returns a function. But that depends upon you supplying functions to it. The assumption is that if you don't supply functions, you know what you are doing and are supplying container types with ap and map methods, so that either is slightly more generic. But you're supplying booleans such as the result of isInvalidFileName( argv.inputFile ). At that point the behavior is not well defined. Perhaps that should be changed, but Ramda's philosophy is generally garbage-in-garbage-out. So that either call, for whatever reason, is returning [undefined].
And that means that you're supplying [undefined] as the predicate function to ifElse. You should receive an error when you try to call it. I haven't tried to trace down why that error is being shadowed by the one you see.
As to how to fix this using Ramda, I'm not really sure where to start. This is pretty far from Ramda's usual style. If nothing else, functions that accept no parameters are extremely rare in Ramda. I guess I would start the Ramda part of this with a function that accepts argv.inputFile and argv.outputFile, either as individual objects or perhaps as that single object argv.
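As a rough sketch only (reusing the question's printHelp and execute, and swapping the pipe(isString, R.not) step for R.complement), that could look like:
const isInvalidFileName = R.anyPass( [ R.isNil, R.isEmpty, R.complement( isString ) ] );
const hasInvalidArgs = ({ inputFile, outputFile }) =>
  isInvalidFileName( inputFile ) || isInvalidFileName( outputFile );
const main = R.ifElse( hasInvalidArgs, printHelp, execute );
main( argv )
  .then( () => console.log("Done!") )
  .catch( console.error );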
Hello, I'm trying to figure out whether there is an equivalent to the RxJS zip operator in xstream, or at least a way to get the same behaviour. In case anyone needs clarification on the difference, the marble diagrams below show it.
zip in rxjs
|---1---2---3-----------5->
|-a------b------c---d----->
"zip"
|-1a----2b------3c-----5d->
whereas 'combineLatest' aka 'combine' in xstream does
|---1---2----------4---5->
|----a---b---c---d------->
"combine"
|-1a----2a-2b-2c-2d-4d-5d>
Any help is appreciated as I'm very new to programming with streams. Thank you in advance!
I also needed a zip operator for xstream. So I created my own from existing operators. It takes an arbitrary number of streams for zipping.
import xs from 'xstream';

function zip(...streams) {
// Wrap the events on each stream with a label
// so that we can separate them into buckets later.
const streamsLabeled = streams
.map((stream$, idx) => stream$.map(event => ({label: idx + 1, event: event})));
return (event$) => {
// Also label the events on the stream this operator is composed onto,
// so that they end up in their own bucket as well.
const eventLabeled$ = event$.map(event => ({label: 0, event: event}));
const labeledStreams = [eventLabeled$, ...streamsLabeled];
// Create the buckets used to store stream events
const buckets = labeledStreams.map((stream, idx) => idx)
.reduce((buckets, label) => ({...buckets, [label]: []}), {});
// Initial value for the fold operation
const accumulator = {buckets, tuple: []};
// Merge all the streams together and accumulate them
return xs.merge(...labeledStreams).fold((acc, event) => {
// Buffer the events into separate buckets
acc.buckets[event.label].push(event);
// Does the first value of all the buckets have something in it?
// If so, then there is a complete tuple.
const tupleComplete = Object.keys(acc.buckets)
.every(key => acc.buckets[key][0] !== undefined);
// Save completed tuple and remove it from the buckets
if (tupleComplete) {
acc.tuple = [...Object.keys(acc.buckets).map(key => acc.buckets[key][0].event)];
Object.keys(acc.buckets).forEach(key => acc.buckets[key].shift());
} else {
// Clear tuple since all columns weren't filled
acc.tuple = [];
}
return {...acc};
}, accumulator)
// Only emit when we have a complete tuple
.filter(buffer => buffer.tuple.length !== 0)
// Just return the complete tuple
.map(buffer => buffer.tuple);
};
}
This can be used with compose.
foo$.compose(zip(bar$)).map(([foo, bar]) => doSomething(foo, bar))
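For instance, a small end-to-end sketch using the zip function and the xs import from above (the periodic sources and the string mapping are made up purely for illustration):
const foo$ = xs.periodic(1000);                                        // 0, 1, 2, ...
const bar$ = xs.periodic(1500).map(i => String.fromCharCode(97 + i));  // 'a', 'b', 'c', ...

foo$.compose(zip(bar$))
  .map(([foo, bar]) => `${foo}${bar}`)
  .addListener({ next: pair => console.log(pair) });                   // "0a", "1b", "2c", ...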