I'm now reading "Mostly Adequate Guide to Functional Programming" by Professor Frisby, and I was wondering how we can do pure fetching. I saw this piece of code in the book:
const pureHttpCall = memoize((url, params) => () => $.getJSON(url, params));
And it says:
The interesting thing here is that we don't actually make the http call - we instead return a function that will do so when called. This function is pure because it will always return the same output given the same input: the function that will make the particular http call given the url and params.
That left me confused: I don't understand how pureHttpCall is now pure while the underlying code (the fetching part) is still impure.
So what am I missing here that makes the code pure and functional?
What makes the function pure is this:
memoize( ... )
Memoizing is a special kind of caching. You may be familiar with caching: you store the result of an expensive operation in a variable, and if that variable already exists you return its value instead of performing the expensive operation again. Memoizing is exactly that.
The main difference between memoizing and caching is that a memo is a permanent cache. Most commonly used caching algorithms have an invalidation process: it could be time based (delete an entry if it is older than x), count based (delete entries if there are more than x items cached), memory based (delete entries if the cache uses more than x megabytes of RAM), etc. Memoizing never deletes the cache. Therefore the expensive operation is done only once.
The fact that the operation is done only once means that every call to pureHttpCall is guaranteed to return the same result. That is what makes the function pure.
Yes, you can do the same to Math.random(): memoizing it makes every call to the resulting pureRandom return exactly the same number. That would make pureRandom pure, because it always returns the same result, although I personally would not call it a "random" function anymore.
A very simple implementation of memoize could be something like this:
function memoize(fn) {
  let cache;
  return function () {
    if (cache === undefined) {
      cache = fn();
    }
    return cache;
  };
}
The above code is 100% synchronous for clarity but it is possible to write an asynchronous version of a memoization function.
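The single-value version above only works for functions taking no arguments. A sketch of an argument-aware memoize, keying the cache by the serialized argument list, is below. This is an assumption about how the book's memoize might behave, not its actual implementation, and the stand-in for $.getJSON is hypothetical so the example is self-contained:

```javascript
// Argument-aware memoize: results are cached per serialized argument
// list, so repeated calls with the same arguments return the very
// same cached value. JSON.stringify as a cache key is a simplifying
// assumption (it only works for serializable arguments).
function memoizeByArgs(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
}

// Stand-in for $.getJSON so the sketch is runnable without jQuery:
const pureHttpCall = memoizeByArgs((url, params) => () => ({ url, params }));
```

Calling pureHttpCall twice with the same url and params now returns the identical thunk, which is exactly the "same input, same output" guarantee the book is pointing at.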
As I understand it, the author is arguing that because the caching layer is added, the function now has predictable results:
Once you have called it with a given combination of input arguments, that same combination will always return the result it produced the first time.
So if you unplug your ethernet cable and do pureHttpCall("http://google.com") you will get an error response back. If you now replug your cable, then pureHttpCall("http://google.com") will still produce that exact same error you got the first time.
I don't think I agree, though, because there is an implicit dependency here. The function still depends on the environment at the time it is first called. For example, if you restart your program between unplugging and replugging your ethernet cable, you will get different results.
I am reading a book which says that one way to handle impure functions is to inject them into the function instead of calling them directly, as in the example below.
normal function call:
const getRandomFileName = (fileExtension = "") => {
  ...
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = getRandomLetter();
  }
  ...
};
inject and then function call:
const getRandomFileName2 = (fileExtension = "", randomLetterFunc = getRandomLetter) => {
  const NAME_LENGTH = 12;
  let namePart = new Array(NAME_LENGTH);
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = randomLetterFunc();
  }
  return namePart.join("") + fileExtension;
};
The author says such injections can be helpful when we are trying to test the function, as we can pass in a function whose result we know, and get more predictable behaviour from the function under test.
Is there any difference between the above two functions in terms of purity? As I understand it, the second function is still impure even after the injection.
An impure function is just a function that has one or more side effects, or whose output is not determined solely by its given inputs.
That is if it mutates data outside of its scope and does not predictably produce the same output for the same input.
In the first example, NAME_LENGTH is defined outside the scope of the function, so if that value changes, the behaviour of getRandomFileName also changes, even if we supply the same fileExtension each time. Likewise, getRandomLetter is defined outside the scope and almost certainly produces random output, so it is inherently impure.
In the second example, everything is referenced in the scope of the function, passed to it, or defined in it. This means that it could be pure, but isn't necessarily: some functions are inherently impure, so it depends on how randomLetterFunc is defined.
If we called it with
getRandomFileName2('test', () => 'a');
...then it would be pure - because every time we called it we would get the same result.
On the other hand if we called it with
getRandomFileName2(
  'test',
  () => 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.charAt(Math.floor(26 * Math.random()))
);
It would be impure, because calling it each time would give a different result.
There's more than one thing at stake here. At one level, as Fraser's answer explains, assuming that getRandomLetter is impure (by being nondeterministic), then getRandomFileName also is.
By making getRandomFileName2 a higher-order function, you at least give it the opportunity to be a pure function. Assuming that getRandomFileName2 performs no other impure action, if you pass it a pure function, it will itself, transitively, be pure.
If you pass it an impure function, it will also, transitively, be impure.
Giving a function an opportunity to be pure can be useful for testing, but doesn't imply that the design is functional. You can also use dependency injection and Test Doubles to make objects deterministic, but that doesn't make them functional.
In most languages, including JavaScript, you can't get any guarantees from functions as first-class values. A function of a particular 'shape' (type) can be pure or impure, and you can check this neither at compile time nor at run time.
In Haskell, on the other hand, you can explicitly declare whether a function is pure or impure. In order to even have the option of calling an impure action, a function must itself be declared impure.
Thus, the opportunity to be impure must be declared at compile time. Even if you pass a pure 'implementation' of an impure type, the receiving, higher-order function still looks impure.
While something like described in the OP would be technically possible in Haskell, it would make everything impure, so it wouldn't be the way you go about it.
What you do instead depends on circumstances and requirements. In the OP, it looks as though you need exactly 12 random values. Instead of passing an impure action as an argument, you might instead generate 12 random values in the 'impure shell' of the program, and pass those values to a function that can then remain pure.
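A sketch of that split, with illustrative helper names that are not from the book:

```javascript
// Pure core: builds a file name from the letters it is given.
// No randomness happens inside this function.
const makeFileName = (letters, fileExtension = "") =>
  letters.join("") + fileExtension;

// Impure shell: the randomness lives at the edge of the program.
const getRandomLetter = () =>
  "abcdefghijklmnopqrstuvwxyz".charAt(Math.floor(Math.random() * 26));

// Generate the 12 random values impurely, then hand them to the pure core:
const letters = Array.from({ length: 12 }, getRandomLetter);
const fileName = makeFileName(letters, ".txt");
```

The design choice here is that makeFileName stays referentially transparent and trivially testable, while all nondeterminism is confined to two lines at the program's edge.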
There's more at stake than just testing. While testability is nice, the design suggested in the OP will most certainly be impure 'in production' (i.e. when composed with a proper random value generator).
Impure actions are harder to understand, and their interactions can be surprising. Pure functions, on the other hand, are referentially transparent, and referential transparency fits in your head.
It'd be a good idea to have as a goal pure functions whenever possible. The proposed getRandomFileName2 is unlikely to be pure when composed with a 'real' random value generator, so a more functional design is warranted.
Anything that contains randomness (or Date, or similar) will be considered impure and hard to test, because what it returns doesn't depend strictly on its inputs (it is always different). However, if the random part of the function is injected, the function can be made "pure" in the test suite by replacing the injected randomness with something predictable.
function getRandomFileName(fileExtension = "", randomLetterFunc = getRandomLetter) { /* ... */ }
can be tested by calling it with a predictable "getLetter" function instead of a random one:
getRandomFileName("", predictableLetterFunc)
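Concretely, such a test might pin the injected function to a constant. This sketch reuses getRandomFileName2 from the question, reproduced here so the example is self-contained (the NAME_LENGTH of 12 matches the question's code):

```javascript
// The injectable version from the question:
const getRandomFileName2 = (fileExtension = "", randomLetterFunc) => {
  const NAME_LENGTH = 12;
  let namePart = new Array(NAME_LENGTH);
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = randomLetterFunc();
  }
  return namePart.join("") + fileExtension;
};

// In a test, inject a predictable letter function:
const predictableLetterFunc = () => "a";
const result = getRandomFileName2(".txt", predictableLetterFunc);
// result === "aaaaaaaaaaaa.txt" on every run
```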
I was writing some code for a widget in "Scriptable" that shows a random word at a certain time. The widget calls itself with a timeout on iPad, but it's exceeding the stack size. How do I solve this? I am using JavaScript within the "Scriptable" framework, so I don't have much freedom.
// kanjiObj: Object; kanjiKeys: List; Timer().schedule(timeinterval, repeat, callback)
var randomNum = Math.floor(Math.random() * 150) + 1
var kanji = kanjiKeys[randomNum]
var kanjiMeaning = kanjiObj[kanjiKeys[randomNum]]

if (config.runsInWidget) {
  let widget = createWidget()
  Script.setWidget(widget)
  Script.complete()
}

function createWidget() {
  let widget = new ListWidget()
  widget.addText(kanji + "=>" + kanjiMeaning)
  widget.wait = new Timer().schedule(1000 * 60 * 60, 150, createWidget())
  return widget
}
In your case, you are calling the createWidget() function recursively by mistake. Your intention was different: you wanted to pass the function as a parameter to the schedule function, which accepts a function callback as its last parameter.
So you have to change to
widget.wait = new Timer().schedule(1000*60*60, 150, () => createWidget())
or
widget.wait = new Timer().schedule(1000*60*60, 150, createWidget)
But since this question is about "call stack size exceeded", people with a different variant of this problem could stumble upon it, so I will also answer in a general way below.
Programming is about recognizing patterns, and what you have in your code is the pattern (structure) of an infinite recursive call:
function someFunction() {
  // ... code before the recursive call that never returns from this function

  // an unconditional call (or a conditional chain that is always true)
  // of the function itself in the function scope instead of inside a callback/lambda
  someFunction()

  // ... could be more code, which does not matter,
  // as long as the pattern above still holds true
}
A structure (pattern) like this results in an infinite recursive call, which eventually ends with "maximum call stack size exceeded". The call stack size is limited, and an infinite recursive call eventually fills it up.
Javascript does not have built-in tail-recursion optimization in practice (check compatibility tables), but in programming languages that do, a tail-recursive function would be optimized to a loop. An infinite tail-recursive function would become an infinite loop, which would not fill the call stack; it would just loop forever (working, but if not implemented correctly it could make the application unresponsive).
So to resolve a problem like this in general, one has to break the pattern by breaking at least one of the conditions above. This is the list:

- code before the recursive call that never returns from this function
  - add a conditional return statement (also called a stop condition)
- an unconditional call (or a conditional chain that is always true) to the function itself in the function scope instead of inside a callback/lambda
  - make the call conditional (make sure the condition chain can be false) or put it inside a callback/lambda. When you put it inside a callback/lambda, a different pattern applies (you have to check the logic inside the call that will be calling the callback), but calling the callback still has to be conditional; it has to have a limit at some point.
- after making a change, the code after the call needs to be checked for the same pattern again; if the pattern is there again, break it in a similar way. Keep repeating this until the whole function no longer forms the pattern and has stop condition(s) where needed.
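As a minimal illustration of the first fix, a stop condition bounds the recursion:

```javascript
// The recursive call is guarded by a stop condition, so the recursion
// has bounded depth instead of filling the call stack.
function countDown(n) {
  if (n <= 0) {
    return "done"; // stop condition: return before recursing
  }
  return countDown(n - 1); // the recursive call is now conditional
}
```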
In cases when you do need an infinite recursive call and the function is tail-recursive and your language does not support tail-recursion optimization, you would need to do the tail optimization yourself or use a library/framework that lets you do that.
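Doing the tail optimization yourself can be as simple as a trampoline. This is a generic sketch, not tied to any particular library:

```javascript
// Trampoline: the tail-recursive step returns a thunk instead of
// calling itself, and the driver loops until a non-function value
// comes back, so the call stack never grows.
function trampoline(fn) {
  return function (...args) {
    let result = fn(...args);
    while (typeof result === "function") {
      result = result();
    }
    return result;
  };
}

// Sums 1..n without deep recursion, even for very large n:
const sumTo = trampoline(function step(n, acc = 0) {
  return n === 0 ? acc : () => step(n - 1, acc + n);
});
```

The trade-off is that every step allocates a closure, but in exchange sumTo(100000) completes where a naive recursive version would blow the stack.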
If this does not solve your problem, have a look at the answers in this question that has collected a lot of reasons why a "maximum call stack size exceeded" might happen in Javascript.
I am wondering: could it be a good approach to always return promises from functions in JavaScript?
Let's imagine the case where we have a function that validates a username. A master function simply utilises 2 other functions that perform different checks.
Please note, all functions names are just examples.
// Returns a boolean
function validateUsername(username) {
  return validateUsernameFormat(username) &&
    isUsernameNotReserved(username);
}

// Returns a boolean
function validateUsernameFormat(username) {
  return typeof username === 'string' &&
    username.match(/^[a-z0-9]{8,20}$/);
}

// Returns a boolean
function isUsernameNotReserved(username) {
  return ['igor', 'kristina'].indexOf(username) === -1;
}
Now let's imagine we augment our validation in the future by calling API to check if a given username already exists in our database.
// Now returns a promise
function isUsernameNotReserved(username) {
  return API.checkIfUserNameAlreadyExists(username);
}
This means we would now also have to change the master validateUsername function, since it now needs to return a promise as well. That would probably mean modifying all functions that use validateUsername, too.
But what if all the functions returned promises from scratch?
Option A - All functions return promises
// Returns a promise
function validateUsername(username) {
  return validateUsernameFormat(username)
    .then(() => {
      return isUsernameNotReserved(username);
    });
}

// Returns a promise
function validateUsernameFormat(username) {
  return (
    typeof username === 'string' && username.match(/^[a-z0-9]{8,20}$/) ?
      Promise.resolve() : Promise.reject()
  );
}

// Returns a promise
function isUsernameNotReserved(username) {
  return (
    ['igor', 'kristina'].indexOf(username) === -1 ?
      Promise.resolve() : Promise.reject()
  );
}
Now if we want to augment isUsernameNotReserved with asynchronous API call, we don't need to change anything else.
Option B - Only functions calling another functions return promises
Another option would be to return promises only from functions that call other functions. In that case, only validateUsername would need to be written as a promise from scratch.
Is this a good approach? What could be the drawbacks apart from performance?
Upd: I ran a simple performance test, and though running consecutive promises is slower, it practically should not make any difference, since running 100000 consecutive functions takes ~200ms, while running 1000 takes ~3ms in Chrome. Fiddle here: https://jsfiddle.net/igorpavlov/o7nb71np/2/
Should I always return promises in all functions in JavaScript?
No.
If you have a function that performs an asynchronous operation or may perform an asynchronous operation, then it is reasonable and generally good design to return a promise from that function.
But if your function is entirely synchronous, and no reasonable amount of forethought suggests it will soon contain an asynchronous operation, then there are several reasons why you should not return a promise from it:
Asynchronous code writing and testing is more complicated than synchronous code writing and testing. So, you really don't want to make code harder to write and test than it needs to be. If your code can be synchronous (and just return a normal value), then you should do so.
Every .then() handler gets called on the next tick (guaranteed asynchronously) so if you take a whole series of synchronous operations and force each function to wait until the next tick of the event loop, you're slowing down code execution. In addition, you're adding to the work of the garbage collector for every single function call (since there's now a promise object associated with every single function call).
Losing the ability to return a normal value from a synchronous function is a huge step backwards in the language tools that you can conveniently use to write normal code. You really don't want to give that up on every single function.
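To make the cost concrete, here is a small sketch contrasting the two styles (syncDouble and asyncDouble are illustrative names, not from the question):

```javascript
// Synchronous version: the value is usable immediately.
function syncDouble(x) {
  return x * 2;
}

// Promise-wrapped version: the value is only reachable via .then()
// or await, one microtask later, and a promise object is allocated
// on every single call.
function asyncDouble(x) {
  return Promise.resolve(x * 2);
}

const direct = syncDouble(21); // 42, available right now
asyncDouble(21).then((v) => {
  // this runs only after the current synchronous code has finished
  console.log(v);
});
```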
Now if we want to augment isUsernameNotReserved with asynchronous API call, we don't need to change anything else.
A good API design would anticipate whether an asynchronous API is relevant or likely useful in the near future and make the API async only in that case. Because asynchronous APIs are more work to write, use and test, you don't want to unnecessarily just make everything async "just in case". But, it would be wise to anticipate if you are likely to want to use an async operation in your API in the future or not. You just don't want to go overboard here. Remember, APIs can be added to or extended in the future to support new async things that come along so you don't have to over complicate things in an effort to pre-anticipate everything that ever might happen in the future. You want to draw a balance.
"Balance" seems like the right word here. You want to balance the likely future needs of your API with developer simplicity. Making everything return a promise does not draw a proper balance, as it chooses infinite flexibility in the future while over complicating things that don't need to be as complicated.
In looking at a couple of your particular examples:
validateUsernameFormat() does not seem likely to ever need to be asynchronous so I see no reason for it to return a promise.
isUsernameNotReserved() does seem like it may need to be asynchronous at some point since you may need to look some data up in a database in order to determine if the user name is available.
Though, I might point out that checking if a user name is available before trying to just create it can create a race condition. This type of check may still be used as UI feedback, but when actually creating the name, you generally need to create it in an atomic way such that two requests looking for the same name can't collide. Usually this is done by creating the name in the database with settings that will cause it to succeed if the name does not already exist, but fail if it does. Then, the burden is on the database (where it belongs) to handle concurrency issues for two requests both trying to create the same user name.
var identifier = '0';
var serviceCodes = parseServiceCodes(identifier);
console.log(serviceCodes);
function parseServiceCodes(id) {
  var serviceCodes = id + 'HYJSIXNS';
  return serviceCodes;
}
0HYJSIXNS is logged to the console. But I thought that since JavaScript is asynchronous, the variable would be assigned before the function returned, making it null.
Javascript isn't asynchronous by default, and code is executed in order unless you specify that it shouldn't be. You're never making a call to an external resource, so there's nothing to be async here.
JS executes statements in order, one after another. In your example, at the parseServiceCodes call site, execution goes from line 2 (the call site) to line 6 (the function body).
Async behavior is introduced for time consuming operations, like HTTP requests. In this case, the expensive function either takes a callback parameter (code to run once the expensive operation is over) or returns a Promise which "resolves" with the computed data when ready.
To illustrate the first case, let's say we have a function getFromServer which needs a URI and a callback:
function getFromServer(uri, callback){ // ...
It doesn't matter exactly what happens under the hood, just that execution is given back to the main program after invocation. This means the program isn't blocked for the duration of this expensive operation. When getFromServer is done, it will come back and call callback with some data.
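A runnable sketch of that flow, with setTimeout standing in for the real network delay:

```javascript
// Simulated async operation: setTimeout stands in for an HTTP request.
function getFromServer(uri, callback) {
  setTimeout(() => callback("data for " + uri), 0);
}

console.log("before request");
getFromServer("/users", (data) => console.log("got:", data));
console.log("after request");
// Output order: "before request", "after request",
// then "got: data for /users" once the callback fires
```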
To learn more about the second case, Promises, I encourage you to read the documentation I linked to earlier in this response.
You seem to misunderstand what it means to say that "JavaScript is asynchronous".
JavaScript is asynchronous because it supports and benefits from usage of asynchronicity, which is considered a common design pattern throughout the language by using features such as asynchronous callbacks, promises and even async/await, but this does not mean that all code is asynchronous.
You can easily write synchronous code, as you just demonstrated in your question, and there is no issue specifying a return value that is not a Promise.
This is because the code invoking it expects the return value to contain a string, which it then assigns to serviceCodes and uses that in the same tick.
If any of these terms above confuse you, please refer to their respective documentation and usage I've provided the links for.
There is an array of objects, which is expanded with parallel ajax requests. When the last request is done, the array should be processed. The only solution I see is:
function expandArray(objects, callback) {
  var number_of_requests = objects.length;
  for (var i in objects) {
    $.getJSON(request, function () {
      // expanding array
      if (--number_of_requests == 0) {
        callback();
      }
    });
  }
}
But as the requests are executed in parallel, there is a chance of a race condition: the variable number_of_requests could be edited by two "threads" simultaneously. How do I avoid the race condition?
Is it possible to rework your AJAX so that it all goes in one request? That part of the system will be the biggest bottleneck and can get tricky (as you've found out), so the less requests you make, the better.
The only other way I can think of would be for each request to mutate its related object, setting a flag value or something, and then you loop through all the objects to check whether all the flags have been set yet.
Isn't Javascript single-threaded? The kind of race condition you describe wouldn't occur.
It's more complex, but I would suggest using a third function to monitor the results. Something like this:
1) Start the monitor - using an appropriate interval, monitor the results of requests A and B (using an interval to test here is important - a simple loop would lock the JS engine up tight).
2) Call request A.
3) Call request B.
4) Request B finishes - callback function sets a "B All Done!" value.
5) Request A finishes - callback function sets an "A All Done!" value.
6) Monitor recognizes that both requests are completed and calls function which uses data from both.
Worded more simply, this is "dependencies" (the multiple calls) to "monitor" (the function that checks the dependencies) to "completion" (the action to take when everything is ready).
You can create the callbacks and the completion function as nested functions to maintain encapsulation and reduce litter in the global scope.
You can extend your dependencies as far as you like.
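In modern JavaScript, the counter/monitor bookkeeping can be replaced by Promise.all, which invokes the continuation exactly once when every request has settled. fetchJSON below is a stand-in for the real $.getJSON call so the sketch is self-contained:

```javascript
// Stand-in for an AJAX call: resolves asynchronously with an
// "expanded" version of the object.
function fetchJSON(obj) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ ...obj, expanded: true }), 0)
  );
}

// One promise per request; Promise.all fires once, after all of them.
function expandArray(objects) {
  return Promise.all(objects.map(fetchJSON));
}

expandArray([{ id: 1 }, { id: 2 }, { id: 3 }]).then((results) => {
  // no shared counter, no race concerns
  console.log(results.length); // 3
});
```

Because there is no shared mutable counter at all, the race-condition worry disappears by construction.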