ES8 - Why have padStart/padEnd methods? - javascript

Trying to understand the reasoning behind the support for these two methods in ES8. padEnd, for example, can be achieved using concat, replace, or repeat.
So is it just to have a cleaner way of achieving something that could be a common use case, or is this more efficient than the current alternatives?
Edit: It would help to know why the question was downvoted - was it too opinionated/broad to ask?

It's just for convenience. There are a huge number of functions that could be implemented using other, lower-level means - but when written poorly they result in bugs or inefficient code. Everyone wins when the language adds support for something people often do.
To exaggerate your example - languages don't need for loops either; you can generally write the same sort of code with a while loop. People don't need ternaries - they can be done with a standard if statement. In both examples people would generally need to write more code to achieve the same effect - but why make the coder do that?
I would reverse the question - why do you think they shouldn't include padEnd?

I think your question is asking for use cases of the padStart/padEnd functions, i.e. what prompted these to be included in ECMAScript.
As pointed out above, they are helper functions that let you achieve more with less code. Some typical use cases (a short sketch follows the list):
Displaying tabular data in a monospaced font.
Adding a count or an ID to a file name or a URL: 'file 001.txt'.
Aligning console output: 'Test 001: ✓'.
Printing hexadecimal or binary numbers that have a fixed number of digits: '0x00FF'
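For instance, a minimal sketch of the last two items (the values are made up, purely for illustration):
// Align console output: pad a test number to 3 digits with leading zeros
const testNo = 7;
console.log(`Test ${String(testNo).padStart(3, "0")}: ✓`);           // "Test 007: ✓"
// Print a hexadecimal number with a fixed number of digits
const n = 255;
console.log("0x" + n.toString(16).toUpperCase().padStart(4, "0"));   // "0x00FF"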
You can read more about their use cases and applications here:
http://exploringjs.com/es2016-es2017/ch_string-padding.html
https://www.theregister.co.uk/2017/07/12/javascript_spec_straps_on_padding/

Related

JavaScript BigInt with a gmp-style API, specifically mpfr

I am writing a transpiler from my desktop programming language to JavaScript.
I use gmp on the desktop, so am writing a thin wrapper to mimic the same entry points but use BigInt under the hood.
(NB Emscripten etc NOT involved) So far mpz and mpq are working pretty well, ~30 entry points each, done by hand, so now I'm wondering about mpfr.
Could mpfr be done as mpq with implied/capped denominator of 10^k (where k can be negative), and
accordingly truncated/BigInt numerator? I expect a bit of a struggle with mpfr_const_pi(), mpfr_sin/log/exp(), etc. I say 10^k but am not even certain of that vs 2^k.
I have studied https://github.com/MikeMcl/big.js and friends, but (no offence meant) all of that seems to pre-date BigInt, and I simply cannot find anything that implements floats via BigInt.
In short, what code needs to be in mpfr.js so that the following will work (ideally unaltered)? Obviously, any partial ideas, hints, or tips are just as welcome as a full-blown working example. You can assume (eg) mpz_get_str() is available, or of course you can go with using (say) BigInt.toString() etc. directly, and not overly panic about precisely where the decimal point has to go, or any "%.75Rf"-related nuances. I just need something to get the ball rolling.
<script src="mpfr.js"></script>
<script>
mpfr_set_default_prec(252); // (enough for 75 decimal places)
let one_third = mpfr_init(1); // (ok, non-std syntax, anyway init to 1)
mpfr_div_si(one_third,one_third,3);
console.log(mpfr_sprintf("%.75Rf", one_third));
</script>
I finally found this https://jrsinclair.com/articles/2020/sick-of-the-jokes-write-your-own-arbitrary-precision-javascript-math-library/ and I've now got pretty much everything I needed working.
While it is exactly what I was looking for, I should point out that it is deeply flawed. For instance, there is a frankly outrageous memoize() function liberally applied, which no doubt vastly improved some pointless benchmark but would totally cripple real-world use, and there are other gross inefficiencies, such as exp10(n) returning BigInt(`1${[...new Array(n)].map(() => 0).join("")}`) instead of the much saner 10n ** BigInt(n). Nevertheless it is quite spirited and undeniably well meant, with plenty of good ideas.
Should anyone wish to see the results of my efforts I have uploaded the latest version: https://github.com/petelomax/Phix/blob/master/pwa/builtins/mpfr.js
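For anyone who just wants the gist of the scaled-BigInt idea discussed above, here is a minimal sketch; the names and the fixed decimal scale are assumptions for illustration only, not the actual mpfr.js API:
// Represent a number as value = num / 10n**SCALE, with num a BigInt (positive values only here).
const SCALE = 75n;                        // 75 decimal places (assumed scale)
const ONE = 10n ** SCALE;                 // scaled representation of 1

function divScaled(a, b) {                // (a/ONE) / (b/ONE), result scaled by ONE
  return (a * ONE) / b;                   // BigInt division truncates
}

function toDecimalString(num) {           // place the decimal point by hand
  const s = num.toString().padStart(Number(SCALE) + 1, "0");
  return s.slice(0, -Number(SCALE)) + "." + s.slice(-Number(SCALE));
}

const oneThird = divScaled(ONE, 3n * ONE);
console.log(toDecimalString(oneThird));   // 0.333...3 (75 digits)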

Writing high-performance Javascript code without getting deoptimised

When writing performance-sensitive code in Javascript which operates on large numeric arrays (think a linear algebra package, operating on integers or floating-point numbers), one always wants the JIT to help out as much as possible. Roughly this means:
We always want our arrays to be packed SMIs (small integers) or packed Doubles, depending on whether we're doing integer or floating-point calculations.
We always want to be passing the same type of thing to functions, so that they don't get labelled "megamorphic" and deoptimised. For instance, we always want to be calling vec.add(x, y) with both x and y being packed SMI arrays, or both packed Double arrays.
We want functions to be inlined as much as possible.
When one strays outside of these cases, a sudden and drastic performance drop-off occurs. This can happen for various innocuous reasons (illustrated in a short sketch after the list):
You might turn a packed SMI array into a packed Double array via a seemingly innocuous operation, like the equivalent of myArray.map(x => -x). This is actually the "best" bad case, since packed Double arrays are still very fast.
You might turn a packed array into a generic boxed array, for example by mapping the array over a function which (unexpectedly) returned null or undefined. This bad case is fairly easy to avoid.
You might deoptimise a whole function such as vec.add() by passing in too many types of things and turning it megamorphic. This could happen if you want to do "generic programming", where vec.add() is used both in cases where you're not being careful about types (so it sees a lot of types come in) and in cases where you want to eke out maximum performance (it should only ever receive boxed doubles, for instance).
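To illustrate the first two cases concretely (a rough sketch; the exact element-kind transitions are engine-internal details and may differ between engines and versions):
const a = [1, 2, 3];   // typically stored as a packed SMI array (engine detail)
a[0] = 1.5;            // writing a non-integer turns it into a packed Double array
a[1] = null;           // writing null/undefined turns it into a generic boxed array
                       // these element-kind transitions are one-way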
My question is more of a soft question, about how one writes high-performance Javascript code in light of the considerations above, while still keeping the code nice and readable. Some specific sub-questions so that you know what kind of answer I'm aiming for:
Is there a set of guidelines somewhere on how to program while staying in the world of packed SMI arrays (for instance)?
Is it possible to do generic high-performance programming in Javascript without using something like a macro system to inline things like vec.add() into callsites?
How does one modularise high-performance code into libraries in light of things like megamorphic call sites and deoptimisations? For instance, if I am happily using Linear Algebra package A at high speed, and then I import a package B that depends on A, but B calls it with other types and deoptimises it, then suddenly (without my code changing) my code runs slower.
Are there any good, easy-to-use measurement tools for checking what the Javascript engine is doing internally with types?
V8 developer here. Given the amount of interest in this question, and the lack of other answers, I can give this a shot; I'm afraid it won't be the answer you were hoping for though.
Is there a set of guidelines somewhere on how to program while staying in the world of packed SMI arrays (for instance)?
Short answer: it's right here: const guidelines = ["keep your integers small enough"].
Longer answer: giving a comprehensive set of guidelines is difficult for various reasons. In general, our opinion is that JavaScript developers should write code that makes sense to them and their use case, and JavaScript engine developers should figure out how to run that code fast on their engines. On the flip side, there are obviously some limitations to that ideal, in the sense that some coding patterns will always have higher performance costs than others, regardless of engine implementation choices and optimization efforts.
When we talk about performance advice, we try to keep that in mind, and carefully estimate what recommendations have a high likelihood of remaining valid across many engines and many years, and also are reasonably idiomatic/non-intrusive.
Getting back to the example at hand: using Smis internally is supposed to be an implementation detail that user code doesn't need to know about. It'll make some cases more efficient, and shouldn't hurt in other cases. Not all engines use Smis (for example, AFAIK Firefox/Spidermonkey historically hasn't; I've heard that for some cases they do use Smis these days; but I don't know any details and can't speak with any authority on the matter). In V8, the size of Smis is an internal detail, and has actually been changing over time and over versions. On 32-bit platforms, which used to be the majority use case, Smis have always been 31-bit signed integers; on 64-bit platforms they used to be 32-bit signed integers, which recently seemed like the most common case, until in Chrome 80 we shipped "pointer compression" for 64-bit architectures, which required lowering Smi size to the 31 bits known from 32-bit platforms. If you happened to have based an implementation on the assumption that Smis are typically 32 bits, you'd get unfortunate situations like this.
Thankfully, as you noted, double arrays are still very fast. For numerics-heavy code, it probably makes sense to assume/target double arrays. Given the prevalence of doubles in JavaScript, it is reasonable to assume that all engines have good support for doubles and double arrays.
Is it possible to do generic high-performance programming in Javascript without using something like a macro system to inline things like vec.add() into callsites?
"generic" is generally at odds with "high-performance". This is unrelated to JavaScript, or to specific engine implementations.
"Generic" code means that decisions have to be made at runtime. Every time you execute a function, code has to run to determine, say, "is x an integer? If so, take that code path. Is x a string? Then jump over here. Is it an object? Does it have .valueOf? No? Then maybe .toString()? Maybe on its prototype chain? Call that, and restart from the beginning with its result". "High-performance" optimized code is essentially built on the idea to drop all these dynamic checks; that's only possible when the engine/compiler has some way to infer types ahead of time: if it can prove (or assume with high enough probability) that x is always going to be an integer, then it only needs to generate code for that case (guarded by a type check if unproven assumptions were involved).
Inlining is orthogonal to all this. A "generic" function can still get inlined. In some cases, the compiler might be able to propagate type information into the inlined function to reduce polymorphism there.
(For comparison: C++, being a statically compiled language, has templates to solve a related problem. In short, they let the programmer explicitly instruct the compiler to create specialized copies of functions (or entire classes), parameterized on given types. That's a nice solution for some cases, but not without its own set of drawbacks, for example long compile times and large binaries. JavaScript, of course, has no such thing as templates. You could use eval to build a system that's somewhat similar, but then you'd run into similar drawbacks: you'd have to do the equivalent of the C++ compiler's work at runtime, and you'd have to worry about the sheer amount of code you're generating.)
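As a rough sketch of that last point, a hand-rolled runtime "template" in JavaScript might look something like this (the names are made up; whether this is worth the complexity is exactly the trade-off described above):
// Generate a specialized vector-add function for a given element kind at runtime.
function makeVecAdd(kind) {
  const alloc = kind === "double" ? "new Float64Array(a.length)" : "new Array(a.length)";
  return new Function("a", "b", `
    const out = ${alloc};
    for (let i = 0; i < a.length; i++) out[i] = a[i] + b[i];
    return out;
  `);
}

const addDoubles = makeVecAdd("double");
console.log(addDoubles([0.5, 1.5], [2.5, 3.5])); // Float64Array [3, 5]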
How does one modularise high-performance code into libraries in light of things like megamorphic call sites and deoptimisations? For instance, if I am happily using Linear Algebra package A at high speed, and then I import a package B that depends on A, but B calls it with other types and deoptimises it, then suddenly (without my code changing) my code runs slower.
Yes, that's a general problem with JavaScript. V8 used to implement certain builtins (things like Array.sort) in JavaScript internally, and this problem (which we call "type feedback pollution") was one of the primary reasons why we have entirely moved away from that technique.
That said, for numerical code, there aren't all that many types (only Smis and doubles), and as you noted they should have similar performance in practice, so while type feedback pollution is indeed a theoretical concern, and in some cases can have significant impact, it's also fairly likely that in linear algebra scenarios you won't see a measurable difference.
Also, inside the engine there are many more situations than "one type == fast" and "more than one type == slow". If a given operation has seen both Smis and doubles, that's totally fine. Loading elements from two kinds of arrays is fine too. We use the term "megamorphic" for the situation when a load has seen so many different types that it's given up on tracking them individually and instead uses a more generic mechanism that scales better to large numbers of types -- a function containing such loads can still get optimized. A "deoptimization" is the very specific act of having to throw away optimized code for a function because a new type is seen that hasn't been seen previously, and that the optimized code therefore isn't equipped to handle. But even that is fine: just go back to unoptimized code to collect more type feedback, and optimize again later. If this happens a couple of times, then it's nothing to worry about; it only becomes a problem in pathologically bad cases.
So the summary of all that is: don't worry about it. Just write reasonable code, let the engine deal with it. And by "reasonable", I mean: what makes sense for your use case, is readable, maintainable, uses efficient algorithms, doesn't contain bugs like reading beyond the length of arrays. Ideally, that's all there is to it, and you don't need to do anything else. If it makes you feel better to do something, and/or if you're actually observing performance issues, I can offer two ideas:
Using TypeScript can help. Big fat warning: TypeScript's types are aimed at developer productivity, not execution performance (and as it turns out, those two perspectives have very different requirements from a type system). That said, there is some overlap: e.g. if you consistently annotate things as number, then the TS compiler will warn you if you accidentally put null into an array or function that's supposed to only contain/operate on numbers. Of course, discipline is still required: a single number_func(random_object as number) escape hatch can silently undermine everything, because the correctness of the type annotations is not enforced anywhere.
Using TypedArrays can also help. They have a little more overhead (memory consumption and allocation speed) per array compared to regular JavaScript arrays (so if you need many small arrays, then regular arrays are probably more efficient), and they're less flexible because they can't grow or shrink after allocation, but they do provide the guarantee that all elements have exactly one type.
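For example, a minimal TypedArray sketch (assuming double-precision data):
// Every element is guaranteed to be a 64-bit float; the length is fixed at allocation.
const a = new Float64Array([1.5, 2.5, 3.5]);
const b = new Float64Array([0.5, 0.5, 0.5]);

function vecAdd(out, x, y) {
  for (let i = 0; i < out.length; i++) out[i] = x[i] + y[i];
  return out;
}

console.log(vecAdd(new Float64Array(a.length), a, b)); // Float64Array [2, 3, 4]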
Are there any good, easy-to-use measurement tools for checking what the Javascript engine is doing internally with types?
No, and that's intentional. As explained above, we don't want you to specifically tailor your code to whatever patterns V8 can optimize particularly well today, and we don't believe that you really want to do that either. That set of things can change in either direction: if there's a pattern you'd love to use, we might optimize for that in a future version (we have previously toyed with the idea of storing unboxed 32-bit integers as array elements... but work on that hasn't started yet, so no promises); and sometimes if there's a pattern we used to optimize for in the past, we might decide to drop that if it gets in the way of other, more important/impactful optimizations. Also, things like inlining heuristics are notoriously difficult to get right, so making the right inlining decision at the right time is an area of ongoing research and corresponding changes to engine/compiler behavior; which makes this another case where it would be unfortunate for everyone (you and us) if you spent a lot of time tweaking your code until some set of current browser versions does approximately the inlining decisions you think (or know?) are best, only to come back half a year later to realize that then-current browsers have changed their heuristics.
You can, of course, always measure performance of your application as a whole -- that's what ultimately matters, not what choices specifically the engine made internally. Beware of microbenchmarks, for they are misleading: if you only extract two lines of code and benchmark those, then chances are that the scenario will be sufficiently different (e.g., different type feedback) that the engine will make very different decisions.

why should I use js regex instead of string methods, or vice-versa? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
For example, I have the string 'ala bala aladin'. Now, if I want to check for the word 'aladin', I can do it with both regex and string functions, like so:
if( /aladin/g.test('ala bala aladin') ) { ..... }
or
if( 'ala bala aladin'.indexOf("aladin") !== -1 ) { ..... }
In this case, which is the better way to go: regex or string methods? And why?
Either will get you a result just fine. For simple strings, a regex is likely slower just because a regular expression is a language of its own and the matching code is not as simple as .indexOf(). So, if you just have a straight string with no special regex characters involved, then .indexOf() is likely faster.
But, as with all performance issues, if you really care about performance, then you must measure your particular situation in your relevant browsers to be sure. And, you generally should not favor a solution purely for performance until you know you actually have a material performance issue to worry about.
My guiding principle is to keep my code as simple as possible to solve the desired problem. For that, my choice is to use .indexOf() if I'm just doing a straight string search and to use a regex when I actually need to take advantage of regex features. But .test() is pretty simple too so there really is no wrong answer here.
Go with what you think is the most readable.
FYI, here's a quick jsperf to look at the performance difference. Bigger difference in Firefox (2x). Chrome and IE have less of a difference.
The operation is overall fast enough (we're talking millions of operations per second) that the difference is unlikely to be noticeable in practice, unless this operation is in a tight loop - in which case the creation of the regex object should be outside the loop anyway, which changes it to a different case to test.
http://jsperf.com/regex-vs-indexof919
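For example, the loop case mentioned above might look like this (a sketch, not a benchmark):
// Create the regex once, outside the loop, rather than on every iteration.
const needle = /aladin/;
const lines = ["ala bala aladin", "ala bala", "aladin again"];
let hits = 0;
for (const line of lines) {
  if (needle.test(line)) hits++;
}
console.log(hits); // 2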
In general, RegEx is going to be ever so slightly slower than string operations. But unless you're using this in a really big loop or doing something where performance is really important, the best version to use is the one that makes the most sense to you and is the most readable.
If you can solve without regex, do without regex.
Regular expressions are extremely powerful, but they are not the correct solution for every problem. You should learn enough about them to know when they are appropriate, when they will solve your problems, and when they will cause more problems than they solve.
Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.
--Jamie Zawinski, in comp.emacs.xemacs
http://www.diveintopython.net/regular_expressions/summary.html
There's a lot of things you might want to consider here. Is performance important? Is the string well-defined? Is case-sensitivity an issue? Will there be optional characters in the string?
The indexOf() method is fast but not very flexible, so it can only really test vs. exact matches. It can't tell you how many times it matched, just where.
If you're testing vs. a specific string, use a regular expression by default. That way you can always add customization later, like:
/aladin/i.test(...) // Test in a case-insensitive manner
/aladd?in/i.test(...) // Allow "aladdin" as well
User input is seldom neat and tidy. If you learn how to use regular expressions effectively you can cover a lot of crazy edge cases quite neatly.
If you absolutely need speed, or position information, indexOf() has you covered. I'd only be concerned about speed if you run this thing literally a million times in a row. For anything less than that the difference will be immeasurable.
Personally, I would use regex from a readability standpoint. It's easier to understand what is going on with a regex than with an indexOf() result compared against -1.

Javascript: How to use eval() safely [duplicate]

This question already has answers here:
When is JavaScript's eval() not evil?
(27 answers)
Closed 8 years ago.
I am building a little game and I've gotten to the point where I need to calculate data in the tooltips of abilities, which is unique to each individual unit. So to do this I figured I'm basically going to need a formula. I don't know if this is the way it's supposed to be done, but here's what I've come up with:
tip = 'Hurls a fire ball at your enemy, dealing [X] damage.';
formula = '5 * unit.magicPower * abilitylevel';
So for each unit's tool tip I use
tip.replace('[X]', eval(formula))
Which appears to work fine, but what I'm concerned about is the safety of this code. I've seen people discourage its use more than once or twice. Are there any potential issues that may occur with the way I'm using eval()?
As long as you control the input into eval, it's safe to use it. The concern comes in when you're using it to process input that you don't control. At that point, it becomes unsafe because it's a full JavaScript parser but people sometimes try to use it as just an expression evaluator (for instance, when parsing JSON from a source they don't control).
The other objection is that it's firing up a full JavaScript parser (and so in theory costly), but frankly unless you're doing this hundreds of thousands of times in a tight loop, it's not going to matter.
eval is very dangerous if any of the expression is supplied by the user. If you're constructing it entirely from built-in components, it's not very dangerous. However, there are still usually better ways of accomplishing it, such as calling closures.
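For instance, a minimal sketch of the closure/function approach using the question's own example (the unit object here is made up):
// Store the formula as a function instead of an eval'd string.
const formula = (unit, abilityLevel) => 5 * unit.magicPower * abilityLevel;

const tip = 'Hurls a fire ball at your enemy, dealing [X] damage.';
const unit = { magicPower: 10 };                    // hypothetical unit
console.log(tip.replace('[X]', formula(unit, 2)));  // "... dealing 100 damage."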
The basic rule of thumb is to make sure that you only ever pass your own data/information through eval().
You can't stop someone with tools like Firebug if they want to mess with stuff obviously but that is what server-side validation is about.
Now if you're talking about server-side eval() then you really have to be careful. Unfortunately there are a lot of uncooperative people working on JavaScript and its implementations in browsers, so you'll be forced to use eval() in JavaScript; I've never had to use it in PHP.

Is metaprogramming possible in Javascript?

During my routine work, I happened to write a chained JavaScript function, something like a LINQ expression, to query a JSON result.
var Result = from(obj1).as("x").where("x.id=5").groupby("x.status").having(count("x.status") > 5).select("x.status");
It works perfectly and gives the expected result.
I was wondering whether it would look even better if the code could be written like this (in a more readable way):
var Result = from obj1 as x where x.status
groupby x.status having count(x.status) > 5
select x.status;
Is there a way to achieve this?
Cheers
Ramesh Vel
No. JavaScript doesn't support this.
But this looks quite good too:
var Result = from(obj1)
.as("x")
.where("x.id=5")
.groupby("x.status")
.having(count("x.status") > 5)
.select("x.status");
Most people insist on trying to metaprogram from inside their favorite language. That doesn't work if the language doesn't support metaprogramming well; other answers have observed that JavaScript does not.
A way around this is to do metaprogramming from outside the language, using program transformation tools. Such tools can parse source code, carry out arbitrary transformations on it (that's what metaprogramming does anyway), and then spit out the revised program.
If you have a general purpose program transformation system that can parse arbitrary languages, you can then do metaprogramming on/with whatever language you like. See our DMS Software Reengineering Toolkit for such a tool; it has robust front ends for C, C++, Java, C#, COBOL, PHP, ECMAScript, and a number of other programming languages, and has been used for metaprogramming on all of these.
In your case, you want to extend the JavaScript grammar with new syntax for SQL queries, and then transform them to plain JavaScript. (This is a lot like Intentional Programming)
DMS will easily let you build a JavaScript dialect with additional rules, and then you can use its program transformation capabilities to produce the equivalent standard Javascript.
Having said that, I'm not a great fan of "custom syntax for every programmer on the planet", which is where Intentional Programming leads IMHO.
This is a good thing to do if there is a large community of users that would find this valuable. This idea may or may not be one of them; part of the problem is you don't get to find out without doing the experiment, and it might fail to gain enough social traction to matter.
Although not quite what you wanted, it is possible to write parsers in JavaScript and just parse the query (stored as a string) and then execute it. E.g., using libraries like http://jscc.jmksf.com/ (no doubt there are others out there), it shouldn't be too hard to implement.
But what you have in the question looks great already; I'm not sure why you'd want it to look the way you suggested.
Considering that this question is asked some years ago, I will try to add more to it based on the current technologies.
As of ECMAScript 6, metaprogramming is now supported in a sense via Symbol, Reflect and Proxy objects.
By searching on the web, I found a series of very interesting articles on the subject, written by Keith Kirkel:
Metaprogramming in ES6: Symbols and why they're awesome
In short, Symbols are new primitives that can be added inside an object (without practically being properties) and are very handy for passing metaprogramming properties to it among others. Symbols are all about changing the behavior of existing classes by modifying them (Reflection within implementation).
Metaprogramming in ES6: Part 2 - Reflect
In short, Reflect is effectively a collection of all of those “internal methods” that were available exclusively through the JavaScript engine internals, now exposed in one single, handy object. Its usage is analogous to the Reflection capabilities of Java and C#. They are used to discover very low level information about your code (Reflection through introspection).
Metaprogramming in ES6: Part 3 - Proxies
In short, Proxies are handler objects, responsible for wrapping objects and intercepting their behaviors through traps (Reflection through intercession).
Of course, these objects provide specific metaprogramming capabilities, much more restrictive compared to metaprogramming languages, but still can provide handy ways of basic metaprogramming, mainly through Reflection practices, in fact.
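A minimal sketch of the Proxy flavour of this (trapping property reads; purely illustrative):
// Intercept property access on an object with a Proxy "get" trap.
const target = { status: "ok" };
const logged = new Proxy(target, {
  get(obj, prop, receiver) {
    console.log(`read: ${String(prop)}`);
    return Reflect.get(obj, prop, receiver);
  }
});
console.log(logged.status); // logs "read: status", then "ok"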
In the end, it is worth mentioning that there is some noteworthy ongoing research work on staged metaprogramming in JavaScript.
Well, in your code sample:
var Result = from(obj1)
.as("x")
.where("x.id=5")
.groupby("x.status")
.having(count("x.status") > 5)
.select("x.status");
The only problem I see (other than select used as an identifier) is that you embed a predicate as a function argument. You'd have to make it a function instead:
.having(function(x){ return x.status > 5; })
JavaScript has closures and dynamic typing, so you can do some really nifty and elegant things in it. Just letting you know.
In pure JS, no, you cannot. But with the right preprocessor it is possible.
You can do something similar with sweet.js macros or (God forgive me) GPP.
What you want is to change the JavaScript parser into an SQL parser. It wasn't created to do that; the JavaScript syntax doesn't allow you to.
What you have is 90% like SQL (it maps straight onto it) and 100% valid JavaScript, which is a great achievement. My answer to the question in the title is: YES, metaprogramming is possible, but NO, it won't give you an SQL parser, since it's bound to use JavaScript grammar.
Maybe you want something like JSONPath if you've got JSON data. I found this at http://www.json.org/. Lots of other tools linked to from there if it's not exactly what you need.
(this is being worked on as well: http://docs.dojocampus.org/dojox/json/query)
