Hello everybody.
A few days ago an interviewer asked me a question, and I couldn't answer it. Maybe some JS guru on this site can. =)
We have just one string: VARNAME[byte][byte][byte][byte], where each [byte] is a slot for one character.
Question: how do you fill it in so that it is correct JS, if each pair [byte][byte], read as hex, MUST BE NOT MORE than 1000 in decimal?
I tried the following:
1) VARNAME[20][3D][09][30], which is equal to
2) VARNAME<space>=<tab>0, and that is correct JS code, BUT!
3) 0x203D = 8253 in decimal: not correct, must be <= 1000
0x0930 = 2352: not correct, must be <= 1000!
I tried replacing 20 with 09; then:
0x093D = 2365, which is better, but still more than 1000 =(
How can I do it? The interviewer says it is possible because the characters can be anything (I mean
varname;<space><space><space> etc.), but he can't tell me the answer.
Can anyone figure it out, guys?
The question as described has no answer.
The lowest code point that can appear in an expression context after a variable reference is \u0009 which, as you point out, results in a value greater than 1000 (>= 2304). The ECMAScript 5 specification requires JavaScript environments to generate an early error when an invalid character is encountered. The only things legal here are an identifier continuation character or an InputElementDiv, which is either WhiteSpace, LineTerminator, Comment, Token, or DivPunctuator; none of these allow code points in the range \u0000-\u0003, which is what would be required for the question to have an answer.
There are some environments that terminate parsing when a \u0000 is encountered (the C end-of-string character), but those do not conform to ES5 in this respect.
The statement that JavaScript allows any character in this position is simply wrong.
This all changes if VARNAME is in a string or a regular expression, however, both of which can take characters in the range \u0000-\u0003. If this is the trick the interviewer was looking for, I can only say that it was an unfair question.
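A quick sketch of both cases (my own illustration, run under Node):

```javascript
// Control characters \u0000-\u0003 are perfectly legal inside a string
// literal, but cause an early SyntaxError after an identifier in code.
const s = "VARNAME\u0001\u0002\u0003"; // valid string: 10 characters
console.log(s.length); // → 10

let isSyntaxError = false;
try {
  // \u0001 directly after an identifier is not WhiteSpace, a Token,
  // or any other legal InputElementDiv, so parsing fails early.
  eval("var x\u0001 = 1");
} catch (e) {
  isSyntaxError = e instanceof SyntaxError;
}
console.log(isSyntaxError); // → true
```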
Remember: in an interview, you are interviewing the company as much as, or more than, the company is interviewing you. I would have serious reservations about joining a company that considers such a question valid for an interview.
First post on here!
I've done a couple of hours of research and I can't seem to find any actual answers to this, though it may be my understanding that's wrong.
I want to convert a string, let's say "Hello 123", into any base N, let's say N = 32 for simplicity.
My Attempt
Using JavaScript's built-in methods (found through other websites):
function stringToBase(string, base) {
  return parseInt(string, 10).toString(base);
}
So, this parses the string as a base-10 (decimal) number and then converts it to the base I want. However, the caveat is that it only works for bases from 2 to 36, which is good, but not really the range I'm looking for.
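For reference, here is what that built-in round trip actually does, and why it can't encode arbitrary text on its own:

```javascript
// parseInt reads the string as a base-10 number; toString re-renders it
// in the target base using the fixed 0-9a-z alphabet (bases 2-36 only).
console.log(parseInt("255", 10).toString(32)); // → "7v"
console.log(parseInt("255", 10).toString(2));  // → "11111111"

// A non-numeric string yields NaN, so this approach alone cannot
// encode text like "Hello 123".
console.log(parseInt("Hello 123", 10)); // → NaN
```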
More
I'm aware that I can use the JS BigInt, but I'm looking to convert with bases as high as 65536, using an arbitrary character set that doesn't stop at ASCII (yes, I'm aware it's completely useless; I'm just having some fun and I'm very persistent). Most solutions I've seen use an alphabet string or array (e.g. "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+-").
I've seen a couple of threads saying that encoding to a radix that is not divisible by 2 won't work. Is that true? After all, bases 85 and 91 exist.
I know that the methods atob() and btoa() exist, but those are only for radix/base 64.
Some links:
I had a look at this GitHub page: https://github.com/gliese1337/base-to-base/blob/main/src/index.ts , but it's in TypeScript and I'm not even sure what's going on.
This one is in JS: https://github.com/adanilo/base128codec/blob/master/b128image.js . It makes a bit more sense than the last one, but the fact that there is a whole GitHub repo just for base 128 sort of implies that these encodings are all unique and may not be easily converted between.
This is the end goal, the last and final base: https://github.com/qntm/base65536 . The output for "Hello World!", for instance, is "驈ꍬ啯𒁗ꍲ噤".
(I can code Java much better than JS, so if there is a Java solution, please let me know as well.)
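For what it's worth, the general technique behind the linked repos can be sketched with BigInt. This is my own illustrative code (the names stringToBigInt and toBaseNDigits are invented here); a production base-65536 encoder like qntm's picks its 65536 code points carefully, because naive code points would include surrogates and control characters:

```javascript
// Treat the string's UTF-8 bytes as one big number, then repeatedly
// divide by N to extract base-N digits. Works for any N >= 2.
function stringToBigInt(str) {
  let n = 0n;
  for (const byte of Buffer.from(str, "utf8")) {
    n = (n << 8n) | BigInt(byte); // bytes are base-256 digits
  }
  return n;
}

function toBaseNDigits(str, base) {
  const b = BigInt(base);
  let n = stringToBigInt(str);
  const digits = [];
  while (n > 0n) {
    digits.unshift(Number(n % b)); // least-significant digit first
    n /= b;
  }
  return digits;
}

// "Hi" is bytes 0x48 0x69, i.e. the number 0x4869 = 18537.
console.log(toBaseNDigits("Hi", 32)); // → [ 18, 3, 9 ]
```

Each digit can then be mapped through any alphabet you like, e.g. `String.fromCodePoint` over a safe 65536-character set.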
I'm doing char golfing these days in different languages. I was skeptical at first because it's totally disconnected from 'real world' practice, but I ended up loving it for its educational value: I learned a LOT about my languages in the process.
And let's admit it, it's fun.
I'm currently trying to learn tricks in JS, and here's the latest one I found:
Say, you have this script:
for(i=5;i--;)print(i*i) (23 chars)
The script is made of ASCII chars, each of which is basically a pair of hex digits.
For example, 'f' is 66 and 'o' is 6f.
So if you group the information of these two chars you get 666f, which is the UTF-16 code of one char: 景
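That pairing can be checked directly:

```javascript
// Pack the char codes of 'f' (0x66) and 'o' (0x6f) into one UTF-16
// code unit: 0x666f, which is the single character 景.
const hi = "f".charCodeAt(0); // 0x66
const lo = "o".charCodeAt(0); // 0x6f
const packed = String.fromCharCode((hi << 8) | lo);
console.log(packed); // → "景"
```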
My script has an odd number of chars so let's add a space somewhere to make it even:
for(i=5;i--;) print(i*i) (24 chars)
and now by applying the previous idea to the whole script we get:
景爨椽㔻椭ⴻ⤠灲楮琨椪椩 (12 chars)
So now my question is: how can I reconstruct the script back from the 12 chars with as few chars as possible?
I came up with that:
eval(unescape(escape`景爨椽㔻椭ⴻ⤠灲楮琨椪椩`.replace(/%u(..)/g,'%$1%')))
but it adds a constant cost of 50 chars, which makes this method useless if your script has fewer than 100 chars.
It's great for long scripts (e.g. 600 chars becomes 350 chars), but in golfing problems the script is rarely long; usually it's less than 100 chars.
I'm not an encoding specialist at all, which is why I came here: I'm pretty sure there's a shorter method.
30 chars of constant cost would already be amazing, because it would make the threshold drop from 100 to 60 chars.
Note that I used UTF-16 here, but it could be another encoding; as long as it shortens the script, I'm happy with it.
My version of JS is Node 12.13.0.
The standard way to switch between string encodings in Node.js is to use the Buffer API:
Buffer.from(…, "utf16le").toString("ascii")
To golf this a bit, you can take advantage of some legacy options and defaults:
''+new Buffer(…,"ucs2")
(.toString() without arguments actually uses UTF-8, but that doesn't matter for ASCII data.)
Since Node only supports UTF-16LE, not UTF-16BE, your string won't work; you'll need to swap the bytes and use different characters, though:
global.print = console.log;
eval(''+new Buffer("潦⡲㵩㬵㬭
牰湩⡴⩩⥩","ucs2"))
(online demo)
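To see the byte swap at work (a minimal check, not part of the original answer):

```javascript
// 景 is U+666F, but Node stores UTF-16 little-endian, so decoding it
// directly gives "of". The byte-swapped code point U+6F66 (潦, the
// first char of the answer's string) decodes to the intended "fo".
const swapped = String.fromCharCode(0x6f66); // "潦"
console.log(Buffer.from(swapped, "utf16le").toString("ascii")); // → "fo"
console.log(Buffer.from("\u666f", "utf16le").toString("ascii")); // → "of"
```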
I'm trying to solve a challenge on Codewars which requires you to reverse an array in JavaScript, in 16 characters or less. Using .reverse() is not an option.
The maximum number of characters allowed in your code is 28, which includes the function name weirdReverse, so that leaves you with just 16 characters to solve it in. The constraint -
Your code needs to be as short as possible, in fact not longer than 28 characters
Sample input and output -
Input: an array containing data of any types. Ex: [1,2,3,'a','b','c',[]]
Output: [[],'c','b','a',3,2,1]
The starter code given is -
weirdReverse=a=>
My solution (29 characters) is -
weirdReverse=a=>a.sort(()=>1)
which of course fails -
Code length should less or equal to 28 characters.
your code length = 29 - Expected: 'code length <= 28', instead got: 'code length > 28'
I'm not sure what else to truncate here.
Note - I did think about posting this question on CodeGolf SE, but I felt it wouldn't be a good fit there, due to the limited scope.
I'd like to give you a hint, without giving you the answer:
You're close, but you can save a character by adding something you won't actually use.
By adding that unused thing, you can remove the ().
Spoiler (answer):
// Note: this only really works for this specific case.
// Never EVER use this in a real-life scenario.
var a = [1,2,3,'a','b','c',[]]
weirdReverse=a=>a.sort(x=>1)
// ^ That's 1 character shorter than ()
console.log(weirdReverse(a))
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
substr() handles negative indices perfectly, but substring() only accepts nonnegative indices.
Is there a reason to avoid substr in favor of substring? Negative indices are very useful in a lot of cases, by viewing the space of indices as a cyclic group. Why is substr marked "deprecated" by MDN?
substring is for when you want to specify a starting and an ending index. substr is for when you want to specify a starting offset and a length. They do different things and have different use cases.
Edit:
To better answer the exact question of
Why doesn't substring handle negative indices?
substring specifies a starting and an ending index of characters in a string. substr deals with a starting offset and a length. It makes sense to me that substring does not allow a negative index, because there really isn't such a thing as a negative index (the characters in a string are indexed from 0 to n; a "negative index" would be out of bounds). Since substr deals with an offset rather than an index, I feel the term offset is loose enough to allow for a negative offset, which of course means counting backwards from the end of the string rather than forwards from the beginning. This might just be semantics, but it's how I make sense of it.
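A quick illustration of the difference:

```javascript
const s = "hello";
console.log(s.substr(-1));      // "o"     (offset counts back from the end)
console.log(s.substring(-1));   // "hello" (negative index is clamped to 0)
console.log(s.substr(1, 3));    // "ell"   (offset 1, length 3)
console.log(s.substring(1, 3)); // "el"    (from index 1 up to index 3)
```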
Why is substr deprecated?
I would argue that it is in fact not deprecated.
The revision history for the substr MDN page shows that the deprecation notice was added based on this blog post:
Aug 16, 2016, 12:00:34 AM
hexalys
add deprecated mention per https://blog.whatwg.org/javascript
This states that the HTML string methods are deprecated (as they should be!). These are methods that wrap a string in an HTML tag, e.g. "abc".sub() returns <sub>abc</sub>. The blog post lists all of the HTML string methods and, imho, erroneously includes substr as an HTML string method (it isn't).
So this looks like a misunderstanding to me.
(Excerpt below, emphasis added by me)
Highlights:
The infamous “string HTML methods”: String.prototype.anchor(name), String.prototype.big(), String.prototype.blink(),
String.prototype.bold(), String.prototype.fixed(),
String.prototype.fontcolor(color), String.prototype.fontsize(size),
String.prototype.italics(), String.prototype.link(href),
String.prototype.small(), String.prototype.strike(),
String.prototype.sub(), String.prototype.substr(start, length), and
String.prototype.sup(). Browsers implemented these slightly
differently in various ways, which in one case lead to a security
issue (and not just in theory!). It was an uphill battle, but
eventually browsers and the ECMAScript spec matched the behavior that
the JavaScript Standard had defined.
https://blog.whatwg.org/javascript
substr is particularly useful when you are only interested in the last N characters of a string of unknown length.
For example, if you want to know if a string ends with a single character:
function endsWith(str, character) {
return str.substr(-1) === character;
}
endsWith('my sentence.', '.'); // => true
endsWith('my other sentence', '.'); // => false
Implementing this same function using substring would require calculating the length of the string first.
function endsWith(str, character) {
var length = str.length;
return str.substring(length - 1, length) === character;
}
Both functions can be used to get the same results, but having substr is more convenient.
There are three functions in JS that do more or less the same:
substring
substr
slice
I guess most people use the latter, because it matches its array counterpart. The former two are more or less historical relics (substring was in JS1, then substr came along in two different flavours, etc.).
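For completeness, all three side by side; slice is the one whose negative-index behaviour matches arrays:

```javascript
const s = "my sentence.";
console.log(s.substr(-1));              // "." (offset from the end, plus a length)
console.log(s.slice(-1));               // "." (same negative-index rule as Array#slice)
console.log(s.substring(s.length - 1)); // "." (needs the length computed first)
```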
Why is substr marked "deprecated" by MDN?
The notice was added as per this post by Mathias, where substr is listed under "string HTML methods" (?). The reason for the deprecation is that it belongs to Annex B, which says:
This annex describes various legacy features and other characteristics of web browser based ECMAScript implementations. All of the language features and behaviours specified in this annex have one or more undesirable characteristics and in the absence of legacy usage would be removed from this specification.
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why can't I access a property of an integer with a single dot?
I was reading an article and came across this strange behaviour of JavaScript's toFixed method. I don't understand the reason for the last statement. Can anyone explain, please?
(42).toFixed(2); // "42.00" Okay
42.toFixed(2); // SyntaxError: identifier starts immediately after numeric literal
42..toFixed(2); // "42.00" This really seems strange
A number in JavaScript is basically this regex:
[+-]?[0-9]*(?:\.[0-9]*)?(?:[eE][+-]?[0-9]+)?
Note that the quantifiers are greedy. This means that when it sees:
42.toFixed(2);
it reads 42. as the number and is then immediately confronted with toFixed, which it doesn't know what to do with.
In the case of 42..toFixed(2), the number is 42. but not 42.., because the regex only allows one dot. The parser then sees the second ., which can only be a member access, namely toFixed. Everything works fine.
As far as readability goes, (42).toFixed(2) is far clearer as to its intention.
The dot is ambiguous: decimal point or member access operator. Hence the error.
42..toFixed(2); is equivalent to (42.).toFixed(2)
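All the workarounds resolve the ambiguity the same way, by making sure the numeric literal is finished before the member-access dot appears:

```javascript
console.log((42).toFixed(2));  // "42.00" (parentheses end the literal)
console.log(42..toFixed(2));   // "42.00" (first dot belongs to the number)
console.log(42 .toFixed(2));   // "42.00" (the space ends the literal)
console.log(42.0.toFixed(2));  // "42.00"
```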