Hello everyone and thank you for reading.
I'm trying to get a random number in a given range (in my case 1 - 87) based on the current date (no matter the format: milliseconds, YYYYMMDD, etc.), all in JavaScript.
The reason for this is that I want a random number in this range that is different from day to day but stays the same for the whole day.
I first thought of simply generating a random number and then storing it in cookies or localStorage, but if you empty the browser cache or the localStorage (because yes, my project is meant to be used in a browser), a new number will be generated when the page reloads, so that solution won't work.
Then I tried to use the seedrandom function by David Bau (http://davidbau.com/archives/2010/01/30/random_seeds_coded_hints_and_quintillions.html), but I didn't get the expected result (maybe I didn't understand how it works, which is very likely too).
I would have shared a piece of code showing my progress, but none of the tests I did made sense to me, so I started from zero and I'm relying on you today.
Hoping to get some help, thanks!
Based on the Alea RNG that I found on the link you shared, you can use the current date (in my example, a string that only changes once per day) as a seed for the RNG and always get the same random number from the range (1..87) for the same date.
<script src="//cdnjs.cloudflare.com/ajax/libs/seedrandom/3.0.5/lib/alea.min.js">
</script>
<script>
const seed = new Date().toDateString(); // e.g. "Tue Mar 12 2024" -- constant for the whole day
const arng = new alea(seed);
const rand = Math.floor(arng.quick() * 87) + 1;
console.log(rand); // <= Gives a random number between 1 and 87,
                   //    the same for every call on the same day
</script>
Since the randomness is derived based on the date, you do not need to save anything in the localStorage or elsewhere. Your dates are the single point of reference when it comes to the random number generator.
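To see the determinism, here is a quick check (a sketch that assumes the same seedrandom/alea build included above): two generators created from the same day-level seed produce the same first draw.
const seed = new Date().toDateString();
const a = new alea(seed);
const b = new alea(seed);
console.log(Math.floor(a.quick() * 87) + 1 === Math.floor(b.quick() * 87) + 1); // true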
You could format each date as a date string, e.g. YYYYMMDD, then pass it to a hashcode function such as cyrb53 (thanks bryc!).
We'd prepend our timestamp with a prefix to allow different sequences to be generated for the same dates if required.
We'd mod the hashcode with our maximum desired value (87 in this case) to get our random number for each day.
This number will be fixed for each day and prefix.
const prefix = '8tHifL4Cmz6A3e8';
/** From: bryc: https://github.com/bryc/code/blob/master/jshash/experimental/cyrb53.js **/
const cyrb53 = (str, seed = 0) => {
    let h1 = 0xdeadbeef ^ seed,
        h2 = 0x41c6ce57 ^ seed;
    for (let i = 0, ch; i < str.length; i++) {
        ch = str.charCodeAt(i);
        h1 = Math.imul(h1 ^ ch, 2654435761);
        h2 = Math.imul(h2 ^ ch, 1597334677);
    }
    h1 = Math.imul(h1 ^ (h1 >>> 16), 2246822507) ^ Math.imul(h2 ^ (h2 >>> 13), 3266489909);
    h2 = Math.imul(h2 ^ (h2 >>> 16), 2246822507) ^ Math.imul(h1 ^ (h1 >>> 13), 3266489909);
    return 4294967296 * (2097151 & h2) + (h1 >>> 0);
};
const maxN = 87;
const dateCount = 100;
const startDate = new Date();
const dates = Array.from({ length: dateCount }, (v, k) => {
    const d = new Date(startDate);
    d.setDate(d.getDate() + k);
    return d;
});

function getTimestamp(date) {
    const dateArr = [date.getFullYear(), date.getMonth() + 1, date.getDate()];
    return dateArr.map(s => (s + '').padStart(2, '0')).join('');
}

const timeStamps = dates.map(d => getTimestamp(d));

console.log(timeStamps.map(timestamp => {
    return { timestamp, randomNumber: 1 + cyrb53(prefix + timestamp) % maxN };
}));
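For a single day the usage boils down to a couple of lines (a sketch reusing the prefix, getTimestamp and cyrb53 defined above):
const today = getTimestamp(new Date());                 // e.g. "20240312"
const todaysNumber = 1 + cyrb53(prefix + today) % maxN; // stable for the whole day, in 1..87
console.log(todaysNumber);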
I apologise in advance, as the probability that I have done something stupid is 1; it's only been a few minutes since I started problem solving with coding. That being said, please bear with me, I know I'm a complete beginner.
Problem:
Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M.
(Symbol : Value) =
(I : 1)
(V : 5)
(X : 10)
(L : 50)
(C : 100)
(D : 500)
(M : 1000)
For example, 2 is written as II in Roman numerals: just two ones added together. 12 is written as XII, which is simply X + II. The number 27 is written as XXVII, which is XX + V + II.
Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five, we subtract it, making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used:
I can be placed before V (5) and X (10) to make 4 and 9.
X can be placed before L (50) and C (100) to make 40 and 90.
C can be placed before D (500) and M (1000) to make 400 and 900.
Given a roman numeral, convert it to an integer.
Example 1:
Input: s = "III"
Output: 3
Explanation: III = 3.
Example 2:
Input: s = "LVIII"
Output: 58
Explanation: L = 50, V= 5, III = 3.
Example 3:
Input: s = "MCMXCIV"
Output: 1994
Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.
My solution:
let old;
var romanToInt = function(s) {
    let dict = {I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000};
    let x = 0;
    for (let letter of s) {
        x = dict[letter] + x;
        if (old == "I" && letter == "V" || old == "I" && letter == "x") { x = x - 2*old; };
        if (old == "X" && letter == "L" || old == "X" && letter == "C") { x = x - 2*old; };
        if (old == "C" && letter == "D" || old == "C" && letter == "M") { x = x - 2*old; };
        old = letter;
    };
    return x;
};
console.log(romanToInt("VIII"))
Can you please tell me why this isn't working?
Would you be up for a different solution? Since the logic of Roman numerals follows a consistent pattern with a single look-back, this is much easier to write as a loop with a look-back condition rather than as a sum of multiple conditions on the string value. Let me explain.
Since every character is associated with a numeric value, you can convert each character in the string to an integer and then add them all up.
To handle the subtraction (thank you for explaining this; I honestly had no idea it worked like this), you can look back after converting each character and check whether the previous value is lower than the current one. That tells you a subtraction is needed. Below, I solve this by keeping the original lower value, let's say 10, and when the current character's value is greater, say 50, I take 50 and subtract the previous value 10 twice: once to remove the previously added lesser value, and once more to create the overall subtraction from 50 (10 + 50 - 10 - 10 = 40).
var data = {I:1,V:5,X:10,L:50,C:100,D:500,M:1000};
var input = 'MCMXCIV'
console.log(
input.split('').map((v,i,e) => data[e[i-1]] < data[v] ? data[v]-(2*data[e[i-1]]): data[v]).reduce((c,v) => c+v)
)
A commented version to explain the code
input.split('').map( //split the roman numeral string into an array of characters and then "map" over each one
// v = current roman numeral, i = current index of array, e = all elements of array
(v,i,e) =>
data[e[i-1]] < data[v] // if previous roman numeral value is less than the current ...
? data[v]-(2*data[e[i-1]]) // subtract the current value by 2 times the previous lesser value
: data[v] // convert roman numeral to integer
).reduce((c,v) => c+v) // reduce the array of values into a single integer
Update
Here's something I just learned:
The letters I, X, C can be repeated thrice in succession.
If a lower value digit is written to the left of a higher value digit, it is subtracted.
If a lower value digit is written to the right of a higher value digit, it is added.
Only I, X, and C can be used as subtractive numerals.
In the OP there are a ton of flow control statements (ifs), and I believe it was an attempt to satisfy rules 2 and 4.
Solution
Add c: -100, x: -10, and i: -1 key/values to dict.
Create 3 RegExp objects; together the patterns cover rules 2 and 4.
Before you begin to convert the given string, use .replace() to rewrite every left-positioned (subtractive) C, X, and I:
...// c, x, i added to dict
c: -100,
x: -10,
i: -1
};
const rgxC = new RegExp(/(C)([DM])/, 'g');
const rgxX = new RegExp(/(X)([LCDM])/, 'g');
const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
Example A - Modified OP with solution
Example B - A terse function fromRoman(string)
Example C - An inverse function toRoman(number). I barely read the question, brain farted, and wrote this, DERP!
Example A
const romanToInt = roman => {
    const dict = {
        I: 1,
        V: 5,
        X: 10,
        L: 50,
        C: 100,
        D: 500,
        M: 1000,
        c: -100,
        x: -10,
        i: -1
    };
    const rgxC = new RegExp(/(C)([DM])/, 'g');
    const rgxX = new RegExp(/(X)([LCDM])/, 'g');
    const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
    roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
    let sum = 0;
    for (let char of roman) {
        sum = dict[char] + sum;
    }
    return sum;
};
console.log(romanToInt('MLIX'));
console.log(romanToInt('MCDIX'));
Example B
/**
 * Converts a given Roman numeral to a number
 * @param {String} roman Roman numeral to convert
 * @return {Number} a Number
 */
function fromRoman(roman) {
    // Object of Roman numbers and decimal equivalents
    const R = {
        M: 1000,
        D: 500,
        C: 100,
        L: 50,
        X: 10,
        V: 5,
        I: 1,
        i: -1,
        x: -10,
        c: -100
    };
    const rgxC = new RegExp(/(C)([DM])/, 'g');
    const rgxX = new RegExp(/(X)([LCDM])/, 'g');
    const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
    roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
    /*
      Split the Roman numeral into an array of characters,
      then use each char as the key of object R to get the
      number value, add each number and return the sum.
    */
    return roman.split('').reduce((sum, r) => sum + R[r], 0);
}
console.log(fromRoman('MCMLXXXVI'));
console.log(fromRoman('LXCIV'));
Example C
/**
 * Helper function divides the 1st param by the 2nd
 * param to get the quotient and remainder
 * @param {Number} dividend to divide into
 * @param {Number} divisor to divide by
 * @return {Array} [0] = quotient [1] = remainder
 */
const div = (dividend, divisor) =>
    [Math.floor(dividend / divisor), (dividend % divisor)];

/**
 * Converts a given number to a Roman numeral
 * @param {Number} int an integer to convert
 * @return {String} a Roman numeral
 */
function toRoman(int) {
    // A 2D array of Roman numbers and decimal equivalents
    const R = [
        ['M', 1000],
        ['D', 500],
        ['C', 100],
        ['L', 50],
        ['X', 10],
        ['V', 5],
        ['I', 1]
    ];
    /*
      Returns a 2D array of only the destructured sub-arrays whose
      value at R[?][1] is less than the given number. Basically
      determining what we can divide with and get a whole number.
    */
    const maxD = R.filter(([r, n]) => int > n);
    /*
      The outer loop destructures the filtered 2D array on each
      iteration, the function div() divides the given number by
      maxD[?][1], returning an array called "qr". "qr" contains the
      quotient and remainder.
      Then in the inner loop maxD[?][0] is pushed onto the array
      (roman) as many times as qr[0] (quotient), and qr[1] (remainder)
      becomes the new int to divide into.
    */
    let roman = [];
    for (const [rom, num] of maxD) {
        let qr = div(int, num);
        int = qr[1];
        for (let q = 0; q < qr[0]; q++) {
            roman.push(rom);
        }
    }
    /*
      Convert the array into a string and correct any 9s or 4s to
      satisfy rule 1.
    */
    return roman.join('').replace('VIIII', 'IX').replace('IIII', 'IV');
}
console.log(toRoman(204));
console.log(toRoman(6089));
var romanToInt = function(s) {
    let Roman = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000};
    let count = 0;
    for (var i = 0; i < s.length; i++) {
        // If a smaller value comes right before a larger one (IV, IX, XL, ...), subtract it
        if (i + 1 < s.length && Roman[s[i]] < Roman[s[i + 1]]) {
            count -= Roman[s[i]];
        } else {
            count += Roman[s[i]];
        }
    }
    return count;
}
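A quick check of the subtractive cases (my own test calls, not part of the original answer):
console.log(romanToInt("VIII"));    // 8
console.log(romanToInt("LVIII"));   // 58
console.log(romanToInt("MCMXCIV")); // 1994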
So I have a project where I need to create a cipher which has 3 different types of ciphers built into it. In my case I have a Caesar cipher, Atbash cipher and ROT13. Whenever I try to decrypt a message however, some characters don't show up, while some show up as symbols.
Below is my code in Javascript
function decryptROT13() {
    var dmsg = document.getElementById('message').value;
    var dnewMsg2 = '';
    for (var i = 0; i < dmsg.length; i++) {
        var n = dmsg.charCodeAt(i)
        if (n == 32) {
            dnewMsg2 += String.fromCharCode(n);
        }
        else if (n - 13 > 90) {
            dnewMsg2 += String.fromCharCode(n - 13 - 26)
        }
        else {
            dnewMsg2 += String.fromCharCode(n - 13)
        }
    }
    decryptAtbash(dnewMsg2);
}

function decryptAtbash(dval) {
    var dmsg = dval;
    var dnewMsg1 = '';
    var dtebahpla = 'ZYXWVUTSRQPONMLKJIHGFEDCBA ';
    var dalphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
    for (i = 0; i < dmsg.length; i++) {
        var dcodedLetter = dmsg.charAt(i);
        var dletterIndex = dalphabet.indexOf(dcodedLetter);
        dnewMsg1 += dalphabet.charAt(dletterIndex);
    }
    decryptCaesar(dnewMsg1);
}

function decryptCaesar(dval) {
    var dnewMsg = '';
    var dmsg = dval;
    var dmsg1 = dmsg.toUpperCase();
    for (var i = 0; i < dmsg1.length; i++) {
        var n = dmsg1.charCodeAt(i)
        if (n == 32) {
            dnewMsg += String.fromCharCode(n);
        }
        else if (n - 3 > 90) {
            dnewMsg += String.fromCharCode(n - 3 - 26)
        }
        else {
            dnewMsg += String.fromCharCode(n - 3)
        }
    }
    dnewMsg.toUpperCase()
    document.getElementById('decryptedMessage').innerHTML = dnewMsg;
}
This is what the output looks like:
character codes
Our simple ciphers will work on capital letters A-Z and perform simple transformations of their ASCII character codes. We can go from a letter to a character code -
console.log("A".charCodeAt(0)) // 65
console.log("B".charCodeAt(0)) // 66
console.log("C".charCodeAt(0)) // 67
console.log("Z".charCodeAt(0)) // 90
Or from a character code to a letter -
console.log(String.fromCharCode(65)) // "A"
console.log(String.fromCharCode(66)) // "B"
console.log(String.fromCharCode(67)) // "C"
console.log(String.fromCharCode(90)) // "Z"
We will write reusable functions toCode and fromCode to simplify the rest of our program -
const toCode = s =>
s.toUpperCase().charCodeAt(0)
const fromCode = n =>
String.fromCharCode(n)
cipher
A cipher is an isomorphism, i.e. a pair of functions whereby one function can reverse the effect of the other. Each cipher we write will have an encode and decode function -
const cipher = (encode, decode) =>
({ encode, decode })
Character A starts at offset 65, Z ends at 90, and we don't want to apply our cipher to characters out of this range. Instead of handling this logic in each cipher, it would be nice if we could write our ciphers that work on a simple A-Z charset that goes from 0-25 -
(0,A) (1,B) (2,C) (3,D) (4,E)
(5,F) (6,G) (7,H) (8,I) (9,J)
(10,K) (11,L) (12,M) (13,N) (14,O)
(15,P) (16,Q) (17,R) (18,S) (19,T)
(20,U) (21,V) (22,W) (23,X) (24,Y)
(25,Z)
For any given algorithm and character code n, we can filter characters that are out of the 65-90 range, and automatically apply appropriate offset to the algorithm's response -
const filter = alg => n =>
n < 65 || n > 90 // if n is out or range
? n // simply return n
: 65 + alg(n - 65) // apply offset to alg's response
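As a quick sanity check (my own example, reusing toCode and filter from above with an atbash-style algorithm): out-of-range codes pass straight through, which is exactly how spaces and punctuation survive untouched.
const atbashAlg = n => 25 - n
console.log(filter(atbashAlg)(toCode("A"))) // 90, i.e. "Z"
console.log(filter(atbashAlg)(32))          // 32, a space stays a space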
atbash
Now let's write our first cipher, atbash -
atbash(n)        cipher
25 - 0 = 25      A → Z
25 - 1 = 24      B → Y
25 - 2 = 23      C → X
25 - 23 = 2      X → C
25 - 24 = 1      Y → B
25 - 25 = 0      Z → A
The implementation is simple. As the table above reveals, the process to encode atbash is exactly the same as the decoding process. In other words, atbash is a pair of identical functions -
const atbash =
cipher(n => 25 - n, n => 25 - n)
rot13
Next we look at our second cipher, rot13 -
rot13(n)         cipher
rot13(0)         A → N
rot13(1)         B → O
rot13(2)         C → P
rot13(23)        X → K
rot13(24)        Y → L
rot13(25)        Z → M
Rot13 is a Caesar shift of 13, so we can simply define it as
const rot13 =
caesar(13)
caesar
And lastly, Caesar allows us to shift the character code by a specified amount -
caesar(shift, n)    cipher
caesar(-3, 0)       A → X
caesar(-2, 0)       A → Y
caesar(-1, 0)       A → Z
caesar(0, 0)        A → A
caesar(1, 0)        A → B
caesar(2, 0)        A → C
caesar(3, 0)        A → D
caesar(-100, 0)     A → E
caesar(100, 0)      A → W
We can implement caesar by parameterizing a cipher with a shift amount. period is used to perform basic modular arithmetic and support any positive or negative shift amount -
const caesar = shift =>
cipher
( n => period(26, n + shift) // plus shift for encode
, n => period(26, n - shift) // minus shift for decode
)
const period = (z, n) =>
(z + n % z) % z
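A couple of spot checks on period (my own examples) show how it keeps any positive or negative shift inside 0-25, matching the table above:
console.log(period(26, -3))  // 23, so caesar(-3) sends A (0) to X
console.log(period(26, 100)) // 22, so caesar(100) sends A (0) to W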
encode and decode
The ciphers we defined operate on character codes, but we don't expect the user to handle that conversion manually. We will provide a simple encode function to work with -
encode(atbash, "A") // "Z"
encode(atbash, "B") // "Y"
encode(atbash, "C") // "Z"
encode(caesar(1), "A") // "B"
encode(caesar(2), "A") // "C"
encode(caesar(3), "A") // "D"
As well as the reversing decode function -
decode(atbash, "Z") // "A"
decode(atbash, "Y") // "B"
decode(atbash, "Z") // "C"
decode(caesar(1), "B") // "A"
decode(caesar(2), "C") // "A"
decode(caesar(3), "D") // "A"
encode and decode accept a cipher, c, and an input string, s -
const encode = (c, s) =>
transform(c.encode, s)
const decode = (c, s) =>
transform(c.decode, s)
The transform procedure is the same whether you are encoding or decoding. The only thing that changes is the algorithm being used -
const transform = (alg, s) =>
Array
.from(s, composeL(toCode, filter(alg), fromCode))
.join("")
myCipher
Finally we write composeCipher which allows you to construct your own complex cipher from a sequence of ciphers. The sequence of encoders runs from left-to-right using composeL. Naturally, the sequence of decoders runs in reverse from right-to-left using composeR -
const composeCipher = (...ciphers) =>
cipher
( composeL(...ciphers.map(c => c.encode)) // encode
, composeR(...ciphers.map(c => c.decode)) // decode
)
const myCipher =
composeCipher(rot13, atbash, caesar(3))
console.log(encode(myCipher, "DON'T CAUSE PANIC!"))
// MBC'W NPVXL APCHN!
console.log(decode(myCipher, "MBC'W NPVXL APCHN!"))
// DON'T CAUSE PANIC!
reusable functions
Above we use composeL (left) and composeR (right), which are generic function composition procedures. These allow us to build a single function out of a sequence of input functions. You don't need to understand their implementation in order to use them -
// func.js
const composeL = (...fs) =>
x => fs.reduce((r, f) => f(r), x)
const composeR = (...fs) =>
x => fs.reduceRight((r, f) => f(r), x)
// ...
export { composeL, composeR, ... }
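If you are curious, a tiny example (mine, not part of the module) shows the difference in application order:
const inc = n => n + 1
const dbl = n => n * 2
console.log(composeL(inc, dbl)(3)) // dbl(inc(3)) = 8, left-to-right
console.log(composeR(inc, dbl)(3)) // inc(dbl(3)) = 7, right-to-left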
modules
Like we did with the func module above, you should bundle your very own cipher module to keep your code nice and tidy. This also allows us to choose which parts of the module should be accessible to the user -
// cipher.js
import { composeL, composeR } from "./func.js"
const atbash = ...
const caesar = ...
const cipher = ...
const compose = ...
const decode = ...
const encode = ...
const fromCode = ...
const rot13 = ...
const toCode = ...
const transform = ...
export { atbash, caesar, cipher, compose, decode, encode, rot13 }
When we write our program, we only import the parts we need -
// main.js
import { compose, atbash, caesar, decode, encode, rot13 } from "./cipher.js"
const superSecret =
compose(rot13, atbash, caesar(3))
console.log(encode(superSecret, "DON'T CAUSE PANIC!"))
console.log(decode(superSecret, "MBC'W NPVXL APCHN!"))
MBC'W NPVXL APCHN!
DON'T CAUSE PANIC!
demo
Expand the snippet below to verify the result in your own browser -
// cipher.js
const fromCode = n => String.fromCharCode(n)
const toCode = s => s.toUpperCase().charCodeAt(0)
const cipher = (encode, decode) =>
({ encode, decode })
const atbash =
cipher(n => 25 - n, n => 25 - n)
const caesar = shift =>
cipher
( n => period(26, n + shift)
, n => period(26, n - shift)
)
const rot13 =
caesar(13)
const filter = alg => n =>
n < 65 || n > 90
? n
: 65 + alg(n - 65)
const period = (z, n) =>
(z + n % z) % z
const transform = (f, s) =>
Array
.from(s, composeL(toCode, filter(f), fromCode))
.join("")
const encode = (alg, s) =>
transform(alg.encode, s)
const decode = (alg, s) =>
transform(alg.decode, s)
const composeCipher = (...ciphers) =>
cipher
( composeL(...ciphers.map(c => c.encode))
, composeR(...ciphers.map(c => c.decode))
)
// func.js
const composeL = (...fs) =>
x => fs.reduce((r, f) => f(r), x)
const composeR = (...fs) =>
x => fs.reduceRight((r, f) => f(r), x)
// main.js
const myCipher =
composeCipher(rot13, atbash, caesar(3))
console.log(encode(myCipher, "DON'T CAUSE PANIC!"))
console.log(decode(myCipher, "MBC'W NPVXL APCHN!"))
MBC'W NPVXL APCHN!
DON'T CAUSE PANIC!
I saw this line of code in a solution on a coding game site:
const tC = readline().split(' ').map(x => +x);
I wonder what it does, because when I log it, it renders the same thing as this one:
const tC = readline().split(' ').map(x => x);
but the rest of the code didn't work.
Context :
/** Temperatures (easy) https://www.codingame.com/training/easy/temperatures
* Solving this puzzle validates that the loop concept is understood and that
* you can compare a list of values.
* This puzzle is also a playground to experiment the concept of lambdas in
* different programming languages. It's also an opportunity to discover
* functional programming.
*
* Statement:
* Your program must analyze records of temperatures to find the closest to
* zero.
*
* Story:
* It's freezing cold out there! Will you be able to find the temperature
* closest to zero in a set of temperatures readings?
**/
const N = +readline();
const tC = readline().split(' ').map(x => +x);
let min = Infinity;
for (let i in tC) {
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
print(min || 0);
Thanks a lot
The .map(x => +x) converts every item in the array to a number and returns a new array with those converted values.
If you change it to .map(x => x), the values are left untouched and you just create a copy of the original array. So the strings remain strings, which will break the code if numbers are expected.
I personally would avoid the +x syntax and use the more verbose Number(x), writing either .map(x => Number(x)) or .map(Number).
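A small before/after illustration (my own example data):
"1 -2 -8 4 5".split(' ').map(x => x)  // ["1", "-2", "-8", "4", "5"]  (still strings)
"1 -2 -8 4 5".split(' ').map(x => +x) // [1, -2, -8, 4, 5]
"1 -2 -8 4 5".split(' ').map(Number)  // [1, -2, -8, 4, 5]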
According to this site, these are the inputs the program should receive:
Line 1: N, the number of temperatures to analyze
Line 2: A string with the N temperatures expressed as integers ranging from -273 to 5526
Let me provide line by line comments with respect to the game rules
// Line 1: reads number temperature inputs. + converts to number
const N = +readline();
// Line 2: reads string of temperatures.
// tC contains an array of temperatures of length N in numbers. + converts to number
const tC = readline().split(' ').map(x => +x);
let min = Infinity;
// iterate over tC array
for (let i in tC) {
// If two numbers are equally close to zero, positive integer has to be considered closest to zero
// set min = current iterating number if it matches above condition
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
print(min || 0);
Here is a working demo in JavaScript, modified to make it understandable for beginners.
// Line 1: reads number temperature inputs. + converts to number
// const N = +readline(); SAMPLE ALTERNATIVE
const N = +"5";
// Line 2: reads string of temperatures.
// tC contains an array of temperatures of length N in numbers. + converts to number
// const tC = readline().split(' ').map(x => +x); SAMPLE ALTERNATIVE
const tC = "1 -2 -8 4 5".split(' ').map(x => +x);
let min = Infinity;
// iterate over tC array
for (let i in tC) {
// If two numbers are equally close to zero, positive integer has to be considered closest to zero
// set min = current iterating number if it matches above condition
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
console.log(min || 0);
function readLine(){
return "123456"
}
var result = readLine().split("").map(x => +x)
console.log(result)
readLine().split("") // splits the string into an array as follows ["1", "2", "3", "4", "5", "6"]
.map(x => +x) // map method returns a new array which will take each item and gives a new array , here number changing from string to numbers as follows [1, 2, 3, 4, 5, 6] since +x is used
// the above is written in es6, which can be re written in es5 as follows
readLine().split("").map(function(x) {
return +x
})
// Note
In es6 if we have a single thing to pass we can avoid the function(x) to x
also we can remove the {} [curly braces and return too]
{return +x} to +x
ES5
readLine().split("").map(function(x) {
return +x
})
ES2015 (ES6)
readLine().split("").map(x => +x);
Maybe I am just not good enough at math, but I am having a problem converting a number into purely alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a) {
    var x = '';
    var let = "_abcdefghijklmnopqrstuvwxyz";
    var len = let.length;
    var b = a;
    var cnt = 0;
    var y = Array();
    do {
        a = (a - (a % len)) / len;
        cnt++;
    } while (a != 0)
    a = b;
    var vnt = 0;
    do {
        b += Math.pow((len), vnt) * Math.floor(a / Math.pow((len), vnt + 1));
        vnt++;
    } while (vnt != cnt)
    var c = b;
    do {
        y.unshift(c % len);
        c = (c - (c % len)) / len;
    } while (c != 0)
    for (var i in y) x += let[y[i]];
    return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc (though not with the actual code above). The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab; you get the picture.
Basically, my problem is more about doing the "math" than about the function. Ultimately my question is: how do I do the math for the hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel?
And if possible, a source code. Thank you in advance.
Okay, here's my attempt, assuming you want the sequence to start with "a" (representing 0) and go:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string and then "a" would be 1, etc.
alpha = "abcdefghijklmnopqrstuvwxyz";
function hex(a) {
// First figure out how many digits there are.
a += 1; // This line is funky
c = 0;
var x = 1;
while (a >= x) {
c++;
a -= x;
x *= 26;
}
// Now you can do normal base conversion.
var s = "";
for (var i = 0; i < c; i++) {
s = alpha.charAt(a % 26) + s;
a = Math.floor(a/26);
}
return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
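For illustration, one way the recursion-with-prefixes idea could look (a rough sketch of mine, printing every name of length 1 to maxLen in counting order rather than converting a single number):
function printColumns(maxLen) {
    const letters = "abcdefghijklmnopqrstuvwxyz";
    const walk = (len, prefix) => {
        if (len === 0) { console.log(prefix); return; }
        for (const ch of letters) walk(len - 1, prefix + ch);
    };
    for (let len = 1; len <= maxLen; len++) walk(len, "");
}
printColumns(2); // prints a, b, ..., z, aa, ab, ..., zz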
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other way round. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
    input = input.replace(/[^A-Za-z]/, '');
    output = 0;
    for (i = 0; i < input.length; i++) {
        output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
    }
    console.log('linum', output);
    return output;
}

function int2linum(input) {
    var zeros = 0;
    var next = input;
    var generation = 0;
    while (next >= 27) {
        next = (next - 1) / 26 - (next - 1) % 26 / 26;
        zeros += next * Math.pow(27, generation);
        generation++;
    }
    output = (input + zeros).toString(27).replace(/./g, function ($0) {
        return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
    });
    return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
    var alphas = [null, "a"],
        highest = [1];
    return function(decNum) {
        if (alphas[decNum])
            return alphas[decNum];
        var d,
            next,
            carry,
            i = alphas.length;
        for (; i <= decNum; i++) {
            next = "";
            carry = true;
            for (d = 0; d < highest.length; d++) {
                if (carry) {
                    if (highest[d] === 26) {
                        highest[d] = 1;
                    } else {
                        highest[d]++;
                        carry = false;
                    }
                }
                next = String.fromCharCode(highest[d] + 96) + next;
            }
            if (carry) {
                highest.push(1);
                next = "a" + next;
            }
            alphas[i] = next;
        }
        return alphas[decNum];
    };
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
 * Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
 *
 * @param {Number} int - A positive integer above zero
 * @return {String} The number's value expressed in uppercased bijective base-26
 */
function bijectiveBase26(int) {
    const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    const length = sequence.length;

    if (int <= 0) return int;
    if (int <= length) return sequence[int - 1];

    let index = (int % length) || length;
    let result = [sequence[index - 1]];

    while ((int = Math.floor((int - 1) / length)) > 0) {
        index = (int % length) || length;
        result.push(sequence[index - 1]);
    }

    return result.reverse().join("");
}
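A few usage examples (mine, for illustration):
console.log(bijectiveBase26(1));   // "A"
console.log(bijectiveBase26(26));  // "Z"
console.log(bijectiveBase26(27));  // "AA"
console.log(bijectiveBase26(703)); // "AAA"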
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal 26-base counting system
but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave
as "1" when it is in a decimal place other than then unit.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
Now a formula for transforming a normal base-10 number into this special base-26 becomes apparent. Say the input is x.
1. Create an accumulator list l.
2. If x is less than 26, set l = [x | l] and go to step 5. Otherwise, continue.
3. Divide x by 26. The floored result is d and the remainder is r.
4. Push the remainder onto the head of the accumulator list, i.e. l = [r | l], then go to step 2 with (d - 1) as the input, i.e. x = d - 1.
5. Convert all elements of l to their corresponding chars. 0 -> A, etc.
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
  def to_az_string(number, base \\ 26) do
    number
    |> to_list(base)
    |> Enum.map(&to_char/1)
    |> to_string()
  end

  def to_09_integer(string, base \\ 26) do
    string
    |> String.to_charlist()
    |> Enum.reverse()
    |> Enum.reduce({0, nil}, fn
      char, {_total, nil} ->
        {to_integer(char), 1}

      char, {total, previous_place_value} ->
        char_value = to_integer(char + 1)
        place_value = previous_place_value * base
        new_total = total + char_value * place_value
        {new_total, place_value}
    end)
    |> elem(0)
  end

  def to_list(number, base, acc \\ []) do
    if number < base do
      [number | acc]
    else
      to_list(div(number, base) - 1, base, [rem(number, base) | acc])
    end
  end

  defp to_char(x), do: x + 65

  # Inverse of to_char/1; it was missing from the original snippet but is
  # needed by to_09_integer/2 above.
  defp to_integer(x), do: x - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" argument.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
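Since the OP asked about JavaScript, here is a rough JS translation of the same divide-by-26-then-subtract-1 idea (my own sketch, not part of the original answer; the function name is mine):
function toAzString(number, base = 26) {
    // Mirrors to_list/3: peel off base-26 "digits", subtracting 1 before each further division.
    const digits = [];
    while (number >= base) {
        digits.unshift(number % base);
        number = Math.floor(number / base) - 1;
    }
    digits.unshift(number);
    return digits.map(d => String.fromCharCode(d + 65)).join("");
}
console.log(toAzString(0));  // "A"
console.log(toAzString(26)); // "AA"
console.log(toAzString(27)); // "AB"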
I have published these functions in an npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
It converts bijective numeration to a number and back (a BigInt version is also included).
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs