The project I work on switched to MySQL. The keys we use are UUID strings (like 43d597d7-2323-325a-90fc-21fa5947b9f3), but the database field, rather than being a string, is defined as binary(16) - a 16-byte unsigned binary value.
I understand that a UUID is basically 16 bytes of binary data, but I have no idea how to convert to and from that binary form.
I'm using node-mysql to access the database, and I tried using node-uuid to parse the UUID, but that yields an array of integers. I also tried using Node's Buffer, but that just yields a buffer object.
How do I convert a UUID string to fit into that field? And how do I turn a value I read from that field into a UUID?
Due to lack of time, I'll paste the comment that provided valid result(s) and modify the answer later so it's clearer.
Right, if you have a UUID 43d597d7-2323-325a-90fc-21fa5947b9f3 in that string format already in your JS app, you'd send the following query to MySQL:
SELECT col FROM table WHERE uuid_col = UNHEX(REPLACE('43d597d7-2323-325a-90fc-21fa5947b9f3', '-', ''));
If you want to pull data out and have the UUID in a readable format, you have to convert it to hexadecimal notation:
SELECT HEX(uuid_col) FROM table;
That one will give you the UUID without dashes. It appears that the node-uuid.parse method works if you give it a hex string without dashes.
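If you'd rather do the conversion on the Node side instead of in SQL, here is a minimal sketch using Node's Buffer API (no extra library; it assumes node-mysql hands binary(16) columns back to you as a Buffer):

// UUID string -> 16-byte Buffer suitable for the binary(16) column
function uuidToBinary(uuid) {
  return Buffer.from(uuid.replace(/-/g, ''), 'hex');
}

// 16-byte Buffer (as read from the binary(16) column) -> dashed UUID string
function binaryToUuid(buf) {
  var hex = buf.toString('hex');
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20)
  ].join('-');
}

node-mysql accepts a Buffer as a placeholder value, so something like client.query('SELECT col FROM table WHERE uuid_col = ?', [uuidToBinary(id)], cb) should work without the UNHEX/REPLACE step.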
While N.B.'s answer works, I stumbled upon another solution.
UUID v1 starts with character segments that are time based; however, the smallest time units come first, which makes the distribution rather scattered in an index.
If you aren't stuck on the precise UUID v1 format, then there is a NodeJS module that can generate unique IDs based on UUID v1 that also monotonically increase and scale about as well as auto-incremented IDs. It also works with node-mysql.
Check out: monotonic-id
An example with node-mysql:
var MID = require('monotonic-id');
var mid = new MID();
client.query('INSERT INTO `...` SET `mid`=?', mid.toBuffer(), function(err, res) {
...
})
I am experimenting with the Web Serial API (https://codelabs.developers.google.com/codelabs/web-serial/#3) to read and write data to an Arduino processor (ATmega328P). I couldn't figure out why the message being sent to the board goes through a Uint8Array in an example I found:
const sendToBoard = (command) => {
  let message = new Uint8Array([command]);
  const writer = outputStream.getWriter();
  writer.write(message);
  writer.releaseLock();
}
What will passing an array do to the integer passed to Uint8Array and what is its relevance when sending it through a serial port to a board? Could it be done without it?
If you mean the example with the LED matrix, it's explained why they use an array: they store the values of the individual LEDs in the matrix that way
arr.push(cb.checked === true ? 1 : 0);
and this is probably the reason why. You can of course send single values over serial, but it's more efficient to do bursts of data instead of single values in that scenario (LED matrix).
EDIT
The Uint8Array() constructor creates a typed array of 8-bit unsigned integers. The contents are initialized to 0. Once established, you can reference elements in the array using the object's methods, or using standard array index syntax.
Read more here and in this in-depth reading. As for why the dev chose to use exactly this method - you'd have to write an email to the author of the program.
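For illustration, a small sketch of what wrapping an integer in a Uint8Array actually does (hypothetical values; the point is that each element is clamped to one byte, 0-255, which is what the serial stream expects):

const single = new Uint8Array([65]);           // one byte: Uint8Array [ 65 ]
const wrapped = new Uint8Array([300]);         // values are truncated modulo 256: Uint8Array [ 44 ]
const burst = new Uint8Array([1, 0, 1, 1]);    // several LED states packed into one chunk
console.log(single, wrapped, burst);

writer.write() then sends that buffer as a single chunk down the serial stream, which is why a whole matrix can go out as one burst instead of many single-byte writes.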
I have this issue with ids/duplicates in Google's Firestore and Google's Realtime DB, but I think it is a more general problem and it may have multiple solutions without even considering Firebase.
Right now, I create IDs from my JSON object like this:
// Preparing data
let data = {
workouts: [{...}],
movements: [{...}]
}
// Creating ID from data
const id = btoa(JSON.stringify(data))
// Deleting workouts and movements
delete data.workouts
delete data.movements
// Adding multiple other properties to the data object, for example:
data.date = new Date()
// ... and much more
// Creating a new document in the DB here, or
// alerting the user it already exists if data.id already exists
When I load the data object from Firestore, I decode it like this:
const data = JSON.parse(atob(this.workout.id))
My goal is to have only unique workouts + movements combinations in my database, and generating the id based on the data from workouts + movements solves that.
The issue is that the Realtime DB has a limit of 750 bytes (750 UTF-8 chars per id) and Firestore has a limit of 1500 bytes per id. I have just discovered that by having an id of ~1000 chars. And I believe I would be able to hit even the 1500-char limit with data from users.
My ideas:
1) Use some different encoding (supporting UTF-8) that will shrink even a long (1000-char) string to something like 100 chars max. It will still need to be decodable. Is that even possible, or is Base64 the shortest it could be?
2) Use autogenerated IDs + save the encoded string as a data.id property in the DB, and when creating a new workout always compare this data.id to the data.id(s) of already created workouts.
Is it possible to solve without looping through all existing workouts?
3) Any other idea? I am still in the realm of decoding/encoding, but I believe there must be a different, simpler solution.
Do not btoa
First off, a Base64 string is probably going to be longer than the stringified JSON, so if you're struggling with a character limit and you can use the entire UTF-8 range, do not btoa anything.
IDs
You're looking for a hash. You could (not recommended) try to roll your own by writing hashing functions for the JSON primitives, each of which must return a number:
{ ... } an object shall have its properties sorted by name and then hashed
string a string shall construct its hash from its individual characters (.charCodeAt())
number a number can probably just be kept as-is
[ ... ] not really sure what I would do with arrays; probably assume a different order is a different hash and hash them as-is
Then you'd deal with the JSON recursively, constructing the value as:
let hash = 0;
hash += hashStringValue("ddd");
hash *= 31;
hash += hashObjectValue({one:1, test:"text"});
return hash
The multiplication by a prime before addition is a cheap trick, but this only works for a limited depth of the object.
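A rough sketch of that recursive idea (hypothetical helper, not a production-quality hash; it assumes plain JSON values only and collapses everything into a 32-bit number, so collisions are possible):

function hashValue(value) {
  if (typeof value === 'number') return value;
  if (typeof value === 'string') {
    let h = 0;
    for (let i = 0; i < value.length; i++) h = (h * 31 + value.charCodeAt(i)) | 0;
    return h;
  }
  if (Array.isArray(value)) {
    // order matters: a different order gives a different hash
    return value.reduce((h, v) => (h * 31 + hashValue(v)) | 0, 0);
  }
  if (value && typeof value === 'object') {
    // sort properties by name so key order does not change the hash
    return Object.keys(value).sort()
      .reduce((h, k) => ((h * 31 + hashValue(k)) * 31 + hashValue(value[k])) | 0, 0);
  }
  return 0; // null, undefined, booleans etc. left as an exercise
}

const id = String(hashValue(data) >>> 0) would then give a short, stable id, at the cost of rare collisions that you'd have to accept or detect.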
Use a library for the hash
I googled javascript hash json and found this: https://www.npmjs.com/package/json-hash which looks like what you want:
npm install json-hash

// If you're not on babel use:
// require('babel/polyfill')
var assert = require('assert')
var hash = require('json-hash')
// hash.digest(any, options?)
assert.equal(hash.digest({foo:1,bar:1}), hash.digest({bar:1,foo:1}))
Storage
For the storage of the JSON data, if you really need it, use a compression algorithm such as LZString. You could also filter the JSON and only keep the values you really need.
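A small sketch with the lz-string package (assuming its compressToUTF16/decompressFromUTF16 helpers), in case you do end up storing the JSON itself rather than a hash:

const LZString = require('lz-string'); // npm install lz-string

const json = JSON.stringify({ workouts: [{ name: 'squat' }], movements: [] });
const compressed = LZString.compressToUTF16(json); // typically shorter than the raw JSON for repetitive data
const restored = JSON.parse(LZString.decompressFromUTF16(compressed));
console.log(compressed.length, restored);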
What are the possible values if a variable inside an interface is typed as Uint8Array?
typeorm/src/driver/sqljs/SqljsConnectionOptions.ts
https://github.com/typeorm/typeorm/blob/master/src/driver/sqljs/SqljsConnectionOptions.ts
/**
* Sql.js-specific connection options.
*/
export interface SqljsConnectionOptions extends BaseConnectionOptions {
/**
* A Uint8Array that gets imported when the connection is opened.
*/
readonly database?: Uint8Array;
}
I've already read MDN's article on Uint8Array, but it did not help.
EDIT: As you can see, a database name is required. Intuitively I would pass in the name of my database, but that is a string. So what does a database in Uint8Array format look like?
It is not a database name. It is a database. Reading up on what sql.js is will show you it is SQLite compiled into JavaScript through Emscripten, with an in-memory store. By default, it gives you a blank database, which will get forgotten when you stop using it; but you have the option of importing it from, or exporting it to, a Uint8Array, which is literally the byte-by-byte contents of your SQLite database file. Look at the sql.js readme to see many examples of how to get the database array (from upload, from XHR, from Node.js reading of a file...).
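For example, a rough sketch of feeding that option from a SQLite file on disk in Node (assuming TypeORM's createConnection and a hypothetical db.sqlite file; the exact setup may differ between TypeORM versions):

const fs = require('fs');
const { createConnection } = require('typeorm');

// read the raw bytes of an existing SQLite file and hand them to sql.js
const fileBuffer = fs.readFileSync('db.sqlite');

createConnection({
  type: 'sqljs',
  database: new Uint8Array(fileBuffer), // the byte-by-byte contents of the database file
  entities: [/* your entities */],
}).then(connection => {
  // query the in-memory copy of the database here
});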
The type of the data is going to be an array of 8-bit unsigned integers.
If you're not familiar with it, this link explains it well: http://ctp.mkprog.com/en/ctp/unsigned_8bit_integer/
"An 8-bit unsigned integer type is used to store only positive whole numbers. Its value range: from 0 to 255."
On SQL Server, the output is: 0x5C8C8AAFE7AE37EA4EBDF8BFA01F82B8
SELECT HASHBYTES('MD5', convert(varchar,getdate(),112)+'mytest#+')
In JavaScript, the output is: 5c8c8aafe7ae37ea4ebdf8bfa01f82b8
//to get Md5 Hash bytes
vm.getMd5Hashbytes = function () {
var currentDate = moment().format('YYYYMMDD');
var md5Hash = md5.createHash(currentDate + 'mytest#+');
return md5Hash;
}
(using the angular-md5 module)
Q: Can you tell me why this difference? SQL Server shows 0x as a prefix. Why?
This is purely a formatting issue. Both versions are producing an identical sequence of bytes. SQL Server and node just have different conventions when it comes to presenting these bytes in a human readable format.
You can get similar formatting by specifically telling SQL Server how to format your binary data:
declare @hashAsBinary varbinary(max)
declare @hashAsText char(32)
set @hashAsBinary = HASHBYTES('MD5', '20160818mytest#+')
set @hashAsText = LOWER(CONVERT(varchar(max), @hashAsBinary, 2))
select @hashAsText
Which outputs:
5c8c8aafe7ae37ea4ebdf8bfa01f82b8
See SQL Server converting varbinary to string
I am not sure how else to explain it, but it will take more space than a comment allows for, so I will post it as an answer.
Look at the source code that you are referencing. At the end (lines 210 and 212) you will see it converts the binary value to a hex string (and then to lower case, which does not matter unless you opt for a string comparison at the end). End result: your JavaScript library returns a representation using the type string, formatted as hex.
Your SQL function HASHBYTES, on the other hand, produces a varbinary-typed result (which is a different type than a string (varchar)).
So you have 2 different data types (each living in its own space, as you have not pulled one to the other). You never mention where you are doing the comparison, i.e. on the database, or are you pulling from the database into script? Either way, to do a comparison you need to convert one type so that you are either comparing 2 string types OR comparing 2 binary types. If you do not compare similar types you will get unexpected results or run-time exceptions.
If you are comparing using strings AND in JavaScript, then look at the library that you are referencing: it already has a call named wordToHex. Copy and paste it and reuse it to convert your SQL result to a string, and then do a string comparison (do not forget to compare case-insensitively, or also make it lower case).
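For example, a minimal Node sketch (using the built-in crypto module rather than angular-md5) that yields the same lowercase hex string as the SQL snippet above, so a plain string comparison works:

const crypto = require('crypto');

// same input as the SQL example: yyyymmdd date string + the secret suffix
const hashHex = crypto.createHash('md5')
  .update('20160818mytest#+')
  .digest('hex'); // lowercase hex, no 0x prefix

console.log(hashHex); // 5c8c8aafe7ae37ea4ebdf8bfa01f82b8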
Edit
The WebApi is a black box for me. It is a 3rd-party service. I just need to send the security token as mentioned above.
Assuming that the type accepted by that web API is byte[], appending 0x to your string in JavaScript and then sending it to the web API should work, as the web API will then translate the incoming parameter to a byte array and execute the comparison using the correct types. As this is a black box, there is no way to know for certain unless you either ask them whether the accepted type is indeed a byte array or test it.
I am creating an app where I sometimes need to allow users to generate some random strings. I would like to force that to be generated in the following format:
xxxx-xxxx-xxxx
Where "x" is some number [0-9] or character [A-Z]. What would the most efficient way to do this? When generated, I would also need to check does it already exist in database so I am a little bit worried about the time which it would take.
We can make it as simple as:
require("crypto").randomBytes(64).toString('hex')
You can use crypto library.
var crypto = require('crypto');
//function code taken from http://blog.tompawlak.org/how-to-generate-random-values-nodejs-javascript
function randomValueHex (len) {
return crypto.randomBytes(Math.ceil(len/2))
.toString('hex') // convert to hexadecimal format
.slice(0,len).toUpperCase(); // return required number of characters
}
var string = randomValueHex(4)+"-"+randomValueHex(4)+"-"+randomValueHex(4);
console.log(string);
Check these threads: Generate random string/characters in JavaScript
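Note that hex only covers 0-9 and A-F; if you really need the full [0-9A-Z] alphabet, a sketch using crypto.randomInt (available in recent Node versions) might look like this:

const crypto = require('crypto');

const ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';

function randomBlock(len) {
  let out = '';
  for (let i = 0; i < len; i++) {
    out += ALPHABET[crypto.randomInt(ALPHABET.length)]; // unbiased random index
  }
  return out;
}

const token = [randomBlock(4), randomBlock(4), randomBlock(4)].join('-');
console.log(token); // e.g. "3F8K-Z01Q-7HBN"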
You can check if the field exists in the database. If it does, just generate a new token, then check again. The probability of it existing is really low if you don't have a large user base. Hence, the probability of a long loop of checks is low as well.