I have seen OpenGL code written like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(1,1,1,1,1,-1);
glMatrixMode(GL_MODELVIEW);
and I have seen OpenGL code written like this:
glEnable(2896);
glDisable(3042);
Notice the numeric values in the glEnable() and glDisable() calls.
My real question is: does anyone have a link to a site that lists which numeric value corresponds to which mode or token? Like, what does glEnable(2896); actually mean?
The enum names and values live in gl.xml, off of the main spec registry page.
Given your example of glEnable(2896):
2896 in hex is 0x0B50
Searching gl.xml for that value lands you on
<enum value="0x0B50" name="GL_LIGHTING"/>
which, as you can see, corresponds to GL_LIGHTING.
The numbers are the numeric values for OpenGL tokens as defined by the OpenGL specification. You can find the specifications for the various OpenGL versions at http://www.opengl.org/registry/
The definitions of the tokens are written down, in a form usable by a C or C++ compiler, in the GL/gl.h header file.
However, it's strongly discouraged to use the numeric values directly, since in code they're just magic numbers.
Like, what does glEnable(2896); actually mean?
Just search GL/gl.h for the token that is defined to this value. The tokens are usually written in hexadecimal, so you have to convert the decimal representation first. Like this (using a *nix-style shell):
dw ~ % grep $(printf '%X' 2896) /usr/include/GL/gl.h
#define GL_LIGHTING 0x0B50
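If you'd rather do the decimal-to-hex conversion programmatically while reading unfamiliar code, a small JavaScript sketch works too. The token table here is purely illustrative (two entries copied from gl.xml), not a complete list:

```javascript
// Convert a decimal GL enum value to the zero-padded hex form used in gl.xml / GL/gl.h.
function glEnumToHex(value) {
  return '0x' + value.toString(16).toUpperCase().padStart(4, '0');
}

// A tiny illustrative lookup table (values taken from gl.xml).
const GL_TOKENS = {
  '0x0B50': 'GL_LIGHTING',
  '0x0BE2': 'GL_BLEND',
};

console.log(glEnumToHex(2896));            // "0x0B50"
console.log(GL_TOKENS[glEnumToHex(2896)]); // "GL_LIGHTING"
console.log(GL_TOKENS[glEnumToHex(3042)]); // "GL_BLEND"
```

So glEnable(2896) is glEnable(GL_LIGHTING) and glDisable(3042) is glDisable(GL_BLEND).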
I'm trying to use the "Change User's Password" extended operation, as defined in this RFC which states that it takes a sequence of three optional parameters. However, it seems that ldapjs's client.exop() function only allows me to provide it with a string or buffer.
This is my attempt:
const dn = `uid=${username},ou=People,dc=${orginization},dc=com`
client.exop('1.3.6.1.4.1.4203.1.11.1', [dn, null, newPassword], (err, value, res) => {
// ...
})
And this is the resulting error:
TypeError: options.requestValue must be a buffer or a string
How am I supposed to encode those values into a string? The ldapjs documentation gives very little information about passing parameters to an extended operation.
TL;DR: The extended operation parameters need to be ASN.1 values encoded with the BER standard. This isn't a trivial task, so you might want an additional npm library, such as asn1, to help with this process.
After combing through ldapjs's code, reading up a bunch about ASN.1 and how LDAP uses the ASN.1 standard, and some trial and error, I was finally able to resolve this issue. Because of the distinct lack of documentation for this, I thought I would share what I learned on Stack Overflow so others don't need to go through as much trouble as I did.
A working example
This uses the asn1 npm library to encode the data being sent.
const { Ber } = require('asn1')
// ...
const CTX_SPECIFIC_CLASS = 0b10 << 6
const writer = new Ber.Writer()
writer.startSequence()
writer.writeString(dn, CTX_SPECIFIC_CLASS | 0) // sequence item number 0
// I'm choosing to omit the optional sequence item number 1
writer.writeString(newPassword, CTX_SPECIFIC_CLASS | 2) // sequence item number 2
writer.endSequence()
client.exop('1.3.6.1.4.1.4203.1.11.1', writer.buffer, (err, value, res) => {
// ...
})
What is ASN.1?
ASN.1 is a language that's used to describe an interface for an object. These interfaces are special in that they are language-agnostic: JavaScript can create an object that conforms to one of these interfaces, encode it, and send it to a Python server, which then decodes and validates the object against the same interface. Much of ASN.1 is not relevant to what we're trying to accomplish, but it's important to note that what we're trying to do is make an object that conforms to one of these ASN.1 interfaces (LDAP is built all around them).
What is BER?
BER describes a standard way to represent an object that conforms to an ASN.1 interface. Using the BER standard, we can encode javascript data into a buffer that can be understood by an LDAP server.
BER basics
BER is designed to be a very compact encoding standard. I'll go over the basics here, but I highly recommend this article if you want to get into more details about the binary representation of BER (it's tailored to LDAP users). A Layman's Guide to a Subset of ASN.1, BER, and DER is another great resource.
ASN.1 describes a number of basic object types, such as strings and numbers, and it describes structured object types such as sequences and sets. It also lets users define their own custom types.
In BER, each piece of data is (usually) prefixed by two bytes: an identifier byte and a length-of-data byte. The identifier byte tags the data with information about the kind of data it contains (a string? a sequence? a custom type?). There are four "classes" of tags: universal (such as a string), application (LDAP defines some application tags which you might encounter), context-specific (see the "BER Sequences" section below), and private (not likely applicable here). A universal string tag's bit sequence will always be interpreted as a string tag, but the meaning of the bit sequence of a custom tag may vary with the environment, or even within a request.
In the asn1 npm library, you can write out a string element as follows:
writer.writeString('text')
To find all of the available functions, the author of this library asks that you peek into the source code.
BER Sequences
A sequence is used to describe an object (a set of key-value pairs) with a particular shape. Some elements may be optional while others are required. The RFC I was following gave the following description for its parameters. We need to conform to this sequence's interface in order to send our password reset parameters to LDAP.
PasswdModifyRequestValue ::= SEQUENCE {
userIdentity [0] OCTET STRING OPTIONAL
oldPasswd [1] OCTET STRING OPTIONAL
newPasswd [2] OCTET STRING OPTIONAL }
The [0], [1], and [2] all refer to context-specific tag numbers. A value tagged with the context-specific tag of 1 will be interpreted as the value for the oldPasswd argument. We don't need to use the universal string tag to indicate that our value is of type string; LDAP can already infer that information from the interface we're conforming to. This means that when writing a string in this sequence, instead of doing writer.writeString('text') as done before (which automatically used the universal string tag), a tag number must be provided as follows:
const CTX_SPECIFIC_CLASS = 0b10 << 6
writer.writeString(newPassword, CTX_SPECIFIC_CLASS | 2) // The second optional parameter allows you to set a custom tag on the data being set (instead of the default string tag).
The first two bits of the tag byte are reserved for specifying the tag class (in this case, the context-specific class, bits "10"). So CTX_SPECIFIC_CLASS | 2 refers to the newPasswd sequence item described by the RFC. Note that if I want to omit an optional sequence entry, I simply don't write out a value tagged with that tag number.
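To make the tag arithmetic concrete, here is a hand-rolled sketch of the PasswdModifyRequestValue encoding, assuming all lengths stay below 128 bytes so short-form BER lengths suffice (the asn1 library handles the general case for you; this is only to show which bytes come out):

```javascript
const CTX_SPECIFIC_CLASS = 0b10 << 6; // 0x80

// Encode one context-specific, primitive element: identifier byte, length byte, content bytes.
function berContextString(tagNumber, text) {
  const bytes = Array.from(text, c => c.charCodeAt(0));
  return [CTX_SPECIFIC_CLASS | tagNumber, bytes.length, ...bytes];
}

// Encode the whole PasswdModifyRequestValue SEQUENCE, omitting oldPasswd ([1]).
function encodePasswdModify(userIdentity, newPasswd) {
  const content = [
    ...berContextString(0, userIdentity), // [0] userIdentity
    ...berContextString(2, newPasswd),    // [2] newPasswd
  ];
  return [0x30, content.length, ...content]; // 0x30 = universal SEQUENCE tag
}

console.log(encodePasswdModify('u', 'p'));
// → [48, 6, 128, 1, 117, 130, 1, 112], i.e. hex 30 06 80 01 75 82 01 70
```

You can see the identifier bytes 0x80 (CTX_SPECIFIC_CLASS | 0) and 0x82 (CTX_SPECIFIC_CLASS | 2) in the output, with 0x81 absent because the optional oldPasswd was omitted.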
Concluding Remarks
Hopefully this should give readers enough information to be able to format and send BER-encoded parameters for an extended LDAP operation. I do want to note that I am no ASN.1/BER expert - all of this information above is just how I understood these concepts from my own research over the past couple of days. So, there are likely a few things mis-explained in this post. Feel free to edit it if you happen to be more knowledgeable than me about this topic.
The situation:
My sensor measures data that I process in a NodeRED function and afterwards parse into a JSON object. The NodeRED function allows me to write JavaScript code. The JSON object gets sent to a receiving module, written in C++, that works on the JSON with the JSON Spirit library. I cannot change the receiving module.
The problem: The receiving app tries to get one value of the JSON with the function value.get_float(). The sensors sometimes measure an exact 1.00. That gets passed to the JSON as {"value":1}. The receiving module terminates with the error:
terminate called after throwing an instance of 'std::runtime_error'
what(): get_value< real > called on integer Value
Obviously, the function value.get_float() is not able to turn a 1 into a 1.0, and, as mentioned, I cannot change the function being used. So, I need to find a way to get {"value":1.00} into the JSON.
What I have tried:
I tried value.toFixed(2) in my NodeRED function, but this returns a string: {"value":"1.00"}.
So, I tried to parse the string back into a float like this:
value = value.toFixed(2);
value = parseFloat(value);
But for a 1.00 this again leads to a JSON like this: {"value":1}.
I tried some tricks with rounding as well, but whenever JavaScript can omit unnecessary decimals, it does. So, I haven't found a solution yet.
Any ideas are welcome.
P.S.: This is my first time ever StackOverflow question so please do not be too harsh on me :)
Edit: I found the following workaround.
I use value.toFixed(2); in a first node to get {"value":"1.00"}. Later on, I use a regular expression on the string in a change node in NodeRED.
RegEx:
"Value":\"(\d+\.\d{2})\"
Replace with:
"Value":$1
My real case was a bit more complex than the example, so the regex was a little longer. But regex101 helped a lot.
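For reference, the same quote-then-strip trick can be expressed directly in a NodeRED function node. The function name here is illustrative; the regex is the one from the workaround above:

```javascript
// Serialize the number as a quoted "1.00" string first (via toFixed),
// then strip the quotes with a regex so the JSON text keeps the trailing decimals.
function forceTwoDecimals(jsonText) {
  return jsonText.replace(/"Value":"(\d+\.\d{2})"/g, '"Value":$1');
}

const quoted = '{"Value":"1.00"}';
console.log(forceTwoDecimals(quoted)); // {"Value":1.00}
```

Note this only works because the result is treated as raw text from then on; parsing it back with JSON.parse would collapse 1.00 to 1 again.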
I think this question has already been asked here:
Force float value when using JSON.stringify
So I think in JavaScript there is no difference between 1 and 1.0.
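That's easy to confirm: JavaScript has a single number type, so 1 and 1.0 are the same value, and JSON.stringify always emits the shortest representation:

```javascript
console.log(1 === 1.0);                       // true: one number type
console.log(JSON.stringify({ value: 1.00 })); // {"value":1}
console.log((1.00).toFixed(2));               // "1.00", but this is a string
```

Which is why any round trip through a number (parseFloat, rounding, etc.) loses the trailing decimals again.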
On SQL Server the output is 0x5C8C8AAFE7AE37EA4EBDF8BFA01F82B8:
SELECT HASHBYTES('MD5', convert(varchar,getdate(),112)+'mytest#+')
In JavaScript the output is 5c8c8aafe7ae37ea4ebdf8bfa01f82b8:
//to get Md5 Hash bytes
vm.getMd5Hashbytes = function () {
var currentDate = moment().format('YYYYMMDD');
var md5Hash = md5.createHash(currentDate + 'mytest#+');
return md5Hash;
}
(using the angular-md5 module)
Q: Can you tell me why there is this difference? Why does SQL Server show a 0x prefix?
This is purely a formatting issue. Both versions are producing an identical sequence of bytes. SQL Server and node just have different conventions when it comes to presenting these bytes in a human readable format.
You can get similar formatting by specifically telling SQL Server how to format your binary data
declare @hashAsBinary varbinary(max)
declare @hashAsText char(32)
set @hashAsBinary = HASHBYTES('MD5', '20160818mytest#+')
set @hashAsText = LOWER(CONVERT(varchar(max), @hashAsBinary, 2))
select @hashAsText
Which outputs:
5c8c8aafe7ae37ea4ebdf8bfa01f82b8
See SQL Server converting varbinary to string
I am not sure how else to explain it, but it will take more space than a comment allows for, so I will post it as an answer.
Look at the source code that you are referencing. At the end (lines 210 and 212) you will see it converts the binary value to a hex string (and then to lower case, which does not matter unless you opt for a string comparison at the end). End result: your JavaScript library returns a representation of type string, formatted as hex.
Your SQL function HASHBYTES, on the other hand, produces a varbinary-typed result (which is a different type than a string (varchar)).
So you have two different data types. You never mention where you are doing the comparison: in the database, or after pulling the value from the database into script? Either way, to do a comparison you need to convert one side, so that you are either comparing two string types or comparing two binary types. If you do not compare similar types you will get unexpected results or run-time exceptions.
If you are comparing using strings, and in JavaScript, then look at the library you are referencing: it already has a function named wordToHex. Copy, paste, and reuse it to convert your SQL result to a string, and then do a string comparison (do not forget to compare case-insensitively, or lower-case both sides).
Edit
The Web API is a black box for me. It is a 3rd-party service. I just need to send the security token as mentioned above.
Assuming the type accepted by that Web API is byte[], appending 0x to your string in JavaScript before sending it should work, as the Web API will then interpret the incoming parameter as a byte array and execute the comparison using the correct types. As this is a black box, there is no way to know for certain unless you either ask them whether the accepted type is indeed a byte array, or test it.
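If you end up comparing as strings on the JavaScript side instead, normalizing both representations first avoids the prefix and casing mismatch. A sketch, assuming the SQL value arrives as a '0x…' string:

```javascript
// Strip an optional 0x prefix and lower-case, so both hex spellings compare equal.
function normalizeHex(hex) {
  return hex.replace(/^0x/i, '').toLowerCase();
}

const fromSql = '0x5C8C8AAFE7AE37EA4EBDF8BFA01F82B8';
const fromJs  = '5c8c8aafe7ae37ea4ebdf8bfa01f82b8';
console.log(normalizeHex(fromSql) === normalizeHex(fromJs)); // true
```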
I am currently working on a "crowdsourced" average value calculator. The idea is to show a picture to people and ask them to guess the age of the person. Once they entered the value, I want to show them the average age the person was given.
Here is what I want to do exactly :
Put a form online and ask people to put on a value
Store the data entered
Return the mean value people put there
Calculate the standard deviation, so that people cannot put in a value that is too high or too low compared to what others put. This way the average value shown will be more accurate.
I am looking for the fastest way to do it, and here is what I thought about :
Store the data in an SQL table and return the mean value through the AVG() function... but then, how would I calculate the standard deviation?
Store the data in a txt file and use JavaScript to convert it to an array and do the calculations.
But if I get something like 20,000 different values, might either way be slow?
I am quite a beginner in programming and what I propose might seem ridiculous... feel free to tell me!
Thank you all.
SQL Server has STDEV (from 2005 onwards), so SQL sounds like a good fit for you.
Returns the statistical standard deviation of all values in the
specified expression. May be followed by the OVER clause.
Syntax
STDEV ( [ ALL | DISTINCT ] expression )
Arguments
ALL
Applies the function to all values. ALL is the default.
DISTINCT
Specifies that each unique value is considered.
expression
Is a numeric expression. Aggregate functions and subqueries are not permitted. expression is an expression of the exact numeric or approximate numeric data type category, except for the bit data type.
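If you go the JavaScript route instead (your second option), the same figure is only a few lines. SQL Server's STDEV computes the sample standard deviation, dividing by n − 1, so this sketch does the same:

```javascript
// Sample standard deviation, matching SQL Server's STDEV (n - 1 denominator).
function stdev(values) {
  const n = values.length;
  if (n < 2) return null; // STDEV needs at least two values
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const sumSq = values.reduce((a, b) => a + (b - mean) ** 2, 0);
  return Math.sqrt(sumSq / (n - 1));
}

console.log(stdev([2, 4])); // 1.4142135623730951 (the square root of 2)
```

And 20,000 values is nothing for either approach; a single pass over an array of that size is effectively instant.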
I've used JavaScript to get data over the internet (I'm interfacing with a brokerage firm's API functions), but unlike most of the rest of their APIs, this one returns the data in a 'binary'-like format. Here is the layout of the file I get back:
Field            Type     Length (8-bit bytes)   Description
Symbol Count     Integer  4                      Number of symbols for which data is being returned. The subsequent sections are repeated this many times.

REPEATING SYMBOL DATA
  Symbol Length  Short    2                      Length of the Symbol field
  Symbol         String   Variable               The symbol for which the historical data is returned
  Error Code     Byte     1                      0=OK, 1=ERROR
  Error Length   Short    2                      Only returned if Error Code=1. Length of the Error string
  Error Text     String   Variable               Only returned if Error Code=1. The string describing the error
  Bar Count      Integer  4                      Number of chart bars; only returned if Error Code=0

  REPEATING PRICE DATA
    Close        Float    4
    High         Float    4
    Low          Float    4
    Open         Float    4
    Volume       Float    4                      In 100's
    Timestamp    Long     8                      Time in milliseconds from 00:00:00 UTC on January 1, 1970
  END OF REPEATING PRICE DATA

  Terminator     Bytes    2                      0xFF, 0xFF
END OF REPEATING SYMBOL DATA
As you can see, this file is a mixture of different types of fields. My requirement is to convert this file from the way it is into a fixed-field text file (or CSV file). I'm not very good at JavaScript, but I know enough to get by. My main language is MAPPER from Unisys (it is actually called "Business Information Server"). Currently I get all HTTP responses as text files, but this one is a 'binary' file, and MAPPER cannot process it because it is a text-based language (a 4GL). I've spent days trying to find a snippet of JavaScript code that I could use, but to no avail. I think this is really simple stuff for someone who knows JavaScript.
I'm a fellow UNISYS programmer. 25 years of FORTRAN 77 on a 2200 mainframe. Happily, I rarely had anything to do with MAPPER.
I'd like to help, but you're not providing enough information.
Where is this JavaScript code running? In a browser, or is it an extension to whatever you're using to access MAPPER?
Are you using some kind of terminal emulator? AttachMate?
Is your data really arriving in a file, or is it in memory? How are you receiving it, how are you passing on the contents?
Is it vital that your processing happen in JavaScript? There are dozens of languages that would make very short work of the task if the data were lying around as a file and the output should be a file too.
One problem I see is that, AFAIK, JavaScript doesn't know about file IO. That's why I'm asking where it's running.
EDIT:
OK, somehow you have a browser-like environment and JavaScript running in it.
First, the problem of getting binary data out of your response. Here's a bit of help:
https://developer.mozilla.org/en/using_xmlhttprequest
This is Mozilla documentation, under "Receiving binary data," but I'm hoping there will be enough overlap for it to be useful:
function load_binary_resource(url) {
  var req = new XMLHttpRequest();
  req.open('GET', url, false);
  // XHR binary charset opt by Marcus Granado 2006 [http://mgran.blogspot.com]
  req.overrideMimeType('text/plain; charset=x-user-defined');
  req.send(null);
  if (req.status != 200) return '';
  return req.responseText;
}
The above lets you fiddle with the connection a bit to hopefully obtain binary data.
That function is called like so:
var filestream = load_binary_resource(url);
var abyte = filestream.charCodeAt(x) & 0xff;
...and if I understand this correctly, your responseText is a JavaScript string (as usual) but thanks to the fiddling and the binary content, it's not containing printable text but binary data. Heh, as long as you don't try to interpret it, it's just a series of bytes just like any old text.
The second line lets you extract a single byte from any position in the string. Thanks to the & 0xff mask, that byte will always be a value between 0 and 255, so you don't have to worry about signed bytes.
This may look like it's doing you a lot of no good. Let's see how you could get to your data:
Your layout contains a short called Symbol Length. I'm guessing a short is 2 bytes, and that offsets for charCodeAt() begin at 0, so if that field started the stream you'd be wanting the first two bytes, bytes 0 and 1. I'm not sure whether your data will be coming in big-endian or little-endian, but you should be able to reconstruct that short from either
var symbolLength = fileStream.charCodeAt(0) + 256 * fileStream.charCodeAt(1);
or
var symbolLength = 256 * fileStream.charCodeAt(0) + fileStream.charCodeAt(1);
In other words, using multiplication to re-assemble bytes into integers.
Integers are presumably 4 bytes, so you'll be multiplying by 4 powers of 256: 16777216, 65536, 256 and 1 - again, either in that order or reversed.
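Wrapped up as a helper, reading a big-endian integer out of the response string looks like the sketch below; flip the loop direction if your data turns out to be little-endian. The names are mine, not part of any API:

```javascript
// Read an n-byte big-endian unsigned integer starting at `offset`
// from a binary string obtained via the x-user-defined trick.
function readUIntBE(stream, offset, nBytes) {
  var value = 0;
  for (var i = 0; i < nBytes; i++) {
    value = value * 256 + (stream.charCodeAt(offset + i) & 0xff);
  }
  return value;
}

var stream = String.fromCharCode(0x00, 0x00, 0x01, 0x2c); // 300, big-endian
console.log(readUIntBE(stream, 0, 4)); // 300, read as a 4-byte integer
console.log(readUIntBE(stream, 2, 2)); // 300, read as a 2-byte short
```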
The String data is, of course, just that, and once you've taken into account the number of bytes taken up by the preceding fields, you should be able to dig it out of your response string simply using substring operators.
Now for the yucky part: conversion of floats. The internal structure of those numbers is defined by IEEE 754. float probably corresponds to binary32 and double (if you have any) to binary64. The links from the Wikipedia article I linked explain these formats well enough that you could write your own conversion routine if you were desperate, but in your shoes I'd go looking for ready-built code for this. Surely you're not the first person faced with converting a handful of bytes into a floating-point number. Maybe you can find some C or Java code you could hand-convert, or you could even find a routine already written in JavaScript.
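In a reasonably modern JavaScript environment you can let a DataView do the IEEE 754 work instead of hand-converting. A sketch (note that browsers contemporary with this answer may lack typed arrays):

```javascript
// Decode a 4-byte big-endian IEEE 754 float from a binary string.
function readFloatBE(stream, offset) {
  var buf = new ArrayBuffer(4);
  var bytes = new Uint8Array(buf);
  for (var i = 0; i < 4; i++) {
    bytes[i] = stream.charCodeAt(offset + i) & 0xff;
  }
  return new DataView(buf).getFloat32(0, false); // false = big-endian
}

var floatStream = String.fromCharCode(0x3f, 0xc0, 0x00, 0x00); // 1.5 as binary32
console.log(readFloatBE(floatStream, 0)); // 1.5
```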
Finally, once you have in hand methods to convert all the data types you mentioned, all you need to do is to format that data in whatever format you want to see downstream in MAPPER. Loop through the structures, incrementing the pointers for the offsets... probably nothing new for you.
Admittedly, I've done a lot of guessing and handwaving here. This could be the beginning of a solution but you'll probably want to do a bit of experimenting and hit SO with some detail questions. Don't mention UNISYS, phrase your question as if you wanted to do this in IE :)
As a first step, I'd try dumping out your incoming binary string, byte-wise, preferably in hex, to some medium where you can read it and compare the bytes you see with the bytes you're expecting from the input data.