Setting unsigned byte to ArrayBuffer - javascript

I know that you can do
const buffer = new ArrayBuffer(16);
const dataView = new DataView(buffer);
dataView.setUint8(1, 4)
console.log(dataView.getUint8(1)); // 4
However, I would like to set an unsigned byte before the dataView declaration line. Would it be possible to do this without having access to dataView, i.e. to set an unsigned byte of 4 at byte offset 1 on the ArrayBuffer itself instead of using dataView.setUint8(1, 4)?
Or, alternatively, would it be possible to convert a DataView back to an ArrayBuffer?

I think the important thing you're missing is that a DataView is just a view. So when you do dataView.setUint8(1, 4) you do modify the buffer: the dataView itself does not hold the data, just a reference to the buffer. So your code already does what you want. To get the ArrayBuffer back, just use the original buffer:
const buffer = new ArrayBuffer(16);
const dataView = new DataView(buffer);
dataView.setUint8(1, 4)
console.log(dataView.getUint8(1)); // 4
console.log(new Uint8Array(buffer)) // [ 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
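If you only have the ArrayBuffer and want to write a byte before (or without) creating the DataView, a plain Uint8Array view over the same buffer works too. A minimal sketch of that alternative:
const buffer = new ArrayBuffer(16);
// a Uint8Array view writes straight into the underlying buffer
const bytes = new Uint8Array(buffer);
bytes[1] = 4;
// a DataView created later sees the same data
const dataView = new DataView(buffer);
console.log(dataView.getUint8(1)); // 4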

Related

Merging Audio files with Javascript

I have an issue that I have spent forever trying to figure out and I can't get it. I want to take two audio files and merge them into one blob so that they play at the exact same time. I don't know much about how audio works, so I'm kind of shooting in the dark. My first idea (the code is below) was to create arrays of the decimal values of the two audio files, add the values at each position together and divide by two, then push these new values into an array that would be turned into a blob and played. This failed, however; it played a really horrible squeaking sound.
function mergeAudio(){
var length;
const mergedAudio = []
//audioArray1 and audioArray2 are just arrays of the decimal values of the two audio files
// setting the length of the merged audio
if(audioArray1.length < audioArray2.length){
length = audioArray1.length
}else{
length = audioArray2.length
}
//merging bytes and pushing them to a new array
for(var i = 0; i < length; i++){
var byte = audioArray2[i] + audioArray1[i]
byte = byte / 2
if(byte <= 0){
byte = 0
}
mergedAudio.push(byte)
}
//create Audio and play it
const arrayBuffer = new Uint8Array(mergedAudio)
const audioBlob = new Blob(arrayBuffer);
const audioUrl = URL.createObjectURL(audioBlob);
const audio = new Audio(audioUrl);
audio.play().then(function(){}).catch(function(error){
console.log(error)
})
}
Like I said, this did not work. I tried subtracting values from each byte and other methods to see what the outcome would be so I could figure out what I was doing wrong, but for some reason every other method I try (besides the one in the code above) leaves an error message:
"DOMException: Failed to load because no supported source was found."
If anyone is savvy with audio or knows why I am getting this error, or if anyone knows another method to merge audio, it would be greatly appreciated!!!
Edit: I am getting my data from 5-second .mp3 files and am using the code below to create an array of the data:
var audioArray1;
input.addEventListener('change', () =>{
const fileReader = new FileReader()
fileReader.onload = function(event) {
const arrayBuffer = event.target.result
const buffer = new Uint8Array(arrayBuffer)
const array = []
for(var i = 0; i < buffer.length; i++){
array.push(buffer[i])
}
audioArray1 = array
}
//uses audio file that user uploaded and assigns to file reader
fileReader.readAsArrayBuffer(input.files[0]);
})
If I log the array, it shows something like this:
(74925) [73, 68, 51, 4, 0, 0, 0, 0, 0, 35, 84, 83, 83, 69, 0, 0, 0, 15, 0, 0, 3, 76, 97, 118, 102, 53, 55, 46, 53, 54, 46, 49, 48, 49, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 251, 180, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 110, 102, 111, 0, 0, 0, 15, 0, 0, 0, 129, 0, 1, 36, 128, 0, 5, 7, …]
A few things to be aware of:
(1) You need to deal with PCM values, not raw bytes. A PCM value is assembled from the bytes. For example, if the format is 16-bit, one of the two bytes is shifted 8 bits over and the two bytes are then OR'd together to form a single 16-bit value. The order of the two bytes depends on whether the format is little-endian or big-endian.
(2) Once you have the PCM, mixing the tracks together is accomplished via addition.
I don't know what playback format you are using; there may be another formatting step. The only time I've played raw PCM was when using the Web Audio API.
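Since the files here are .mp3, the bytes coming out of the FileReader are compressed data, not PCM, so they have to be decoded before any mixing. A rough sketch of one way to do that with the Web Audio API (the function name mergeAndPlay and the two ArrayBuffer arguments are placeholders, and it assumes both files share the same sample rate):
async function mergeAndPlay(arrayBuffer1, arrayBuffer2) {
  // must usually be created/resumed after a user gesture
  const ctx = new AudioContext();
  // decode the compressed mp3 bytes into PCM AudioBuffers
  const [buf1, buf2] = await Promise.all([
    ctx.decodeAudioData(arrayBuffer1),
    ctx.decodeAudioData(arrayBuffer2),
  ]);
  const length = Math.min(buf1.length, buf2.length);
  const channels = Math.min(buf1.numberOfChannels, buf2.numberOfChannels);
  const mixed = ctx.createBuffer(channels, length, buf1.sampleRate);
  for (let c = 0; c < channels; c++) {
    const a = buf1.getChannelData(c);
    const b = buf2.getChannelData(c);
    const out = mixed.getChannelData(c);
    // mix by addition; divide by 2 to keep the result within [-1, 1]
    for (let i = 0; i < length; i++) {
      out[i] = (a[i] + b[i]) / 2;
    }
  }
  const source = ctx.createBufferSource();
  source.buffer = mixed;
  source.connect(ctx.destination);
  source.start();
}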

GLTF index count same as buffer size error

I am working on learning WebGL and having a great time! I decided to use glTF as the 3d format for this project. I have it working well, with one weird exception. When the index count is low (say a simple triangulated cube), the index count equals the index buffer size. This can't be right. In every other model I have, the index count is 1/2 the size of the buffer.
This causes render errors like "Error: WebGL warning: drawElements: Index buffer too small." Below is the relevant code.
Renderable Constructor:
constructor(type,indexCount,vertBuffer,indexBuffer,uvBuffer,normalBuffer,modelMatrix){
this.type = type;
this.indexCount = indexCount;
this.name = "NONE";
this.vertBuffer = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, this.vertBuffer);
GL.bufferData(GL.ARRAY_BUFFER, vertBuffer, GL.STATIC_DRAW);
GL.bindBuffer(GL.ARRAY_BUFFER, null);
this.uvBuffer = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, this.uvBuffer);
GL.bufferData(GL.ARRAY_BUFFER, uvBuffer, GL.STATIC_DRAW);
GL.bindBuffer(GL.ARRAY_BUFFER, null);
this.indexBuffer = GL.createBuffer();
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.indexBuffer);
GL.bufferData(GL.ELEMENT_ARRAY_BUFFER, indexBuffer, GL.STATIC_DRAW);
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, null);
this.normalBuffer = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, this.normalBuffer);
GL.bufferData(GL.ARRAY_BUFFER, normalBuffer, GL.STATIC_DRAW);
GL.bindBuffer(GL.ARRAY_BUFFER, null);
this.matrix = modelMatrix;
this.witMatrix = mat4.create();
this.textures = [];
//Create defaults
this.setTexture(new dTexture(TEX.COLOR,"res/missingno.png"));
this.setTexture(new dTexture(TEX.LIGHT,"res/rawLight.png"));
}
GLTF to "Renderable"
static fromGLTF(type,gltf){
console.log("GLTF: loading "+gltf.nodes[0].name);
return new Renderable(type,
gltf.nodes[0].mesh.primitives[0].indicesLength,
gltf.nodes[0].mesh.primitives[0].attributes.POSITION.bufferView.data,
gltf.accessors[gltf.nodes[0].mesh.primitives[0].indices].bufferView.data,
gltf.nodes[0].mesh.primitives[0].attributes.TEXCOORD_0.bufferView.data,
gltf.nodes[0].mesh.primitives[0].attributes.NORMAL.bufferView.data,
gltf.nodes[0].matrix);
}
Here is the rendering code (it's not very pretty, but included for completeness):
render(){
GL.viewport(0.0,0.0,this.canvas.width,this.canvas.height);
GL.clear(GL.COLOR_BUFFER_BIT | GL.DEPTH_BUFFER_BIT);
this.renderables.forEach(renderable => {
//mat4.identity(renderable.witMatrix);
mat4.invert(renderable.witMatrix,renderable.matrix);
mat4.transpose(renderable.witMatrix,renderable.witMatrix);
GL.useProgram(this.programs[renderable.type].program);
GL.uniformMatrix4fv(this.programs[renderable.type].pMatrix, false, this.projectionMatrix);
GL.uniformMatrix4fv(this.programs[renderable.type].vMatrix, false, this.viewMatrix);
GL.uniformMatrix4fv(this.programs[renderable.type].mMatrix, false, renderable.matrix);
GL.enableVertexAttribArray(this.programs[renderable.type].positon);
GL.bindBuffer(GL.ARRAY_BUFFER,renderable.vertBuffer);
GL.vertexAttribPointer(this.programs[renderable.type].positon, 3, GL.FLOAT, false,0,0);
GL.enableVertexAttribArray(this.programs[renderable.type].uv);
GL.bindBuffer(GL.ARRAY_BUFFER,renderable.uvBuffer);
GL.vertexAttribPointer(this.programs[renderable.type].uv, 2, GL.FLOAT, false,0,0);
if(renderable.type == SHADER.STATIC){
GL.uniform1i(this.programs[renderable.type].colorPos, 0); // texture unit 0
GL.activeTexture(GL.TEXTURE0);
GL.bindTexture(GL.TEXTURE_2D, renderable.textures[TEX.COLOR].data);
GL.uniform1i(this.programs[renderable.type].lightPos, 1); // texture unit 1
GL.activeTexture(GL.TEXTURE1);
GL.bindTexture(GL.TEXTURE_2D, renderable.textures[TEX.LIGHT].data);
}else if(renderable.type == SHADER.DYNAMIC){
GL.uniform1i(this.programs[renderable.type].colorPos, 0); // texture unit 0
GL.activeTexture(GL.TEXTURE0);
GL.bindTexture(GL.TEXTURE_2D, renderable.textures[TEX.COLOR].data);
GL.enableVertexAttribArray(this.programs[renderable.type].normalPos);
GL.bindBuffer(GL.ARRAY_BUFFER,renderable.normalBuffer);
GL.vertexAttribPointer(this.programs[renderable.type].normalPos, 3, GL.FLOAT, false,0,0);
GL.uniformMatrix4fv(this.programs[renderable.type].witMatrix, false, renderable.witMatrix);
// set the light position
GL.uniform3fv(this.programs[renderable.type].lightPosPos, [
Math.sin(this.counter)*0.75,
Math.cos(this.counter)*0.75+1,
0
]);
this.counter+=this.dt*0.25;
}
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, renderable.indexBuffer);
GL.drawElements(GL.TRIANGLES,renderable.indexCount,GL.UNSIGNED_SHORT,0);
GL.activeTexture(GL.TEXTURE1);
GL.bindTexture(GL.TEXTURE_2D,this.nullLightmap.data);
});
GL.flush();
}
Any ideas?
When the index count is low (say a simple triangulated cube), the index count equals the index buffer size. This can't be right. In every other model I have, the index count is 1/2 the size of the buffer.
The size of the index buffer depends on the number of indices and the componentType.
See Accessor Element Size:
componentType Size in bytes
5120 (BYTE) 1
5121 (UNSIGNED_BYTE) 1
5122 (SHORT) 2
5123 (UNSIGNED_SHORT) 2
5125 (UNSIGNED_INT) 4
5126 (FLOAT) 4
The componentType specifies the data type of a single index. When the index values fit into a single byte (at most 256 vertices, as with a simple cube), the exporter can use UNSIGNED_BYTE, while UNSIGNED_SHORT or even UNSIGNED_INT is used when there are more vertices. If the type is UNSIGNED_BYTE, then of course the number of indices is equal to the size of the buffer in bytes.
Depending on the type of the element indices, you have to adapt the draw call, e.g. for GL.UNSIGNED_BYTE:
GL.drawElements(GL.TRIANGLES,renderable.indexCount,GL.UNSIGNED_BYTE,0);
Note that the componentType values (5120, 5121, ...), which may seem arbitrary, are the values of the OpenGL enumerator constants GL.BYTE, GL.UNSIGNED_BYTE, ...
I suggest passing the componentType to the constructor, as you do with the number of indices (indexCount):
constructor(
type,indexCount,componentType,
vertBuffer,indexBuffer,uvBuffer,normalBuffer,modelMatrix){
this.indexCount = indexCount;
this.componentType = componentType;
and to use it when drawing the geometry:
GL.drawElements(
GL.TRIANGLES,
renderable.indexCount,
renderable.componentType,
0);
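In fromGLTF, the componentType can then be read from the same accessor that provides the index data. This is a minimal sketch, assuming your loader exposes accessors the way the question's code does (componentType itself is a standard glTF accessor property):
static fromGLTF(type, gltf) {
    const primitive = gltf.nodes[0].mesh.primitives[0];
    const indexAccessor = gltf.accessors[primitive.indices];
    return new Renderable(type,
        primitive.indicesLength,
        indexAccessor.componentType, // 5121, 5123 or 5125 == GL.UNSIGNED_BYTE/SHORT/INT
        primitive.attributes.POSITION.bufferView.data,
        indexAccessor.bufferView.data,
        primitive.attributes.TEXCOORD_0.bufferView.data,
        primitive.attributes.NORMAL.bufferView.data,
        gltf.nodes[0].matrix);
}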

JavaScript TypedArray mixing types

I'm trying to use WebGL and would like to mix some different types into one buffer of bytes. I understand TypedArrays serve this purpose but it's not clear if I can mix types with them (OpenGL vertex data is often floats mixed with unsigned bytes or integers).
In my test I want to pack 2 floats into a Uint8Array using set(), but it appears to just place the 2 float values into the first 2 elements of the Uint8Array. I would expect this to fill the array, since we have 8 bytes of data.
Is there any way to achieve this in JavaScript, or do I need to keep all my vertex data as floats?
const src = new Float32Array(2); // 2 elements = 8 bytes
src[0] = 100;
src[1] = 200;
const dest = new Uint8Array(8); // 8 elements = 8 bytes
dest.set(src, 0); // insert src at offset 0
// dest = 100, 200, 0, 0, 0, 0, 0, 0 (only the first 2 elements are set)
You can mix types by making different views on the same buffer.
const asFloats = new Float32Array(2);
// create a uint8 view to the same buffer as the float32array
const asBytes = new Uint8Array(asFloats.buffer);
console.log(asFloats);
asBytes[3] = 123;
console.log(asFloats);
The way TypedArrays really work is that there is something called an ArrayBuffer, which is a certain number of bytes long. To view the bytes you need an ArrayBufferView, of which there are various types: Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array, Uint32Array, Float32Array, Float64Array.
You can create the ArrayBuffer from scratch.
const buffer = new ArrayBuffer(8);
const asFloats = new Float32Array(buffer);
asFloats[0] = 1.23;
asFloats[1] = 4.56;
console.log(asFloats);
Or you can do the more common thing, which is to create an ArrayBufferView of a specific type; if you don't pass a buffer into the constructor, it creates the ArrayBuffer for you as well. You can then access that buffer from someArrayBufferView.buffer, as shown in the first example above.
You can also give a view a byte offset into the ArrayBuffer and a length, so that it covers only part of the ArrayBuffer. Example:
// make a 16byte ArrayBuffer and a Uint8Array (ArrayBufferView)
const asUint8 = new Uint8Array(16);
// make a 1 float long view in the same buffer
// that starts at byte 4 in that buffer
const byteOffset = 4;
const length = 1; // 1 float32
const asFloat = new Float32Array(asUint8.buffer, byteOffset, length);
// show the buffer is all 0s
console.log(asUint8);
// set the float
asFloat[0] = 12345.6789
// show the buffer is affected at byte 4
console.log(asUint8);
// set a float out of range of its length
asFloat[1] = -12345.6789; // this is effectively a no-op
// show the buffer is NOT affected at byte 8
console.log(asUint8);
So if you want to, for example, mix float positions and uint8 colors for WebGL, you might do something like this:
// we're going to have
// X,Y,Z,R,G,B,A, X,Y,Z,R,G,B,A, X,Y,Z,R,G,B,A,
// where X,Y,Z are float32
// and R,G,B,A are uint8
const sizeOfVertex = 3 * 4 + 4 * 1; // 3 float32s + 4 bytes
const numVerts = 3;
const asBytes = new Uint8Array(numVerts * sizeOfVertex);
const asFloats = new Float32Array(asBytes.buffer);
// set the positions and colors
const positions = [
-1, 1, 0,
0, -1, 0,
1, 1, 0,
];
const colors = [
255, 0, 0, 255,
0, 255, 0, 255,
0, 0, 255, 255,
];
{
const numComponents = 3;
const offset = 0; // in float32s
const stride = 4; // in float32s
copyToArray(positions, numComponents, offset, stride, asFloats);
}
{
const numComponents = 4;
const offset = 12; // in bytes
const stride = 16; // in bytes
copyToArray(colors, numComponents, offset, stride, asBytes);
}
console.log(asBytes);
console.log(asFloats);
function copyToArray(src, numComponents, offset, stride, dst) {
const strideDiff = stride - numComponents;
let srcNdx = 0;
let dstNdx = offset;
const numElements = src.length / numComponents;
if (numElements % 1) {
throw new Error("src does not have an even number of elements");
}
for (let elem = 0; elem < numElements; ++elem) {
for(let component = 0; component < numComponents; ++component) {
dst[dstNdx++] = src[srcNdx++];
}
dstNdx += strideDiff;
}
}
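To feed such an interleaved buffer to WebGL, you upload asBytes once and point both attributes at the same buffer with the matching stride and byte offsets. A short sketch, assuming gl is your WebGL context and positionLoc/colorLoc are attribute locations you have already looked up:
const vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, asBytes, gl.STATIC_DRAW);
// positions: 3 floats starting at byte 0 of each 16-byte vertex
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 16, 0);
// colors: 4 unsigned bytes starting at byte 12, normalized to 0..1
gl.enableVertexAttribArray(colorLoc);
gl.vertexAttribPointer(colorLoc, 4, gl.UNSIGNED_BYTE, true, 16, 12);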

Convert uint8array to double in javascript

I have an ArrayBuffer and I want to get double values. For example, from [64, -124, 12, 0, 0, 0, 0, 0] I would get 641.5.
Any ideas?
You could adapt the excellent answer of T.J. Crowder and use DataView#setUint8 for the given bytes.
var data = [64, -124, 12, 0, 0, 0, 0, 0];
// Create a buffer
var buf = new ArrayBuffer(8);
// Create a data view of it
var view = new DataView(buf);
// set bytes
data.forEach(function (b, i) {
view.setUint8(i, b);
});
// Read the bits as a float/native 64-bit double
var num = view.getFloat64(0);
// Done
console.log(num);
For multiple numbers, you could take chunks of 8.
function getFloat(array) {
var view = new DataView(new ArrayBuffer(8));
array.forEach(function (b, i) {
view.setUint8(i, b);
});
return view.getFloat64(0);
}
var data = [64, -124, 12, 0, 0, 0, 0, 0, 64, -124, 12, 0, 0, 0, 0, 0],
i = 0,
result = [];
while (i < data.length) {
result.push(getFloat(data.slice(i, i + 8)));
i += 8;
}
console.log(result);
Based on the answer from Nina Scholz, I came up with a shorter version:
function getFloat(data /* Uint8Array */) {
return new DataView(data.buffer).getFloat64(0);
}
Or if you have a large array and know the offset:
function getFloat(data, offset = 0) {
return new DataView(data.buffer, offset, 8).getFloat64(0);
}
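For example, the bytes from the question can be fed in as a Uint8Array (note that -124 stored as an unsigned byte is 132, which is what those bytes actually represent):
const bytes = new Uint8Array([64, 132, 12, 0, 0, 0, 0, 0]);
console.log(getFloat(bytes)); // 641.5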

Draw .obj file with Three.js without native OBJLoader

I need to draw an .obj file without loading it through a loader. For example, I have an .obj file with the following content:
v 0.1 0.2 0.3
v 0.2 0.1 0.5
vt 0.5 -1.3
vn 0.7 0.0 0.7
f 1 2 3
I read this file, parse its content, and have the data in a JavaScript object.
{
v: [
{x: 0.1, y: 0.2, z: 0.3},
{x: 0.2, y: 0.1, z: 0.5}
],
vt: [
{u: 0.5, v: -1.3}
],
vn: [
{x: 0.7, y: 0.0, z: 0.7}
],
f: [
// ...
]
}
Next I need to draw this data with three.js. I read the documentation, but can't find any example or description of how to do it.
Is there a method for this purpose?
The first question is: why won't you use THREE.OBJLoader? The reason is not clear to me. There are so many different cases to handle when loading an .obj file that it's better to use THREE.OBJLoader.
If you can't use that, then:
My preferred way would be to create a THREE.BufferGeometry. We are going to create a THREE.BufferAttribute from the arrays of your JavaScript object, one THREE.BufferAttribute for each vertex attribute. We are also going to set the index buffer. Here is a function to do it:
function make_3D_object(js_object) {
// this assumes js_object.v, js_object.vt, js_object.vn and js_object.f
// have already been flattened into plain numeric arrays
let vertices = new Float32Array(js_object.v);
let uvs = new Float32Array(js_object.vt);
let normals = new Float32Array(js_object.vn);
// Uint8Array indices only work for meshes with at most 256 vertices;
// use Uint16Array or Uint32Array for larger meshes
let indices = new Uint8Array(js_object.f);
// this is to make it 0 indexed
for(let i = 0; i < indices.length; i++)
indices[i]--;
let geom = new THREE.BufferGeometry();
// note: addAttribute was renamed to setAttribute in newer three.js releases
geom.addAttribute('position', new THREE.BufferAttribute(vertices, 3));
geom.addAttribute('normal', new THREE.BufferAttribute(normals, 3));
geom.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
geom.setIndex(new THREE.BufferAttribute(indices, 1));
let material = new THREE.MeshPhongMaterial( {
map: js_object.texture, // assuming you have texture
color: new THREE.Color().setRGB(1, 1, 1),
specular: new THREE.Color().setRGB(0, 0,0 )
} );
let obj_mesh = new THREE.Mesh(geom, material);
return obj_mesh;
}
In this code I have assumed you have only a single body and a single material with just a texture. Also, this code is not tested.
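Since the question's object stores vertices as {x, y, z} objects rather than flat arrays, one flattening step is needed before calling make_3D_object. A small sketch under that assumption (parsedObj and scene are placeholders for your parsed data and your THREE.Scene):
function flatten(js_object) {
  return {
    v: js_object.v.flatMap(p => [p.x, p.y, p.z]),
    vt: js_object.vt.flatMap(p => [p.u, p.v]),
    vn: js_object.vn.flatMap(p => [p.x, p.y, p.z]),
    f: js_object.f, // already a plain array of 1-based indices
    texture: js_object.texture, // as assumed by make_3D_object
  };
}
scene.add(make_3D_object(flatten(parsedObj)));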
