GIF.js flashing on addFrame - javascript

I am using GIF.js and am trying to get rid of the flashing of the logo in the bottom left corner of the image (example attached below; it can also be reproduced at this link). I have confirmed that each frame passed to the GIF object is indeed the right color. I am using the following constructor:
var tempGIF = new GIF({
    workers: 4,
    quality: quality,
    height: this.mapHeight + this.infoCanvas.height,
    width: this.mapWidth,
    workerScript: 'gif.worker.js',
    globalPalette: true
});
and I add frames using gif.addFrame(composedCnv, { delay: delayInput }). Although I have experimented with the background and transparent options, I am unable to get rid of the flashing behaviour. Can anyone help me, please?
EDIT: Using globalPalette and a high quality value (20), I still get this weird pulsating of the logo, but the colors are consistent.
EDIT 2: I dug into the source code of GIF.js and found, at this link, the following piece of information:
/*
Sets quality of color quantization (conversion of images to the maximum 256
colors allowed by the GIF specification). Lower values (minimum = 1)
produce better colors, but slow processing significantly. 10 is the
default, and produces good color mapping at reasonable speeds. Values
greater than 20 do not yield significant improvements in speed.
*/
and indeed my flashing issue disappears if the quality is set to 1. I will leave this question open in case there is a workaround, and in case someone can explain why this quality issue affects only the logo and not the map portion of the canvas.
EDIT 3: Adding more code for context. I loop over the visible layers of an OpenLayers map and compose an HTMLCanvasElement, which I add as a frame to a GIF object. I have verified that the passed canvas elements are always correctly colored, and have isolated the issue to something that happens between adding them to the GIF and the GIF being rendered.
async createGIFHandler ( layer , addAllTitles , quality , delay , range ) {
    ...
    var tempGIF = new GIF({
        workers: 4,
        quality: quality,
        height: this.mapHeight + this.infoCanvas.height,
        width: this.mapWidth,
        workerScript: 'gif.worker.js'
    });
    let progressCounter = 1;
    for ( let i = range[0] ; i < range[1] ; i++, progressCounter++ ) {
        this.setDateTime( driver , driverDA[i] );
        for ( let j = 0 ; j < visibleLayers.length ; j++ ) {
            if ( visibleLayers[j].get('layerName') !== layer.Name ) {
                var tempDA = visibleLayers[j].get('layerDateArray');
                for ( let k = 0 ; k < tempDA.length ; k++ ) {
                    if ( driverDA[i].getTime() === tempDA[k].getTime() ) {
                        this.setDateTime( visibleLayers[j] , tempDA[k] );
                    }
                }
            }
        }
        await new Promise(resolve => this.map.once('rendercomplete', resolve));
        await this.composeCanvas( tempGIF , driverDA[i] , visibleLayers , delay , widths )
        this.$store.dispatch('Layers/setGIFPercent', Math.round(((progressCounter / gifLength) * 100)))
    }
    tempGIF.on('finished', (blob) => {
        const tempURL = URL.createObjectURL( blob )
        this.$store.dispatch( 'Layers/setGIFURL' , tempURL )
        console.log('GIF Finished');
    });
    tempGIF.render();
},
async composeCanvas( gif , timeStep , visibleLayers , delayInput , widths ) {
    const mapCnv = this.getMapCanvas();
    await this.updateInfoCanvas( timeStep , widths )
    const composedCnv = await this.stitchCanvases( mapCnv , visibleLayers.length );
    await new Promise((resolve) => {
        gif.addFrame(composedCnv, { delay: delayInput });
        resolve();
    })
},

gif.js computes the color palette per frame, but there's an undocumented option that lets you define a global palette.
To reuse the palette of the first frame, instantiate with:
new GIF({
    // ...
    globalPalette: true,
})
You can also pass in your own palette as a flat array of RGB values. E.g., to only assign black, grey and white, instantiate with:
new GIF({
    // ...
    globalPalette: [0, 0, 0, 127, 127, 127, 255, 255, 255],
})
I recommend you pick the latter option and make sure the logo's colors are included in the palette.
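For example, a minimal sketch of that latter approach; the logo color below is hypothetical, so replace it with values sampled from your actual logo pixels:
var palette = [
    0, 0, 0,        // black
    127, 127, 127,  // grey
    255, 255, 255,  // white
    0, 82, 156      // the logo's blue (hypothetical RGB values)
];
var tempGIF = new GIF({
    workers: 4,
    quality: quality,
    workerScript: 'gif.worker.js',
    globalPalette: palette   // fixed palette shared by every frame
});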

Related

Scaling a 2D SVG group object in three.js

I'm attempting to create a map of 2D SVG tiles in three.js. I have used SVGLoader() like so (keep in mind some brackets are for parent scopes that aren't shown; that is not the issue):
loader = new SVGLoader();
loader.load(
    // resource URL
    filePath,
    // called when the resource is loaded
    function ( data ) {
        console.log("SVG file successfully loaded");
        const paths = data.paths;
        for ( let i = 0; i < paths.length; i ++ ) {
            const path = paths[ i ];
            const material = new THREE.MeshBasicMaterial( {
                color: path.color,
                side: THREE.DoubleSide,
                depthWrite: false
            } );
            const shapes = SVGLoader.createShapes( path );
            console.log(`Shapes length = ${shapes.length}`);
            try {
                for ( let j = 0; j < shapes.length; j ++ ) {
                    const shape = shapes[ j ];
                    const geometry = new THREE.ShapeGeometry( shape );
                    const testGeometry = new THREE.PlaneGeometry( 2, 2 );
                    try {
                        const mesh = new THREE.Mesh( geometry, material );
                        group.add( mesh );
                    } catch (e) { console.log(e) }
                }
            } catch (e) { console.log(e) }
        }
    },
    // called when loading is in progress
    function ( xhr ) {
        console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
    },
    // called when loading has errors
    function ( error ) {
        console.log( 'An error happened' );
    }
);
return group;
}
Dismiss the fact that I surrounded a lot of it in try{}catch(){}.
I have also created grid lines and added them to my axis helper in the application, which allows me to see where each coordinate is in relation to the X and Y axes.
This is how the SVG appears on screen: [screenshot: Application Output]
I can't seem to figure out how to correlate the scale of the SVG with the individual grid lines. I have a feeling that I'm going to have to dive deeper into the SVG loading script above and scale each shape mesh specifically. I call the SVG group itself in the following code.
try {
    //SVG returns a group, TGA returns a texture to be added to a material
    var object1 = LOADER.textureLoader("TGA", './Art/tile1.tga', pGeometry);
    var object2 = LOADER.textureLoader("SVG", '/Art/bitmap.svg');
    const testMaterial = new THREE.MeshBasicMaterial({
        color: 0xffffff,
        map: object1,
        side: THREE.DoubleSide
    });
    //const useMesh = new THREE.Mesh(pGeometry, testMaterial);
    //testing scaling the tile
    try {
        const worldScale = new THREE.Vector3();
        object2.getWorldScale(worldScale);
        console.log(`World ScaleX: ${worldScale.x} World ScaleY: ${worldScale.y} World ScaleZ: ${worldScale.z}`);
        //object2.scale.set(2,2,0);
    } catch (error) { console.log(error) }
    scene.add(object2);
}
Keep in mind that the SVG is object2 in this case. Some of the ideas I have had to tackle this problem are looking into what a world scale is, Matrix4 transformations, and the scale methods of either the Object3D parent properties or the BufferGeometry parent properties of this particular SVG group object. I am also fully aware that three.js is designed for 3D graphics; however, I would like to master 2D graphics programming in this library before I get into the 3D aspect of things. I also suspect that the scale of the SVG group is distinctly different from the scale of the scene and its X, Y and Z axes.
If this question has already been answered a link to the corresponding answer would be of great help to me.
Thank you for the time you take to answer this question.
I messed with the dimensions of the SVG file itself in the editor I used to paint it, and I got it to scale. Not exactly a solution in the code; I guess the code is just closely tied to the data that the SVG file provides and can't alter it too much.
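If you do want to handle it in code instead, a minimal sketch of normalizing the group to a known size, assuming group is the THREE.Group returned by SVGLoader and targetWidth is a hypothetical width in world units:
const box = new THREE.Box3().setFromObject(group); // axis-aligned bounds of all shape meshes
const size = new THREE.Vector3();
box.getSize(size);
const s = targetWidth / size.x; // uniform factor that maps the SVG width to the target width
group.scale.set(s, s, 1);
// SVG's Y axis points down, so you may also want: group.scale.y *= -1;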

Is there a way to get RGB value from a live camera feed in reactJs?

I was wondering, in ReactJS, if it is possible to capture the RGB color of the center pixel of a live camera feed displayed on a website (honestly, any pixel would do, or we could average the entire video frame, whatever is simpler)?
All this camera does is display a feed (of the device's rear camera) on the website (no pictures or videos need to be taken).
function Camera() {
    const videoRef = useRef(null);
    useEffect(() => {
        getVideo();
    }, [videoRef]);
    const getVideo = () => {
        navigator.mediaDevices
            .getUserMedia({ video: { facingMode: 'environment', width: 600, height: 400 } })
            .then(stream => {
                let video = videoRef.current;
                video.srcObject = stream;
                video.play();
            })
            .catch(err => {
                console.error("error:", err);
            });
    };
    return (
        <div>
            <video ref={videoRef} />
        </div>
    )
}
export default Camera;
Also, this camera implementation was modified from: https://itnext.io/accessing-the-webcam-with-javascript-and-react-33cbe92f49cb
You can use your videoRef and do something like this.
const video = videoRef.current; // the video element itself, not a destructured property
const canvas = document.createElement('canvas');
canvas.width = 600;
canvas.height = 400;
const ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, 600, 400);
const rgbaData = ctx.getImageData(0, 0, 600, 400).data;
The above code grabs the video element from your videoRef, then creates a 2D canvas and draws the current video frame onto it.
From there you just call getImageData with parameters for the part of the image you want the pixels from, and .data is a one-dimensional array of RGBA values containing all the pixels in that region.
(Note: this doesn't require drawing the canvas onto the screen.)
https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/getImageData
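For the center pixel specifically, a minimal sketch (assuming the 600x400 dimensions above):
// read the single pixel at the center of the 600x400 frame
const [r, g, b, a] = ctx.getImageData(300, 200, 1, 1).data;
console.log(`center pixel: rgb(${r}, ${g}, ${b})`);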
And then to get the average pixel color you can look at this post:
Get average color of image via Javascript
But if you're looking to find pixel brightness you can use this
Formula to determine perceived brightness of RGB color
And lastly for contrast of pixels
How to programmatically calculate the contrast ratio between two colors?
Note: you will also probably want to chunk the one-dimensional array into [[r,g,b,a],[r,g,b,a]] chunks rather than the flat [r,g,b,a,r,g,b,a] structure it comes in.
const chunk = (a, n) => [...Array(Math.ceil(a.length / n))].map((_, i) => a.slice(n * i, n + n * i));
a - being the array
n - being the chunk size (in this case 4 since rgba)
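Putting it together, a minimal sketch of averaging the whole frame with that helper:
const pixels = chunk(rgbaData, 4); // [[r,g,b,a], [r,g,b,a], ...]
const [avgR, avgG, avgB] = pixels
    .reduce(([r, g, b], [pr, pg, pb]) => [r + pr, g + pg, b + pb], [0, 0, 0])
    .map(sum => Math.round(sum / pixels.length));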

How to change the scale to an arbitrary number using lightningchart

When you use LightningChart to display a scale, the default tick increment is 5 or 10.
Is it possible to change this to an arbitrary increment (2, 3, etc.)?
Please let me know.
The Axis automatically calculates the increment size depending on the zoom level of the Axis and the space available for each tick label. We're looking to improve our Axis behavior in the next major release this summer.
In the meanwhile, it is possible to create this behavior manually by using customTicks.
/*
 * LightningChartJS example that showcases a simple XY line series.
 */
// Import LightningChartJS
const lcjs = require('@arction/lcjs')
// Extract required parts from LightningChartJS.
const {
    lightningChart,
    ColorRGBA,
    UIElementBuilders,
    emptyFill,
    emptyLine,
    emptyTick,
    SolidLine,
    SolidFill
} = lcjs
// Create a XY Chart.
const chart = lightningChart().ChartXY()
    .setTitle('XY Chart with custom tick interval')
// Get the default X Axis and cache it
const defXAxis = chart.getDefaultAxisX()
// Set the default Axis tick style as emptyTick, hiding the default ticks created
// by the AxisTickStrategy
defXAxis.setTickStyle(emptyTick)
// Iterate over an arbitrary range, creating a new customTick with an interval of 2
for ( let i = 10; i <= 90; i += 2 ) {
    // Add a new custom tick to the X Axis
    defXAxis.addCustomTick(
        // Modify the textBox to hide its background and border
        UIElementBuilders.PointableTextBox.addStyler(
            (styler) => {
                styler.setBackground(
                    (bg) =>
                        bg
                            .setFillStyle(emptyFill)
                            .setStrokeStyle(emptyLine)
                )
            }
        ))
        // Set the tick position
        .setValue( i )
        // Make the Grid stroke less visible
        .setGridStrokeStyle(
            new SolidLine({
                thickness: 1,
                fillStyle: new SolidFill({ color: ColorRGBA(200, 200, 200, 50) })
            })
        )
}

How to train a model in nodejs (tensorflow.js)?

I want to make an image classifier, but I don't know Python.
Tensorflow.js works with JavaScript, which I am familiar with. Can models be trained with it, and what would be the steps to do so?
Frankly I have no clue where to start.
The only thing I figured out is how to load "mobilenet", which apparently is a set of pre-trained models, and classify images with it:
const tf = require('@tensorflow/tfjs'),
      mobilenet = require('@tensorflow-models/mobilenet'),
      tfnode = require('@tensorflow/tfjs-node'),
      fs = require('fs-extra');
const imageBuffer = await fs.readFile(......),
      tfimage = tfnode.node.decodeImage(imageBuffer),
      mobilenetModel = await mobilenet.load();
const results = await mobilenetModel.classify(tfimage);
This works, but it's of no use to me because I want to train my own model using my images, with labels that I create.
=======================
Say I have a bunch of images and labels. How do I use them to train a model?
const myData = JSON.parse(await fs.readFile('files.json'));
for(const data of myData){
    const image = await fs.readFile(data.imagePath),
          labels = data.labels;
    // how to train, where to pass image and labels ?
}
First of all, the images need to be converted to tensors. The first approach would be to create one tensor containing all the features (and, respectively, one tensor containing all the labels). This is the way to go only if the dataset contains few images.
const imageBuffer = await fs.readFile(feature_file);
tensorFeature = tfnode.node.decodeImage(imageBuffer) // create a tensor for the image
// create an array of all the features
// by iterating over all the images
tensorFeatures = tf.stack([tensorFeature, tensorFeature2, tensorFeature3])
The labels would be an array indicating the type of each image
labelArray = [0, 1, 2] // maybe 0 for dog, 1 for cat and 2 for birds
One now needs to create a one-hot encoding of the labels:
tensorLabels = tf.oneHot(tf.tensor1d(labelArray, 'int32'), 3);
Once the tensors are ready, one needs to create the model for training. Here is a simple model:
const model = tf.sequential();
model.add(tf.layers.conv2d({
    inputShape: [height, width, numberOfChannels], // numberOfChannels = 3 for color images and 1 otherwise
    filters: 32,
    kernelSize: 3,
    activation: 'relu',
}));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 3, activation: 'softmax'}));
Before training, the model must be compiled with an optimizer and a loss; then it can be trained:
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']});
model.fit(tensorFeatures, tensorLabels)
If the dataset contains a lot of images, one would need to create a tfDataset instead. This answer discusses why.
const genFeatureTensor = async image => {
    // async so the file read can be awaited; read the image passed in
    const imageBuffer = await fs.readFile(image);
    return tfnode.node.decodeImage(imageBuffer)
}
const labelArray = indice => Array.from({length: numberOfClasses}, (_, k) => k === indice ? 1 : 0)
async function* dataGenerator() {
    const numElements = numberOfImages;
    let index = 0;
    while (index < numElements) {
        const feature = await genFeatureTensor(imagePath);
        const label = tf.tensor1d(labelArray(classImageIndex))
        index++;
        yield {xs: feature, ys: label};
    }
}
const ds = tf.data.generator(dataGenerator).batch(1) // specify an appropriate batchsize;
And use model.fitDataset(ds) to train the model
The above is for training in Node.js. To do such processing in the browser, genFeatureTensor can be written as follows:
function loadImage(url){
    return new Promise((resolve, reject) => {
        const im = new Image()
        im.crossOrigin = 'anonymous'
        im.src = url // assign the url variable, not the string 'url'
        im.onload = () => {
            resolve(im)
        }
    })
}
genFeatureTensor = async image => {
    const img = await loadImage(image);
    return tf.browser.fromPixels(img); // use the loaded element, not the url string
}
One word of caution is that doing heavy processing might block the main thread in the browser. This is where web workers come into play.
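A minimal sketch of handing pixel data to a worker; the file name and CDN URL are assumptions, and the frame is assumed to have already been drawn to a canvas whose 2D context is ctx:
// main thread: copy the frame's pixels and transfer them to the worker
const worker = new Worker('train-worker.js'); // hypothetical file
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
worker.postMessage(
    { pixels: imageData.data.buffer, width: canvas.width, height: canvas.height },
    [imageData.data.buffer] // transfer the buffer instead of copying it
);
// train-worker.js
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs'); // assumed CDN build exposing a global tf
self.onmessage = ({ data: { pixels, width, height } }) => {
    // rebuild the RGBA tensor off the main thread
    const xs = tf.tensor3d(new Uint8Array(pixels), [height, width, 4]);
    // ...preprocess / train here without blocking the UI
};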
Consider the example https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/#0
What they do is:
take a BIG png image (a vertical concatenation of images)
take some labels
build the dataset (data.js)
then train
The building of the dataset is as follows:
images
The big image is divided into n vertical chunks.
(n being chunkSize)
Consider a chunkSize of size 2.
Given the pixel matrix of image 1:
1 2 3
4 5 6
Given the pixel matrix of image 2 is
7 8 9
1 2 3
The resulting array would be
1 2 3 4 5 6 7 8 9 1 2 3 (the 1D concatenation somehow)
So basically at the end of the processing, you have a big buffer representing
[...Buffer(image1), ...Buffer(image2), ...Buffer(image3)]
labels
That kind of formatting is done a lot for classification problems. Instead of classifying with a number, they take a boolean array.
To predict 7 out of 10 classes we would consider
[0,0,0,0,0,0,0,1,0,0] // 1 in 7th position, array 0-indexed
What you can do to get started
Take your image (and its associated label)
Load your image to the canvas
Extract its associated buffer
Concatenate all your images' buffers into one big buffer. That's it for xs.
Take all your associated labels, map them as a boolean array, and concatenate them. That's it for ys (a sketch of these steps follows below).
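A minimal sketch of those steps, assuming a hypothetical items array of { imagePath, classIndex } records and a loadImage helper like the one shown earlier:
async function buildBuffers(items, numClasses) {
    const xsParts = [];
    const ysParts = [];
    for (const { imagePath, classIndex } of items) {
        const img = await loadImage(imagePath);
        const canvas = document.createElement('canvas');
        canvas.width = img.width;
        canvas.height = img.height;
        const ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0);
        // the raw RGBA buffer of this image, to be concatenated into xs
        xsParts.push(ctx.getImageData(0, 0, img.width, img.height).data);
        // the boolean (one-hot) label array for ys
        const oneHot = new Array(numClasses).fill(0);
        oneHot[classIndex] = 1;
        ysParts.push(oneHot);
    }
    return { xs: xsParts, ys: ysParts.flat() };
}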
Below, I subclass MnistData::load (the rest can be left as is, except in script.js, where you need to instantiate your own class instead).
I still generate 28x28 images, write a digit on each, and get perfect accuracy since I don't include noise or deliberately wrong labelings.
import {MnistData} from './data.js'

const IMAGE_SIZE = 784; // actually 28*28...
const NUM_CLASSES = 10;
const NUM_DATASET_ELEMENTS = 5000;
const NUM_TRAIN_ELEMENTS = 4000;
const NUM_TEST_ELEMENTS = NUM_DATASET_ELEMENTS - NUM_TRAIN_ELEMENTS;

function makeImage (label, ctx) {
    ctx.fillStyle = 'black'
    ctx.fillRect(0, 0, 28, 28) // hardcoded, brrr
    ctx.fillStyle = 'white'
    ctx.fillText(label, 10, 20) // print a digit on the canvas
}

export class MyMnistData extends MnistData{
    async load() {
        const canvas = document.createElement('canvas')
        canvas.width = 28
        canvas.height = 28
        let ctx = canvas.getContext('2d')
        ctx.font = ctx.font.replace(/\d+px/, '18px')
        let labels = new Uint8Array(NUM_DATASET_ELEMENTS*NUM_CLASSES)
        // in data.js, they use a batch of images (aka chunksize)
        // let's even remove it for simplification purpose
        const datasetBytesBuffer = new ArrayBuffer(NUM_DATASET_ELEMENTS * IMAGE_SIZE * 4);
        for (let i = 0; i < NUM_DATASET_ELEMENTS; i++) {
            const datasetBytesView = new Float32Array(
                datasetBytesBuffer, i * IMAGE_SIZE * 4,
                IMAGE_SIZE);
            // BEGIN our handmade label + its associated image
            // notice that you could loadImage( images[i], datasetBytesView )
            // so you do them by bulk and synchronize after your promises after "forloop"
            const label = Math.floor(Math.random()*10)
            labels[i*NUM_CLASSES + label] = 1
            makeImage(label, ctx)
            const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
            // END you should be able to load an image to canvas :)
            for (let j = 0; j < imageData.data.length / 4; j++) {
                // NOTE: you are storing a FLOAT of 4 bytes, in [0;1] even though you don't need it
                // We could make it with a uint8Array (assuming gray scale like we are) without scaling to 1/255
                // they probably did it so you can copy paste like me for color image afterwards...
                datasetBytesView[j] = imageData.data[j * 4] / 255;
            }
        }
        this.datasetImages = new Float32Array(datasetBytesBuffer);
        this.datasetLabels = labels
        // below is copy pasted
        this.trainIndices = tf.util.createShuffledIndices(NUM_TRAIN_ELEMENTS);
        this.testIndices = tf.util.createShuffledIndices(NUM_TEST_ELEMENTS);
        this.trainImages = this.datasetImages.slice(0, IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
        this.testImages = this.datasetImages.slice(IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
        this.trainLabels =
            this.datasetLabels.slice(0, NUM_CLASSES * NUM_TRAIN_ELEMENTS); // notice, each element is an array of size NUM_CLASSES
        this.testLabels =
            this.datasetLabels.slice(NUM_CLASSES * NUM_TRAIN_ELEMENTS);
    }
}
I found a tutorial [1] on how to use an existing model to train new classes. The main code parts are here:
index.html head:
<script src="https://unpkg.com/@tensorflow-models/knn-classifier"></script>
index.html body:
<button id="class-a">Add A</button>
<button id="class-b">Add B</button>
<button id="class-c">Add C</button>
index.js:
const classifier = knnClassifier.create();
....
// Reads an image from the webcam and associates it with a specific class
// index.
const addExample = async classId => {
    // Capture an image from the web camera.
    const img = await webcam.capture();
    // Get the intermediate activation of MobileNet 'conv_preds' and pass that
    // to the KNN classifier.
    const activation = net.infer(img, 'conv_preds');
    // Pass the intermediate activation to the classifier.
    classifier.addExample(activation, classId);
    // Dispose the tensor to release the memory.
    img.dispose();
};
// When clicking a button, add an example for that class.
document.getElementById('class-a').addEventListener('click', () => addExample(0));
document.getElementById('class-b').addEventListener('click', () => addExample(1));
document.getElementById('class-c').addEventListener('click', () => addExample(2));
....
....
The main idea is to use the existing network to make its prediction and then substitute the found label with your own.
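A minimal sketch of the prediction side, assuming net is the loaded MobileNet and classifier is the KNN classifier from above:
const img = await webcam.capture();
// compute the same intermediate activation used when adding examples
const activation = net.infer(img, 'conv_preds');
// ask the KNN classifier which of the added classes is closest
const result = await classifier.predictClass(activation);
console.log(`prediction: ${result.label}, confidence: ${result.confidences[result.label]}`);
img.dispose();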
The complete code is in the tutorial. Another promising, more advanced approach is in [2]. It needs strict pre-processing, so I only leave the link here; it is a much more advanced one.
Sources:
[1] https://codelabs.developers.google.com/codelabs/tensorflowjs-teachablemachine-codelab/index.html#6
[2] https://towardsdatascience.com/training-custom-image-classification-model-on-the-browser-with-tensorflow-js-and-angular-f1796ed24934
TL;DR
MNIST is the Hello World of image recognition. After learning it by heart, the questions in your mind are easy to solve.
Question setting:
Your main question is
// how to train, where to pass image and labels ?
inside your code block. For that I found a perfect answer in the examples section of Tensorflow.js: the MNIST example. The links below have pure JavaScript and Node.js versions of it plus a Wikipedia explanation. I will go through them at the level necessary to answer the main question in your mind, and I will also add perspectives on how your own images and labels relate to the MNIST image set and the examples using it.
First things first:
Code snippets.
where to pass images (Node.js sample)
async function loadImages(filename) {
    const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename);
    const headerBytes = IMAGE_HEADER_BYTES;
    const recordBytes = IMAGE_HEIGHT * IMAGE_WIDTH;
    const headerValues = loadHeaderValues(buffer, headerBytes);
    assert.equal(headerValues[0], IMAGE_HEADER_MAGIC_NUM);
    assert.equal(headerValues[2], IMAGE_HEIGHT);
    assert.equal(headerValues[3], IMAGE_WIDTH);
    const images = [];
    let index = headerBytes;
    while (index < buffer.byteLength) {
        const array = new Float32Array(recordBytes);
        for (let i = 0; i < recordBytes; i++) {
            // Normalize the pixel values into the 0-1 interval, from
            // the original 0-255 interval.
            array[i] = buffer.readUInt8(index++) / 255;
        }
        images.push(array);
    }
    assert.equal(images.length, headerValues[1]);
    return images;
}
Notes:
The MNIST dataset is one huge file in which several images sit like tiles in a puzzle, each with the same size, side by side, like boxes in an x/y coordinate table. Each box holds one sample, and the corresponding index in the labels array holds its label. From this example, it is not a big deal to turn it into a several-files format, so that only one picture at a time is handed to the while loop to process.
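The loadHeaderValues helper is not shown in the snippet; a minimal sketch, assuming the MNIST header layout of 32-bit big-endian integers:
function loadHeaderValues(buffer, headerBytes) {
    const headerValues = [];
    for (let i = 0; i < headerBytes / 4; i++) {
        // MNIST headers are 32-bit big-endian integers (magic number, counts, dimensions)
        headerValues[i] = buffer.readUInt32BE(i * 4);
    }
    return headerValues;
}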
Labels:
async function loadLabels(filename) {
    const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename);
    const headerBytes = LABEL_HEADER_BYTES;
    const recordBytes = LABEL_RECORD_BYTE;
    const headerValues = loadHeaderValues(buffer, headerBytes);
    assert.equal(headerValues[0], LABEL_HEADER_MAGIC_NUM);
    const labels = [];
    let index = headerBytes;
    while (index < buffer.byteLength) {
        const array = new Int32Array(recordBytes);
        for (let i = 0; i < recordBytes; i++) {
            array[i] = buffer.readUInt8(index++);
        }
        labels.push(array);
    }
    assert.equal(labels.length, headerValues[1]);
    return labels;
}
Notes:
Here, the labels are also byte data in a file. In the JavaScript world, with the approach you have in your starting point, labels could just as well be a JSON array.
train the model:
await data.loadData();
const {images: trainImages, labels: trainLabels} = data.getTrainData();
model.summary();
let epochBeginTime;
let millisPerStep;
const validationSplit = 0.15;
const numTrainExamplesPerEpoch =
    trainImages.shape[0] * (1 - validationSplit);
const numTrainBatchesPerEpoch =
    Math.ceil(numTrainExamplesPerEpoch / batchSize);
await model.fit(trainImages, trainLabels, {
    epochs,
    batchSize,
    validationSplit
});
Notes:
Here, model.fit is the actual line of code that does the work: it trains the model.
Results of the whole thing:
const {images: testImages, labels: testLabels} = data.getTestData();
const evalOutput = model.evaluate(testImages, testLabels);
console.log(
    `\nEvaluation result:\n` +
    `  Loss = ${evalOutput[0].dataSync()[0].toFixed(3)}; ` +
    `Accuracy = ${evalOutput[1].dataSync()[0].toFixed(3)}`);
Note:
In data science, here as elsewhere, the most fascinating part is knowing how well the model survives the test of new, unlabeled data: can it label it correctly or not? That is what the evaluation part, which now prints us some numbers, is for.
Loss and accuracy: [4]
The lower the loss, the better a model (unless the model has over-fitted to the training data). The loss is calculated on training and validation, and its interpretation is how well the model is doing for these two sets. Unlike accuracy, loss is not a percentage. It is a summation of the errors made for each example in the training or validation sets.
..
The accuracy of a model is usually determined after the model parameters are learned and fixed and no learning is taking place. Then the test samples are fed to the model, and the number of mistakes (zero-one loss) the model makes is recorded, after comparison to the true targets.
More information:
On the GitHub pages, in the README.md file, there is a link to a tutorial where everything in the GitHub example is explained in greater detail.
[1] https://github.com/tensorflow/tfjs-examples/tree/master/mnist
[2] https://github.com/tensorflow/tfjs-examples/tree/master/mnist-node
[3] https://en.wikipedia.org/wiki/MNIST_database
[4] How to interpret "loss" and "accuracy" for a machine learning model

Three.js: ExtrudeGeometry: Problems setting different material for front and back side

I'm creating an ExtrudeGeometry of a hexagon shape and trying to set a different material for the front and back side, as stated in this thread.
let shape = new THREE.Shape();
/*...*/
let geometry = new THREE.ExtrudeGeometry(shape, {
    steps: 2,
    amount: 0.05,
    bevelEnabled: false,
    material: 0,        //frontMaterial = green
    extrudeMaterial: 1  //sideMaterial = gray
});
//Searching for the back side and setting the materialIndex accordingly
for (let face of geometry.faces) {
    if (face.normal.z == 1) {
        face.materialIndex = 2; //backMaterial = red
    }
}
let mesh = new THREE.Mesh(geometry, new THREE.MultiMaterial([frontMaterial, sideMaterial, backMaterial]));
The problem now is that this method (iterating over the faces and looking for those with normal.z == 1) does not work correctly with extrudeGeometry.amount = 0.05. A value of 0.1 works fine.
See this jsfiddle
Is there another method for setting different materials for the front and back side, or am I just doing it wrong?
Thanks for your help!
The problem is due to rounding. Do this instead:
if ( face.normal.z < - 0.99999 ) { // instead of == - 1
    face.materialIndex = 2;
}
Also, the back face normal is in theory (0, 0, - 1 ).
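A minimal sketch applying the same tolerance to both caps, assuming the three-material setup from the question (swap the cap indices if your orientation differs):
const EPS = 1e-5;
for (let face of geometry.faces) {
    if (face.normal.z > 1 - EPS) {
        face.materialIndex = 0;  // one cap
    } else if (face.normal.z < -(1 - EPS)) {
        face.materialIndex = 2;  // the other cap
    }
    // all remaining faces keep the extrudeMaterial index (the sides)
}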
updated fiddle: https://jsfiddle.net/xhbu2e01/3/
three.js r.84
