JavaScript callback to get selected glyph index in Bokeh - javascript

I've created a visual graph using Bokeh that shows a network I built with NetworkX. I now want to use TapTool to show information pertinent to any node on the graph that I click. The graph is just nodes and edges. I know I should be able to use var inds = cb_obj.selected['1d'].indices; in the JavaScript callback function to get the indices of the nodes (glyphs) that were clicked, but that isn't working; I get the error Uncaught TypeError: Cannot read property '1d' of undefined. A nudge in the right direction would be greatly appreciated.
Below is my code. Please note that I've defined my plot as a Plot() and not as a figure(). I don't think that's the reason for the issue, but just wanted to mention it. Also, I'm using window.alert(inds); just to see what values I get. That's not my ultimate purpose, but I expect that bit to work anyway.
def draw_graph_____(self, my_network):
    self.graph_height, self.graph_width, self.graph_nodes, self.graph_edges, self.node_coords, self.node_levels = self.compute_graph_layout(my_network)
    graph = nx.DiGraph()
    graph.add_nodes_from(self.graph_nodes)
    graph.add_edges_from(self.graph_edges)
    plot = Plot(plot_width = self.graph_width, plot_height = self.graph_height, x_range = Range1d(0.0, 1.0), y_range = Range1d(0.0, 1.0))
    plot.title.text = "Graph Demonstration"
    graph_renderer = from_networkx(graph, self.graph_layout, scale = 1, center = (-100, 100))
    graph_renderer.node_renderer.data_source.data["node_names"] = self.graph_nodes
    graph_renderer.node_renderer.data_source.data["index"] = self.graph_nodes
    graph_renderer.node_renderer.glyph = Circle(size = 40, fill_color = Spectral4[0])
    graph_renderer.node_renderer.selection_glyph = Circle(size = 40, fill_color = Spectral4[2])
    graph_renderer.node_renderer.hover_glyph = Circle(size = 40, fill_color = Spectral4[1])
    graph_renderer.edge_renderer.glyph = MultiLine(line_color = "#CCCCCC", line_alpha = 0.8, line_width = 5)
    graph_renderer.edge_renderer.selection_glyph = MultiLine(line_color = Spectral4[2], line_width = 5)
    graph_renderer.edge_renderer.hover_glyph = MultiLine(line_color = Spectral4[1], line_width = 5)
    graph_renderer.selection_policy = NodesAndLinkedEdges()
    graph_renderer.inspection_policy = NodesAndLinkedEdges()
    x_coord = [coord[0] for coord in self.node_coords]
    y_coord = [coord[1] for coord in self.node_coords]
    y_offset = []
    for level in self.node_levels:
        for item in self.node_levels[level]:
            if self.node_levels[level].index(item) % 2 == 0:
                y_offset.append(20)
            else:
                y_offset.append(-40)
    graph_renderer.node_renderer.data_source.data["x_coord"] = x_coord
    graph_renderer.node_renderer.data_source.data["y_coord"] = y_coord
    graph_renderer.node_renderer.data_source.data["y_offset"] = y_offset
    labels_source = graph_renderer.node_renderer.data_source
    labels = LabelSet(x = "x_coord", y = "y_coord", text = 'node_names', text_font_size = "12pt", level = 'glyph',
                      x_offset = -50, y_offset = "y_offset", source = labels_source, render_mode = 'canvas')
    plot.add_layout(labels)
    callback = CustomJS(args = dict(source = graph_renderer.node_renderer.data_source), code =
        """
        console.log(cb_obj)
        var inds = cb_obj.selected['1d'].indices;
        window.alert(inds);
        """)
    plot.add_tools(HoverTool(tooltips = [("Node", "@node_names"), ("Recomm", "Will put a sample recommendation message here later")]))
    plot.add_tools(TapTool(callback = callback))
    plot.renderers.append(graph_renderer)
    output_file("interactive_graphs.html")
    show(plot)
My imports are as follows, by the way:
import collections
import networkx as nx
import numpy as np
from bokeh.io import output_file, show
from bokeh.models import Circle, ColumnDataSource, CustomJS, Div, HoverTool, LabelSet, MultiLine, OpenURL, Plot, Range1d, TapTool
from bokeh.models.graphs import from_networkx, NodesAndLinkedEdges
from bokeh.palettes import Spectral4
I'm sorry for not posting the entire code; that would require quite a few changes to create dummy data and to show other files and functions, but I thought this one function might suffice to identify the issue. If not, I'm happy to share more code. Thanks!

The problem is that the callback is not attached to a data source. The value of cb_obj is whatever object triggers the callback, but only ColumnDataSource objects have a selected property, so only callbacks on data sources will have cb_obj.selected. If you want a callback to fire whenever a selection changes, i.e. whenever a node is clicked, then you want the callback on the data source. [1]
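For instance, with the callback registered on the node renderer's data source instead of the TapTool, the lookup from the question works, since cb_obj is then the data source itself. A minimal sketch of the callback body (assuming the pre-1.0 Bokeh selection structure the question uses):
// Callback attached to the ColumnDataSource (older Bokeh): cb_obj is the data source itself
var inds = cb_obj.selected['1d'].indices;   // the '1d' structure from the question now exists
window.alert(inds);
// In Bokeh 1.0+ you would instead register source.selected.js_on_change('indices', callback)
// on the Python side and read cb_obj.indices here, since cb_obj is then the Selection object.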
However, if you want a callback when a node is merely hovered over (but not clicked on), that is an inspection, not a selection. You will want to follow this example:
https://docs.bokeh.org/en/latest/docs/user_guide/interaction/callbacks.html#customjs-for-hover
Although it is not often used (and thus not documented terribly well), the callback for hover tools gets passed additional information in a cb_data parameter. This cb_data parameter is a catch-all mechanism for tools to pass extra, tool-specific things on to the callback. In the case of hover tools, cb_data is an object with .index and .geometry attributes. So cb_data.index['1d'].indices has the indices of the points that are currently hovered over. The .geometry attribute has information about the kind of hit test that was performed (i.e. was it a single point, or a vertical or horizontal span, and what was the location of that point or span?).
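A minimal sketch of a hover callback using that cb_data parameter (assuming the same pre-1.0 '1d' structure as in the question; newer versions expose cb_data.index.indices directly):
// HoverTool CustomJS callback: cb_data carries the hit-test result, not a selection
var hovered = cb_data.index['1d'].indices;   // indices of the points currently under the cursor
var geometry = cb_data.geometry;             // e.g. the hit-test type ('point' or 'span') plus cursor coordinates
console.log(hovered, geometry);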
[1] Alternatively, tap tools also pass a specialized cb_data as described above. It is an object with a .source property that is the data source that made the selection, so cb_data.source.selected should work. In practice I never use this, though, since a callback on the data source works equally well.
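Inside the TapTool's CustomJS, that alternative would look roughly like this (a sketch; newer Bokeh versions drop the '1d' level and use .selected.indices):
// TapTool CustomJS callback: read the selection through cb_data.source
var src = cb_data.source;                  // the data source that was hit
var inds = src.selected['1d'].indices;     // selected node indices (pre-1.0 structure, as in the question)
window.alert(inds);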

Related

Map Autodesk Forge heatmaps to room - Data Visualization Api

Okay, so I've been trying to map some heatmaps to a Revit room using the DataViz API. I was able to get X, Y, Z from Revit for the sensors inside the rooms; I subtracted viewer.model.getGlobalOffset() and managed to show some sprites at those points. I know for a fact that those sprites/points are inside rooms, but whenever I try to use the same points to load a heatmap I get the Some devices did not map to a room: warning and no heatmap is displayed.
Following the API documentation this warning appears when there is no room information in the point. Did I miss anything? This is "my" code:
async function loadHeatmaps(model){
    const dataVizExtn = await viewer.loadExtension("Autodesk.DataVisualization");
    // Given a model loaded from Forge
    const structureInfo = new Autodesk.DataVisualization.Core.ModelStructureInfo(model);
    const devices = [
        {
            id: "Oficina 6", // An ID to identify this device
            name: "Oficina-",
            position: { x: 22.475382737884104, y: 7.4884431474006163, z: 3.0 }, // World coordinates of this device
            sensorTypes: ["temperature", "humidity"], // The types/properties this device exposes
        }
    ];
    var offset = viewer.model.getGlobalOffset();
    removeOffset(devices[0], offset)
    // Generates `SurfaceShadingData` after assigning each device to a room.
    const shadingData = await structureInfo.generateSurfaceShadingData(devices);
    console.log(shadingData)
    // Use the resulting shading data to generate heatmap from.
    await dataVizExtn.setupSurfaceShading(model, shadingData);
    // Register color stops for the heatmap. Along with the normalized sensor value
    // in the range of [0.0, 1.0], `renderSurfaceShading` will interpolate the final
    // heatmap color based on these specified colors.
    const sensorColors = [0x0000ff, 0x00ff00, 0xffff00, 0xff0000];
    // Set heatmap colors for temperature
    const sensorType = "temperature";
    dataVizExtn.registerSurfaceShadingColors(sensorType, sensorColors);
    // Function that provides sensor value in the range of [0.0, 1.0]
    function getSensorValue(surfaceShadingPoint, sensorType) {
        // The `SurfaceShadingPoint.id` property matches one of the identifiers passed
        // to `generateSurfaceShadingData` function. In our case above, this will either
        // be "cafeteria-entrace-01" or "cafeteria-exit-01".
        const deviceId = surfaceShadingPoint.id;
        // Read the sensor data, along with its possible value range
        let sensorValue = readSensorValue(deviceId, sensorType);
        const maxSensorValue = getMaxSensorValue(sensorType);
        const minSensorValue = getMinSensorValue(sensorType);
        // Normalize sensor value to [0, 1.0]
        sensorValue = (sensorValue - minSensorValue) / (maxSensorValue - minSensorValue);
        return clamp(sensorValue, 0.0, 1.0);
    }
    // This value can also be a room instead of a floor
    const floorName = "01 - Entry Level";
    dataVizExtn.renderSurfaceShading(floorName, sensorType, getSensorValue);
}
function removeOffset(pos, offset) {
    pos.position.x = pos.position.x - offset.x;
    pos.position.y = pos.position.y - offset.y;
    pos.position.z = pos.position.z - offset.z;
}
And I'm calling the loadHeatmaps() function inside onDocumentLoadSuccess callback.
EDIT: It looks like in this particular case it was a problem with floorName not being set to the right value. Note that this value (first parameter to dataVizExtn.renderSurfaceShading) should be set either to the name of the room, or to the name of the floor you want to update.
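In code, that fix amounts to passing the room (or floor) name as the first argument; for example, with a hypothetical room name:
// Sketch: render the heatmap for a single room instead of a whole floor.
// "Room 101" is a placeholder; use the room name as it appears in your model's hierarchy.
dataVizExtn.renderSurfaceShading("Room 101", "temperature", getSensorValue);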
The offsets are a bit tricky so I'd suggest debugging that area, for example:
What coordinate system are the sensors defined in? If they are in the same coordinate system as the building model itself, you shouldn't subtract or add any offset to them. Whenever there's a model with a "global offset" in its metadata, it basically means that the Model Derivative service moved the model to the origin to avoid floating-point precision issues, and the viewer will then add the global offset back to each geometry when it's loaded.
Try using the viewer APIs to get the bounding box of one of the rooms that the devices should map to, and see if the bounding box actually contains the XYZ point of the device you're trying to pass into the DataViz extension. The bounds of any object can be found like so:
function getObjectBounds(model, dbid) {
    const tree = model.getInstanceTree();
    const frags = model.getFragmentList();
    let bounds = new THREE.Box3();
    tree.enumNodeFragments(dbid, function (fragid) {
        let _bounds = new THREE.Box3();
        frags.getWorldBounds(fragid, _bounds);
        bounds.union(_bounds);
    }, true);
    return bounds;
}
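To check the second point, something along these lines should work (a sketch; roomDbId and the device coordinates are placeholders):
// Sketch: verify that the sensor position falls inside the room's bounding box
const roomBounds = getObjectBounds(viewer.model, roomDbId);   // roomDbId: dbId of the room element
const devicePoint = new THREE.Vector3(22.47, 7.48, 3.0);      // placeholder device coordinates (already offset-adjusted)
console.log('Device inside room bounds?', roomBounds.containsPoint(devicePoint));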
I had the same issue; my Revit model was built with Revit 2020. When I upgraded the model to 2022, the heatmap showed on the room correctly.

How to train a model in nodejs (tensorflow.js)?

I want to make an image classifier, but I don't know Python.
Tensorflow.js works with javascript, which I am familiar with. Can models be trained with it and what would be the steps to do so?
Frankly I have no clue where to start.
The only thing I figured out is how to load "mobilenet", which apparently is a set of pre-trained models, and classify images with it:
const tf = require('@tensorflow/tfjs'),
      mobilenet = require('@tensorflow-models/mobilenet'),
      tfnode = require('@tensorflow/tfjs-node'),
      fs = require('fs-extra');
const imageBuffer = await fs.readFile(......),
      tfimage = tfnode.node.decodeImage(imageBuffer),
      mobilenetModel = await mobilenet.load();
const results = await mobilenetModel.classify(tfimage);
which works, but it's no use to me because I want to train my own model using my images with labels that I create.
=======================
Say I have a bunch of images and labels. How do I use them to train a model?
const myData = JSON.parse(await fs.readFile('files.json'));
for(const data of myData){
    const image = await fs.readFile(data.imagePath),
          labels = data.labels;
    // how to train, where to pass image and labels ?
}
First of all, the images need to be converted to tensors. The first approach would be to create one tensor containing all the features (and, respectively, one tensor containing all the labels). This is the way to go only if the dataset contains few images.
const imageBuffer = await fs.readFile(feature_file);
const tensorFeature = tfnode.node.decodeImage(imageBuffer) // create a tensor for the image
// create an array of all the features
// by iterating over all the images
const tensorFeatures = tf.stack([tensorFeature, tensorFeature2, tensorFeature3])
The labels would be an array indicating the type of each image
labelArray = [0, 1, 2] // maybe 0 for dog, 1 for cat and 2 for birds
One needs now to create a hot encoding of the labels
tensorLabels = tf.oneHot(tf.tensor1d(labelArray, 'int32'), 3);
Once the tensors are ready, one needs to create the model for training. Here is a simple model.
const model = tf.sequential();
model.add(tf.layers.conv2d({
    inputShape: [height, width, numberOfChannels], // numberOfChannels = 3 for colorful images and one otherwise
    filters: 32,
    kernelSize: 3,
    activation: 'relu',
}));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 3, activation: 'softmax'}));
The model then needs to be compiled with an optimizer and a loss, after which it can be trained:
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']})
model.fit(tensorFeatures, tensorLabels)
If the dataset contains a lot of images, one would need to create a tfDataset instead. This answer discusses why.
const genFeatureTensor = image => {
    const imageBuffer = fs.readFileSync(image); // read the file synchronously so the generator below can stay synchronous
    return tfnode.node.decodeImage(imageBuffer)
}
const labelArray = indice => Array.from({length: numberOfClasses}, (_, k) => k === indice ? 1 : 0)
function* dataGenerator() {
    const numElements = numberOfImages;
    let index = 0;
    while (index < numElements) {
        const feature = genFeatureTensor(imagePath);            // imagePath: path of the current image
        const label = tf.tensor1d(labelArray(classImageIndex))  // classImageIndex: class of the current image
        index++;
        yield {xs: feature, ys: label};
    }
}
const ds = tf.data.generator(dataGenerator).batch(1) // specify an appropriate batchsize;
And use model.fitDataset(ds) to train the model
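As with model.fit, the model has to be compiled before calling fitDataset; a minimal training call could look like this (the epoch count here is arbitrary):
// Sketch: train from the dataset built above
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']});
await model.fitDataset(ds, {epochs: 5}); // choose epochs and batch size to suit your data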
The above is for training in nodejs. To do such processing in the browser, genFeatureTensor can be written as follows:
function loadImage(url){
    return new Promise((resolve, reject) => {
        const im = new Image()
        im.crossOrigin = 'anonymous'
        im.src = url
        im.onload = () => {
            resolve(im)
        }
        im.onerror = reject
    })
}
genFeatureTensor = async image => {
    const img = await loadImage(image);
    return tf.browser.fromPixels(img);
}
One word of caution is that doing heavy processing might block the main thread in the browser. This is where web workers come into play.
Consider the example https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/#0
What they do is:
take a BIG png image (a vertical concatenation of images)
take some labels
build the dataset (data.js)
then train
The building of the dataset is as follows:
images
The big image is divided into n vertical chunks.
(n being chunkSize)
Consider a chunkSize of size 2.
Given the pixel matrix of image 1:
1 2 3
4 5 6
Given the pixel matrix of image 2 is
7 8 9
1 2 3
The resulting array would be
1 2 3 4 5 6 7 8 9 1 2 3 (the two pixel matrices flattened and concatenated into one 1D array)
So basically at the end of the processing, you have a big buffer representing
[...Buffer(image1), ...Buffer(image2), ...Buffer(image3)]
labels
That kind of formatting is done a lot for classification problems. Instead of classifying with a number, they take a boolean array.
To predict 7 out of 10 classes we would consider
[0,0,0,0,0,0,0,1,0,0] // 1 in the 7th position, array 0-indexed
What you can do to get started
Take your image (and its associated label)
Load your image to the canvas
Extract its associated buffer
Concatenate all your image's buffer as a big buffer. That's it for xs.
Take all your associated labels, map them as a boolean array, and concatenate them.
Below, I subclass MnistData::load (the rest can be left as is, except in script.js, where you need to instantiate your own class instead).
I still generate 28x28 images, write a digit on each, and get perfect accuracy since I don't include noise or deliberately wrong labels.
import {MnistData} from './data.js'

const IMAGE_SIZE = 784; // actually 28*28...
const NUM_CLASSES = 10;
const NUM_DATASET_ELEMENTS = 5000;
const NUM_TRAIN_ELEMENTS = 4000;
const NUM_TEST_ELEMENTS = NUM_DATASET_ELEMENTS - NUM_TRAIN_ELEMENTS;

function makeImage (label, ctx) {
    ctx.fillStyle = 'black'
    ctx.fillRect(0, 0, 28, 28) // hardcoded, brrr
    ctx.fillStyle = 'white'
    ctx.fillText(label, 10, 20) // print a digit on the canvas
}

export class MyMnistData extends MnistData{
    async load() {
        const canvas = document.createElement('canvas')
        canvas.width = 28
        canvas.height = 28
        let ctx = canvas.getContext('2d')
        ctx.font = ctx.font.replace(/\d+px/, '18px')
        let labels = new Uint8Array(NUM_DATASET_ELEMENTS*NUM_CLASSES)
        // in data.js, they use a batch of images (aka chunksize)
        // let's even remove it for simplification purpose
        const datasetBytesBuffer = new ArrayBuffer(NUM_DATASET_ELEMENTS * IMAGE_SIZE * 4);
        for (let i = 0; i < NUM_DATASET_ELEMENTS; i++) {
            const datasetBytesView = new Float32Array(
                datasetBytesBuffer, i * IMAGE_SIZE * 4,
                IMAGE_SIZE);
            // BEGIN our handmade label + its associated image
            // notice that you could loadImage( images[i], datasetBytesView )
            // so you do them by bulk and synchronize after your promises after "forloop"
            const label = Math.floor(Math.random()*10)
            labels[i*NUM_CLASSES + label] = 1
            makeImage(label, ctx)
            const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
            // END you should be able to load an image to canvas :)
            for (let j = 0; j < imageData.data.length / 4; j++) {
                // NOTE: you are storing a FLOAT of 4 bytes, in [0;1] even though you don't need it
                // We could make it with a uint8Array (assuming gray scale like we are) without scaling to 1/255
                // they probably did it so you can copy paste like me for color image afterwards...
                datasetBytesView[j] = imageData.data[j * 4] / 255;
            }
        }
        this.datasetImages = new Float32Array(datasetBytesBuffer);
        this.datasetLabels = labels
        // below is copy pasted
        this.trainIndices = tf.util.createShuffledIndices(NUM_TRAIN_ELEMENTS);
        this.testIndices = tf.util.createShuffledIndices(NUM_TEST_ELEMENTS);
        this.trainImages = this.datasetImages.slice(0, IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
        this.testImages = this.datasetImages.slice(IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
        this.trainLabels =
            this.datasetLabels.slice(0, NUM_CLASSES * NUM_TRAIN_ELEMENTS); // notice, each element is an array of size NUM_CLASSES
        this.testLabels =
            this.datasetLabels.slice(NUM_CLASSES * NUM_TRAIN_ELEMENTS);
    }
}
I found a tutorial [1] on how to use an existing model to train new classes. The main code parts are here:
index.html head:
<script src="https://unpkg.com/@tensorflow-models/knn-classifier"></script>
index.html body:
<button id="class-a">Add A</button>
<button id="class-b">Add B</button>
<button id="class-c">Add C</button>
index.js:
const classifier = knnClassifier.create();
....
// Reads an image from the webcam and associates it with a specific class
// index.
const addExample = async classId => {
    // Capture an image from the web camera.
    const img = await webcam.capture();
    // Get the intermediate activation of MobileNet 'conv_preds' and pass that
    // to the KNN classifier.
    const activation = net.infer(img, 'conv_preds');
    // Pass the intermediate activation to the classifier.
    classifier.addExample(activation, classId);
    // Dispose the tensor to release the memory.
    img.dispose();
};
// When clicking a button, add an example for that class.
document.getElementById('class-a').addEventListener('click', () => addExample(0));
document.getElementById('class-b').addEventListener('click', () => addExample(1));
document.getElementById('class-c').addEventListener('click', () => addExample(2));
....
The main idea is to use the existing network to make its prediction and then substitute the found label with your own.
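The prediction side of that tutorial looks roughly like this (a sketch, not verbatim tutorial code; net is the loaded MobileNet model and webcam the camera helper from the codelab):
// Sketch: classify the current webcam frame against the examples added so far
async function predictLoop() {
    if (classifier.getNumClasses() > 0) {
        const img = await webcam.capture();
        const activation = net.infer(img, 'conv_preds');           // same embedding used in addExample
        const result = await classifier.predictClass(activation);
        console.log('prediction:', result.label, result.confidences[result.label]);
        img.dispose();                                             // release the tensor's memory
    }
    window.requestAnimationFrame(predictLoop);
}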
The complete code is in the tutorial. Another promising, more advanced approach is in [2]; it needs strict preprocessing, so I only leave the link here, as it is a much more advanced one.
Sources:
[1] https://codelabs.developers.google.com/codelabs/tensorflowjs-teachablemachine-codelab/index.html#6
[2] https://towardsdatascience.com/training-custom-image-classification-model-on-the-browser-with-tensorflow-js-and-angular-f1796ed24934
TL;DR
MNIST is the "Hello World" of image recognition. Once you have learned it by heart, these questions are easy to solve.
Question setting:
Your main question is the comment
// how to train, where to pass image and labels ?
inside your code block. For that I found the perfect answer in the examples section of Tensorflow.js: the MNIST example. The links below have pure JavaScript and Node.js versions of it, plus a Wikipedia explanation. I will go through them at the level necessary to answer your main question, and I will also add perspective on how your own images and labels relate to the MNIST image set and the examples that use it.
First things first:
Code snippets.
where to pass images (Node.js sample)
async function loadImages(filename) {
    const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename);
    const headerBytes = IMAGE_HEADER_BYTES;
    const recordBytes = IMAGE_HEIGHT * IMAGE_WIDTH;
    const headerValues = loadHeaderValues(buffer, headerBytes);
    assert.equal(headerValues[0], IMAGE_HEADER_MAGIC_NUM);
    assert.equal(headerValues[2], IMAGE_HEIGHT);
    assert.equal(headerValues[3], IMAGE_WIDTH);
    const images = [];
    let index = headerBytes;
    while (index < buffer.byteLength) {
        const array = new Float32Array(recordBytes);
        for (let i = 0; i < recordBytes; i++) {
            // Normalize the pixel values into the 0-1 interval, from
            // the original 0-255 interval.
            array[i] = buffer.readUInt8(index++) / 255;
        }
        images.push(array);
    }
    assert.equal(images.length, headerValues[1]);
    return images;
}
Notes:
The MNIST dataset is one huge image: a single file contains many samples laid out like tiles in a puzzle, each the same size, side by side, like cells in an x/y grid. Each cell holds one sample, and the corresponding position in the labels array holds its label. From this example, it is not a big deal to turn that into a several-files format, so that only one picture at a time is handed to the while loop.
Labels:
async function loadLabels(filename) {
    const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename);
    const headerBytes = LABEL_HEADER_BYTES;
    const recordBytes = LABEL_RECORD_BYTE;
    const headerValues = loadHeaderValues(buffer, headerBytes);
    assert.equal(headerValues[0], LABEL_HEADER_MAGIC_NUM);
    const labels = [];
    let index = headerBytes;
    while (index < buffer.byteLength) {
        const array = new Int32Array(recordBytes);
        for (let i = 0; i < recordBytes; i++) {
            array[i] = buffer.readUInt8(index++);
        }
        labels.push(array);
    }
    assert.equal(labels.length, headerValues[1]);
    return labels;
}
Notes:
Here, the labels are also byte data in a file. In the JavaScript world, and with the approach you have as your starting point, the labels could also be a JSON array.
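For instance, with a JSON file shaped like the files.json from the question (a sketch; I'm assuming one numeric class id per entry in a label field):
// Sketch: derive the labels from a JSON file instead of the MNIST byte format
const myData = JSON.parse(await fs.readFile('files.json'));
const labels = myData.map(entry => new Int32Array([entry.label])); // same shape as what loadLabels() returns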
train the model:
await data.loadData();
const {images: trainImages, labels: trainLabels} = data.getTrainData();
model.summary();
let epochBeginTime;
let millisPerStep;
const validationSplit = 0.15;
const numTrainExamplesPerEpoch =
    trainImages.shape[0] * (1 - validationSplit);
const numTrainBatchesPerEpoch =
    Math.ceil(numTrainExamplesPerEpoch / batchSize);
await model.fit(trainImages, trainLabels, {
    epochs,
    batchSize,
    validationSplit
});
Notes:
Here model.fit is the actual line of code that does the work: it trains the model.
Results of the whole thing:
const {images: testImages, labels: testLabels} = data.getTestData();
const evalOutput = model.evaluate(testImages, testLabels);
console.log(
    `\nEvaluation result:\n` +
    `  Loss = ${evalOutput[0].dataSync()[0].toFixed(3)}; ` +
    `Accuracy = ${evalOutput[1].dataSync()[0].toFixed(3)}`);
Note:
In data science, the most fascinating part is to know how well the model survives the test of new data with no labels: can it label them or not? That is what the evaluation step, which now prints us some numbers, is for.
Loss and accuracy: [4]
The lower the loss, the better a model (unless the model has over-fitted to the training data). The loss is calculated on training and validation, and its interpretation is how well the model is doing for these two sets. Unlike accuracy, loss is not a percentage. It is a summation of the errors made for each example in the training or validation sets.
..
The accuracy of a model is usually determined after the model parameters are learned and fixed and no learning is taking place. Then the test samples are fed to the model and the number of mistakes (zero-one loss) the model makes are recorded, after comparison to the true targets.
More information:
On the GitHub pages, in the README.md file, there is a link to a tutorial where everything in the GitHub example is explained in greater detail.
[1] https://github.com/tensorflow/tfjs-examples/tree/master/mnist
[2] https://github.com/tensorflow/tfjs-examples/tree/master/mnist-node
[3] https://en.wikipedia.org/wiki/MNIST_database
[4] How to interpret "loss" and "accuracy" for a machine learning model

How do I create a loading indicator in bokeh?

I am using a radio button with Bokeh. I want to be able to show a loading indicator when a radio button is clicked and then show that it has finished after the python callback has completed. How do you do this with Bokeh?
I've tried using combinations of js_on_change with on_change for the radio buttons. I just can't come up with a way to notify the JS side of things when the Python callback has completed.
callback = CustomJS(args={}, code="""
document.getElementById('message_display').innerHTML = 'loading...';
""")
radio_button.js_on_change('active', callback)
radio_button.on_change('active', some_other_callback)
When I run it, the innerHTML gets set to 'loading...', the on_change Python callback runs, and the graph updates, but I have no way to trigger a change on the JS side to set the innerHTML to say 'done'.
Assuming the user already has a plot in view, one option would be to set a callback on the start attribute of the plot's range, so it will be triggered when the plot gets updated.
from bokeh.models import CustomJS, Range1d

p = figure()

def python_callback():
    p.y_range = Range1d(None, None)
    # get your data here and update the plot

code = "document.getElementById('message_display').innerHTML = 'loading finished';"
callback = CustomJS(args = dict(), code = code)
p.y_range.js_on_change('start', callback)
See working example below:
import numpy as np
from bokeh.plotting import figure, show
from bokeh.models import CustomJS, ColumnDataSource
points = np.random.rand(50, 2)
cds = ColumnDataSource(data = dict(x = points[:, 0], y = points[:, 1]))
p = figure(x_range = (0, 1), y_range = (0, 1))
p.scatter(x = 'x', y = 'y', source = cds)
cb_to_make_selection = CustomJS(args = {'cds': cds}, code = """
    function getRandomInt(max){return Math.floor(Math.random() * Math.floor(max));}
    cds.selected.indices = [getRandomInt(cds.get_length()-1)]
""")
p.x_range.js_on_change('start', cb_to_make_selection)
show(p)

Chart.js - Display data label leader lines on a pie chart

I have been playing a lot with Chart.Js but trying my hardest to avoid getting into Canvas itself due to time constraints and a personal preference of the SVG route of D3 et al.
I have a mixture of charts on a dashboard page, and everything looks fantastic except for one issue - you have to hover over a pie segment in order to see the underlying % or value.
For a dashboard view, my users would prefer to just quickly see some data labels on the segments - as with Excel - possibly easier to explain with an image:
https://support.office.com/en-gb/article/Display-or-hide-data-label-leader-lines-in-a-pie-chart-d7e7c62e-aaa5-483a-aa00-6d2c437df61b
The problem with other solutions I've found here is that they simply display the value in the segment, but some segments are too small for that to be the way forward.
There were also other solutions that always displayed tooltips - but there was a lot of overlapping and generally looked quite horrible.
I would even be happy if the legend could display data next to it, but I don't understand why a lot more people haven't requested the same functionality - am I missing something?
This feature isn't available so far, so there is no really quick solution for that.
It actually is possible to show the data within the legend (I have done this for dashboards I create at work). You just need to use the generateLabels legend label property to achieve this.
Here is an example that shows the data value in parentheses within the legend (this is done in the legend item's text property that is returned from the function).
generateLabels: function(chart) {
    var data = chart.data;
    if (data.labels.length && data.datasets.length) {
        return data.labels.map(function(label, i) {
            var meta = chart.getDatasetMeta(0);
            var ds = data.datasets[0];
            var arc = meta.data[i];
            var custom = arc.custom || {};
            var getValueAtIndexOrDefault = Chart.helpers.getValueAtIndexOrDefault;
            var arcOpts = chart.options.elements.arc;
            var fill = custom.backgroundColor ? custom.backgroundColor : getValueAtIndexOrDefault(ds.backgroundColor, i, arcOpts.backgroundColor);
            var stroke = custom.borderColor ? custom.borderColor : getValueAtIndexOrDefault(ds.borderColor, i, arcOpts.borderColor);
            var bw = custom.borderWidth ? custom.borderWidth : getValueAtIndexOrDefault(ds.borderWidth, i, arcOpts.borderWidth);
            return {
                // here is where we are adding the data values to the legend title
                text: label + " (" + ds.data[i].toLocaleString() + ")",
                fillStyle: fill,
                strokeStyle: stroke,
                lineWidth: bw,
                hidden: isNaN(ds.data[i]) || meta.data[i].hidden,
                index: i // extra data used for toggling the correct item
            };
        });
    } else {
        return [];
    }
}
You can see it in action at this codepen.
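If it helps, this is roughly where that function plugs into the chart configuration (a sketch in Chart.js 2.x style; the data here is made up):
// Sketch: wiring generateLabels into the legend options of a pie chart
new Chart(document.getElementById('myChart'), {
    type: 'pie',
    data: {
        labels: ['A', 'B', 'C'],
        datasets: [{ data: [30, 50, 20], backgroundColor: ['#ff6384', '#36a2eb', '#ffce56'] }]
    },
    options: {
        legend: {
            labels: {
                generateLabels: function(chart) { /* the function shown above */ }
            }
        }
    }
});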

Qualtrics: Custom JavaScript ceases to work when importing questions to a new survey

I have a Qualtrics survey containing a few questions with custom JavaScript (to enable a slider with two slider knobs). I can a) copy the survey as well as b) export the survey as a .qsf file and re-import it. In both cases, I get a new, working survey.
However, importing the survey questions to an existing survey using the "Import Questions From..." function does not work; the JavaScript fails to work in this case. Is there a way to import these questions to an existing survey while preserving the JavaScript?
The code used in the first (most relevant) question:
Qualtrics.SurveyEngine.addOnload(function()
{
    document.getElementById("QR~QID7").setAttribute("readonly", "readonly");
    document.getElementById("QR~QID8").setAttribute("readonly", "readonly");
    var surface;
    var cnst = 4;
    // Sets the sliders parameters and starting values
    $("#research-slider").slider({ id: "research-slider", min: 0, max: 10, range: true, value: [0, 10]});
    // variable to store in surface area when user has stopped sliding
    var surface, currentResponse;
    $("#research-slider").on("slideStop", function(slideEvt) {
        surface = slideEvt.value;
        document.getElementById("minimum").innerHTML = surface[0];
        document.getElementById("maximum").innerHTML = surface[1];
        document.getElementById("newValue").innerHTML = (surface[1] - surface[0])/cnst;
        document.getElementById("QR~QID7").value = surface[0];
        document.getElementById("QR~QID8").value = surface[1];
    });
    $('NextButton').onclick = function (event) {
        // and now run the event that the normal next button is supposed to do
        Qualtrics.SurveyEngine.navClick(event, 'NextButton')
    }
})
Your JavaScript won't work when imported because you are using fixed QID codes (QID7 and QID8). The solution is to write your code to find the correct elements by walking the DOM. The easiest way is to use prototypejs instead of native JavaScript.
Assuming the min (QID7) and max (QID8) follow immediately after the question your code is attached to, then it would be something like:
var qid = this.questionId;
var min = $(qid).next('.QuestionOuter').down('.InputText');
var max = $(qid).next('.QuestionOuter',1).down('.InputText');
min.setAttribute("readonly", "readonly");
max.setAttribute("readonly", "readonly");
var surface;
var cnst = 4;
// Sets the sliders parameters and starting values
$("#research-slider").slider({ id: "research-slider", min: 0, max: 10, range: true, value: [0, 10]});
// variable to store in surface area when user has stopped sliding
var surface, currentResponse;
$("#research-slider").on("slideStop", function(slideEvt) {
surface = slideEvt.value;
$("minimum").innerHTML = surface[0];
$("maximum").innerHTML = surface[1];
$("newValue").innerHTML = (surface[1]- surface[0])/cnst ;
min.value = surface[0];
max.value = surface[1];
});
