How do I prevent my javascript locking up the browser? - javascript

I have a WebSocket that receives data at a rate of ~50 Hz. Each time an update is pushed to the browser, the page turns the published JSON data into some pretty charts.
$(document).ready(function () {
    console.log('Connecting web socket for status publishing');
    allRawDataPublisher = new ReconnectingWebSocket("ws://" + location.host + '/raw/allrawdata');

    // Pre-fill the chart's data points with 256 empty bins
    var rawUnprocessedData = [];
    for (var i = 0; i < 256; i++) {
        rawUnprocessedData.push({ x: i, y: 0 });
    }

    // noiseFloorData is built elsewhere in the same way as rawUnprocessedData
    var unprocessedRawChart = new CanvasJS.Chart("rawUnprocessedData", {
        title: { text: "Raw Unprocessed Data" },
        axisX: { title: "Bin" },
        axisY: { title: "SNR" },
        data: [
            { type: "line", dataPoints: rawUnprocessedData },
            { type: "line", dataPoints: noiseFloorData }
        ]
    });

    // Copy the new values into the existing data points and re-render
    var updateChart = function (dps, newData, chart) {
        for (var i = 0; i < 256; i++) {
            dps[i].y = newData[i];
        }
        chart.render();
    };

    allRawDataPublisher.onmessage = function (message) {
        var jsonPayload = JSON.parse(message.data);
        var dataElements = jsonPayload["Raw Data Packet"];
        updateChart(rawUnprocessedData, dataElements["RAW_DATA"].Val, unprocessedRawChart);
    };

    unprocessedRawChart.render();
});
This works great when my laptop is plugged into a power socket, but if I unplug the power, my laptop drops its processing power (and the same issue occurs on lower-specced tablets, phones etc.). When there's less processing power available, the browser (Chrome) completely locks up.
I'm guessing the JavaScript is receiving updates faster than the browser can render them and is consequently locking the tab up.
If the browser is unable to update at the requested rate, I would like it to drop new data until it's ready to render another update. Is there a standard way to check that the browser has had enough time to render an update, and to drop new frames if that's not the case?
[Edit]
I did some digging with Chrome's profiler which confirms that (as expected) it's re-drawing the chart that is taking the bulk of the processing power.

You can do work between frames by using window.requestAnimationFrame.
The callback passed to this function is called at most once per frame - typically 60 times a second, or whatever rate matches your display's refresh rate.
It's also guaranteed to run before the next repaint - and after the previous repaint has finished.
From MDN window.requestAnimationFrame()
The window.requestAnimationFrame() method tells the browser that you wish to perform an animation and requests that the browser call a specified function to update an animation before the next repaint. The method takes a callback as an argument to be invoked before the repaint.
Here's an example of how it's used:
function renderChart () {
    // pull data from your array and do rendering here
    console.log('rendering...')
    requestAnimationFrame(renderChart)
}

requestAnimationFrame(renderChart)
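Applied to the WebSocket in the question, a minimal sketch of the "drop stale updates" idea could look like the following (it reuses updateChart, rawUnprocessedData and unprocessedRawChart from the question, so those names are carried over rather than part of any library): keep only the most recent message and let requestAnimationFrame decide when it actually gets drawn.
// Keep only the most recent payload; older ones are simply dropped
var latestData = null;

allRawDataPublisher.onmessage = function (message) {
    var jsonPayload = JSON.parse(message.data);
    // Overwrite rather than queue - if the browser is busy, intermediate frames are lost
    latestData = jsonPayload["Raw Data Packet"]["RAW_DATA"].Val;
};

function renderLoop() {
    if (latestData !== null) {
        updateChart(rawUnprocessedData, latestData, unprocessedRawChart);
        latestData = null; // mark as consumed so we don't redraw identical data
    }
    requestAnimationFrame(renderLoop); // runs at most once per display refresh
}

requestAnimationFrame(renderLoop);
Because the handler overwrites latestData instead of queuing it, a slow device simply renders fewer, more recent frames rather than falling further and further behind.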
However, it's better to render changes to your graph in batches, instead of doing rendering work for every single datum that comes through or on every frame.
Here's a Fiddle using Chart.js code that:
Pushes data to an Array every 100ms (10 Hz)
Renders data, in batches of 4 - every 1000ms (1 Hz)
const values = []

const ctx = document.getElementById('chartContainer').getContext('2d');
const chart = new Chart(ctx, {
    type: 'line',
    data: {
        labels: ['start'],
        datasets: [{
            label: 'mm of rain',
            data: [1],
            borderWidth: 1
        }]
    }
});

// Push 1 item every 100ms (10 Hz), simulates
// your data coming through at a faster
// rate than you can render
setInterval(() => {
    values.push(Math.random())
}, 100)

// Pull last 4 items every 1 second (1 Hz)
setInterval(() => {
    // splice last 4 items, add them to the chart's data
    values.splice(values.length - 4, 4)
        .forEach((value, index) => {
            chart.data.labels.push(index)
            chart.data.datasets[0].data.push(value)
        })

    // finally, command the chart to update once!
    chart.update()
}, 1000)
Do note that the above concept needs to handle the overflow case appropriately, otherwise the values Array will keep accumulating data until the process runs out of memory.
You also have to be careful with the way you render your batches: if your render rate consumes fewer values than the rate at which you fill the values Array, you will eventually run into memory issues.
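One simple guard, as a sketch (MAX_BUFFERED is an arbitrary cap chosen purely for illustration), is to bound the buffer in the producer so the oldest unrendered values are discarded instead of piling up:
const MAX_BUFFERED = 100; // arbitrary cap, chosen for illustration only

// Same producer as above, but with a bounded buffer:
// once the cap is exceeded, the oldest unrendered values are dropped.
setInterval(() => {
    values.push(Math.random())
    if (values.length > MAX_BUFFERED) {
        values.splice(0, values.length - MAX_BUFFERED)
    }
}, 100)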
Last but not least: I'm not really convinced you ever need to update a piece of data faster than 2 Hz, as I doubt the human brain can make useful interpretations at such a fast rate.

Related

Sync an html5 video with an highchart.js line chart

I have a one minute long video of a train moving and some data series about its position, onboard accelerometer, etc.
My goal is to make line charts (with Highcharts.js) of those measurements that are drawn dynamically alongside the video at a rate of 20 points per second. The charts have to move with the video, so that if the user skips back, the charts jump back to the same frame, and so on.
I was wondering if there's a way to attach an event to the video progress bar and redraw the chart every x milliseconds and/or every time the user plays/stops the video.
Connecting the timeupdate event of the video element with the setData method of a Highcharts series should be enough.
Example:
let currentDataIndex = 0;
const video = document.getElementById("video1");

// data (defined in the linked demo) is expected to hold one array of points per second of video
const chart = Highcharts.chart('container', {
    series: [{
        // Highcharts mutates the original data array, so use a copy
        data: data[currentDataIndex].slice()
    }]
});

const updateChartData = index => {
    chart.series[0].setData(data[index].slice());
    currentDataIndex = index;
};

video.addEventListener('timeupdate', e => {
    const second = Math.floor(video.currentTime);
    if (second !== currentDataIndex) {
        updateChartData(second);
    }
});
Live demo: http://jsfiddle.net/BlackLabel/8a6yvjs5/
API Reference: https://api.highcharts.com/class-reference/Highcharts.Series#setData
Docs:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/currentTime
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/timeupdate_event

Missing images depending on scale in html2canvas (html2pdf)

I am trying to make a PDF of a reasonable amount of text and graphs in my html using html2pdf. So far so good, the PDF gets made and it looks fine. It is about 20 pages.
However, multiple graphs are missing. Some relevant information:
Most of the missing graphs are at the end of the document. However, the final graph is rendered, so it is not simply cut off at the end
The text accompanying the graphs is there, but the graph itself is not
The missing graphs are all of the same type. Other graphs of this type are rendered and look fine. It does not seem to be a problem with the type
If I reduce the scale on the html2canvas configuration to about 0.8, every graph gets rendered (but of course quality is reduced). I'd like the scale to be 2.
The fact that the scale influences whether they are rendered gives me the idea that something like timing/timeouts is the problem here. A larger scale obviously means a longer rendering time, but html2canvas does not seem to wait for it to finish. Or something like that.
Below is the majority of the code that makes the PDF.
The onclone is necessary for the graphs to be rendered correctly. If it is removed, the problem described above still occurs (but the graphs that are rendered come out ugly).
const options = {
    filename: 'test.pdf',
    margin: [15, 0, 15, 0],
    image: { type: 'jpeg', quality: 1 },
    html2canvas: {
        scale: 2,
        scrollY: 0,
        onclone: (element) => {
            // Give every SVG explicit pixel dimensions so html2canvas can rasterise it
            const svgElements = element.body.querySelectorAll('svg');
            Array.from(svgElements).forEach((item: any) => {
                item.setAttribute('width', item.getBoundingClientRect().width.toString());
                item.setAttribute('height', item.getBoundingClientRect().height.toString());
                item.style.width = null;
                item.style.height = null;
            });
        }
    },
    jsPDF: { orientation: 'portrait', format: 'a4' }
};

setTimeout(() => {
    const pdfElement = document.getElementById('contentToConvert');
    html2pdf().from(pdfElement).set(options).save()
        .catch((err) => this.errorHandlerService.handleError(err));
}, 100);
It sounds like you may be exceeding the maximum canvas size in your browser. This varies by browser (and browser version). Try the demo from here to check your browser's limit. If you can find two browsers with different limits (on my desktop, Safari and Chrome have the same limit, but the max area in Firefox is a bit lower, and the area in Safari on an iPhone is much lower), try pushing your scale down on the one with the larger limit until it succeeds, and then see if that scale fails on the one with the lower limit.
There are other limits in your browser (e.g. max heap size) which may also come into play. If this is the case, I don't have a good solution for you - it's usually impractical to get clients to reconfigure their browsers (and most of these limits are hard anyway). For obvious reasons, browsers don't allow a website to make arbitrary changes to memory limits. If you are using Node.js, you may have more success in dealing with memory limits. Either way (Node or otherwise), it's sometimes better to send things back to the server when you are pushing the limits of the client.
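As a rough way to probe this yourself, a minimal sketch (not what the linked demo does internally; the sizes below are arbitrary) is to draw a pixel in the far corner of a test canvas and check whether it can be read back - oversized canvases typically fail silently and return blank pixels:
// Returns true if the browser can actually allocate and draw on a canvas
// of the given size; oversized canvases silently produce blank pixels.
function canvasSizeSupported(width, height) {
    try {
        const probe = document.createElement('canvas');
        probe.width = width;
        probe.height = height;

        const ctx = probe.getContext('2d');
        ctx.fillStyle = '#fff';
        ctx.fillRect(width - 1, height - 1, 1, 1);

        // If the canvas could not really be allocated, the pixel reads back as transparent black
        const pixel = ctx.getImageData(width - 1, height - 1, 1, 1).data;
        return pixel[3] !== 0;
    } catch (e) {
        // Some browsers throw instead of failing silently at extreme sizes
        return false;
    }
}

console.log(canvasSizeSupported(8192, 8192));   // usually fine
console.log(canvasSizeSupported(40000, 40000)); // likely to fail in most browsers
If the dimensions of your document multiplied by the scale of 2 land beyond the size this probe rejects, that would explain why the graphs go missing only at the larger scale.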

Different predictions if running in Node instead of Browser (using the same model_web - python converted model)

Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model into different formats, tfjs (model_web) included.
That website also provides boilerplate for running the model within a browser (a React app)... just like you do - probably it's the same code, I didn't spend enough time checking.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results, considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I don't have "just one video" to analyze frame by frame inside a browser - I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on the DOM and browser components, and almost no examples that work with Node exist. Not a big deal - I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that have been split into frames, at a decent speed - although I still get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture against the same model_web folder gives the same prediction, but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do tfjs and tfjs-node have different implementations that make different use of the same model? I don't think it can be a problem with the input, because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to decode a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises;
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

// Option 1: decode the JPG with the pure-JS inkjet package
const decodeJPGInkjet = (file) => {
    return new Promise((rs, rj) => {
        fs.readFile(file).then((buffer) => {
            inkjet.decode(buffer, (err, decoded) => {
                if (err) {
                    rj(err);
                } else {
                    rs(decoded);
                }
            });
        });
    });
};

// Option 2: draw the JPG onto a node-canvas and read the pixels back
const decodeJPGCanvas = (file) => {
    return loadImage(file).then((image) => {
        const canvas = createCanvas(image.width, image.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(image, 0, 0, image.width, image.height);
        const data = ctx.getImageData(0, 0, image.width, image.height);
        return {data: new Uint8Array(data.data), width: data.width, height: data.height};
    });
};
And here is the code that uses the loaded model to give predictions - the same code for Node and the browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as it is, so I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read
// calculateMaxScores and buildDetectedObjects are helper functions
// from the cloud-annotations javascript-sdk linked above
const runObjectDetectionPrediction = async (graph, labels, input) => {
    const batched = tf.tidy(() => {
        const img = tf.browser.fromPixels(input);
        // Reshape to a single-element batch so we can pass it to executeAsync.
        return img.expandDims(0);
    });

    const height = batched.shape[1];
    const width = batched.shape[2];

    const result = await graph.executeAsync(batched);

    const scores = result[0].dataSync();
    const boxes = result[1].dataSync();

    // clean the webgl tensors
    batched.dispose();
    tf.dispose(result);

    const [maxScores, classes] = calculateMaxScores(
        scores,
        result[0].shape[1],
        result[0].shape[2]
    );

    const prevBackend = tf.getBackend();
    // run post process in cpu
    tf.setBackend("cpu");

    const indexTensor = tf.tidy(() => {
        const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
        return tf.image.nonMaxSuppression(
            boxes2,
            maxScores,
            20,  // maxNumBoxes
            0.5, // iou_threshold
            0.5  // score_threshold
        );
    });

    const indexes = indexTensor.dataSync();
    indexTensor.dispose();

    // restore previous backend
    tf.setBackend(prevBackend);

    return buildDetectedObjects(
        width,
        height,
        boxes,
        maxScores,
        indexes,
        classes,
        labels
    );
};
Do different implementations of the libraries (tfjs and tfjs-node) make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ, resulting in different tensors being used for the prediction and thus causing the output to differ.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics libraries to create a browser-like canvas environment that can be used by Node.js. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is also possible to create a tensor directly from a Node.js buffer.
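For instance, a minimal sketch of the buffer route (tf.node.decodeImage is part of @tensorflow/tfjs-node; the file path below is just a placeholder):
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode a JPEG straight into an int32 tensor of shape [height, width, channels],
// bypassing canvas and tf.browser.fromPixels entirely
const buffer = fs.readFileSync('/path/to/frame.jpg'); // placeholder path
const imageTensor = tf.node.decodeImage(buffer, 3);

console.log(imageTensor.shape); // e.g. [720, 1280, 3]
It is worth checking that whichever decoding route you pick produces exactly the same pixel values and channel order as the browser path, since that preprocessing step is precisely where differences in predictions tend to come from.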

Moving navigator in Highcharts

I would like to fetch data for example from this website: https://www.highcharts.com/stock/demo/lazy-loading
To get the data at high resolution, I have to set the min/max to a small range and then step over the complete diagram.
It is possible to get the data from the website with "Highcharts.charts[0].series[0]".
But I didn't find a way to programmatically change the navigator to get other data ranges.
Is it possible to move the navigator to a specific position?
You can use the Axis.setExtremes function for this.
Live demo: http://jsfiddle.net/kkulig/am3cgo5w/
API reference: https://api.highcharts.com/class-reference/Highcharts.Axis#setExtremes
Thank you for your answer. Here is my code in case somebody needs it:
I saw that the redraw event is called 5 times before the diagram has finished redrawing.
counter = 1

// Get chart
chart = $('#highcharts-graph').highcharts()

// add event
Highcharts.addEvent(chart, 'redraw', function (e) {
    console.log(counter);
    counter = counter + 1;
    if (counter > 5) {
        console.log('finish loaded')
        function2()
    }
})

var newSelectedMin = 0
var newSelectedMax = 10
chart.xAxis[0].setExtremes(newSelectedMin, newSelectedMax);

JQuery Flot : How to add data to the existing point graph instead of redrawing

I am plotting a point graph and the graph should be updated with new data every 5 seconds. The min and max ranges are always fixed.
Currently, when I get new data from the server, I merge the new data into the existing source.data and plot the complete graph again.
I don't want to redraw the complete data again and again - as the length of source.data increases, performance goes down. So instead of redrawing the complete data, can I add only the new data to the existing graph?
Please find the source code here:
var source = [
    {
        data: [],
        show: true,
        label: "Constellation",
        name: "Constellation",
        color: "#0000FF",
        points: {
            show: true,
            radius: 2,
            fillColor: '#0000FF'
        },
        lines: {
            show: false
        }
    }
]

var options = {...}

$.getJSON(URL, function(data) {
    ...
    $.merge(source[0].data, new_data);
    plotObj = $.plot("#placeholder", source, options);
});
Follow these steps:
plotObj.setData(newData);
plotObj.setupGrid(); //if you also need to update axis.
plotObj.draw(); //to redraw data
Another useful method is getData(). With this method you can get the current data:
var data = plotObj.getData();
Your method of calling $.plot over and over should be avoided. It used to leak memory (not sure if it still does).
That said, @Luis is close, but let's put it all together. To add data to an existing plot, do this:
var allData = plotObj.getData(); // allData is an array of series objects
allData[seriesIndex].data.push([newXPoint,newYPoint]);
plotObj.setData(allData);
plotObj.setupGrid(); // if axis have changed
plotObj.draw();
It should be noted that this does redraw the entire plot. This is unavoidable with HTML5 canvas. BUT flot draws extremely fast, you'll barely notice it.
