Given a plain web video of, say, 30 seconds:
<video src="my-video.mp4"></video>
How could I generate its volume level chart?
volume|
level |      ******
      |     *      *                        **
      |    *        *                      *  **
      |**  *         *                    *     *
      |  **           *                  *       **
      +----------------*-*-----************--------+--- time
      0                                            30s
         video is               and quiet
         loud here              here
Note:
Plain JavaScript, please. No libraries.
There are several ways to do this depending on what the usage is.
For accuracy you could measure in conventional volume units such as RMS, LUFS/LKFS (K-weighted loudness), dBFS (decibels relative to full scale) and so forth.
The simple naive approach is to just plot the peaks of the waveform; you would be interested in the positive values only. To get just the peaks you would track the direction between two consecutive points and log the first point when the direction changes from upward to downward (p0 > p1).
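As a minimal sketch of that idea, assuming samples is a Float32Array such as the one returned by AudioBuffer.getChannelData():

// Collect positive local peaks: log the point where the curve turns
// from rising to falling (p0 > p1).
function positivePeaks(samples) {
  var peaks = [];
  for (var i = 1; i < samples.length - 1; i++) {
    var p0 = samples[i], p1 = samples[i + 1];
    if (p0 > 0 && samples[i - 1] < p0 && p0 > p1) {
      peaks.push({index: i, value: p0});
    }
  }
  return peaks;
}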
For all approaches you can finally apply some form of smoothing, such as a weighted moving average (example) or a generic smoothing algorithm, to remove small peaks and changes; in the case of RMS, dB etc. you would use a window size, which can be combined with bin-smoothing (an average per segment).
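As one take on the weighted moving average (a sketch; the triangular weighting and the half-window size are my own choices, not from the linked example):

// Triangularly weighted moving average over the plot values;
// "win" is the half-window size in points.
function weightedMovingAverage(points, win) {
  var out = new Array(points.length);
  for (var i = 0; i < points.length; i++) {
    var sum = 0, wsum = 0;
    for (var j = -win; j <= win; j++) {
      var k = i + j;
      if (k < 0 || k >= points.length) continue;
      var w = win + 1 - Math.abs(j); // heavier weight near the center
      sum += points[k] * w;
      wsum += w;
    }
    out[i] = sum / wsum;
  }
  return out;
}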
To plot, you obtain the value for the current sample, assume it to be normalized, and draw it as a line or point to canvas, scaled by the plot area height.
Mini-discussion on loading the source data
To address some of the questions in the comments; these are just off the top of my head to give some pointers.
Loading and decoding the entire file up front: since the Web Audio API cannot do streaming on its own, you have to load the entire file into memory and decode the audio track into a buffer.
Pros: works (analysis part), fast analysis when data is eventually ready, works fine for smaller files, if cached the URL can be used without re-downloading
Cons: long initial load time/bad UX, possible memory hog/not good for large files, audio is "detached" from the video sync-wise, forces reuse of the URL*, if the file is large and/or not cached it will have to be downloaded/streamed again, currently causes issues in some browsers/versions (see example below).
*: There is always the option of storing the downloaded video as a blob in IndexedDB (with its implications) and using an Object-URL with that blob to stream in the video element (may require MSE to work properly; I haven't tried it myself; a rough sketch follows).
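An untested sketch of that IndexedDB idea; the database and store names ("media", "videos") are made up for illustration:

function storeVideoBlob(blob, key) {
  return new Promise(function(resolve, reject) {
    var open = indexedDB.open("media", 1);
    open.onupgradeneeded = function() {
      open.result.createObjectStore("videos"); // out-of-line keys
    };
    open.onsuccess = function() {
      var tx = open.result.transaction("videos", "readwrite");
      tx.objectStore("videos").put(blob, key);
      tx.oncomplete = function() { resolve(key) };
      tx.onerror = function() { reject(tx.error) };
    };
    open.onerror = function() { reject(open.error) };
  });
}
// Later, read the blob back and point the video element at it:
// video.src = URL.createObjectURL(blobFromDb);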
Plotting while streaming:
Pros: Cheap on memory/resources
Cons: the plot cannot be shown in full until the entire file has been played through; the user may skip/jump parts or may never finish
Side-loading a low-quality mono audio-only file:
Pros: the audio can be loaded into memory independently of the video file, results in a good-enough approximation for level use
Cons: can delay initial loading of the video, may not be ready in time before the video starts, requires additional processing in advance
Server-side plotting:
Pros: can be plotted when uploaded, can store raw plot data that is provided as meta-data when the video is requested, low bandwidth, data ready when the video starts (assuming the data represents averages over time-segments).
Cons: requires infrastructure on the server that can separate, analyze and produce the plot-data; depending on how the data is stored, it may require database modification.
I might have left out or missed some points, but it should give the general idea...
Example
This example measures conventional dB over a given window size per sample. The bigger the window size, the smoother the result, but it will also take more time to calculate.
Note that for simplicity, the pixel position determines the dB window range in this example. This may produce uneven gaps/overlaps depending on the buffer size affecting the current sample value, but it should work for the purpose demonstrated here. Also for simplicity, I am scaling the dB reading by dividing it by 40, a somewhat arbitrary number here (the Math.abs() is just for the plotting, and the way my brain worked in the late night/early morning when I made this :) ).
I added bin/segment-smoothing in red on top to better show longer-term audio variations relevant to things such as auto-leveling.
I'm using an audio source here, but you can plug in a video source instead, as long as it contains an audio track in a format that can be decoded (AAC, MP3, Ogg etc.).
Besides that, the example is just that, an example. It's not production code, so take it for what it is worth and make adjustments as needed.
(For some reason the audio won't play in Firefox v58 beta, though it will plot. Audio plays in Chrome and FF58 dev.)
var ctx = c.getContext("2d"), ref, audio;
var actx = new (AudioContext || webkitAudioContext)();
var url = "//dl.dropboxusercontent.com/s/a6s1qq4lnwj46uj/testaudiobyk3n_lo.mp3";

ctx.font = "20px sans-serif";
ctx.fillText("Loading and processing...", 10, 50);
ctx.fillStyle = "#001730";

// Load audio
fetch(url, {mode: "cors"})
  .then(function(resp) {return resp.arrayBuffer()})
  .then(actx.decodeAudioData.bind(actx))
  .then(function(buffer) {

    // Get data from channel 0 (you will want to measure all/avg.)
    var channel = buffer.getChannelData(0);

    // dB per window + Plot
    var points = [0];
    ctx.clearRect(0, 0, c.width, c.height);
    ctx.moveTo(0, c.height);                      // start path at bottom-left

    for(var x = 1, i, v; x < c.width; x++) {
      i = ((x / c.width) * channel.length)|0;     // get index in buffer based on x
      v = Math.abs(dB(channel, i, 8820)) / 40;    // 200ms window, normalize
      ctx.lineTo(x, c.height * v);
      points.push(v);
    }
    ctx.fill();

    // smooth using bins
    var bins = 40;                                // segments
    var range = (c.width / bins)|0;
    var sum;
    ctx.beginPath();
    ctx.moveTo(0, c.height);
    for(x = 0, v; x < points.length;) {           // x is advanced by the inner loop only
      for(v = 0, i = 0; i < range && x < points.length; i++) {
        v += points[x++];
      }
      sum = v / range;
      ctx.lineTo(x - (range>>1), sum * c.height); // -range/2 to compensate visually
    }
    ctx.lineWidth = 2;
    ctx.strokeStyle = "#c00";
    ctx.stroke();

    // for audio / progressbar only
    c.style.backgroundImage = "url(" + c.toDataURL() + ")";
    c.width = c.width;                            // resetting width clears the canvas; plot kept as background
    ctx.fillStyle = "#c00";
    audio = document.querySelector("audio");
    audio.onplay = start;
    audio.onended = stop;
    audio.style.display = "block";
  });

// calculates RMS per window and returns dB
function dB(buffer, pos, winSize) {
  for(var rms, sum = 0, v, i = pos - winSize; i <= pos; i++) {
    v = i < 0 ? 0 : buffer[i];
    sum += v * v;
  }
  rms = Math.sqrt(sum / winSize);                 // root of the mean of the squares
  return 20 * Math.log10(rms);
}

// for progress bar (audio)
function start() {if (!ref) ref = requestAnimationFrame(progress)}
function stop() {cancelAnimationFrame(ref); ref = null}

function progress() {
  var x = audio.currentTime / audio.duration * c.width;
  ctx.clearRect(0, 0, c.width, c.height);
  ctx.fillRect(x - 1, 0, 2, c.height);
  ref = requestAnimationFrame(progress)
}
body {background:#536375}
#c {border:1px solid;background:#7b8ca0}
<canvas id=c width=640 height=300></canvas><br>
<audio style="display:none" src="//dl.dropboxusercontent.com/s/a6s1qq4lnwj46uj/testaudiobyk3n_lo.mp3" controls></audio>
Related
I have a small app that accepts an incoming audio stream from the internet and tries to find the frequency of a tone or continuous beep. At the time of the tone/beep it is the only thing that would be playing; the rest of the audio is either silence or talking.
I'm using the node-pitchfinder npm module to find the tone. When I use a sample audio clip I made of 2,000 Hz, the app prints out the frequency within one or two Hz. When I pull the audio stream online, however, I keep getting results like 17,000 Hz. My guess is that there is some "noise" in the audio signal and that's what the node-pitchfinder module is picking up.
Is there any way I can filter out that noise in real time to get an accurate frequency?
The streaming audio file is: http://relay.broadcastify.com/fq85hty701gnm4z.mp3
Code below:
const fs = require('fs');
const fsa = require('fs-extra');
const Lame = require('lame');
const Speaker = require('speaker');
const Volume = require('pcm-volume');
const Analyser = require('audio-analyser');
const request = require('request');
const Chunker = require('stream-chunker');
const { YIN } = require('node-pitchfinder');

const detectPitch = YIN({ sampleRate: 44100 });

//const BUFSIZE = 64;
const BUFSIZE = 500;

var decoder = new Lame.Decoder();
decoder.on('format', function(format) { onFormat(format) });

var chunker = Chunker(BUFSIZE);
chunker.pipe(decoder);

var options = {
  url: 'http://relay.broadcastify.com/fq85hty701gnm4z.mp3',
  headers: {
    "Upgrade-Insecure-Requests": 1,
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15"
  }
};

var audio_stream = request(options);
//var audio_stream = fs.createReadStream('./2000.mp3');
audio_stream.pipe(chunker);

function onFormat(format) {
  //if (volume == "undefined")
  volume = 1.0;
  vol = new Volume(volume);
  speaker = new Speaker(format);
  analyser = createAnalyser(format);
  analyser.on('data', sample);
  console.log(format);
  vol.pipe(speaker);
  vol.pipe(analyser);
  decoder.pipe(vol);
  vol.setVolume(volume);
}

function createAnalyser(format) {
  return new Analyser({
    fftSize: 8,
    bufferSize: BUFSIZE,
    'pcm-stream': {
      channels: format.channels,
      sampleRate: format.sampleRate,
      bitDepth: format.bitDepth
    }
  });
}

var logFile = 'log.txt';
var logOptions = { flag: 'a' };

function sample() {
  if (analyser) {
    const frequency = detectPitch(analyser._data);
    console.log(frequency);
  }
}
My goal is to find the most dominant audio frequency in a chunk of data so I can figure out the tone.
I found some code that supposedly does this with Python:
def getFreq( pkt ):
    # Use FFT to determine the peak frequency of the last chunk
    thefreq = 0
    if len(pkt) == bufferSize*swidth:
        indata = np.array(wave.struct.unpack("%dh"%(len(pkt)/swidth), pkt))*window
        # filter out everything outside of our bandpass Hz
        bp = np.fft.rfft(indata)
        minFilterBin = (bandPass[0]/(sampleRate/bufferSize)) + 1
        maxFilterBin = (bandPass[1]/(sampleRate/bufferSize)) - 1
        for i in range(len(bp)):
            if i < minFilterBin:
                bp[i] = 0
            if i > maxFilterBin:
                bp[i] = 0
        # Take the fft and square each value
        fftData = abs(bp)**2
        # find the maximum
        which = fftData[1:].argmax() + 1
        # Compute the magnitude of the sample we found
        dB = 10*np.log10(1e-20+abs(bp[which]))
        #avgdB = 10*np.log10(1e-20+abs(bp[which - 10:which + 10].mean()))
        if dB >= minDbLevel:
            # use quadratic interpolation around the max
            if which != len(fftData)-1:
                warnings.simplefilter("error")
                try:
                    y0, y1, y2 = np.log(fftData[which-1:which+2:])
                    x1 = (y2 - y0) * .5 / (2 * y1 - y2 - y0)
                except RuntimeWarning:
                    return(-1)
                # find the frequency and output it
                warnings.simplefilter("always")
                thefreq = (which + x1) * sampleRate/bufferSize
            else:
                thefreq = which * sampleRate/bufferSize
        else:
            thefreq = -1
    return(thefreq)
Original answer:
I cannot provide you with a complete solution, but I can (hopefully) give you enough advice to solve the problem.
I would recommend that you save a part of the stream you want to analyze to a file and then take a look at the file with a spectrum analyzer (e.g. with Audacity). This allows you to determine if the 17kHz signal is present in the audio stream.
If the 17 kHz signal is present in the audio stream, then you can filter the audio stream with a low-pass filter (e.g. audio-biquad with type lowpass and the frequency set somewhere above 2 kHz).
If the 17 kHz signal is not present in the audio then you could try to increase the buffer size BUFSIZE (currently set to 500 in your code). In the example on node-pitchfinder's GitHub page they use a complete audio file for pitch detection. Depending on how the pitch detection algorithm is implemented the result might be different for larger chunks of audio data (i.e. a few seconds) compared to very short chunks (500 samples is around 11 ms at sample rate 44100). Start with a large value for BUFSIZE (e.g. 44100 -> 1 second) and see if it makes a difference.
Explanation of the Python code: The code uses the FFT (fast Fourier transform) to find out which frequencies are present in the audio signal and then searches for the frequency with the highest value. This usually works well for simple signals like a 2 kHz sine wave. You could use dsp.js, which provides an FFT implementation, if you want to implement it in JavaScript. However, it is quite a challenge to get this right without some knowledge of digital signal processing theory.
As a side note: the YIN algorithm does not use FFT, it is based on autocorrelation.
Update
The following script uses the FFT data of audio-analyser and searches for the maximum frequency. This approach is very basic and only works well for signals where just one frequency is very dominant. The YIN algorithm is much better suited for pitch detection than this example.
const fs = require('fs');
const Lame = require('lame');
const Analyser = require('audio-analyser');
const Chunker = require('stream-chunker');

var analyser;
var fftSize = 4096;

var decoder = new Lame.Decoder();
decoder.on('format', format => {
  analyser = createAnalyser(format);
  decoder.pipe(analyser);
  analyser.on('data', processSamples);
  console.log(format);
});

var chunker = Chunker(fftSize);
var audio_stream = fs.createReadStream('./sine.mp3');
audio_stream.pipe(chunker);
chunker.pipe(decoder);

function createAnalyser(format) {
  return new Analyser({
    fftSize: fftSize,
    frequencyBinCount: fftSize / 2,
    sampleRate: format.sampleRate,
    channels: format.channels,
    bitDepth: format.bitDepth
  });
}

function processSamples() {
  if (analyser) {
    var fftData = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(fftData);
    var maxBin = fftData.indexOf(Math.max(...fftData));
    var thefreq = maxBin * analyser.sampleRate / analyser.fftSize;
    console.log(maxBin + " " + thefreq);
  }
}
What is the correct way to implement a Peak Meter like those in Logic Pro with the Web Audio API AnalyserNode?
I know AnalyserNode.getFloatFrequencyData() returns decibel values, but how do you combine those values to get the one to be displayed in the meter? Do you just take the maximum value, like in the following code sample (where analyserData comes from getFloatFrequencyData())?
let peak = -Infinity;
for (let i = 0; i < analyserData.length; i++) {
  const x = analyserData[i];
  if (x > peak) {
    peak = x;
  }
}
Inspecting some output from just taking the max makes it look like this is not the correct approach. Am I wrong?
Alternatively, would it be a better idea to use a ScriptProcessorNode instead? How would that approach differ?
If you take the maximum of getFloatFrequencyData()'s results in one frame, then what you are measuring is the audio power at a single frequency (whichever one has the most power). What you actually want to measure is the peak at any frequency — in other words, you want to not use the frequency data, but the unprocessed samples not separated into frequency bins.
The catch is that you'll have to compute the power in decibels yourself. This is fairly simple arithmetic: take some number of samples (one or more), square them, average them, and take 10 * log10 of the result. Note that even a "peak" meter may be doing averaging, just on a much shorter time scale.
Here's a complete example. (Warning: produces sound.)
document.getElementById('start').addEventListener('click', () => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const oscillator = context.createOscillator();
  oscillator.type = 'square';
  oscillator.frequency.value = 440;
  oscillator.start();

  const gain1 = context.createGain();
  const analyser = context.createAnalyser();

  // Reduce output level to not hurt your ears.
  const gain2 = context.createGain();
  gain2.gain.value = 0.01;

  oscillator.connect(gain1);
  gain1.connect(analyser);
  analyser.connect(gain2);
  gain2.connect(context.destination);

  function displayNumber(id, value) {
    const meter = document.getElementById(id + '-level');
    const text = document.getElementById(id + '-level-text');
    text.textContent = value.toFixed(2);
    meter.value = isFinite(value) ? value : meter.min;
  }

  // Time domain samples are always provided with the count of
  // fftSize even though there is no FFT involved.
  // (Note that fftSize can only have particular values, not an
  // arbitrary integer.)
  analyser.fftSize = 2048;
  const sampleBuffer = new Float32Array(analyser.fftSize);

  function loop() {
    // Vary power of input to analyser. Linear in amplitude, so
    // nonlinear in dB power.
    gain1.gain.value = 0.5 * (1 + Math.sin(Date.now() / 4e2));

    analyser.getFloatTimeDomainData(sampleBuffer);

    // Compute average power over the interval.
    let sumOfSquares = 0;
    for (let i = 0; i < sampleBuffer.length; i++) {
      sumOfSquares += sampleBuffer[i] ** 2;
    }
    const avgPowerDecibels = 10 * Math.log10(sumOfSquares / sampleBuffer.length);

    // Compute peak instantaneous power over the interval.
    let peakInstantaneousPower = 0;
    for (let i = 0; i < sampleBuffer.length; i++) {
      const power = sampleBuffer[i] ** 2;
      peakInstantaneousPower = Math.max(power, peakInstantaneousPower);
    }
    const peakInstantaneousPowerDecibels = 10 * Math.log10(peakInstantaneousPower);

    // Note that you should then add or subtract as appropriate to
    // get the _reference level_ suitable for your application.

    // Display value.
    displayNumber('avg', avgPowerDecibels);
    displayNumber('inst', peakInstantaneousPowerDecibels);
    requestAnimationFrame(loop);
  }
  loop();
});
<button id="start">Start</button>
<p>
  Short average
  <meter id="avg-level" min="-100" max="10" value="-100"></meter>
  <span id="avg-level-text">—</span> dB
</p>
<p>
  Instantaneous
  <meter id="inst-level" min="-100" max="10" value="-100"></meter>
  <span id="inst-level-text">—</span> dB
</p>
Do you just take the maximum value
For a peak meter, yes. For a VU meter, there are all sorts of considerations in measuring the power, as well as the ballistics of an analog meter. There's also RMS power metering.
In digital land, you'll find a peak meter to be most useful for many tasks, and by far the easiest to compute.
A peak for any given set of samples is the highest absolute value in the set. First though, you need that set of samples. If you call getFloatFrequencyData(), you're not getting sample values, you're getting the spectrum. What you want instead is getFloatTimeDomainData(). This data is a low resolution representation of the samples. That is, you might have 4096 samples in your window, but your analyser might be configured with 256 buckets... so those 4096 samples will be resampled down to 256 samples. This is generally acceptable for a metering task.
From there, it's just Math.max(-Math.min(...samples), Math.max(...samples)) to get the max of the absolute value (note the spread syntax: Math.min()/Math.max() take individual arguments, not an array).
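Putting that together, a minimal sketch (analyser is assumed to be an already-connected AnalyserNode):

// Read the time-domain samples and take the absolute peak.
var samples = new Float32Array(analyser.fftSize);
analyser.getFloatTimeDomainData(samples);

var peak = 0;
for (var i = 0; i < samples.length; i++) {
  peak = Math.max(peak, Math.abs(samples[i]));
}
// If you want to display it in dBFS (0 dB = full scale):
var peakDb = 20 * Math.log10(peak);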
Suppose you wanted a higher resolution peak meter. For that, you need all the raw samples you can get. That's where a ScriptProcessorNode comes in handy. You get access to the actual sample data.
Basically, for this task, AnalyserNode is much faster, but slightly lower resolution. ScriptProcessorNode is much slower, but slightly higher resolution.
On my webpage, I have an audio file inside of an <audio> tag.
<!DOCTYPE html>
<html>
<audio src="myTrack.mp3" controls preload="auto"></audio>
</html>
I want to chop up this file stored in an <audio> tag into multiple 10-second audio files that I could then insert into the webpage as their own audio files in separate <audio> tags.
Is it possible to do this in javascript?
Yes, of course this is possible! :)
1. Make sure the audio fulfills CORS requirements so we can load it with AJAX (loading from the same origin as the page will of course fulfill this).
2. Load the file as an ArrayBuffer and decode it with AudioContext.
3. Calculate the number of segments and the length of each (I use a time-based length independent of channels below).
4. Split the main buffer into smaller buffers.
5. Create a file wrapper for each new buffer (below I made a simple WAVE wrapper for the demo).
6. Feed that as a Blob via an Object-URL to a new instance of the Audio element.
7. Keep track of the object-URLs so you can free them up when they are not needed anymore (revokeObjectURL(); see the sketch after this list).
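A tiny sketch of that bookkeeping (the helper names are mine):

// Keep created object-URLs around so they can be freed later.
var objectURLs = [];

function trackURL(url) {
  objectURLs.push(url);
  return url;
}

function releaseURLs() {
  objectURLs.forEach(function(u) { URL.revokeObjectURL(u) });
  objectURLs.length = 0;
}

In the demo below you would wrap the URL.createObjectURL() call in trackURL() and call releaseURLs() when the clips are no longer needed.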
One drawback is of course that you would have to load the entire file into memory before processing it.
Example
Hopefully the file I'm using for the demo will remain available through the current CDN, which allows CORS usage (I own the copyright; feel free to use it for testing, but only testing!! :) ). The loading and decoding can take some time depending on your system and connection, so please be patient...
Ideally you should use an asynchronous approach when splitting the buffers, but the demo targets only the steps needed to make the buffer segments available as new file fragments.
Also note that I did not take into consideration that the last segment may be shorter than the others (I use floor; you should use ceil for the segment count and cut the last block length short). I'll leave that as an exercise for the reader...
var actx = new (AudioContext || webkitAudioContext)(),
    url = "//dl.dropboxusercontent.com/s/7ttdz6xsoaqbzdl/war_demo.mp3";

// STEP 1: Load audio file using AJAX ----------------------------------
fetch(url).then(function(resp) {return resp.arrayBuffer()}).then(decode);

// STEP 2: Decode the audio file ---------------------------------------
function decode(buffer) {
  actx.decodeAudioData(buffer, split);
}

// STEP 3: Split the buffer --------------------------------------------
function split(abuffer) {
  // calc number of segments and segment length
  var channels = abuffer.numberOfChannels,
      duration = abuffer.duration,
      rate = abuffer.sampleRate,
      segmentLen = 10,
      count = Math.floor(duration / segmentLen), // floor: last partial segment is dropped (see note above)
      offset = 0,
      block = 10 * rate;

  while(count--) {
    var url = URL.createObjectURL(bufferToWave(abuffer, offset, block));
    var audio = new Audio(url);
    audio.controls = true;
    audio.volume = 0.75;
    document.body.appendChild(audio);
    offset += block;
  }
}

// Convert an audio-buffer segment to a Blob using WAVE representation
function bufferToWave(abuffer, offset, len) {
  var numOfChan = abuffer.numberOfChannels,
      length = len * numOfChan * 2 + 44,
      buffer = new ArrayBuffer(length),
      view = new DataView(buffer),
      channels = [], i, sample,
      pos = 0;

  // write WAVE header
  setUint32(0x46464952);                         // "RIFF"
  setUint32(length - 8);                         // file length - 8
  setUint32(0x45564157);                         // "WAVE"
  setUint32(0x20746d66);                         // "fmt " chunk
  setUint32(16);                                 // length = 16
  setUint16(1);                                  // PCM (uncompressed)
  setUint16(numOfChan);
  setUint32(abuffer.sampleRate);
  setUint32(abuffer.sampleRate * 2 * numOfChan); // avg. bytes/sec
  setUint16(numOfChan * 2);                      // block-align
  setUint16(16);                                 // 16-bit (hardcoded in this demo)
  setUint32(0x61746164);                         // "data" - chunk
  setUint32(length - pos - 4);                   // chunk length

  // write interleaved data
  for(i = 0; i < abuffer.numberOfChannels; i++)
    channels.push(abuffer.getChannelData(i));

  while(pos < length) {
    for(i = 0; i < numOfChan; i++) {             // interleave channels
      sample = Math.max(-1, Math.min(1, channels[i][offset]));      // clamp
      sample = (sample < 0 ? sample * 32768 : sample * 32767) | 0;  // scale to 16-bit signed int
      view.setInt16(pos, sample, true);          // write sample
      pos += 2;
    }
    offset++;                                    // next source sample
  }

  // create Blob
  return new Blob([buffer], {type: "audio/wav"});

  function setUint16(data) {
    view.setUint16(pos, data, true);
    pos += 2;
  }

  function setUint32(data) {
    view.setUint32(pos, data, true);
    pos += 4;
  }
}
audio {display:block;margin-bottom:1px}
I am working on a project that requires end users to be able to draw in the browser, much like svg-edit, and send the SVG data to the server for processing.
I've started playing with the Raphael framework and it seems promising.
Currently I am trying to implement a pencil or free-line type tool. Basically I am just drawing a new path based on a percentage of mouse movement in the drawing area. However, in the end this is going to create a massive number of paths to deal with.
Is it possible to shorten an SVG path by converting mouse movement to use curve and line paths instead of line segments?
Below is draft code I whipped up to do the job ...
// Drawing area size const
var SVG_WIDTH = 620;
var SVG_HEIGHT = 420;

// Compute movement required for new line
var xMove = Math.round(SVG_WIDTH * .01);
var yMove = Math.round(SVG_HEIGHT * .01);

// Min must be 1
var X_MOVE = xMove ? xMove : 1;
var Y_MOVE = yMove ? yMove : 1;

// Coords
var start, end, coords = null;
var paperOffset = null;
var mouseDown = false;

// Get drawing area coords
function toDrawCoords(coords) {
  return {
    x: coords.clientX - paperOffset.left,
    y: coords.clientY - paperOffset.top
  };
}

$(document).ready(function() {
  // Get area offset
  paperOffset = $("#paper").offset();
  paperOffset.left = Math.round(paperOffset.left);
  paperOffset.top = Math.round(paperOffset.top);

  // Init area
  var paper = Raphael("paper", 620, 420);

  // Create draw area
  var drawArea = paper.rect(0, 0, 619, 419, 10);
  drawArea.attr({fill: "#666"});

  // EVENTS
  drawArea.mousedown(function (event) {
    mouseDown = true;
    start = toDrawCoords(event);
    $("#startCoords").text("Start coords: " + $.dump(start));
  });
  drawArea.mouseup(function (event) {
    mouseDown = false;
    end = toDrawCoords(event);
    $("#endCoords").text("End coords: " + $.dump(end));
    buildJSON(paper);
  });
  drawArea.mousemove(function (event) {
    coords = toDrawCoords(event);
    $("#paperCoords").text("Paper coords: " + $.dump(coords));
    // if down and we've at least moved min percentage requirements
    if (mouseDown) {
      var xMovement = Math.abs(start.x - coords.x);
      var yMovement = Math.abs(start.y - coords.y);
      if (xMovement > X_MOVE || yMovement > Y_MOVE) {
        paper.path("M{0} {1}L{2} {3}", start.x, start.y, coords.x, coords.y);
        start = coords;
      }
    }
  });
});
Have a look at the Douglas-Peucker algorithm to simplify your line.
I don't know of any JavaScript implementation (though googling directed me to forums for Google Maps developers), but here's a Tcl implementation that is easy enough to understand: http://wiki.tcl.tk/27610
And here's a wikipedia article explaining the algorithm (along with pseudocode): http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
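Since no JavaScript implementation was linked above, here is a minimal sketch of the algorithm (points are {x, y} objects; epsilon is the maximum allowed deviation in pixels):

// Ramer-Douglas-Peucker: keep the end points, find the point farthest
// from the chord between them; if it deviates more than epsilon,
// split there and recurse, otherwise drop everything in between.
function simplify(points, epsilon) {
  if (points.length < 3) return points.slice();
  var first = points[0], last = points[points.length - 1];
  var maxDist = 0, index = 0;
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [first, last];
  var left = simplify(points.slice(0, index + 1), epsilon);
  var right = simplify(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right); // don't duplicate the split point
}

function perpDistance(p, a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}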
Here is a drawing tool which works with the iPhone or the mouse:
http://irunmywebsite.com/raphael/drawtool2.php
However, also look at Dave's "game utility" at
http://irunmywebsite.com/raphael/raphaelsource.php which generates path data as you draw.
I'm working on something similar. I found a way to incrementally add path commands by a little bypass of the Raphael API as outlined in my answer here. In the modern browsers I tested on, this performs reasonably well but the degree to which your lines appear smooth depends on how fast the mousemove handler can work.
You might try my method for drawing paths using line segments and then perform smoothing after the initial jagged path is drawn (or as you go somehow), by pruning the coordinates using Ramer–Douglas–Peucker as slebetman suggested, and converting the remaining Ls to SVG curve commands.
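One common way to do that last conversion (my own sketch, not from the linked answer) is to run quadratic segments through the midpoints of the simplified polyline:

// Build a smooth SVG path string from simplified {x, y} points:
// each original point becomes a Q control point, with the segment
// ending at the midpoint to the next point.
function toSmoothPath(pts) {
  if (pts.length < 3) {
    return "M" + pts.map(function(p) { return p.x + " " + p.y }).join("L");
  }
  var d = "M" + pts[0].x + " " + pts[0].y;
  for (var i = 1; i < pts.length - 1; i++) {
    var mx = (pts[i].x + pts[i + 1].x) / 2,
        my = (pts[i].y + pts[i + 1].y) / 2;
    d += "Q" + pts[i].x + " " + pts[i].y + " " + mx + " " + my;
  }
  return d + "L" + pts[pts.length - 1].x + " " + pts[pts.length - 1].y;
}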
I have a similar problem. I draw using mouse-down and the M command, then save that path to a database on the server. The issue I am having has to do with resolution: I have a background image where users draw lines and shapes over parts of the image, but if the image is displayed at one resolution and the paths are created at that resolution, then reopened at a different (perhaps lower) resolution, my paths get shifted and are not sized correctly. I guess what I am asking is: is there a way to draw a path over an image and make sure that, no matter the size of the underlying image, the path remains proportionally correct?
I was thinking of making a game using javascript for the game logic and the HTML5 canvas element to animate the drawing. My goal is to write something that works in browsers and on newer smartphones. So I wrote up a quick program that moves 100 circles around on the screen and shows me the frame rate. I was fairly disappointed with the results:
Chrome: ~90 FPS
Firefox: ~ 25 FPS
iPhone: ~11 FPS
This was a pretty simple test, so I don't like my chances when it comes to actually making a complete game. Is this the standard result from the canvas element, or are there some tricks to make drawing faster? If you have any good links, let me know. Is canvas just a toy at this point, or can it be used for real-world applications?
Edit: Here's the code:
var ctx;
var width;
var height;
var delta;
var lastTime;
var frames;
var totalTime;
var updateTime;
var updateFrames;
var creats = new Array();

function init() {
  var canvas = document.getElementById('main');
  width = canvas.width;
  height = canvas.height;
  ctx = canvas.getContext('2d');
  for(var i = 0; i < 100; ++i) {
    addCreature();
  }
  lastTime = (new Date()).getTime();
  frames = 0;
  totalTime = 0;
  updateTime = 0;
  updateFrames = 0;
  setInterval(update, 10);
}

function addCreature() {
  var c = new Creature(Math.random() * 100, Math.random() * 200);
  creats.push(c);
}

function update() {
  var now = (new Date()).getTime();
  delta = now - lastTime;
  lastTime = now;
  totalTime += delta;
  frames++;
  updateTime += delta;
  updateFrames++;
  if(updateTime > 1000) {
    document.getElementById('fps').innerHTML = "FPS AVG: " + (1000 * frames / totalTime) + " CUR: " + (1000 * updateFrames / updateTime);
    updateTime = 0;
    updateFrames = 0;
  }
  for(var i = 0; i < creats.length; ++i) {
    creats[i].move();
  }
  draw();
}

function draw() {
  ctx.clearRect(0, 0, width, height);
  creats.forEach(drawCreat);
}

function drawCreat(c, i, a) {
  if (!onScreen(c)) {
    return;
  }
  ctx.fillStyle = "#00A308";
  ctx.beginPath();
  ctx.arc(c.x, c.y, 10, 0, Math.PI * 2, true);
  ctx.closePath();
  ctx.fill();
}

function onScreen(o) {
  return o.x >= 0 && o.y >= 0 && o.x <= width && o.y <= height;
}

function Creature(x1, y) {
  this.x = x1;
  this.y = y;
  this.dx = Math.random() * 2;
  this.dy = Math.random() * 2;
  this.move = function() {
    this.x += this.dx;
    this.y += this.dy;
    if(this.x < 0 || this.x > width) {
      this.dx *= -1;
    }
    if(this.y < 0 || this.y > height) {
      this.dy *= -1;
    }
  }
}

init();
In order to make animations more efficient, and to synchronize your framerate with the UI updates, Mozilla created a mozRequestAnimationFrame() function that is designed to remove the inefficiencies of setTimeout() and setInterval(). This technique has since been adopted by WebKit for Chrome.
In Feb 2011 Paul Irish posted a shim that created requestAnimFrame(), and shortly afterwards Joe Lambert extended it by restoring the "timeout" and "interval" delay to slow down animation ticks.
Anyway, I've used both and have seen very good results in Chrome and Firefox. The shim also falls back to setTimeout() if requestAnimationFrame() is unavailable. Both Paul's and Joe's code is online at GitHub.
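From memory, the shim is essentially the following (check Paul's post for the canonical version):

window.requestAnimFrame = (function() {
  return window.requestAnimationFrame ||
         window.webkitRequestAnimationFrame ||
         window.mozRequestAnimationFrame ||
         window.oRequestAnimationFrame ||
         window.msRequestAnimationFrame ||
         function(callback) {
           window.setTimeout(callback, 1000 / 60); // ~60 FPS fallback
         };
})();

// Instead of setInterval(update, 10), drive the loop like this:
(function animloop() {
  requestAnimFrame(animloop);
  update();
})();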
Hope this helps!
It's largely dependent on the JavaScript engine. V8 (Chrome) and Carakan (Opera) are probably the two fastest production-quality engines. TraceMonkey (Firefox) and SquirrelFish (Safari) are far behind, with KJS bringing up the rear. This will change as hardware acceleration enters the mainstream.
As for specific optimizations, we'd probably have to see some code. Remember that the canvas supports compositing, so you really only need to redraw areas that changed. Perhaps you should re-run your benchmark without the canvas so you know if the drawing operations really were the limiting factor.
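To illustrate the "only redraw what changed" point against the question's code (prevX/prevY are hypothetical fields added for this sketch, not in the original):

// Clear and repaint only the dirty region around each moving circle
// instead of clearing the whole canvas every frame.
var R = 10, PAD = 2; // circle radius plus a little slack (assumed)

function drawDirty(c) {
  ctx.clearRect(c.prevX - R - PAD, c.prevY - R - PAD, (R + PAD) * 2, (R + PAD) * 2);
  ctx.beginPath();
  ctx.arc(c.x, c.y, R, 0, Math.PI * 2, true);
  ctx.fill();
  c.prevX = c.x;
  c.prevY = c.y;
}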
If you want to see what can be done now, check out:
js1k
Bespin
Canvas-stein
Arcs are math-intensive to draw. You can dramatically improve performance by using drawImage or even putImageData instead of drawing the path each frame.
The image can be a file loaded from a URL or it can be an image created by drawing on a separate canvas not visible to the user (not connected to the DOM). Either way, you'll save a ton of processor time.
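Applied to the question's code, the circle could be pre-rendered once to an offscreen canvas and blitted each frame (a sketch; the 20x20 sprite just covers the 10px radius):

// Pre-render the circle once...
var sprite = document.createElement("canvas");
sprite.width = sprite.height = 20;
var sctx = sprite.getContext("2d");
sctx.fillStyle = "#00A308";
sctx.beginPath();
sctx.arc(10, 10, 10, 0, Math.PI * 2, true);
sctx.fill();

// ...then blit it instead of tracing the arc every frame.
function drawCreat(c) {
  ctx.drawImage(sprite, c.x - 10, c.y - 10);
}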
I have written a simple bouncing-ball demo that gives you points if you click the ball.
It works fine in Firefox, Safari, Chrome and on the iPad. However, the iPhone 3G/3GS was horribly slow with it, and the same goes for my older Android phone.
I am sorry but I do lack specific numbers.
Chrome is the only browser in which I've seen high framerate results so far.
You might also want to try the latest preview of IE9. That should give you a decent benchmark of how the next generation of browsers (with hardware acceleration for HTML5) will handle your code.
So far, I've seen that IE9, Chrome 7, and Firefox 4 will all sport some form of hardware acceleration.
There's loads of optimizations to be done with Canvas drawing.
Do you have example code you could share?