I'm running NodeJS on the server-side, and I'm trying to do a bit of automated image processing to determine the 'base colour' of an image.
Here are the steps of what I want to do:
Take an image (on a remote server, so passed through with a URL) and get its dimensions
Use dimensions to calculate the centre of the image
Take a 10px x 50px (WxL) rectangle around the centre-point
Get the RGB value for each of these pixels (500 per image)
Output the average value of the pixels
I know such things are possible in PHP, but I'd like to use Node if possible. I've seen tutorials on using Node-imagick for basic processing (like resizing and cropping), but I don't know where to start with more advanced analysis like this.
Questions
(a) Is this possible with Node?
(b) What libraries will allow me to do this?
A: Yes.
B: gm (the GraphicsMagick module for Node).
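Here's a minimal sketch of those steps with gm (GraphicsMagick must be installed on the server; fetching the remote URL into a local path or buffer first is omitted). Treat the single-pixel-resize trick and the PPM buffer layout as assumptions to verify rather than a definitive implementation:

var gm = require('gm');

// Average the 10x50 block around the centre of the image at `path`.
function averageCentreColour(path, callback) {
  gm(path).size(function (err, size) {
    if (err) return callback(err);
    var x = Math.round(size.width / 2) - 5;   // left edge of the 10px-wide box
    var y = Math.round(size.height / 2) - 25; // top edge of the 50px-tall box
    gm(path)
      .crop(10, 50, x, y)
      .resize(1, 1, '!') // '!' forces the exact size; GraphicsMagick averages the 500 pixels
      .toBuffer('PPM', function (err, buffer) {
        if (err) return callback(err);
        // A binary PPM of a 1x1 image ends with the three R, G, B bytes.
        var rgb = buffer.slice(buffer.length - 3);
        callback(null, { r: rgb[0], g: rgb[1], b: rgb[2] });
      });
  });
}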
node-itk may be helpful to you.
Node-ITK is a node.js wrapper built on top of ITK. It was built to facilitate node.js's use in rapid prototyping, education, and web services for Medical Image Processing.
https://npmjs.org/package/node-itk
Related
I want to measure the roundtrip time in a web application to see how long it takes for a request to be sent, answered, and interpreted. I send a request to a database server, it sends me back some data and I want to visualize that data using WebGL. The WebGL part just consists of setting up a texture and plot it onto a quad.
I want my measurement to start when the request was sent and to stop as soon as the rendering has finished. For now, my (maybe naive) approach was something like this:
ws.onmessage = (d) => {
  ...
  render(d); // here goes some typical plain WebGL code for preparing and plotting a 2D texture on a quad
  end = performance.now();
  roundtrip = end - start; // measured inside the handler, after rendering
};
start = performance.now();
ws.send(JSON.stringify(request));
But I'm not sure if this is an accurate measurement and if end really refers to the finally drawn canvas. Is there any way to get the exact moment when the frame has been rendered? For now, I don't have a typical render loop; instead I just update the view with a new frame when a new request is triggered. I'm aware of gl.finish(), but this doesn't seem to be an adequate solution. I've also heard about WebGLSync from the WebGL2 API, but first, I'm using WebGL1, and second, it doesn't feel like that complicated a problem...
What do you mean by "rendered"? Do you mean "visible to the user" or do you mean "pixels put in the backbuffer" or something else?
For "visible to the user" there is no way short of setting up a camera on another machine to look at your screen. There could be several frames of latency based on all the things between WebGL and the user. The browser compositing things, whether the browser is single, double, or triple buffered. The OS and how it composites windows, the monitor or TV's image processing, whether the user is using HDMI, DisplayPort, VGA, DisplayLink, Airplay, Chromecast
For "pixels put in the backbuffer" like the other answer you linked to says using gl.finish or similar like gl.readPixels will not tell you how long something took. It will instead tell you how long something took + how long it took to stop the graphics pipeline and there is no easy way to subtract out that last part. On the other hand you might be able to use it to tell if one way of doing things is faster than another way of doing things.
I have a requirement to display 3D images. I have an .obj file of about 80 MB that I converted to a JSON file, which is also nearly 75 MB. Using three.js I can display a rotating 3D model, but the issue is speed: it takes nearly 1 minute to load.
Furthermore, the client expects two such models on the same page. I can do that too, but then it takes nearly 4-5 minutes to load them.
Is there any workaround for increasing the speed?
If the object is 80MB in size then it should not be considered suitable for use in a web application. If the model does not seem to have a high level of detail, then perhaps your exporter has some problem with it and you are getting a lot of extra information that you don't need. If the model simply is very complex, then your only real option is to simplify the model dramatically, or find another model that has a lower polygon count.
If the model has come directly from the client, you are unfortunately going to be faced with the unenviable task of convincing them of the limitations.
If you absolutely have to use these models, perhaps you can try to compress the data and decompress it on the client side. A quick google brought up this Stack Overflow question as a starting point:
Client side data compress/decompress?
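As a sketch of that idea, you could serve the JSON gzipped and inflate it in the browser with a zlib library such as pako (the library choice and the .gz filename are assumptions; any deflate implementation would do):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'model.json.gz');
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
  // Inflate the compressed bytes back into the original JSON text.
  var json = pako.inflate(new Uint8Array(xhr.response), { to: 'string' });
  var model = JSON.parse(json);
  // hand `model` to your existing three.js loading code from here
};
xhr.send();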
EDIT: Perhaps you could break it down into separate pieces. Since it is an indexed format, you would probably still have to download all of the vertex data first, and then chunks of the index data.
If you wanted to go deeper you could try breaking the index data into chunks and computing which vertices you would need for each chunk. There is no guarantee that the final result would look that great though, so it might not be worth the effort. That said, with some analysis you could probably rearrange the indices so the model loaded in a sensible order, say from front to back, or left to right.
If you have a 3d artist at your disposal, it might be easier to break the model down into several models and load them one by one into the scene. This would probably look much cleaner since you could make artistic choices about where to cut the model up.
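As a rough sketch of that last idea, loading the pieces sequentially with three.js is straightforward (the OBJLoader usage and the piece filenames are assumptions about your setup):

var loader = new THREE.OBJLoader();
var pieces = ['part1.obj', 'part2.obj', 'part3.obj'];

function loadNext(i) {
  if (i >= pieces.length) return;
  loader.load(pieces[i], function (object) {
    scene.add(object); // each piece appears as soon as it arrives
    loadNext(i + 1);   // then fetch the next one
  });
}
loadNext(0);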
I have a server that generates pngs very rapidly and I need to make this into a poor-man's video feed. Actually creating a video feed is not an option.
What I have working right now is a recursive loop that looks a little like this (in pseudo-code):
function update() {
  image.onload = update;                     // chain the next refresh onto this load
  image.src = imagepath + '?' + Date.now();  // unique query string ensures the image will update
}
update();
This works; however, after a while it crashes the browser (Google Chrome, after more than 10 minutes or so). These images are being updated very frequently (several times a second), and it seems the images are being cached, which causes the browser to run out of memory.
Which of these solutions would solve the problem while maintaining fast refresh:
HTML5 canvas with drawImage
HTML5 canvas with CanvasPixelArray (raw pixel manipulation)
I have access to the raw binary as a Uint8Array, and the image isn't too large (less than 50 kb or so, 720 x 480 pixels).
Alternatively, is there any way to clear old images from the cache or to avoid caching altogether?
EDIT:
Note, this is not a tool for regular users. It's a tool for diagnosing analog hardware problems for engineers. The reason for using the browser is platform independence (it should work on Linux, Windows, Mac, iPad, etc. without any software changes).
The crashing is due to http://code.google.com/p/chromium/issues/detail?id=36142. Try creating object URLs (use XHR2 responseType = "arraybuffer" along with BlobBuilder) and revoking (using URL.revokeObjectURL) the previous frame after the next frame is loaded.
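Here's a sketch of that approach, using the modern Blob constructor in place of the now-deprecated BlobBuilder (imagepath and image are the same objects as in your question):

var previousUrl = null;

function nextFrame() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', imagepath + '?' + Date.now());
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    var url = URL.createObjectURL(new Blob([xhr.response], { type: 'image/png' }));
    image.onload = function () {
      if (previousUrl) URL.revokeObjectURL(previousUrl); // free the previous frame
      previousUrl = url;
      nextFrame(); // request the next frame
    };
    image.src = url;
  };
  xhr.send();
}
nextFrame();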
Edit: You really should be processing these into a live low-fps video stream on the server side, which will end up giving you greatly decreased latency and faster load times.
@Eli Grey seems to have identified the source of your crashing. It looks like they have a fix in the works, so if you don't want to modify your approach, hopefully that will be resolved soon.
With regard to your other question, you should definitely stick with drawImage() if you can. If I understand your intention of using the CanvasPixelArray, you are considering iterating over each pixel in the canvas and updating it with your new pixel information? If so, this will be nowhere near as efficient as drawImage(). Furthermore, this approach is completely unnecessary for you because you (presumably) do not need to reference the data in the previous frame.
Whether fortunately or not, you cannot directly swap out the internal CanvasPixelArray object stored within an HTML5 canvas. If you have a properly-formatted array of pixel data, the only way you can update a canvas element is by calling either drawImage() or putImageData(). Right now, putImageData() is much slower than drawImage(), as you can see here: http://jsperf.com/canvas-drawimage-vs-putimagedata. If you have any sort of transparency in the frames of your video, you will likely want to clear the canvas and then use drawImage() (otherwise you could see through to previous frames).
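For completeness, the drawImage() path is just a few lines (a sketch, assuming image is the Image object receiving each frame and a <canvas id="feed"> exists on the page):

var canvas = document.getElementById('feed');
var ctx = canvas.getContext('2d');

image.onload = function () {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // only needed if frames have transparency
  ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
};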
Having said all that, I don't know that you really even need to use a canvas for this. Was your motivation for looking into using a canvas so that you could avoid caching (which now doesn't seem to be the underlying issue for you)?
If the "movie" is data-driven (ie. based on numbers and calculations), you may be able to push MUCH more data to the browser as text and then have javascript render it client-side into a movie. The "player" in the client can then request the data in batches as it needs it.
If not, one thing you could do is simply limit the frames-per-second (fps) of the script, possibly a hard-coded value, or a slider / setable value. Assuming this doesn't limit the utility of the tool, at the very least it would let the browser run longer w/o crashing.
Lastly, there are lots of things that can be done with headers (eg. in the .htaccess file) to indicate to browsers to cache or not cache content.
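For example, if the PNGs are served by the Node process itself rather than through Apache and .htaccess, the equivalent no-cache headers can be set on the response directly (a sketch; readFrame() is a hypothetical helper returning the latest PNG buffer):

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'image/png',
    'Cache-Control': 'no-store, no-cache, must-revalidate', // tell the browser not to keep frames
    'Pragma': 'no-cache',
    'Expires': '0'
  });
  res.end(readFrame()); // hypothetical: returns the most recent frame
}).listen(8080);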
iPad, you say?.. Nevertheless, I would advise using Flash/video or HTML5/video.
That's because WebKit is very easily crashed by even a moderate influx of images, whether a few big images or a huge number of small ones.
On the other hand, XHR with base64 image data or a pixel array MIGHT work. I had a short-polling app that was able to run for 10-12 hours with XHR polling the server every 10 seconds.
Also, consider delta compression: for example, if it's a histogram with the abscissa being a time scale, you can send only a small slice from the right. Of course, for things like heat maps you cannot do that.
"These images are being updated very frequently (several times a second)."
..if it's critical to update at such a high rate, you MUST use long polling.
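A minimal sketch of client-side long polling, where /next-frame is a hypothetical endpoint that holds the response open until a new frame exists and then returns it as base64:

function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/next-frame');
  xhr.onload = function () {
    image.src = 'data:image/png;base64,' + xhr.responseText;
    poll(); // immediately re-arm, so one request is always in flight
  };
  xhr.send();
}
poll();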
I am trying to use arborjs to generate a graph of a computer network where I can set up the length of the edges between nodes to represent ping response times. The API itself supports doing this, but if I use it directly I need to build the whole stack of drawing/update routines from scratch.
In this demo page http://arborjs.org/halfviz/#/a-new-hope, on the other hand, you can simply provide a plain text representation of the graph (from an XHR JSON file, for example) and the library handles all the work; what it doesn't handle is edge lengths between nodes.
I am wondering if anyone has played with this, and how to tweak the parsing part of the library to take the edge lengths into consideration?
Thank you.
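For reference, here is a sketch of the direct-API route mentioned above; as I understand it, arbor reads a length field from the data you pass to addEdge, so each ping time can be scaled into a spring length (the node names and the scaling factor are made up):

var sys = arbor.ParticleSystem(1000, 600, 0.5); // repulsion, stiffness, friction

function addLink(from, to, pingMs) {
  sys.addEdge(from, to, { length: pingMs / 10 }); // longer ping -> longer edge
}

addLink('router', 'server-a', 120);
addLink('router', 'server-b', 35);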
I would like a web interface for a user to describe a one-dimensional real-valued function. I'm imagining the user being presented with a blank pair of axes and they can click anywhere to create points that are thick and draggable. Double-clicking a point, let's say, makes it disappear. The actual function should be shown in real time as an interpolation of the user-supplied points.
Here's what this looks like implemented in Mathematica (though of course I'm looking for something in javascript):
[screenshot of the Mathematica implementation] (source: yootles.com)
If your website users install the new CDF Player plugin, they will be able to work with the example you coded above!
While I have no experience with this yet, I believe the CDF file code drops directly into your page and will load automatically with the correct MIME type enabled.
Here is an example of a live manipulatable interface embedded in a blog post: Mathematica: Interactive mathematics in the web browser.
Cool, huh?
Remember that a cumulative probability distribution has to be monotonically non-decreasing over its entire run, which your example is not. Even worse, that small dip is not due to user error -- their points are increasing as required -- but is an artifact of the interpolation method. If you use linear interpolation, then any non-monotonicity is your user's fault, and you can warn them.
The Distribution Builder tool by Dan Goldstein has an alternative interface for eliciting probability distributions.
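In plain JavaScript, the interaction itself doesn't need much beyond a canvas: click to add a draggable point, double-click a point to remove it, and redraw the function as a linear interpolation of the points (linear, so any non-monotonicity is the user's doing, as noted above). A minimal sketch, assuming a <canvas id="plot"> on the page:

var canvas = document.getElementById('plot');
var ctx = canvas.getContext('2d');
var points = [];       // {x, y} in canvas coordinates
var dragging = null;   // the point currently being moved

function redraw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  points.sort(function (a, b) { return a.x - b.x; });
  ctx.beginPath(); // connect the points in x order: linear interpolation
  points.forEach(function (p, i) {
    if (i === 0) ctx.moveTo(p.x, p.y); else ctx.lineTo(p.x, p.y);
  });
  ctx.stroke();
  points.forEach(function (p) { // thick, grabbable dots
    ctx.beginPath();
    ctx.arc(p.x, p.y, 5, 0, 2 * Math.PI);
    ctx.fill();
  });
}

function mouse(e) {
  var r = canvas.getBoundingClientRect();
  return { x: e.clientX - r.left, y: e.clientY - r.top };
}

function hit(e) { // index of the point under the cursor, or -1
  var m = mouse(e);
  for (var i = 0; i < points.length; i++) {
    if (Math.abs(points[i].x - m.x) < 8 && Math.abs(points[i].y - m.y) < 8) return i;
  }
  return -1;
}

canvas.onmousedown = function (e) {
  var i = hit(e);
  if (i >= 0) { dragging = points[i]; }
  else { points.push(mouse(e)); redraw(); }
};
canvas.onmousemove = function (e) {
  if (!dragging) return;
  var m = mouse(e);
  dragging.x = m.x; dragging.y = m.y;
  redraw();
};
canvas.onmouseup = function () { dragging = null; };
canvas.ondblclick = function (e) {
  var i = hit(e);
  if (i >= 0) { points.splice(i, 1); redraw(); }
};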