I want to measure the roundtrip time in a web application to see how long it takes for a request to be sent, answered, and interpreted. I send a request to a database server, it sends me back some data, and I want to visualize that data using WebGL. The WebGL part just consists of setting up a texture and plotting it onto a quad.
I want my measurement to start when the request was sent and to stop as soon as the rendering has finished. For now, my (maybe naive) approach was something like this:
ws.onmessage = (d) => {
...
render(d); // here goes some typical plain WebGL code for preparing and plotting a 2D texture on a quad
end = performance.now();
roundtrip = end - start; // computed inside the handler, where end is actually set
};
start = performance.now();
ws.send(JSON.stringify(request));
But I'm not sure whether this is an accurate measurement and whether end really refers to the finally drawn canvas. Is there any way to get the exact moment when the frame has been rendered? For now I don't have a typical render loop, but instead just update the view with a new frame when a new request is triggered. I'm aware of gl.finish() but this doesn't seem to be an adequate solution. I also heard about WebGLSync from the WebGL2 API, but first, I'm using WebGL1, and second, it doesn't feel like that complicated a problem...
What do you mean by "rendered"? Do you mean "visible to the user" or do you mean "pixels put in the backbuffer" or something else?
For "visible to the user" there is no way, short of setting up a camera on another machine to look at your screen. There could be several frames of latency caused by everything between WebGL and the user: the browser compositing things; whether the browser is single, double, or triple buffered; the OS and how it composites windows; the monitor or TV's image processing; whether the user is on HDMI, DisplayPort, VGA, DisplayLink, AirPlay, Chromecast, etc.
For "pixels put in the backbuffer": as the other answer you linked to says, using gl.finish (or something similar that forces a sync, like gl.readPixels) will not tell you how long something took. It will instead tell you how long something took plus how long it took to stall the graphics pipeline, and there is no easy way to subtract out that last part. On the other hand, you might be able to use it to tell whether one way of doing things is faster than another.
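If "the frame containing my draw was handed to the compositor" is close enough for your purposes, one common approximation is the double requestAnimationFrame trick: the second rAF callback fires after that frame has been submitted. A minimal sketch, with the clock and scheduler injected so the logic can run outside a browser (makeRoundtripTimer and its wiring are my own naming, not from the question):

```javascript
// Sketch: time request -> draw-submitted using two chained callbacks.
// In a browser, `schedule` would be requestAnimationFrame and `now`
// would be () => performance.now(); both are injected here.
function makeRoundtripTimer(now, schedule) {
  let start = 0;
  return {
    begin() { start = now(); },   // call right before ws.send(...)
    end(report) {                 // call right after render(...)
      // two nested schedules ~= the frame with our draw has been submitted
      schedule(() => schedule(() => report(now() - start)));
    },
  };
}

// Browser usage (assumed wiring, not from the question):
// const timer = makeRoundtripTimer(() => performance.now(), requestAnimationFrame);
// timer.begin(); ws.send(JSON.stringify(request));
// ws.onmessage = (d) => { render(d); timer.end((ms) => console.log(ms)); };
```

This still measures "submitted", not "photons on screen", for all the reasons above.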
Related
I've built a very basic game around a 50ms game loop (setInterval) and I've been considering how to fetch player positions. So far my idea is to use Ajax to post the player position to a PHP script which updates a SQL database, then selects all player positions and passes them back in the return. However, I'm concerned that this may hit a race condition, or that the entire action will simply take too long (>5ms), and it also feels wasteful to repeatedly poll the SQL database for the same data (player positions). Is there a more effective way to achieve this? I've started looking at caching and sharing objects between PHP instances but haven't found much info.
You probably don't need to write this to your database all the time. Use an extension like APCu instead. You could still write the players' positions to the database at regular intervals with a background process.
You probably also don't need to return all user positions, only those that are nearby. From your comment I understood that your game has different worlds, so you would only need to return the user positions that relate to the world the player is in. And even then you could limit it further based on distance, if the player's view of that world only covers a part of it at any time.
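The answer suggests PHP's APCu, but the underlying idea, keeping positions in memory and flushing to the database in the background, can be sketched generically (here in JavaScript; makePositionStore and flushToDb are illustrative names, not an existing API):

```javascript
// In-memory position store with periodic batched flushes to the DB.
// Reads never touch SQL; writes hit SQL only once per `flushMs`.
function makePositionStore(flushToDb, flushMs, setIntervalFn = setInterval) {
  const positions = new Map();   // playerId -> {x, y}
  const dirty = new Set();       // players changed since the last flush
  setIntervalFn(() => {
    if (dirty.size === 0) return;
    const batch = [...dirty].map((id) => ({ id, ...positions.get(id) }));
    dirty.clear();
    flushToDb(batch);            // background write, off the hot path
  }, flushMs);
  return {
    set(id, x, y) { positions.set(id, { x, y }); dirty.add(id); },
    get(id) { return positions.get(id); },
  };
}
```

The same shape works whether the store lives in a Node process or behind APCu: the game loop reads and writes memory, and only the flusher talks to SQL.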
I'm building a visualization for some industrial devices that produce large amounts of time-based data like temperatures, currents or voltages. All data is constantly written to a SQL Server database (can't control that part).
The HTML5 frontend consists of an interactive zoomable chart I made with d3.js. Data series can be added(loaded) to the chart on demand, in which case the frontend sends an ajax request, ASP.NET MVC and EF6 fetches the values from the DB and returns them as Json.
Each data element simply consists of a DateTime and a value. Please note that the values do not get written at regular intervals (like every 2 seconds or so), but at irregular ones. This is because the device doesn't get polled regularly but sends data on specific events, like a rise/drop in temperature by a given change of 0.1 °C, for example.
So far everything works really well and smoothly, but the large amount of data becomes a problem. For example, when I want to show a line chart for a selected period of, let's say, 3 months, each data series already consists of approx. 500,000 values, so the JSON response from the server gets bigger and bigger and the request takes longer with growing time periods.
So I am looking for a way to reduce the amount of data without losing relevant information, such as peaks in temperature curves, but at the same time I want to smooth out the noise in the signal.
Here's an example, please keep in mind that this is just a chosen period of some hours or days, usually the user would like to see data for several months or even years as well:
The green lines are temperatures, the red bars are representations of digital states (in this case a heater that makes one of the temp curves go up).
You can clearly see the noise in the signals, this is what I want to get rid of. At the same time, I want to keep characteristic features like the ones after the heater turns on and the temperature strongly rises and falls.
I already tried chopping the raw data into blocks of a given length and then aggregating the data in them, so that I have a min, max and average for each interval. This works, but in doing so the characteristic features of the curve get lost and everything gets kind of flattened or averaged out. Here's a picture of the same period as above, zoomed out a bit so that the aggregating kicks in:
The average of the upper series is shown as the green line, the extent (min/max) of each chop is represented by the green area around the average line.
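The chop-and-aggregate approach described above might look roughly like this (a sketch with illustrative field names, not the actual EF6 code):

```javascript
// Downsample [{t, v}, ...] into fixed-size buckets, keeping min/max/avg.
// Keeping min and max per bucket preserves at least the envelope of
// peaks that a plain average flattens away.
function aggregate(points, bucketSize) {
  const out = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    const values = bucket.map((p) => p.v);
    out.push({
      t: bucket[0].t,                    // bucket start time
      min: Math.min(...values),
      max: Math.max(...values),
      avg: values.reduce((a, b) => a + b, 0) / values.length,
    });
  }
  return out;
}
```

A per-pixel variant (one bucket per horizontal pixel of the chart) keeps the payload proportional to the chart width instead of the selected time range.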
Is there some kind of fancy algorithm that I can use to filter/smooth/reduce my data right when it comes out of the DB and before it gets sent to the frontend? What are the buzzwords here that I need to dig after? Any specific libraries, frameworks or techniques are highly appreciated, as well as general comments on this topic. I'm interested primarily in server-side solutions, but please feel free to mention client-side JavaScript solutions as well, as they might surely be of interest for other people facing the same problem.
"Is there some kind of fancy algorithm that I can use to filter/smooth/reduce my data right when it comes out of the DB and before it gets sent to the frontend? What are the buzzwords here that I need to dig after?"
I've asked a friend at the University where I work and she says Fourier Transforms can probably be used... but that looks like Dutch to me :)
Edit: looking at it a bit more myself, and because your data is time sampled, I'm guessing you'll be interested in Discrete Time Fourier Transforms
Further searching around this topic led me here - and that, to my (admittedly inexpert) eyes, looks like something that's useful...
Further Edit:
So, that link makes me think that you should be able to remove (for example) every second sample on the server side. Then, on the client side, you can use the interpolation technique described in that link (using the inverse Fourier transform) to effectively "restore" the missing points. You've transferred half of the points, and yet the resulting graph will be exactly the same, because on the client you've interpolated the missing samples... or is that way off base? :)
I'm running NodeJS on the server-side, and I'm trying to do a bit of automated image processing to determine the 'base colour' of an image.
Here are the steps of what I want to do:
Take an image (on a remote server, so passed through with a URL) and get its dimensions
Use dimensions to calculate the centre of the image
Take a 10px x 50px (WxL) rectangle around the centre-point
Get the RGB value for each of these pixels (500 per image)
Output the average value of the pixels
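Steps 4 and 5 are plain pixel math once the crop has been decoded; a sketch, assuming the crop's pixel data is already available as an RGBA byte buffer (how you obtain that buffer depends on the library you pick):

```javascript
// Average the RGB channels of an RGBA byte buffer (4 bytes per pixel).
// For a 10 x 50 crop this runs over 500 pixels, as in the steps above.
function averageColor(rgba) {
  let r = 0, g = 0, b = 0;
  const pixels = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    r += rgba[i];
    g += rgba[i + 1];
    b += rgba[i + 2];   // rgba[i + 3] is alpha, ignored here
  }
  return {
    r: Math.round(r / pixels),
    g: Math.round(g / pixels),
    b: Math.round(b / pixels),
  };
}
```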
I know such things are possible in PHP, but I'd like to use Node if possible. I've seen tutorials on using Node-imagick for basic processing (like resizing and cropping) but don't know where to start with more advanced analysis like this.
Questions
(a) Is this possible with Node?
(b) What libraries will allow me to do this?
A: yes
B: gm
node-itk may be helpful to you.
Node-ITK is a Node.js wrapper built on top of ITK. It was built to facilitate Node.js's use in rapid prototyping, education, and web services for medical image processing.
https://npmjs.org/package/node-itk
I am writing a JavaScript web game that has a 3 x 3 viewport similar to the one Urban Dead uses. The "world map" is stored as a 100 x 100 2D array server-side (Node.js); each coordinate pair defines a "room". So, the 3 x 3 viewport shows the name and content of 9 of the rooms.
Users' locations are stored server-side as coordinates, e.g. Bob = { x : 2, y : 3 }, so Bob's x-coordinate is Bob.x. The client (browser) can retrieve this, work out the coordinates of the other 8 rooms, and ask the server for the content of those rooms. These are then displayed in the viewport. This is supposed to look like the viewport in Urban Dead (top left corner).
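The coordinate work-out described above can be sketched as follows (viewportRooms is a hypothetical helper, with clamping added for the map edges):

```javascript
// Given a player's position, list the coordinate pairs of the 3 x 3
// viewport (player's room in the centre), clamped to the 100 x 100 map.
function viewportRooms(player, size = 100) {
  const rooms = [];
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      const x = player.x + dx, y = player.y + dy;
      if (x >= 0 && x < size && y >= 0 && y < size) rooms.push({ x, y });
    }
  }
  return rooms; // ask the server for the content of each of these
}
```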
Question
How should I think about making the viewport "refresh" or update? I am thinking of doing it this way...
1) When the player moves from coordinates (2,3) to (1,3), the client asks the server for the content of the 9 rooms again and re-draws/displays everything.
2) When the content of one of the rooms changes, the server (Node.js) pushes a notification to the client, telling it to ask the server for the content of the 9 rooms and re-draw/display everything.
Being new to programming, I wonder if this implementation would be too naive or inefficient?
The former option would be the best imo, purely because, as your map expands, it will be easier for you to set up trigger regions that cause your client to load specific areas of the map. You could even load only the rooms that they can see or have access to.
This would also be of benefit, for example, if you were to have multiple characters. Imagine you have your main character in one location and a secondary character elsewhere. It would be far better for the client to know where they are, understand what each character can see, and request only this information from the server. This is a much more expandable solution than having the server just constantly broadcast all room information to whoever is listening.
With regard to changes in room content, these could be flagged as an event from the server to all clients, based purely on room coordinates and minimal data. If the event covers one of the rooms the client can currently see, that client can request the full room information. So in part this involves something of your option two, but it shouldn't broadcast much.
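That visibility check can be sketched as follows (onRoomChanged and fetchRoom are illustrative names, not an existing API):

```javascript
// A room-changed event carries only coordinates; the client re-fetches
// full content only when that room is inside its current 3 x 3 viewport.
function onRoomChanged(player, room, fetchRoom) {
  const visible =
    Math.abs(room.x - player.x) <= 1 &&
    Math.abs(room.y - player.y) <= 1;   // Chebyshev distance <= 1
  if (visible) fetchRoom(room);          // request the full room content
  return visible;
}
```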
As an analogy, this is akin to having clients/users request only the resources they want from a website at the time they want them, possibly signing up to mail alerts about content that interests them, rather than having the user subscribe to an RSS feed and be notified whenever anything on the site changes. The former is more optimal and can be controlled in specific ways; the latter is better if your user is interested in overviewing the whole content of the site (which is not usually the case in games -- unless you're designing bots that have to cheat).
Overall, after talking it through, implementing parts of both approaches would be useful. But if it's a choice, the first one will give you more control.
I wouldn't worry about a too-naive or inefficient solution. The difficult part is making the game playable and fun - optimize later.
Looking at the example, the amount of data to transfer doesn't look very big. You should be able to fetch the full 9-room set on each update.
Are you pulling the data from the server at intervals (polling), or are you somehow pushing changes from the server to the client?
The question is whether you care about AFK players being notified, and how important information about the next room is.
If a player's strategy can change when a room is updated, then push an update every time a change happens.
If players don't care, then changes can be fetched when they move to the next room.
A middle-way solution is to ask for a "delta" update (just the elements that have been modified) every 10s or 30s, depending on the game. In this case, using a central clock can be fun (multiple players will update at the same time), and this creates a turn-style gameplay.
As an RPG player (paper RPG) I think the third is a good way. You can even mix solutions: short-interval updates for the current room, and perception-based timing (a blind and deaf character would update exterior rooms only on major events).
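The "delta" update idea mentioned above can be sketched as follows: the server compares each room's state against what it last sent and returns only the modified entries (delta is an illustrative name):

```javascript
// Compute a delta between the state last sent to a client and the
// current state: only rooms whose serialized content changed.
function delta(lastSent, current) {
  const changes = {};
  for (const [key, value] of Object.entries(current)) {
    if (JSON.stringify(lastSent[key]) !== JSON.stringify(value)) {
      changes[key] = value;
    }
  }
  return changes; // {} means nothing to send this tick
}
```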
I have a server that generates pngs very rapidly and I need to make this into a poor-man's video feed. Actually creating a video feed is not an option.
What I have working right now is a recursive loop that looks a little like this (in pseudo-code):
function update() {
    image.onload = update;                    // queue the next fetch once this image has loaded
    image.src = imagepath + '?' + timestamp;  // cache-busting query string ensures the image will update
}
This works, however after a while, it crashes the browser (Google Chrome, after more than 10 minutes or so). These images are being updated very frequently (several times a second). It seems the images are caching, which causes the browser to run out of memory.
Which of these solutions would solve the problem while maintaining fast refresh:
HTML5 canvas with drawImage
HTML5 canvas with CanvasPixelArray (raw pixel manipulation)
I have access to the raw binary as a Uint8Array, and the image isn't too large (less than 50 kb or so, 720 x 480 pixels).
Alternatively, is there any way to clear old images from the cache or to avoid caching altogether?
EDIT:
Note, this is not a tool for regular users. It's a tool for diagnosing analog hardware problems for engineers. The reason for the browser is platform independence (should work on Linux, Windows, Mac, iPad, etc without any software changes).
The crashing is due to http://code.google.com/p/chromium/issues/detail?id=36142. Try creating object URLs (use XHR2 responseType = "arraybuffer" along with BlobBuilder) and revoking (using URL.revokeObjectURL) the previous frame after the next frame is loaded.
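A sketch of that approach using today's APIs (the Blob constructor has since replaced BlobBuilder; imgEl and showFrame are illustrative names):

```javascript
// Display a frame from raw bytes via an object URL, revoking the
// previous frame's URL once the new one has loaded, so the browser
// doesn't accumulate decoded images.
function showFrame(imgEl, bytes, prevUrl) {
  const url = URL.createObjectURL(new Blob([bytes], { type: 'image/png' }));
  imgEl.onload = () => {
    if (prevUrl) URL.revokeObjectURL(prevUrl); // old frame no longer needed
  };
  imgEl.src = url;
  return url; // pass back in as prevUrl on the next call
}
```

In the browser the bytes would come from an XHR with responseType = "arraybuffer", as the answer describes.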
Edit: You really should be processing these into a live low-fps video stream on the server side, which will end up giving you greatly decreased latency and faster load times.
@Eli Grey seems to have identified the source of your crashing. It looks like they have a fix in the works, so if you don't want to modify your approach, hopefully that will be resolved soon.
With regard to your other question, you should definitely stick with drawImage() if you can. If I understand your intention of using the CanvasPixelArray, you are considering iterating over each pixel in the canvas and updating it with your new pixel information? If so, this will be nowhere near as efficient as drawImage(). Furthermore, this approach is completely unnecessary for you because you (presumably) do not need to reference the data in the previous frame.
Whether fortunately or not, you cannot directly swap out the internal CanvasPixelArray object stored within an HTML5 canvas. If you have a properly-formatted array of pixel data, the only way you can update a canvas element is by calling either drawImage() or putImageData(). Right now, putImageData() is much slower than drawImage(), as you can see here: http://jsperf.com/canvas-drawimage-vs-putimagedata. If you have any sort of transparency in the frames of your video, you will likely want to clear the canvas and then use drawImage() (otherwise you could see through to previous frames).
Having said all that, I don't know that you really even need to use a canvas for this. Was your motivation for looking into using a canvas so that you could avoid caching (which now doesn't seem to be the underlying issue for you)?
If the "movie" is data-driven (i.e. based on numbers and calculations), you may be able to push MUCH more data to the browser as text and then have JavaScript render it client-side into a movie. The "player" in the client can then request the data in batches as it needs it.
If not, one thing you could do is simply limit the frames-per-second (fps) of the script, possibly as a hard-coded value or a slider/settable value. Assuming this doesn't limit the utility of the tool, at the very least it would let the browser run longer without crashing.
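The fps cap can be sketched by dropping frames that arrive too soon after the last one shown (makeFrameLimiter is an illustrative name; the clock is injected so the logic is testable):

```javascript
// Drop frames that arrive less than 1000/maxFps ms after the last one
// shown; the browser then has far fewer images to decode per second.
function makeFrameLimiter(maxFps, now = () => Date.now()) {
  const minGap = 1000 / maxFps;
  let last = -Infinity;
  return function shouldShow() {
    const t = now();
    if (t - last < minGap) return false; // too soon, skip this frame
    last = t;
    return true;
  };
}
```

Wrapping the image-update loop in such a check gives a hard upper bound on decode work regardless of how fast the server produces PNGs.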
Lastly, there are lots of things that can be done with headers (e.g. in the .htaccess file) to tell browsers whether or not to cache content.
iPad, you say?.. Nevertheless, I would advise using Flash/video or HTML5 video,
because WebKit is very easily crashed by even a moderate influx of images, either just big images or a huge number of small ones.
On the other hand, XHR with base64 image data or a pixel array MIGHT work. I have had a short-polling app which was able to run for 10-12 hours, with XHR polling the server every 10 seconds.
Also, consider delta compression: for example, if it's a histogram with the abscissa being a time scale, you can send only a little slice from the right. Of course, for things like heat maps, you cannot do that.
"These images are being updated very frequently (several times a second)."
.. if it's critical to update at such a high rate, you MUST use long polling.