I am creating a simple game in HTML5 canvas, driven by JavaScript, and I want to make it multiplayer. First I need a database where I can store the x and y position of an object that updates every 30 milliseconds (these are the keyframes of my game animation). I need to save the positions in a file or database so other players can see the updated x and y positions of the other players... I hope you get my point...
So my question is: what database or file should I use for this position updating -- one that can keep up with updates that fast?
For a scenario like this, you will probably get more mileage if you cache locations in memory locally, and then periodically "sync" them with the database. This requires a way to resolve conflicts in position (e.g. when the position you predict on the client-side JavaScript deviates from the actual position as reported by the database), but it lets you be more efficient -- for example, updating at a faster rate when players are near your player and less frequently when they are far away. It also lets you animate your player's movements steadily, without jank, in the event that a particular database request falls outside of your frame rate requirements.
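To make that concrete, here is a minimal sketch of the split between a 30 ms animation loop and a slower sync loop; draw, sendPosition and fetchPositions are placeholders for whatever rendering and transport you end up with:

// Minimal sketch: local in-memory cache of positions, rendered every 30 ms,
// synced with the backend on a slower cadence. `draw`, `sendPosition` and
// `fetchPositions` are placeholders for your own rendering/transport code.
var me = { id: "p1", x: 0, y: 0 };   // our locally predicted player
var others = {};                      // id -> {x, y} as last seen from server

// Animation loop: every 30 ms, update and draw from memory only.
setInterval(function () {
  // ...apply input / physics to `me` here...
  draw(me, others);
}, 30);

// Sync loop: much less often, reconcile with the backend.
setInterval(function () {
  sendPosition(me);                   // push our position
  fetchPositions(function (serverState) {
    for (var id in serverState) {
      // Trust the server for other players; keep our own prediction for `me`.
      if (id !== me.id) { others[id] = serverState[id]; }
    }
  });
}, 200);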
As for the database itself, there are a lot of databases to choose from. However, if you don't want to write the server-side code to provide an API for interacting with your database, then you may be interested in Firebase, which provides direct access from client-side JavaScript (without the need to create your own server / API layer on top of the database). Of course you can also use any other database -- Google Cloud Datastore, Google Cloud SQL, MySQL, Cassandra, MongoDB -- and write an appropriate API server layer in the language of your choice (which could also be JavaScript) to provide access to the underlying data; in fact, that might make more sense if you already have, or plan to have, a frontend webserver.
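For illustration, here is roughly what the Firebase route could look like with the Realtime Database client API; the database URL and the "positions" path are made up:

// Firebase Realtime Database sketch (namespaced JS API). The databaseURL
// and the "positions" path are placeholders for illustration.
firebase.initializeApp({ databaseURL: "https://your-game.firebaseio.com" });
var db = firebase.database();

// Publish our position -- no server code of our own required.
function sendPosition(player) {
  db.ref("positions/" + player.id).set({ x: player.x, y: player.y });
}

// Subscribe to everyone's positions; the callback fires on every change.
db.ref("positions").on("value", function (snapshot) {
  var all = snapshot.val() || {};
  // ...merge `all` into the local cache of other players...
});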
Relatively new to databases (and DBA work) here.
I've recently been looking into Riot Games' APIs; however, now realising that you're limited to 10 calls per 10 seconds, I need to change my front-end code, which originally just loaded all the information with lots and lots of API calls, into something that uses a MySQL database.
I would like to collect ranked data about each player and list them (30+ players) in an ordered ranking. As mentioned in their Rate Limiting page, I was thinking of "caching" data when GET-ing it, and then, when that information is needed again, checking whether it is still relevant -- if so, use it; if not, re-GET it.
My idea is to store a time 30 minutes (the rough length of a game) in the future in a table column, and on each call check whether the server time is past the saved time. Is this the right approach/idea of caching? If not, what is the best practice?
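In code, I imagine it looking something like this (the table, columns and fetchFromRiotApi helper are made up for illustration):

// Illustrative only -- the table, columns and helpers are made up.
// Table: players(name UNIQUE, rank_json TEXT, cached_until DATETIME)
function getPlayer(name, callback) {
  db.query(
    "SELECT rank_json, cached_until FROM players WHERE name = ?",
    [name],
    function (err, rows) {
      var row = rows && rows[0];
      if (row && new Date(row.cached_until) > new Date()) {
        return callback(JSON.parse(row.rank_json));     // cache still fresh
      }
      fetchFromRiotApi(name, function (data) {          // re-GET from the API
        db.query(
          "REPLACE INTO players (name, rank_json, cached_until) " +
          "VALUES (?, ?, NOW() + INTERVAL 30 MINUTE)",  // ~one game length
          [name, JSON.stringify(data)]
        );
        callback(data);
      });
    }
  );
}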
Either way, this doesn't solve the problem of loading 30+ values for the first time, when no previous calls have been made to populate the cache.
Any advice would be welcome, even advice telling me I'm doing completely the wrong thing!
If more information is needed, I can edit it in; let me know.
tl;dr What's the best practice for getting around rate limiting?
Generally, yes: most large applications simply use guesstimated rate limits, or a manual cache (check the DB for a recent call, and only go to the API if the cached copy is old).
When you use large sites like op.gg or LolKing for summoner lookups, they all give you a "must wait X minutes before doing another DB check/call" message; I do this too. So yes, using an estimated number (like a game length) to handle your rate limit is definitely a common practice that I have observed within the Riot developer community. Some people do go all out and implement actual caching layers/frameworks, but you don't need to do that for smaller applications.
I recommend building up your app's main functionality first, submitting it, and getting it approved for a higher rate limit as well. :)
Also, you mentioned adjusting your front-end code for calls: make sure your API calls live in server-side code, for security reasons (otherwise your API key is exposed to every visitor).
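To sketch what that server-side layer might look like with Node/Express (the Riot endpoint shape here is only illustrative; check their docs for the real host and path):

// Minimal Node/Express proxy so the API key never reaches the browser.
// The Riot URL below is illustrative; the key comes from an env variable.
var express = require("express");
var https = require("https");
var app = express();

app.get("/summoner/:name", function (req, res) {
  var url = "https://na.api.riotgames.com/summoner/by-name/" +
            encodeURIComponent(req.params.name) +
            "?api_key=" + process.env.RIOT_API_KEY;
  https.get(url, function (riotRes) {
    riotRes.pipe(res);   // relay Riot's JSON straight to our client
  });
});

app.listen(3000);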
We are investigating using Breeze for field deployment of some tools. The scenario is this -- an auditor will visit sites in the field, where most of the time there will be no -- or very degraded -- internet access. Rather than replicate our SQL database on all the laptops and tablets (if that's even possible), we are hoping to use Breeze to cache the data and then store it locally so it is accessible when there is not a usable connection.
Unfortunately, Breeze seems to choke when caching any significant amount of data. On Chrome it fails somewhere between 8 and 13MB worth of entities (as measured by the HTTP response headers). This can shift a bit depending on how many tabs I have open and such, but I have not been able to move it by more than 10%. The failure mode is that the Chrome tab crashes and tells me to reload. The error is reproducible: I download the data in 100K chunks, it fails on the same read every time, and it works fine if I stop after the previous read. When I change the page size, it still fails within the same range.
Is this a limitation of Breeze, or Chrome? Or Windows? I tried it on Firefox, and it handles even less data before the whole browser crashes. IE fares a little better, but none of them do well.
Looking at performance in Task Manager, I see the following:
IE goes from 250MB of memory usage to 1.7GB during the caching process and caches a total of about 14MB before throwing an out-of-memory error.
Chrome goes from 206MB of memory usage to about 850MB while caching a total of around 9MB.
Firefox goes from around 400MB to about 750MB and manages to cache about 5MB before the whole program crashes.
I can calculate how much will be downloaded with any selection criteria, but I cannot find a way to calculate how much data can be handled by any specific browser instance. This makes using Breeze for offline auditing close to useless.
Has anyone else tackled this problem yet? What are the best approaches to handling something like this? I've thought of several things, but none of them are ideal. Any ideas would be appreciated.
ADDED at Steve Schmitt's request:
Here are some helpful links:
Metadata
Entity Diagram (pdf) (and html and edmx)
The first query, just to populate the tags on the page, runs quickly and downloads minimal data:
var query = breeze.EntityQuery
    .from("Countries")
    .orderBy("Name")
    .expand("Regions.Districts.Seasons, Regions.Districts.Sites");
Once the user has selected the Sites s/he wishes to cache, the following two queries are kicked off (this used to be one query, but I broke it into two hoping it would be less of a burden on resources -- it didn't help). The first query (usually 2-3K entities and about 2MB) runs as expected. Some combination of the predicates listed below is used to filter the data.
var qry = breeze.EntityQuery
    .from("SeasonClients")
    .expand("Client,Group.Site,Season,VSeasonClientCredit")
    .orderBy("DistrictId,SeasonId,GroupId,ClientId");
var p = breeze.Predicate("District.Region.CountryId", "==", CountryId);
var p1 = breeze.Predicate("SeasonId", "==", SeasonId);
var p2 = breeze.Predicate("DistrictId", "==", DistrictId);
var p3 = breeze.Predicate("Group.Site.SiteId", "in", SiteIds);
After the first query runs, the second query (below) runs, also using some combination of the predicates listed to filter the data. At about 9MB, it will have about 50K rows to download. When the total download burden between the two queries is between 10MB and 13MB, the browsers crash.
var qry = breeze.EntityQuery
    .from("Repayments")
    .orderBy("SeasonId,ClientId,RepaymentDate");
var p1 = breeze.Predicate("District.Region.CountryId", "==", CountryId);
var p2 = breeze.Predicate("SeasonId", "==", SeasonId);
var p3 = breeze.Predicate("DistrictId", "==", DistrictId);
var p4 = breeze.Predicate("SiteId", "in", SiteIds);
Thanks for the interest, Steve. You should know that the Entity Relationships are inherited and currently in production supporting the majority of the organization's operations, so as few changes as possible to that would be best. Also, the hope is to grow this from a reporting application to one with which data entry can be done in the field (so, as I understand it, using projections to limit the data wouldn't work).
Thanks for the interest, and let me know if there is anything else you need.
Here are some suggestions based on my experience building an offline-capable web application with Breeze. Some or all of these might not make sense for your use cases...
1. Identify which entity types need to be editable vs. which are only used to fill drop-downs etc. Load the non-editable data using the noTracking query option and cache it in localStorage yourself using JSON.stringify (see the sketch after this list). This avoids the overhead of coercing the data into entities, change tracking, etc. Good candidates for this approach in your model might be entity types like Country, Region, District, Site, etc.
2. If possible, provide a facility in your application for users to identify which records they want to "take offline". This way you don't need to load and cache everything, which can get quite expensive depending on the number of relationships, entities, properties, etc.
3. In conjunction with suggestion #2, avoid loading all the editable data at once and avoid using the same EntityManager instance to load each set of data. For example, if the Client entity is something that needs to be editable out in the field without a connection, create a new EntityManager, load a single client (expanding any children that also need to be editable) and cache this data separately from other clients.
4. Cache the breeze metadata once. When calling exportEntities the includeMetadata argument should be false. More info on this here.
5. To create new EntityManager instances make use of the createEmptyCopy method.
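Here is a rough sketch of suggestions 1, 4 and 5 together. The entity and query names come from your model above; the localStorage keys and clientId are arbitrary examples:

// Rough sketch of suggestions 1, 4 and 5. The localStorage keys and the
// example clientId are made up; adapt to your own model.
var masterManager = new breeze.EntityManager("api/audit");

// (1) Reference data: load without tracking and stash the raw objects.
breeze.EntityQuery.from("Countries")
    .expand("Regions.Districts")
    .noTracking()
    .using(masterManager)
    .execute()
    .then(function (data) {
      localStorage.setItem("lookups", JSON.stringify(data.results));
    });

// (4)+(5) Editable data: a fresh, empty manager per data set, exported
// without repeating the metadata each time.
var clientId = 42;   // example: the client the user chose to take offline
var clientManager = masterManager.createEmptyCopy();
breeze.EntityQuery.from("SeasonClients")
    .where("ClientId", "==", clientId)
    .expand("Client,Group.Site,Season,VSeasonClientCredit")
    .using(clientManager)
    .execute()
    .then(function () {
      var bundle = clientManager.exportEntities(null, false); // no metadata
      localStorage.setItem("client-" + clientId, bundle);
    });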
EDIT:
I want to respond to this comment:
"Say I have a client who has bills and payments. That client is in a group, in a site, in a region, in a country. Are you saying that the client, payment, and bill information might each have their own EM, while the location hierarchy might be in a 4th EM with no-tracking? Then when I refer to them, I wire up the relationships as needed using LINQs on the different EMs (give me all the bills for customer A, give me all the payments for customer A)?"
It's a bit of a judgement call in terms of deciding how to separate things out. Some of what I'm suggesting might be overkill, it really depends on the amount of data and the way your application is used.
Assuming you don't need to edit groups, sites, regions and countries while offline, the first thing I'd do would be to load the list of groups using the noTracking option and cache them in localStorage for offline use. Then do the same for sites, regions and countries. Keep in mind, entities loaded with the noTracking option aren't cached in the entity manager so you'll need to grab the query result, JSON.stringify it and then call localStorage.setItem. The intent here is to make sure your application always has access to the list of groups, sites, regions, etc so that when you display a form to edit a client entity you'll have the data you need to populate the group, site, region and country select/combobox/dropdown.
Assuming the user has identified the subset of clients they want to work with while offline, I'd then load each of these clients one at a time (including their payment and bill information but not expanding their group, site, region, country) and cache each client+payments+bills set using entityManager.exportEntities. The reasoning here is that it doesn't make sense to load several clients plus their payments and bills into the same EntityManager each time you want to edit a particular client. That could be a lot of unnecessary overhead, but again, this is a bit of a judgement call.
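For completeness, the read side when the auditor is offline might look roughly like this, reusing the storage keys from the sketch above:

// Rehydrating from the offline cache (keys match the earlier sketch).
var lookups = JSON.parse(localStorage.getItem("lookups") || "[]");
// ...use `lookups` to fill the group/site/region/country dropdowns...

var offlineManager = masterManager.createEmptyCopy();
var bundle = localStorage.getItem("client-" + clientId);
if (bundle) {
  offlineManager.importEntities(bundle);   // client + payments + bills
}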
@Jeremy's answer was excellent and very helpful, but it didn't actually answer the question, which I was starting to think was unanswerable, or at least the wrong question. However, @Steve in the comments gave me the most appropriate information for this question.
It is neither Breeze nor the browser, but rather Knockout. Apparently the Knockout wrapper around the Breeze entities uses all that memory (at least while loading the entities, and in my environment). As described above, Knockout/Breeze would crap out after reading around 5MB of data, causing Chrome to crash with over 1.7GB of memory usage (from a pre-download memory usage of around 300MB). Rewriting the app in AngularJS eliminated the problem. So far I have been able to download over 50MB from the exact same EF6 model into Breeze/Angular, and total Chrome memory usage has never gone above 625MB.
I will be testing larger payloads, but 50 MB more than satisfies my needs for the moment. Thanks everyone for your help.
First of all, I've looked around the internet and found window.performance.memory quite badly documented.
Somewhere in my code I have a big memory leak that I'm trying to track down, and after using:
window.performance.memory.usedJSHeapSize
it looks like the value remains at the same level of 10MB, which is not true, because if we compare it with the values visible here:
chrome://memory-internals/
or look at the Timeline in DevTools, we can see a big difference. Has anyone encountered a similar issue? Do I need to update these values manually (run some "update" or "measure" command, etc.)?
Following this topic:
Information heap size
it looks like this value increases in steps of a certain size. Can we somehow see what that step is, or modify it? In my case, from what I can see now, the page uses about 10MB; 30 minutes later it will be about 400MB, and half an hour after that the page will crash.
Any ideas guys?
(Why the code is leaking is a different issue; please treat this question as me trying to use this variable to build some kind of test.)
There's a section of the WebPlatform.org docs that explains this:
The values are quantized as to not expose private information to attackers. If Chrome is run with the flag --enable-precise-memory-info the values are not quantized.
http://docs.webplatform.org/wiki/apis/timing/properties/memory
So, by default, the number is not precise, and it only updates every 20 minutes! This should explain why your number doesn't change. If you use the flag, the number will be precise and current.
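For example, if you launch Chrome with the flag, a simple poll (interval chosen arbitrarily here) will show real movement instead of a frozen number:

// Launch Chrome as:  chrome --enable-precise-memory-info
// then poll; without the flag these numbers are quantized and only
// refreshed every 20 minutes, so they appear frozen.
setInterval(function () {
  var m = window.performance.memory;
  console.log("used: " + (m.usedJSHeapSize / 1048576).toFixed(1) + " MB, " +
              "total: " + (m.totalJSHeapSize / 1048576).toFixed(1) + " MB");
}, 5000);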
The WebKit commit message explains:
This patch adds an option to expose quantized and rate-limited memory information to web pages. Web pages can only learn new data every 20 minutes, which helps mitigate attacks where the attacker compares two readings to extract side-channel information. The patch also only reports 100 distinct memory values, which (combined with the rate limits) makes it difficult for attackers to learn about small changes in memory use.
I am testing how different browsers read/write large amounts of data to/from local storage. The sample data is 1500 customer records, each with a set of fields (first name, last name, some IDs for their location, type, etc.). The test application is built on the GWT platform.
What I noticed is that IE8, IE9 and Chrome improved their performance by at least 30% after moving to loading data from local storage (rather than from the web server). Only Firefox (5.0) got worse (around 30% slower). A remote web server was used to bring some sort of reality into the experiment.
The difference between browsers is almost invisible on small data chunks (100-200 records), where the resulting times are also about the same. But large amounts reveal the problem.
I found a mention of this issue on the Mozilla support site -- https://support.mozilla.com/en-US/questions/750266 -- but there is still no solution or workaround there.
JavaScript profiling shows that calls to the following function, implemented in GWT's StorageImpl.java class,
function $key(storage, index){
return index >= 0 && index < $wnd[storage].length ?
$wnd[storage].key(index) : null;
}
take the lion's share of the execution time. This is effectively the storage.getItem(key) call in GWT.
To avoid these frequent calls, I would prefer a single call that translates the storage contents into a map, for example; that might help me save the time spent on Firefox's cache I/O operations (if any). But the Storage interface (http://dev.w3.org/html5/webstorage/#storage-0) only offers getItem() for reading contents from the storage.
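What I have in mind is a one-time snapshot along these lines (plain JavaScript, e.g. callable from GWT via JSNI), so the per-key wrapper calls happen only once:

// One pass over localStorage into a plain object, so later lookups are
// property reads instead of repeated key()/getItem() calls through the
// GWT wrapper.
function snapshotStorage() {
  var map = {};
  for (var i = 0; i < window.localStorage.length; i++) {
    var k = window.localStorage.key(i);
    map[k] = window.localStorage.getItem(k);
  }
  return map;
}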
Any thoughts on how to make Firefox work faster?
P.S. Maybe this will be useful for someone: I examined the FF local storage contents using the SQLite Manager add-on, loading the webappstore.sqlite database from the drop-down list of default built-in databases.
Which version of Firefox are you testing? Your post on support.mozilla.org mentions Firefox 3.6.8, and you mention IE, so you are presumably on Windows, in which case you're probably hitting https://bugzilla.mozilla.org/show_bug.cgi?id=536544, which was fixed in Firefox 4. Or are you seeing the problem in a recent Firefox?