I have a web application, and when a user clicks Save, a write to a file should occur. The application is a "football scheduler", so more than one player can click the save button at the very same time.
Initially I thought this would be something rare, but it is not, since when a match is announced, players rush in, because the match fills up in a short period of time.
What happens if two players press Save at the same time? Only one player's save will take effect and the other will be lost (and it is not rare that this happens with the last available position, which then leads to a conflict).
How can I cope with this?
// removed code since it wasn't needed
I would like to know whether this is possible without using a database, so a "no" is also an acceptable answer.
I would suppose you would handle this the same way a database would: add a lock to the file and wait.
Something like:
while (file_exists('file.lock')) {
    usleep(10000); // wait 10 ms, then check again
}
touch('file.lock');
// ... write to the file here ...
unlink('file.lock');
You may want to add a timeout. I would think this is safe enough in practice, but it is still possible for two users to pass the file_exists() check at the EXACT same time, before either has created the lock file. In that case, you may want to add a check that their content was successfully saved before continuing.
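One way to close that race window is to create the lock file atomically: fopen() with mode 'x' fails if the file already exists, so only one request can win. A minimal sketch (the five-second timeout is an arbitrary choice):

$start = time();
// Mode 'x' creates the file and fails if it already exists,
// so only one request can acquire the lock at a time.
while (($lock = @fopen('file.lock', 'x')) === false) {
    if (time() - $start > 5) {
        die('Could not acquire lock'); // give up after ~5 seconds
    }
    usleep(10000);
}
// ... write to the data file here ...
fclose($lock);
unlink('file.lock');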
Update: As Marc B pointed out, file locking is built into PHP; I didn't know about flock() until he pointed it out.
$fp = fopen("file", "r+");
// try to acquire an exclusive lock, otherwise sleep 10ms before trying again
while (!flock($fp, LOCK_EX)) {
usleep(10000);
}
fwrite($fp, "Write something here\n");
fflush($fp); // flush output before releasing the lock
flock($fp, LOCK_UN); // release the lock
fclose($fp)
This is similar to the function I wrote above. Note the LOCK_NB flag: without it, flock() simply blocks until the lock is free, which also works and makes the retry loop unnecessary. Don't pay attention to the fwrite(); I just copied the example from php.net to show how flock() works.
Two processes cannot safely write to the same file at the same time; concurrent writes can overwrite or corrupt each other.
You could use a database, which can handle more than one access at the same time. In my opinion this is the best solution for handling and accessing a lot of data!
Or you could store the data in separate files for separate users, all created in a certain folder; whenever files are waiting there, append them to the master file one after another and delete them afterwards, and keep writing to the master file until no more files are available. This is a workaround and I really would not recommend it, but it should work if you really don't want to use a database.
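A rough sketch of that workaround, with made-up directory and file names:

$data = "new signup\n"; // whatever a single save writes

// Each save request drops its own uniquely named file into a spool folder,
// so no two requests ever write to the same file.
file_put_contents('spool/' . uniqid('entry_', true), $data);

// A single merger process then appends the spooled files to the master
// file one after another, deleting each one when it is done.
foreach (glob('spool/entry_*') as $entry) {
    file_put_contents('master.txt', file_get_contents($entry), FILE_APPEND);
    unlink($entry);
}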
I wrote some code during the summer holidays, and today I am looking at it for the first time since then; I am struggling with one thing I did.
My system has multiple types (pages, newsletters, etc.) and multiple subtypes (items, archive, concepts, etc.). The idea is that I have an object like this:
object { 1: { normal: { 1: { content: 'somecontent', title: 'sometitle' } } } }
Another example:
object { 1: { normal: { 1: { content: 'somecontent', title: 'sometitle' } }, archive: {} }, 2: { normal: {} } }
The data originally comes from the database. I'm making a system to edit pages on the website and other things like newsletters, which is why I have multiple types and subtypes.
I made a cache because I don't want to fetch all items from the database every time. But now the problem is that whenever I add, edit, or remove an item, I also have to add it to, edit it in, or remove it from the cache.
My question: is this a good way? I thought it is, because you don't have to make an AJAX call to get the data from the database.
I'm sorry if I'm not allowed to ask this here.
My question: is this a good way? I thought it is because you don't have to call an AJAX file to get the data from the database.
The answer is "it depends". There is no always-right and always-wrong answer for caching, because caching is a tradeoff between efficiency and timeliness of data.
If you want maximum efficiency, you cache like crazy, but your data may not be perfectly up to date because you're using old data from the cache.
If you want the most up-to-date data, you don't cache anything, so you always get the latest data, but obviously efficiency may suffer if you are regularly requesting the same data over and over.
So, it's a tradeoff and the tradeoff depends entirely upon the application, its needs, how often the data is modified and what the consequences are for having stale data or for not caching. There is no single right or wrong answer for that tradeoff. It depends entirely upon the particular situation for your application and the tradeoff may even be different for some types of data vs. others within the same application.
For example, let's suppose you were writing an online bidding site that offered some functionality like eBay. You would probably be fine caching the item description for at least several hours because that almost never changes, and even if it does, the consequences of being a bit tardy in seeing a new item description are fairly low. But you could never cache the data on the current bid because the timeliness of that information is critical. The user needs to always see the latest info on the current bid, even if you have to make some sacrifices in efficiency.
Also, remember that caching isn't completely all or none. You can set a lifetime for a cached value such that it can only be used for a certain period of time that is appropriate for the type of data. For example, you might cache an item description in the above auction for up to 2 hours. This allows you to achieve some efficiency gains, but also to eventually see the new data if it happens to change.
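As a concrete illustration, here is what that lifetime might look like with APCu (assuming the APCu extension is available; $itemId and the fetch helper are hypothetical):

$key = 'item_description_' . $itemId;
$description = apcu_fetch($key, $found);
if (!$found) {
    $description = fetchDescriptionFromDb($itemId); // hypothetical DB helper
    apcu_store($key, $description, 2 * 3600);       // cache for up to 2 hours
}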
In general, you have to review the consequences of showing stale data. If the consequences for having data that is even minutes out of date are high (like the latest price in a live auction), then you can't cache that data at all.
If the consequences of having data that is even hours out of date are low, then you can likely cache that value for at least several hours - maybe even longer.
And, when considering what to cache, you obviously want to first look at the items that are most requested and are the most expensive on your server to retrieve. Some analysis of the usage pattern on your server would give you a prioritized list of candidates to consider for caching.
My question: is this a good way? I thought it is because you don't have to call an AJAX file to get the data from the database.
This is fine if:
1) You want to provide offline reading continuity to the user. The user doesn't have to wait for an internet connection to be available, so they can read at any time.
2) Your data service is quite heavy and you want to avoid multiple/frequent visits to the server to get the same data over and over again.
3) You want your app to be bundled with a native package (like PhoneGap) to become a hybrid app and give a complete offline experience to the user.
This is not a comprehensive list, but just something to get you started in deciding when to go offline and when not to.
On the other hand, this is a bad idea if:
1) Your local storage structure is going to change so frequently that the user would need to re-install (unless you can figure out auto-upgrading of the local storage).
2) All your features are transactional and require synchronization with other users.
Nothing wrong with your approach; just make sure you keep these points in mind while managing the client-side cache:
1) Maintain one 'version' variable, to be increased whenever there is any change in structure. This version is sent to the client every time; the client is responsible for comparing versions and emptying its cache if the server version is greater than the client version.
2) You can implement, or find open-source libraries to handle, caching of your AJAX responses; this one might be useful: https://github.com/SaneMethod/jquery-ajax-localstorage-cache.
3) You can set a proper expiry header from the server, which also helps the browser cache responses for you, if it is a GET request.
4) You can also implement a server-side cache, which avoids calls to the database by caching the response against the request URL (see the sketch after this list). Note: if different users are supposed to receive different responses, this approach won't work. You can delete the cache entry if any change happens to that particular data set (delete/update).
5) In your case you can also maintain flags on the server that simply tell whether the data has been updated since the time of the article update; if the stored version is older you can make a server request, otherwise just use the local version.
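A rough sketch of such a server-side response cache, file-based, with an illustrative path and a 5-minute lifetime:

// Cache the response against the request URL.
$cacheFile = 'cache/' . md5($_SERVER['REQUEST_URI']);
if (is_file($cacheFile) && time() - filemtime($cacheFile) < 300) {
    readfile($cacheFile); // serve the cached copy, skipping the database
    exit;
}
ob_start();
renderResponse();                              // hypothetical: builds the page
file_put_contents($cacheFile, ob_get_flush()); // store and send the output
// On delete/update of the underlying data: unlink($cacheFile);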
I hope it helps.
I don't know if this is possible or even if it exists, but I'm very curious to find out. I don't want to give the wrong idea, and describing what I'm trying to achieve before giving examples definitely would, so I'll just dive right into it.
As far as I know, code on a website is only ever executed when the website is accessed by someone. If no one accesses the website, the code just sits there. The code has no reason to run if no one's using it, right?
Now what I'm going to propose may sound ridiculous, but please hear me out. I don't know if there is a way to do this, so I'm just going to ask. Is there a way to run that code without someone accessing the website itself?
Now I know some of you are thinking, "Huh? What is he talking about? Why would you even want to run the code if no one is on the website? That literally makes no sense," so I'm going to try and justify why I want something like this to be possible.
For example, if you want a script that automatically logs out a user if they've stayed inactive for a certain amount of time, you need to check whether they've been active within that time window. You can use AJAX for this: if they navigate or refresh the page, the counter resets, telling you that they've been active. However, if they do nothing for the whole timeout period, they get automatically logged out.
If they close their browser, or exit the tabs that monitor their activity via AJAX, nothing updates the counter any more, and thus you have no idea whether they've been active or not. You can't just log them out when they close a tab or browser, because what if they have multiple tabs or windows of your website open? Then you would only want to log them out once they have closed all of them.
I have other examples, but this is the gist of it. Is there a way to execute code on a website without the website being accessed by a user? Thank you.
You are looking for cron jobs. They are basically scheduled jobs that run at set times. A cron job can run all kinds of scripts, including PHP scripts.
Whether such a script can easily clear expired sessions, I don't know; it will probably depend on the way you store the sessions.
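For example, a crontab entry like the following would run a cleanup script every five minutes (the script path is made up):

*/5 * * * * php /var/www/cron/clear_expired_sessions.php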
It may be just as easy to implement it in the website. If you store the last-activity timestamp of a user, you can simply check on each new request whether that timestamp is too old, and if so, delete the session and redirect to the login page. That way, the user officially remains logged in until their next request.
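A minimal sketch of that check, assuming PHP sessions and an arbitrary 15-minute timeout (the login page name is made up):

session_start();
$timeout = 15 * 60; // seconds of allowed inactivity; pick what suits you
if (isset($_SESSION['last_activity'])
        && time() - $_SESSION['last_activity'] > $timeout) {
    session_unset();
    session_destroy();
    header('Location: login.php'); // hypothetical login page
    exit;
}
$_SESSION['last_activity'] = time(); // reset the counter on every request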
Optionally you may delete old sessions that are remembered by PHP. See related question: Cleanup PHP session files.
One approach is to run your PHP scripts on a timer using cron jobs.
These jobs typically repeat every x hours, minutes, or days.
I'm not sure about the example you provided, though.
I've created a subscription-based system that deals with a large data-set. In its first iteration, it had semi-complicated joins that would execute, based on user-set filters, on every 'data view' page. Each query would fetch anywhere from a few kilobytes to several megabytes depending on the filter range. I decided this was unacceptable and so learned about APC (I had heard about its data-store features).
I moved all of the strings out of the queries into an APC preload routine that fires upon first login. In the same routine, I run the "full set" join query to get all of the possible IDs for the data set into a $_SESSION variable. The entire set is anywhere from 100 to 800 KB, depending on what data the customer is subscribed to.
I convert this set into a JSON array and shuffle the data around dynamically when the user changes the filters. In creating the system I wanted it to seem as if the user was moving around lots of data very quickly, with minimal page loading (AJAX + APC when string representations are needed), as they played with the filters.
My multipart question is, is it possible for the user to effectively "cancel" the initial cache/query routine by surfing to another page after the first login? If so, can I move this process to an AJAX page for preloading, or does this carry the same problem? Or, am I just going about all of this in the wrong way? I came up with the idea on my own and I'm worried that I've created an unusable monster.
Also, I've been warned that my questions suck and I'm in danger of being banned. Every question I've asked has come from a position of intelligent wonder, written as well as I knew how at the time, and so it's really aggravating when an outsider votes me down without intelligent criticism. Just tell me what I did wrong and I will quickly fix the problem. Bichis.
I'm interested in how Google Docs stores documents on the server side, because I need to create a similar application.
Does it use pure RTF/ODF files or its own database?
How do they make the versioning and undo/redo features possible?
If anybody has knowledge regarding this question, please share it with me.
To answer your question specifically as to how Google Docs works: it uses a technology called Operational Transformation.
You may be able to use one of operational transformation engines listed on: https://en.wikipedia.org/wiki/Operational_transform#OT_software
The basic idea is that every operation has a context, e.g. "delete the fourth word in the fifth paragraph" or "add an input box after the button". The clients all send each other operations through the server. The clients and the server each keep their own version of the document and apply operations as they come.
When operations have overlapping contexts, there are a bunch of rules that kick in to resolve conflicts. Like you can't modify something that's been deleted, so the delete must come last in a sequence of concurrent operations on that context.
It's possible that the various clients and server will get out of sync, so you need a secondary algorithm to maintain consistency. One way would be to reload the data from the server whenever a conflict is detected.
--This is an answer I got from a professor when I asked the same thing a couple of years ago.
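For illustration, here is a minimal sketch of transforming two concurrent insert operations so both clients converge. All names are made up; a real OT engine also handles deletes, overlapping edits, and a deterministic tie-break for equal positions.

// An operation: insert $text at character offset $pos.
function makeInsert(int $pos, string $text): array {
    return ['pos' => $pos, 'text' => $text];
}

// Transform $b so it still means the same thing after $a has been applied.
function transformInsert(array $b, array $a): array {
    if ($a['pos'] <= $b['pos']) {
        $b['pos'] += strlen($a['text']); // shift right past A's insertion
    }
    return $b;
}

function applyInsert(string $doc, array $op): string {
    return substr($doc, 0, $op['pos']) . $op['text'] . substr($doc, $op['pos']);
}

$base = 'Hello world';
$a = makeInsert(5, ',');  // client A wants "Hello, world"
$b = makeInsert(11, '!'); // client B wants "Hello world!"

// The server applies A first, then B transformed against A.
$doc = applyInsert($base, $a);
$doc = applyInsert($doc, transformInsert($b, $a));
echo $doc; // "Hello, world!"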
You should use a database, perhaps with a table storing each document revision. First, find a way to determine whether an update is significant or not. You can store minor changes client-side for redo/undo, and then, either periodically or upon some condition (e.g., the user hits save), create a database entry per revision (you can store things like bytes changed, bytes added, bytes deleted, etc.).
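A rough sketch of such a revision table using PDO with SQLite; the table layout, $docId, and $content are illustrative, not from the original post:

$db = new PDO('sqlite:docs.db');
$db->exec('CREATE TABLE IF NOT EXISTS revisions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    doc_id INTEGER NOT NULL,
    content TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)');

// Insert one row per significant revision (e.g., when the user hits save).
$stmt = $db->prepare('INSERT INTO revisions (doc_id, content) VALUES (?, ?)');
$stmt->execute([$docId, $content]);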
Take a look at MediaWiki, which is open source, and essentially does what you're asking (i.e., take a look at their tables and code).
RTF/ODF would typically be generated, and served, when a user requests to export the document.
Possibly, you should consider utilizing Google Drive's public API. See link for details.
I want to write a little game where the user has to click on appearing elements/objects within a given time. In detail: the objects appear in holes in the ground and disappear after x seconds. The gamer has y lives, and all clicks get counted until he loses the game.
After that, his high score gets posted to a database (via form POST or AJAX). Long story short: how can I prevent the user from faking his high score before it is sent? The programming language is JS.
I know it's not possible to hide all the code and make it un-hackable. But I think it's enough if the code is so difficult that the user has to do a lot of work to understand where to intervene to send faked data.
Does anybody have ideas on how to make the code as difficult as possible?
Thanks in advance for any ideas :)
You should never really try to make your source code unreadable. It will create as big a headache for yourself as any obstruction for anyone modifying it.
That said, you could rename all your variables to complete gibberish and play with whitespace, but anyone seriously trying to understand your code could revert that in a decent text editor. Making it any more complex would take away from the efficiency of your program; otherwise you could fill it with useless calls to functions that don't do anything and strange incrementing of counters that the program does not depend on.
There are compressors that do exactly the job you want! Some of them can be downloaded and used as offline tools; some are accessible directly via the web:
http://javascriptcompressor.com
Like jQuery and others, you can keep your readable code to maintain the scripts and deliver a faster-loading, packed version that is hardly readable.
How about this:
Create two PHP pages, one containing the game interface and the other containing the game's code. Program the first one so that it creates a one-time-use string that the <script> tag will pass along as a parameter when it calls the JS code from the second one. Program the second one so it checks the validity of the string sent. If the string is valid, the script should output the JS code, then invalidate the string.
Then, when the user copies the URL of the script, pastes it into his browser, and hits "Return," all he sees is either a blank page or a "not authorized" message.
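A minimal sketch of that idea, assuming PHP 7+ (all file names are made up):

<?php // game.php: renders the interface and a one-time token
session_start();
$token = bin2hex(random_bytes(16)); // the one-time-use string
$_SESSION['js_token'] = $token;
echo '<script src="script.php?token=' . $token . '"></script>';

<?php // script.php: serves the game code only if the token is valid
session_start();
if (!isset($_GET['token'], $_SESSION['js_token'])
        || !hash_equals($_SESSION['js_token'], $_GET['token'])) {
    http_response_code(403);
    exit('not authorized');
}
unset($_SESSION['js_token']); // invalidate: the string only works once
header('Content-Type: application/javascript');
readfile('game.js'); // the actual game code, a made-up file name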