Here is the problem:
I have a web application - a frequently changing notification system - that runs on a series of local computers. The application refreshes every couple of seconds to display the new information. The computers only display info, and do not have keyboards or ANY input device.
The issue is that if the connection to the server is lost (say updates are installed and a server must be rebooted), a page not found error is displayed. We must then either reboot all computers that are running this app, OR add a keyboard and refresh the browser, OR try to access each computer remotely and refresh the browser. None of these are good options and they result in a lot of frustration.
I cannot change the actual application OR server environment.
So what I need is some way to test the call to the application, and if an error is returned or it times out, continue trying every minute or so until the connection is reestablished.
My idea is to create a client-side page scraper that makes a JS request to the application (which displays basic HTML) and can run locally on the machine, no server required. If the scrape returns the correct content, it displays it. If not, it continues to request the page until the actual page content is returned.
Is this possible? What is the best way to do it?
Instead of scraping, check the status code in the response from the server. If it's not 200 or 304, you've received an error page and should try again.
Check out this link: http://www.ibm.com/developerworks/web/library/wa-ajaxintro3/#N102DB
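A minimal sketch of that approach: a small watchdog page stays loaded on each display machine and hosts the real application in an `<iframe id="app">`. The URL, the 10-second timeout and the one-minute retry interval are placeholders, and if the watchdog page is opened from disk or a different origin the browser may block the cross-origin request, so it may need to be served from the same host as the app.

```javascript
// Hypothetical address of the real application -- adjust to your environment.
var APP_URL  = 'http://yourserver/notifications';
var CHECK_MS = 60 * 1000;   // probe the server roughly once a minute
var serverWasDown = false;

function checkServer() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', APP_URL, true);
  xhr.timeout = 10 * 1000;  // treat a 10-second hang as a failure

  xhr.onload = function () {
    if (xhr.status === 200 || xhr.status === 304) {
      if (serverWasDown) {
        // The connection is back: reload the application frame once.
        document.getElementById('app').src = APP_URL;
        serverWasDown = false;
      }
    } else {
      // An error page (404, 500, ...) came back instead of the app.
      serverWasDown = true;
    }
    setTimeout(checkServer, CHECK_MS);
  };

  // No response at all (server rebooting, network down, timeout).
  xhr.onerror = xhr.ontimeout = function () {
    serverWasDown = true;
    setTimeout(checkServer, CHECK_MS);
  };

  xhr.send();
}

checkServer();
```

While the server is reachable the app keeps refreshing itself inside the frame as before; the watchdog only steps in to reload the frame after an outage.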
I have a project with the following problem:
If the site is accessed via the domain root, you can switch to any subpage without an error. However, if a subpage URL is called directly, e.g. https://impalawolfmitbiss.com/consulting, a server error is returned. This affects every subpage that is not reached from the main page.
Error Logs:
Apache server logs (500 GET /consulting HTTP/1.1)
Chrome Console Network: 500 Internal Server Error
Since I rarely have to deal with such applications, I just want to ask in advance whether this is a known problem and whether anyone has suggestions for a quick solution or tips I can search for.
A note: if I start the application locally, the problem does not occur.
Site: https://impalawolfmitbiss.com
I'm writing code in PHP and JS for a project that will request data from a remote database every minute.
The PHP makes the connection and returns the data from my MySQL database, and the JS is responsible for printing this data on my website and requesting it again.
My question is: in the PHP file, should I keep the connection to the database server alive, or close it and open it again every minute?
I'm sorry about my English.
It seems your application is a web page. That means the web server serving it will run a PHP process for each request; when the request finishes, that process dies and the database connection gets closed with it.
Your question applies better to CLI applications, especially if running in daemon mode.
UPDATE: as per #deceze comment "it's perfectly possible to have a persistent PHP script backing a web server… it's just not the default MO".
The close operation releases the connection object's database and resources immediately. So if you do a lot of computation on the open connection and have already extracted the full result from the database, you may want to close the connection, since keeping it open brings down performance on the server. But if you don't do a lot of computation on the connection, you can keep it open until some trigger closes it (this trigger can be anything, really, from a button like logging out to a timer that closes the connection).
I'm trying to develop a chat system in PHP, SQL and AJAX. I created an AJAX function to get messages from the database, triggered on the window load event. So if I open two browser windows to test the application, I can see the messages, but when I send a message it appears only in the window it was sent from, not in both. To solve this problem I used setInterval to fetch the messages every 1 second.
Do these frequent requests damage the server?
I don't quite know what you meant by "damage", but nothing can really be damaged by a few extra requests.
If you're wondering whether the webserver can handle the load, it really depends on how many chat sessions are going at the same time. Any decent web server should be able to handle a lot more than two requests per second. If you have thousands of chat sessions open, or you have very CPU intensive code, then you may notice issues.
A bigger issue may be your network latency. If your network takes more than a second for a round-trip communication with the server, then you may end up with multiple requests coming from the same client at the same time.
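One way to avoid that pile-up is to schedule the next poll only after the previous one has finished, chaining setTimeout instead of firing blindly from setInterval. A rough sketch, where getmessages.php and the #messages element are hypothetical names for whatever your application uses:

```javascript
var POLL_MS = 1000;   // roughly one request per second, as in the question

function pollMessages() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'getmessages.php', true);   // hypothetical endpoint

  xhr.onload = function () {
    if (xhr.status === 200) {
      // Replace the message list with whatever the server rendered.
      document.getElementById('messages').innerHTML = xhr.responseText;
    }
    // Schedule the next poll only after this response arrived, so a slow
    // round-trip can never cause two requests to be in flight at once.
    setTimeout(pollMessages, POLL_MS);
  };

  xhr.onerror = function () {
    setTimeout(pollMessages, POLL_MS);
  };

  xhr.send();
}

pollMessages();
```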
I have an ASP.NET webpage with just a couple of divs on it. Most of the work is done through JavaScript, JSON, and a webservice running on the same page.
My webservice has three different functions I use. All of them perform a select statement against the same database, which is located on the same machine the website is on.
I originally started working on the webpage on my local machine and could run all my webservice functions correctly and get a response. However, I am now trying to move the page to a server but I am having no luck.
On the server, the page loads correctly, which requires a call to the first function (this works fine). However, on click of a button a second call is made and I get a timeout error.
I tried running the functions directly on the server: the first function works fine, but the other two do not; they give me a 500 error. Any ideas what could be causing this?
I figured out the issue. I had some code in the two functions that was trying to get the hostname from an IP address, but my webpage was being hosted on a computer in the network outside of the domain, so it was getting an exception when trying to convert the IP to a hostname. I resolved the issue by hosting the webpage on a computer on the same domain so that it could resolve the IP addresses to hostnames. Thanks for the help.
I am building a simple web page that will run on a computer connected to a large TV, displaying some relevant information for whoever passes it.
The page will (somehow) fetch some text files which are located on an SVN server and then render them into HTML.
So I have two choices how to do this:
Set up a cron job that periodically checks the SVN server for any changes, and if so updates the files from SVN and (somehow) updates the page. This has the problem of violating the Access-Control-Allow-Origin policy, since the files now exist locally, and what is a good way to refresh a page that runs in full-screen mode?
Make the javascript do the whole job: Set it up to periodically ajax request the files directly from the svn server, check for differences, and then render the page. This somehow does not seem as elegant.
Update
The Access-Control-Allow-Origin policy doesn't seem to be a problem when the page is served from a web server, since the content is then on the same domain.
What I did in the end was a split between the two:
A cron job updates the files from SVN.
The JavaScript periodically requests the files using window.setInterval, with the ifModified flag turned on for the AJAX request so the HTML is only updated if a change has occurred.
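Roughly what that polling looks like, assuming jQuery is used for the AJAX call and the cron job keeps a local copy of the file next to the page; status.txt, the #content element and the 30-second interval are placeholders:

```javascript
// Re-request the locally synced file at a fixed interval; jQuery's ifModified
// option sends an If-Modified-Since header, so an unchanged file comes back
// as a 304 and the page is left alone.
window.setInterval(function () {
  $.ajax({
    url: 'status.txt',                     // placeholder: file the cron job updates
    ifModified: true,
    success: function (data, textStatus) {
      // jQuery reports 'notmodified' (and passes no new data) on a 304,
      // so only re-render when fresh content actually arrived.
      if (textStatus === 'success' && data) {
        $('#content').text(data);          // placeholder target element
      }
    }
  });
}, 30 * 1000);
```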