I have a web service which customers use by inserting an external JavaScript file (hosted on my servers) into their pages. Recently, due to a server outage, the external JavaScript became unavailable and my customers' websites slowed to a crawl, because the browser didn't render the rest of the page until it had loaded the JS (the script goes into the head of their pages).
I am trying to work out methods so that customers' websites don't slow down even if my server goes down, and for that I want to simulate a condition where my server isn't responding. Note that if I specify a wrong URL, the browser simply won't load the JS, but if the URL is right and the server isn't responding, the browser will stall loading the rest of the page. I want to simulate the latter case. Any ideas how I can go about it?
PS: On the server side, I am using the LAMP stack.
Create a script that sleeps for a configurable amount of time.
Something like
<?php
// Simulate a hung server: stall for the requested number of seconds.
$how_long = (int) $_GET['seconds'];
sleep($how_long);
header('Content-Type: application/javascript');
echo "alert('Finished sleeping!');";
?>
Then you just access this script instead, for example by putting this in your HTML code:
<script src="http://example.com/hang_for.php?seconds=3600"></script>
That would sleep for an hour. Other timeouts configured in php.ini may trigger first, but that's exactly what you want to test, no?
If the "P" in your LAMP is PHP, you could use the sleep function (documented here). Then, have your test page load your PHP script as the source of your Javascript to see what happens.
Did you try pointing the script URL back at the server itself (or at any other HTTP server without the web service on it)?
Unplugging is pretty drastic; the off-button should do.
Unplug the server. Having no power makes a server unresponsive...
I've been wrapping my head around this problem for a couple of days, searching for all possible solutions on forums and online, but I can't seem to get it working.
I'm calling a script via a link on a "button" to start a script on the server (in HTML):
<a href="#" onClick="RunScript();">
The script code is:
<script type="text/javascript" language="javascript">
function RunScript() {
    // Requires Internet Explorer with ActiveX enabled
    var objShell = new ActiveXObject("WScript.Shell");
    objShell.Run("%comspec% /k my_projects_EN.vbs", 1, false);
}
</script>
So why am I using a VBS? What I'm trying to do is create custom pages for each employee. The VBS checks the computer name, and an If clause directs the employee to a custom page. With my basic knowledge of programming and a lot of hours of searching, I did not find a better solution for this yet, so I'm trying to make this one work.
And it does, but only if I'm running the script locally (from the desktop). As the webpage will be used on an intranet, this script will be on a server, and this is where it gets a bit hairy, as I can't seem to find the right combination of commands. I already tried pushd for creating a mounted volume and currentDir for setting the location of the script, but nothing seems to work completely.
I assume that I'm missing a subroutine for the function, as adding anything there just stops the script, but how to go about it is beyond me.
All help is appreciated, even if it means I have to bury myself in another programming language (not preferred, of course).
I am certain that there is a way to solve this other than sending a script to each employee to put on their desktop (each time a new employee comes to work).
Thanks
Edit: I see an additional clarification is in order:
We're creating an intranet webpage to help our employees work more efficiently. We're on the same level as the rest of the employees, so we're not IT and have no admin rights; we're on our own.
The point is to have a personal page for each employee, accessible via the same interface. So a link has to send each person to a different page; that is why I've created the VBS code that helps with that. Having checked several other options, this seemed to be the simplest and best one, and it works at least partially. I don't see any security risks, as everything will be executed on each client computer; the files themselves will be located on the server. The script itself does not represent any risk, at least not that I can see, but of course I'm not a specialist.
So in short this is what we're trying to do:
Main page -> link to My_projects button -> start script (located on the same server as the main page) -> determine the client computer name -> redirect to the right webpage.
Sorry for a lack of details, I see that it's sometimes hard to explain exactly what you want if you're not a pro in these things.
Thanks again.
If those computers are physically located at your workplace and you have control over the system, it would be better to tweak DNS redirections on those computers. Otherwise, a more general and OS-independent solution would be a session, cookie, or token on the employee's computer. Still, some kind of authentication other than possession of a particular machine could be more versatile and secure (unless your PCs are 1000 feet underground :-) ).
Edit: What kind of info/data is sent to the server script? A server-side script runs on the server, so everything related to "this computer" (e.g. its name) actually refers to the server itself. Thus the script needs some data from the client to recognise which computer it is.
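A minimal sketch of the cookie approach suggested above, assuming the intranet pages are served by the same LAMP/PHP server (the file name identify.php, the cookie name and the page mapping are made up for illustration):
<?php
// identify.php - hypothetical example of the cookie-based redirect.
// On the first visit the employee picks who they are; after that the cookie
// identifies them, with nothing installed on the client machine.
$pages = array(
    'name_surname'   => 'name_surname.html',
    'other_employee' => 'other_employee.html',
);
if (isset($_COOKIE['employee']) && isset($pages[$_COOKIE['employee']])) {
    header('Location: ' . $pages[$_COOKIE['employee']]);
    exit;
}
if (isset($_GET['who']) && isset($pages[$_GET['who']])) {
    setcookie('employee', $_GET['who'], time() + 365 * 24 * 3600); // remember for a year
    header('Location: ' . $pages[$_GET['who']]);
    exit;
}
// No cookie yet: show a one-time "who are you?" chooser.
echo '<ul>';
foreach ($pages as $who => $page) {
    echo '<li><a href="identify.php?who=' . htmlspecialchars($who) . '">' . htmlspecialchars($who) . '</a></li>';
}
echo '</ul>';
?>
The main page's "My projects" button would then simply link to identify.php instead of launching a VBS.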
Thanks for the effort.
Everything is actually located on the server, so the client computer only opens the page or interface, which lives in \\Server\folder\folder, for example.
In your browser you open the start page, which contains a button with a link to this script (located on the same server).
When the script executes, it looks up the computer name and sends the user to his personal page:
Set wshShell = CreateObject( "WScript.Shell" )
strComputerName = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" )
On Error Resume Next
'#01 name_surname
If strComputerName = "XXXXXXXX" Then
    ' Open this employee's personal page
    wshShell.Run """name_surname.html"""
End If
and so on.
And this is all there is. As mentioned before, we don't have admin rights to change anything on the client computers, so nothing is being done on the client side other than executing a script located on the server.
I have built a simple webpage for a touchscreen kiosk (Win7, XAMPP).
The interface is built up of 9 tiles (Windows Metro style) using HTML, PHP and CSS only. Each of the tiles is a simple link.
What I would like to do is track how many times each of the tiles have been clicked.
Examples of my pages are;
www.example.com/help.html
www.example.com/contact.html
www.example.com/map/floor1.html
The kiosk will be running on localhost, and I feel that Google Analytics, Piwik or AWStats are too resource-intensive for such a small task. Obviously, as the kiosk is running on localhost, the IPs, location, browser etc. aren't important.
Are there any other ways I could track the clicks to a log file or similar?
Any advice appreciated.
You can use onclick handlers on the links and use JavaScript to write to a log file.
I would say this data can be found in Apache's access logs, if you only want to know how many times a page has been accessed. That can be done easily with a tool such as Apache Log Viewer.
If you actually want to log link clicks, you probably have to use JavaScript click handlers. Because I consider writing files from JavaScript ugly, I would probably send an AJAX request to my PHP server every time.
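A minimal sketch of that AJAX-to-PHP idea (the file name log_click.php and the log location are assumptions; each tile would call it from its onclick handler):
<?php
// log_click.php - hypothetical endpoint that appends one line per tile click.
// A tile's onclick would fire e.g. an XMLHttpRequest to log_click.php?page=help
$page = isset($_GET['page']) ? basename($_GET['page']) : 'unknown';
$line = date('Y-m-d H:i:s') . "\t" . $page . "\n";
// FILE_APPEND + LOCK_EX so concurrent clicks don't clobber each other.
file_put_contents(__DIR__ . '/clicks.log', $line, FILE_APPEND | LOCK_EX);
?>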
Edit:
Another way would be to convert all your HTML files into PHP and log from there (example below).
Example:
<html>
<?php
// Load the existing counters (count.php returns an array); start fresh if it doesn't exist yet.
$count = @include 'count.php';
if (!is_array($count)) {
    $count = array();
}
$key = 'count-' . __FILE__;
$count[$key] = isset($count[$key]) ? $count[$key] + 1 : 1;
// Write the counters back as a PHP file that simply returns the array.
file_put_contents('count.php', '<?php return ' . var_export($count, true) . '; ?>');
?>
</html>
I am writing an app that includes about 12 short JS files in the <head> section (I know, I should merge them and move them just before the end of <body>; I'll get to that for production).
The trouble is that when I try to load the app in Chrome, some files load immediately while some never finish loading at all! Chrome keeps trying to load the 12 JS files and never renders the page until I hit "Stop".
When I hit stop, the HTML is rendered and the JS files fail as in the image below:
Note that different JS files fail on each attempt! It's not the same file that gets stuck every time!
Inspecting the headers of the failed files shows "Caution: request is not finished yet". The files are stuck in "Receiving" sometimes for many minutes!
Now here's the fun part, after hitting stop, if I focus on the omnibar and press enter, all the JS files load instantly and the application works fine!
On the server side, I am using Apache, PHP and MySQL. Have I misconfigured something in Apache?
STATUS after 2 gruelling days: zilch, nothing, nada; this is driving me nuts. I have tried running the code from different machines, tried changing Apache cache settings and changed myriad things in the JavaScript, but nothing has worked. The worst thing is that no one can pinpoint where the problem is!
One possibility is that you have Keep-Alive enabled and it is not working properly. I've seen this before. The browser establishes its maximum number of connections to your server (typically 6) to download the first few files (CSS, JS, etc.), then those connections are not released until they time out. My symptoms were not quite the same as yours - the timeout was 20 seconds and everything would load in batches of 6 after that - but this could still be the cause.
In your Apache configuration (httpd.conf on most systems), look for the KeepAlive line (or add it if it's missing) and set it to Off.
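For example (the file location varies by distribution; this is just the relevant directive):
# httpd.conf - disable persistent connections while testing this theory
KeepAlive Off
Remember to restart Apache after changing it.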
More than an answer, here's how I would troubleshoot the problem.
One of the things I'd try is commenting out the script tags one at a time and reloading, to see where the threshold is. Also, because this is probably a server configuration problem, I'd restart the server after each try to have a clean slate, so to speak, i.e. no state preserved between tries.
I'd try to get more hints by making the requests for the various JavaScript files from a script in your favourite language. Ideally, I'd GET the scripts one by one (say with curl), waiting a few milliseconds between requests. I imagine I'd hit a threshold here as well: getting one script per second works, but when requests get too close, the server gets stuck.
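A sketch of that idea in PHP, since it is already on the server (the URLs and the 200 ms delay are placeholders):
<?php
// fetch_one_by_one.php - request each JS file sequentially with a small pause,
// to see whether the server chokes when requests arrive close together.
$urls = array(
    'http://localhost/js/first.js',
    'http://localhost/js/second.js',
    // ... the rest of the 12 files
);
foreach ($urls as $url) {
    $start = microtime(true);
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    printf("%s -> HTTP %d, %d bytes, %.2f s\n", $url, $code, strlen((string) $body), microtime(true) - $start);
    usleep(200000); // 200 ms between requests; shrink this to find the threshold
}
?>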
If still no clue, I'd use tcpdump to watch the traffic between the browser and the server. Okay, this may be a little too low level!
But perhaps I'd use netstat to see how many connections the browser opens in parallel to the server to fetch the resources, and see if we hit a concurrency limit.
I'm sorry this is not a solution, but I hope it gives you some ideas, and I'd be very curious to know what your problem turns out to be in the end!
We got exactly the same message "Caution: request is not finished yet" after a request in the browser.
The request itself was on the order of 15 MB of JSON, which was fed into an Angular 1 application. The request took about 2 seconds. Right after the request finished, the Angular digest cycle blocked the browser for more than 12 seconds (this is visible by profiling during the request). During that time Chrome showed the "Caution: request is not finished yet" message for the request, while it actually was already finished.
Check the Content-Length header. The server may pass an incorrect value - greater than the actual content length.
Try to reproduce the problem in https://www.stevesouders.com/cuzillion/
That should tell you which side the problem is on - Chrome or the server.
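One quick way to compare the advertised length with what actually arrives, as a PHP sketch (the URL is a placeholder for one of your script files):
<?php
// check_length.php - compare the Content-Length header with the actual body size.
$url = 'http://localhost/js/first.js'; // placeholder URL
$headers = get_headers($url, 1);
$body = file_get_contents($url);
$advertised = isset($headers['Content-Length']) ? $headers['Content-Length'] : 'none';
if (is_array($advertised)) {
    $advertised = end($advertised); // several values can appear after redirects
}
printf("Content-Length header: %s, actual bytes received: %d\n", $advertised, strlen($body));
?>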
There are several good answers, but if you want a foolproof method of doing this, you can use PHP to send all the scripts at once. If the problem really is too many connections to the server, you could add some HTML code like this:
<script type="text/javascript" src="scripts/scripts.php"></script>
And then in scripts.php you use the include() function to include all of the JavaScript files like so:
<?php
header('Content-Type: application/javascript'); // control all your headers here
include('jquery-1.9.1.min.js');
// rest of scripts
?>
If nothing else works and the problem is excessive connections, this should work.
I need to load a var by getting JSON from a webservice, so my question is where does this code go? I tried to put it in the content script but XHR would fail there.
Any suggestions?
Starting from Chrome 13, content scripts can also perform cross-origin XHR requests (before that, only background pages could). So you can put your code wherever you like.
If it doesn't work, then you probably didn't specify domain permissions (or you are trying to connect to a non-80 port, a non-http(s) protocol, etc.).
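For reference, the domain permissions go in your extension's manifest.json; a sketch of the relevant entry (the host pattern is a placeholder for the web service you call):
"permissions": [
  "http://api.example.com/*"
]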
So I'm trying to create a web spider. I've run into a website that has some JavaScript, and I want to trick the browser into thinking that an event has been fired and that it must call the corresponding JavaScript code to handle the event. How would I be able to do this in Perl, using WWW::Mechanize or WWW::Scripter::Plugin::JavaScript?
Also, it would be very much appreciated if someone could put up an example of how to use WWW::Scripter::Plugin::JavaScript.
Thanks in advance. Also, if someone has a better way to word the question, please go ahead and edit it.
In a normal browser setup, the JavaScript is in the browser, not on the server. It's the client that executes the JavaScript.
That means you either need to manually figure out what the JavaScript code does and code that in Perl, or you need to load a JavaScript engine.
Here are three modules that give you JavaScript support:
WWW::Mechanize::Firefox
Win32::IE::Mechanize
WWW::Scripter::Plugin::JavaScript
Using WWW::Mechanize and Live HTTP Headers, I did a Live HTTP Replay.
From the replay, I copied the headers (e.g. Connection: keep-alive becomes $agent->add_header( "Connection" => "keep-alive" );) and then copied the POST content into my $content = '..
Then $agent->post( $url_of_the_site, Content => $content );
This worked to "click" a link (like page 2) on an ASPX site.
I used this code as a guide http://pastie.org/1728196/wrap