This is a weird scenario I just experienced, and I am not sure how to phrase the question.
It may be best to describe my application and what it does first.
I have an IP camera connected to my router.
I use a C# VLC wrapper to grab 10 frames a second over the RTSP protocol.
I then upload these separate JPEGs to my web server using a [web method].
Then, in the browser, a JavaScript timer set to 100 ms renders the image into an HTML image control by calling an ashx page repeatedly.
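To be concrete, the browser side is roughly this (a sketch only; the endpoint name frame.ashx and the element id cam are made-up placeholders, not the real names):

```javascript
// Minimal sketch of the 100 ms polling loop described above.
// "frame.ashx" and the element id "cam" are illustrative names.
// The cache-busting query string keeps the browser from reusing
// a stale cached frame.
function frameUrl(base) {
  return base + '?t=' + Date.now();
}

// Browser-only part: swap the image source ten times a second.
if (typeof document !== 'undefined') {
  setInterval(function () {
    document.getElementById('cam').src = frameUrl('frame.ashx');
  }, 100);
}
```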
This worked fine for a few days.
Here is what I have experienced in the last 48 hours.
The images coming from the IP camera were jumpy. That is to say, sometimes the images flowed in a timely order, and sometimes the stream would slow down, stop, and then speed up again to 'catch up'.
I noticed, when viewing via a web browser on another PC on my network, that the JavaScript timer calls were slow and sometimes stopped for periods of time. I used Google Chrome to see how often the ashx URL was being called.
I closed down my own applications, rebooted all my PCs, and started the VLC application without using the wrapper. Again, the flow was 'jumpy', so the conclusion was that it was not my application.
For some reason I decided to log into my router (192.168.0.1).
The page was not found.
In fact, I had to do a complete restart of my router to be able to access its admin page again.
As soon as I did this everything worked OK again.
So, the two questions I have are: (1) why could I not access my router through that IP address, and (2) why was my JavaScript timer grinding to a standstill?
Like I said this is a weird scenario and I would not blame anyone for wanting to close or vote down this question.
But on the off-chance this is a known thing I would like to be educated.
Thanks
I am developing a basic web app interfacing with the Nordic BLE devkit.
I am new to JavaScript development and came across a rather common but, for me, weird problem while testing my app.
Basically, I have 2 html pages and a common javascript file.
The first page finds the nearby BLE devices, connects to one, and then stores its characteristics and services, which are needed for the communication (the processing is done in the JavaScript file).
After a button press on the first page, the app runs location.replace("path for second html") and switches to the second HTML file.
Here I noticed that after transferring to the second page, the devkit is disconnected.
I have few buttons on the second page which when pressed invokes routines in the javascript file.
Now, since the device is disconnected, the characteristics and services read earlier are lost and the app crashes.
I know this is a typical binding problem, but I am not quite familiar with the exact JavaScript concepts I need to look at in order to get more information on this issue.
Can anyone help me with this?
It is not currently possible to transfer a BluetoothDevice or any of the other associated objects to a new page during a navigation (which is what happens when you call location.replace()). If possible you should keep the user on the same page for the entire time that it is connected to a device.
There is upcoming work on Chromium issue 974879 which will make it possible to keep the permission the user granted your site to connect to the device across navigations and sessions, but you will still have to reconnect on each page.
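A minimal sketch of the "stay on one page" approach: instead of calling location.replace(), toggle which view is visible, so the BluetoothDevice and its GATT objects are never torn down. The ids "page1"/"page2" and the optional doc parameter (which only exists to make the sketch testable) are assumptions, not part of the original app:

```javascript
// Sketch only: toggle two container elements on a single page instead
// of navigating, so the connected BluetoothDevice survives. The ids
// "page1"/"page2" are invented for illustration; "doc" defaults to the
// real document and is injectable only for testing.
function showPage(id, doc) {
  var d = doc || document;
  ['page1', 'page2'].forEach(function (pid) {
    d.getElementById(pid).hidden = (pid !== id);
  });
}

// After the connect button succeeds on the first view:
// showPage('page2');  // services and characteristics stay usable
```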
Can an old version of jQuery (specifically a JavaScript auto-refresh script) crash a website for a single user/local network?
I have a small, simple MySQL database with a PHP-coded site to insert and retrieve data for internal (non-public) use. It was developed over 4 years ago, still uses jquery-1.2.6.min.js, and there were no problems until the past month. Now the site's behavior is very erratic, but it only affects the local network of the user I developed it for. It is not a browser crash: when it goes down, the site is unavailable on every computer in that network. Switch to a hotspot or a different building and all works as designed.
I don't have any reason to think it is a website or hosting problem (I have talked to hosting support 3 times), but I am trying to cover my bases, as the network engineers and ISP tech support are all scratching their heads.
I use two instances of an auto-refresh script: 1) to show the current time to the user, and 2) to refresh the screen with new database content every 15 seconds.
An example of one of the two refresh scripts:
<script type="text/javascript">
function load_content() {
    $('#db_content').load('ac_data.php').hide().show();
}
setInterval('load_content()', 15000);
</script>

<div id="db_content">
<?php require 'ac_data.php'; ?>
</div>
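One thing worth noting about the script above: setInterval keeps firing even if a previous .load() has not finished, so on a congested network requests can stack up. A sketch of a self-rearming alternative (the scheduler parameter is only there to make the sketch testable; it is not part of the original code):

```javascript
// Hypothetical alternative to the setInterval version above: re-arm a
// timer only after each load finishes, so slow responses on a congested
// network cannot stack overlapping requests. The injectable "scheduler"
// exists only for testing and defaults to setTimeout.
function startPolling(loadFn, delayMs, scheduler) {
  var schedule = scheduler || function (fn, ms) { setTimeout(fn, ms); };
  function tick() {
    loadFn(function onDone() {
      schedule(tick, delayMs);
    });
  }
  tick();
}

// Browser usage (assumes jQuery and the #db_content div above):
if (typeof window !== 'undefined' && window.jQuery) {
  startPolling(function (done) {
    window.jQuery('#db_content').load('ac_data.php', done);
  }, 15000);
}
```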
Is there any remote chance the code could be to blame, given that the site has not been updated since its original development? I am merely looking for ammo to put the ball back in the network team's court, but if there is any chance my code could cause a hang-up, I would like to be able to fix it.
I'm writing a web app which has quite a lot of .png files on it, so it is relatively heavy (5 MB). The problem I have is that the iPad does not seem to load all the elements every time I start the web app in home-screen mode.
The app is basically a bunch of DIVs with background images that act as "nice buttons", plus running JS code. Suddenly an image is not loaded, so I see no button, but I can still press it and the functionality is there, so my JS code has loaded and is working. I suspect it has something to do with having too many images, so they don't get loaded if the iPad thinks it has no resources for them.
Has anyone had such an experience?
The problem was not the iPad or HTML5. My web app loads a lot of images, and on the server side I have IIS/ASP.NET on an XP machine. The MaxConnections setting of IIS defaults to 10. As it gets flooded by requests from the app, it randomly rejects some image requests, and the iPad (or any browser) cannot load them (403 error). Increasing the MaxConnections parameter to its maximum value of 40 solved the problem:
C:\Inetpub\AdminScripts> cscript adsutil.vbs SET w3svc/1/MaxConnections 40
Now I want to detect these 403 errors to warn the user in case it still happens, but that's another story (and another question on Stack Overflow)...
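For that follow-up, a rough sketch of one way to spot the failures: listen for each image's error event and report the failing URL. The callback name and signature below are illustrative, not from the actual app:

```javascript
// Rough sketch for detecting images that fail to load (e.g. the 403s
// described above). Attaches an error listener to each image and
// reports the failing URL. The callback name "onFail" is illustrative.
function watchImages(imgs, onFail) {
  Array.prototype.forEach.call(imgs, function (img) {
    img.addEventListener('error', function () {
      onFail(img.src);
    });
  });
}

// Browser usage: warn when any image on the page fails to load.
if (typeof document !== 'undefined') {
  watchImages(document.images, function (src) {
    console.warn('Image failed to load: ' + src);
  });
}
```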
Hopefully this isn't a tricky one. I've got a web app that doesn't load all JavaScript/CSS/images on the first visit; the second visit is fine.
After approximately 2 minutes of inactivity the problem reoccurs.
These problems only started occurring after the customer requested that SSL be applied to the application.
Ajax requests stop working after 2 minutes of inactivity, despite a successful page load of all JavaScript elements.
Application timeout is 30 minutes - like I said, everything was fine before SSL was applied.
All JavaScript and CSS files use absolute URLs, e.g. https://blablabla
There appears to be no pattern to which files aren't loaded. The Firebug Net output shows the status of the failed elements as 'Aborted'. For example, site.css and nav.css are in the same folder and are declared one after the other in the head tag, yet one is loaded and the other is not. Both load fine after refreshing the page (unless roughly two minutes have passed).
An Ajax request also shows as aborted after two minutes. However, if I make the request again, it succeeds. It's almost as if the first request woke something up.
None of these problems occur in Chrome.
Any ideas? :)
FYI, this is a .NET 4 C# MVC app running under IIS7, but I'm not sure that's relevant since it works in Chrome. Everything worked fine before SSL was applied.
I removed SSL across the board and secured the action methods with [RequireHttps]. I then changed the script and CSS references in the master files to point to absolute HTTP URLs. The JavaScript then worked, which fixed the Ajax.
If anybody has any idea why the CSS/JavaScript broke over SSL, it would be cool to hear. I'm guessing it's perhaps the workload? Since it worked the second time, I'm guessing half the CSS and scripts were cached, making less of a workload over SSL.
Anyway, working now!
I have a site which uses AJAX and preloaders. I would like to see the impact of these preloaders before deploying the site online.
The "problem" is that localhost has no loading time and the response is immediate, so I can't see my preloaders.
How can I simulate loading or limited bandwidth (with Firefox, Rails or whatever else)?
If you are on Windows, download Fiddler and set it to act as if you are on a modem:
Tools-->Performance-->Simulate Modem Speeds
[edit]
Since you said you are now on a Mac, you have Charles, which has throttling
[/edit]
I don't have a Rails app in front of me right now, but why don't you just add a delay to the appropriate controller?
i.e.
def index
  # ...
  sleep 2 # sleep for 2 seconds
  # ...
end
Alternatively, use a debugger and place a breakpoint in the controller code. This should mean that your preloader will show until execution is continued.
One option would be to deploy the site briefly to the host you will be using for production under an alternate URL for performance testing.
However, the way it performs for you won't necessarily be the same for everyone else in other locations.
If you provide some more detail on what these "preloaders" are and how they work and what you mean by "see the impact" we might be able to give better answers. Do you mean you want to eyeball the AJAX spinner gifs and get a feel for how it will look to the end user as the loading takes place? Or do you mean you want to do some kind of formal benchmarking on them?
You can use the Firebug plugin for Firefox to examine the network behavior of your page. This works fine on localhost. You should see all images being retrieved simultaneously when the preload executes.
You could configure your router to forward requests on a certain port to the computer you're running the website on. Then, when you open your.ip.add.ress:the_port in your browser, the bottleneck will be your upload speed, which is generally quite low.
But that's just how I would do it ;)