I'm writing a web app which has quite a lot of .png files on it, so it is relatively heavy (5 MB). The problem I have is that the iPad does not seem to load all elements every time I start the web app in home-screen mode.
The app is basically a bunch of DIVs with background images that act as "nice buttons", plus the JS code running behind them. Every so often an image is not loaded, so I see no button, but I can still press it and the functionality is there, so my JS code is loading and working. I suspect it has something to do with having too many images, so they don't get loaded if the iPad thinks it has no resources for them.
Has anyone had a similar experience?
The problem was not the iPad or HTML5. My web app loads a lot of images, and on the server side I have IIS / ASP.NET on an XP machine. The IIS MaxConnections setting defaults to 10. As it gets flooded by requests from the app, it randomly rejects some of the image requests, so the iPad (or any browser) cannot load them (403 error). Increasing the MaxConnections parameter to its maximum value of 40 solved the problem:
    C:\Inetpub\AdminScripts>cscript adsutil.vbs SET w3svc/1/MaxConnections 40
Now I want to detect these 403 errors to warn the user in case it still happens, but that's another story (and another question on Stack Overflow)...
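For what it's worth, such a failed image load can be detected on the client by probing each URL with an Image object and hooking its error event, which also fires when the server answers with a 403. A minimal sketch (the URL list and the warnUser function are placeholders, not part of my actual app):

    // URLs of the button background images to verify (placeholder values).
    var imageUrls = ['img/button1.png', 'img/button2.png'];

    function warnUser(url) {
        // Placeholder: surface the failure however suits your app.
        alert('Failed to load ' + url + ' - please reload the page.');
    }

    imageUrls.forEach(function (url) {
        var probe = new Image();
        // 'error' fires on any failed load, including a 403 response.
        probe.onerror = function () { warnUser(url); };
        probe.src = url;
    });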
I have created a VERY basic (w.i.p.) single-page photography website hosted on GitHub which uses very basic HTML, CSS and some JavaScript.
The site works locally, but online the browser back button (Chrome, Safari) has a habit of leaving the website in a broken / non-visible state. I am struggling to understand exactly why, but I believe it happens when the back button restores a previously cached version of the page without re-executing the JavaScript.
http://www.alkchan.com/
Specific elements begin with 0% opacity and are faded in with a JavaScript-activated CSS transition.
I really struggle to understand why the JavaScript isn't executed when the back button brings up a cached page. From the reading I've done, I understand my best bet is setting a no-cache directive in the HTTP headers; however, I don't currently think I can access those on a GitHub-hosted page.
Can anyone shed any light on why JavaScript doesn't seem to run on a cached version of the page? Thanks.
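One header-free approach I've read about is the pageshow event, which fires even when the page is restored from the back/forward cache (where load-time scripts don't run again). A rough sketch of what I mean, assuming the fade is driven by adding a visible class to elements marked fade-in (both class names are made up for illustration):

    window.addEventListener('pageshow', function (event) {
        // event.persisted is true when the page was restored from the
        // back/forward cache, i.e. load-time scripts did not run again.
        if (event.persisted) {
            var elements = document.querySelectorAll('.fade-in');
            Array.prototype.forEach.call(elements, function (el) {
                el.classList.remove('visible');
                void el.offsetWidth; // force a reflow so the transition restarts
                el.classList.add('visible');
            });
        }
    });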
I have a simple <input type="file"> in a web form (to be viewed in a browser), and I need it to work on Android (as well as on other mobile devices and desktop).
Due to a well-known but still unfixed bug in Android (https://code.google.com/p/android/issues/detail?id=53088), any such input field may miserably fail to work: while you are choosing the file to upload (with whatever application, e.g. the Gallery or a third-party file browser), the browser activity is in the background and the system may kill it at any time (no matter how huge your RAM is). The page may then reload when the browser activity is restored, and the file you selected is forgotten.
This still happens in Chrome on Android 4.4.4.
Of course it does work at times, but not always, and it's unpredictable.
I can think of (painful-to-implement) workarounds for a WebView inside a native app, but I can't think of any workaround in pure HTML + JavaScript for a web page visited in a browser.
The thing is, some workaround must exist, because there are web pages out there with file uploads that never run into this issue, such as m.facebook.com, to name only one. EDIT: forget this paragraph; Facebook and Twitter are affected as much as every other web page with uploads (and by the way, Instagram's mobile web page does not allow uploads at all, funny huh?)
Does anyone know what the working workaround is? Or whether any exists at all?
Just to be clear, I need a workaround that can be applied by adjusting the HTML and/or adding any amount of JavaScript code, but without forcing the user to install any specific extra app.
"interesting" problem...
It is not a ready-to-use solution, but you could save the state of the page before requesting a file:
http://www.html5rocks.com/en/features/storage
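A minimal sketch of that idea, assuming a form with id uploadForm whose non-file fields should survive a reload, and that the script runs after the form exists in the DOM (all names here are placeholders; the chosen file itself cannot be persisted this way, only the rest of the page state):

    var form = document.getElementById('uploadForm');
    var fileInput = form.querySelector('input[type="file"]');

    // Save every non-file field just before the file chooser opens,
    // because the browser activity may be killed while in the background.
    fileInput.addEventListener('click', function () {
        var state = {};
        Array.prototype.forEach.call(form.elements, function (el) {
            if (el.name && el.type !== 'file') {
                state[el.name] = el.value;
            }
        });
        sessionStorage.setItem('formState', JSON.stringify(state));
    });

    // On load, restore whatever was saved; this runs again after the
    // page reloads when the killed browser activity is restored.
    var saved = sessionStorage.getItem('formState');
    if (saved) {
        var state = JSON.parse(saved);
        Array.prototype.forEach.call(form.elements, function (el) {
            if (el.name && state.hasOwnProperty(el.name)) {
                el.value = state[el.name];
            }
        });
    }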
This is a weird scenario I just experienced and I am not sure how to phrase the question.
It may be best to describe my application and what it does first.
I have an IP camera connected to my router.
I use a C# VLC wrapper to grab 10 frames a second over the RTSP protocol.
I then upload these separate JPEGs to my web server using a [web method].
Then, via the browser, a JavaScript timer set to 100 ms renders the image into an HTML image control by calling an ashx page repeatedly.
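For context, the client-side polling loop is essentially this (a sketch; the handler path and element id are placeholders, and the timestamp query string stops the browser from serving a cached frame):

    var img = document.getElementById('cameraImage');

    // Ask the handler for the latest frame every 100 ms.
    setInterval(function () {
        // The cache-busting query string forces a fresh request each time.
        img.src = 'LatestFrame.ashx?t=' + new Date().getTime();
    }, 100);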
Now, this worked fine for a few days.
This is what I have experienced in the last 48 hours.
The images coming from the IP camera were jumpy. That is to say, sometimes the images flowed in a timely order, and sometimes the feed would slow down, stop, and speed up again to 'catch up'.
I noticed, when viewing via a web browser client on another PC on my network, that the JavaScript timer calls were slow and sometimes stopped for periods of time. I used Google Chrome to watch how often the ashx URL was being called.
I closed down my own applications, rebooted all my PCs, and started the VLC application without using the wrapper. Again, the flow was 'jumpy'. So my conclusion was that it was not my application.
For some reason I decided to log into my router (192.168.0.1).
The page was not found.
In fact, I had to completely restart my router to be able to access its admin page.
As soon as I did this, everything worked OK again.
So, the two questions I have are: (1) why could I not access my router at that IP address, and (2) why was my JavaScript timer grinding to a standstill?
Like I said, this is a weird scenario, and I would not blame anyone for wanting to close or vote down this question.
But on the off-chance this is a known thing, I would like to be educated.
Thanks
I'm running out of ideas about what could be causing this issue, so I thought I would just throw it out there.
I am currently developing a site based on WordPress which has some search functionality powered by a web service. The web service returns a load of details about various residential properties, including absolute image paths, which I request and cache on the first request.
I embedded RoyalSlider into the page to display the images as a gallery. This works brilliantly locally, but the issue appears when I put the site onto a staging server. The page just fails silently while loading, halfway through my RoyalSlider thumbnails. I have checked the raw response from the server, and indeed the response just stops halfway through the RoyalSlider images. But it's a 200 response: no 404s, no console errors, nothing.
Oh, and I should mention that if I put a break in the loop which renders my markup, i.e. limit it to, say, 10 iterations, the problem goes away and the entire page loads (albeit with only 10 of my images).
Does anybody have any idea what would cause something like this to happen?
Thanks in advance
The main cause of this issue was a difference in PHP versions. This should have been one of the first things I checked, since I was seeing problems in one environment and not in the other; I had just complacently assumed I was on 5.3.
The PHP version on my staging server (5.2) seems to have had some kind of issue with my caching layer, although what that was exactly is still unclear (I'm leaning toward something to do with output buffering, as suggested above). Updating the PHP version on staging appears to have fixed the issue.
Thanks for the ideas Adam!
Hopefully this isn't a tricky one. I've got a web app that doesn't load all of its JavaScript/CSS/images on the first visit. The second visit is fine.
After approximately 2 minutes of inactivity the problem reoccurs.
These problems only started occurring after the customer requested that SSL be applied to the application.
Ajax requests stop working after two minutes of inactivity, despite a successful page load of all JavaScript elements.
The application timeout is 30 minutes; like I said, everything was fine before SSL was applied.
All JavaScript and CSS files use absolute URLs, e.g. https://blablabla
There appears to be no pattern to which files aren't loaded. The Firebug Net panel shows the status of the failed elements as 'Aborted'. For example, site.css and nav.css are in the same folder and are declared one after the other in the head tag, yet one is loaded and the other is not. Both load fine after refreshing the page (unless roughly two minutes have passed).
An Ajax request also shows as aborted after two minutes. However, if I make the same request again, it succeeds. Almost as if the first request woke something up.
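Given that a second attempt reliably succeeds, I could presumably paper over it by retrying aborted requests once; a minimal sketch with plain XMLHttpRequest (the endpoint and callback are placeholders):

    // Issue a GET request and retry once if it aborts or errors out,
    // since in this setup the second attempt reliably succeeds.
    function getWithRetry(url, onSuccess, retriesLeft) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onload = function () {
            if (xhr.status === 200) { onSuccess(xhr.responseText); }
        };
        xhr.onerror = xhr.onabort = function () {
            if (retriesLeft > 0) {
                getWithRetry(url, onSuccess, retriesLeft - 1);
            }
        };
        xhr.send();
    }

    // Usage: allow one retry against a placeholder endpoint.
    getWithRetry('/api/data', function (text) { console.log(text); }, 1);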
None of these problems occur in Chrome.
Any ideas? :)
FYI, this is a .NET 4 C# MVC app running under IIS 7, but I'm not sure that's relevant since everything works in Chrome. Everything worked fine before SSL was applied.
I removed SSL across the board and secured the action methods with [RequireHttps]. Then I changed the scripts and CSS in the master files to point to absolute HTTP URLs. The JavaScript then worked, fixing the Ajax.
If anybody has any idea why the CSS/JavaScript broke over SSL, that would be cool to know. I'm guessing it's perhaps the workload? Since it worked on the second visit, I'm guessing half the CSS and scripts were cached by then, making less of a workload over SSL?
Anyway, working now!