Approach for fetching files from svn server using ajax - javascript

I am building a simple web page that will run on a computer connected to a large TV, displaying some relevant information for whoever passes by.
The page will (somehow) fetch some text files located on an svn server and then render them into HTML.
So I have two choices for how to do this:
Set up a cron job that periodically checks the svn server for changes, updates the local files when there are any, and (somehow) refreshes the page. This has the problem of violating the Access-Control-Allow-Origin policy, since the files now exist locally; and what is a good way to refresh a page that runs in full-screen mode?
Make the javascript do the whole job: set it up to periodically ajax-request the files directly from the svn server, check for differences, and then render the page. This somehow seems less elegant.
Update
The Access-Control-Allow-Origin policy doesn't seem to be a problem when the page is served by a web server, since the content is then on the same domain.

What I did in the end was a split between the two:
A cron job updates the files from svn.
The javascript periodically re-requests the files using window.setInterval, with the ifModified flag turned on in the ajax request so the HTML is only updated when a change has occurred.
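jQuery's ifModified option does the Last-Modified bookkeeping automatically; done by hand with fetch, the polling half might look roughly like this (FILE_URL and renderAsHtml are placeholder names, not part of the original setup):

```javascript
const FILE_URL = '/files/info.txt';   // placeholder path for the cron-updated copy
let lastModified = null;              // Last-Modified value from the previous fetch

// 200 means new content arrived; 304 (Not Modified) means keep what we have.
function shouldRerender(status) {
  return status === 200;
}

function poll() {
  const headers = lastModified ? { 'If-Modified-Since': lastModified } : {};
  fetch(FILE_URL, { headers }).then(function (res) {
    if (shouldRerender(res.status)) {
      lastModified = res.headers.get('Last-Modified');
      res.text().then(renderAsHtml);  // renderAsHtml: hypothetical rendering code
    }
  });
}

// In the page, start the loop with e.g.:
//   window.setInterval(poll, 30 * 1000);
```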

Related

Android WebView AJAX sandbox

I need to embed a WebView in my application, which has to pull some data via AJAX from multiple remote servers. Unfortunately, due to the ajax sandbox, connections to foreign servers are blocked. How can I disable it, given that the js code I'm running is trusted?
There's a simple workaround to allow connections to a single server: use loadDataWithBaseURL and pass the top-level URL as the first parameter. But what to do when the js should be able to access multiple different domains?
Thanks
Are the pages loaded into the webview local? I.e., are they loaded from the local file system (like file://yourpage.html), or are they remote pages?
Webpages loaded locally are not affected by the cross-domain ajax restrictions so you can load whatever you like.
If they're remote pages then I'm not sure how you're going to get around it; perhaps set up your own webservice on the same domain the pages are served from, which simply fetches the data from the remote services and spits it back.

Crazy need to ENABLE cross site scripting

Yes, I need to enable cross site scripting for internal testing of an application I am working on. I would have used Chrome's disable-xss-auditor or disable-web-security switches, but it looks like they are no longer included in the Chrome build:
http://src.chromium.org/svn/trunk/src/chrome/common/chrome_switches.cc
What I am basically trying to achieve is to have a javascript application running locally on pages served by Apache (also running locally) be allowed to run scripts from a resource running on another server on our network.
Failing a way to enable xss for Firefox, Chrome, or my least favourite, IE, would there be a way to run some kind of proxy process that modifies headers to allow the xss to happen? Any quick way to use Apache mod_rewrite or some such to do this?
Again, this is for testing only. In production, all these scripts run from the same server, so there isn't even a need to sign them, but during development and testing, it is much easier to work only on the parts of the application you are concerned with and not have to run the rest, which requires a full-on application server setup.
What you need is just a little passthrough service running on the first server that passes requests over to the second server, and returns the results it gets back from the second server.
You don't say what language the server side of your application is written in or what kind of data is passed to or returned from your service, so I can't be more specific than that, but it really should be about 15 lines of code to write the passthrough service.
What you are asking for isn't cross-site scripting (which is a type of security vulnerability in which user input, e.g. from the URL, is injected into the page in such a way that third-party scripts can be added via a link).
If you just want to run a script on a different server, then just use an absolute URI.
<script src="http://example.com/foo.js"></script>
If you need to perform Ajax requests to a remote server, use CORS or run a proxy on the current origin.
Again, this is for testing only
Just for testing, look at Charles Proxy. Its Map Remote feature allows you to (transparently) forward some requests to a remote server (based on wildcard URL matching).

IIS7 ASP.NET MVC Static JavaScript File Cache?

I have a really simple site that I created. I am trying to test JS caching in the browser but it doesn't seem to be working. I thought that most major browsers cached your JS file by default as long as the file name doesn't change. I have the site running in IIS 7 locally.
For my test I have a simple JS file that is doing a document write on body load. If I make a change to the JS file (change the text the document write is writing), then save the file, I see that updated when refreshing the browser. Why is this? Shouldn't I see the original output as long as the JS file name hasn't changed?
Here is the simple site I created to test.
When you refresh your browser, the browser sends a request to the server for all the resources required to display the page. If the browser has a cached version of any of the required resources, it may send an If-Modified-Since header in the request for that resource. When a server receives this header, rather than just serving up the resource, it compares the modified time of the resource to the time submitted in the If-Modified-Since header. If the resource has changed, the server will send back the resource as usual with a 200 status. But, if the resource has not changed, the server will reply with a status 304 (Not Modified), and the browser will use its cached version.
In your case, the modified date has changed, so the browser sends the new version.
The best way to test caching in your browser would probably be to use Fiddler and monitor requests and responses while you navigate your site. Avoid using the refresh button in your testing, as that frequently causes the browser to request fresh copies of all resources (i.e., omitting the If-Modified-Since header).
Edit: The above may be an over-simplification of what's going on. Surely a web search will yield plenty of in-depth articles that can provide a deeper understanding of how browser caching works in each browser.
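To make the 200-vs-304 decision concrete, here is a toy version of the server's comparison (notModified is a made-up helper, not a real API; real servers also handle ETags and other details):

```javascript
// Decide whether to answer 304 (Not Modified) or 200 (send the resource).
function notModified(resourceMtimeMs, ifModifiedSince) {
  if (!ifModifiedSince) return false;           // no header: always send 200
  // HTTP dates have one-second resolution, so drop the milliseconds.
  const sinceMs = Date.parse(ifModifiedSince);
  return Math.floor(resourceMtimeMs / 1000) * 1000 <= sinceMs;
}

const header = 'Tue, 15 Nov 1994 12:45:26 GMT';
notModified(Date.parse(header), header);          // unchanged file: true, reply 304
notModified(Date.parse(header) + 60000, header);  // edited a minute later: false, reply 200
```

This is why saving the JS file defeats the cache: the new modified time is later than the If-Modified-Since value, so the server replies 200 with the fresh copy.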

Client side page call/scrape?

Here is the problem:
I have a web application - a frequently changing notification system - that runs on a series of local computers. The application refreshes every couple of seconds to display the new information. The computers only display info, and do not have keyboards or ANY input device.
The issue is that if the connection to the server is lost (say updates are installed and the server must be rebooted), a "page not found" error is displayed. We must then either reboot all computers that are running this app, OR add a keyboard and refresh the browser, OR try to access each computer remotely and refresh the browser. None of these are good options, and they result in a lot of frustration.
I cannot change the actual application OR server environment.
So what I need is some way to test the call to the application, and if an error is returned or it times out, continue trying every minute or so until the connection is reestablished.
My idea is to create a client-side page scraper, that makes a JS request to the application (which displays basic HTML), and can run locally on the machine, no server required. If the scrape returns the correct content, it displays it. If not it continues to request the page until the actual page content is returned.
Is this possible? What is the best way to do it?
Instead of scraping, check the status code in the response from the server. If it's not 200 or 304, you've received an error page and should try again.
Check out this link: http://www.ibm.com/developerworks/web/library/wa-ajaxintro3/#N102DB
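A minimal sketch of that status check, assuming a plain XMLHttpRequest; APP_URL and the #content container are placeholder names for the real application:

```javascript
const APP_URL = 'http://server/notification-app/';  // placeholder for the real app URL

// Per the advice above: 200 (fresh) and 304 (cached) are the only "real page" statuses.
function isGoodResponse(status) {
  return status === 200 || status === 304;
}

function check() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', APP_URL);
  xhr.timeout = 10 * 1000;  // treat a hung server like an error and retry later
  xhr.onload = function () {
    if (isGoodResponse(xhr.status)) {
      document.getElementById('content').innerHTML = xhr.responseText; // hypothetical container
    }
  };
  xhr.onerror = xhr.ontimeout = function () { /* keep the old content; retry below */ };
  xhr.send();
}

// Keep trying every minute until the server comes back:
//   setInterval(check, 60 * 1000);
```

On failure the old content simply stays on screen, so a server reboot never leaves the TV showing an error page.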

Cross Domain requests using JQuery

This is a followup question to the one here
Here's briefly what I am trying to do. The File server creates a text file to indicate an end of the process. On a webpage on the Web Server, I loop every x seconds and make an ajax request to find out if the test file exists (ajax request to http://fileserver/files/UserFile.txt)
I've tried the following approaches so far:
Trigger a web method from the client side that creates an HttpContext object to verify that the text file exists. But this is too strenuous on the server, and I started getting all kinds of exceptions in the Event Viewer.
YQL works great, but unfortunately it's too slow. The user waits at least twice the amount of time.
I am looking for a solution that doesn't involve the server side. Somehow, I'd like to use JQuery to verify the existence of a text file on the fileserver.
Any thoughts?
You should be able to use JSONP and jQuery.ajax() to do the cross-domain request. Play with the jsonp and jsonpCallback options. Alternatively, you can use jQuery.getJSON().
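Rough shape of the JSONP handshake (fileStatus and exists.js are made-up names; note the file server would have to serve a script, since a plain .txt file cannot answer a JSONP request by itself):

```javascript
// The file server would respond with a script body like:
//   fileStatus({ done: true });
// and the page defines the callback that script invokes:
function fileStatus(data) {
  return data && data.done ? 'finished' : 'waiting';
}

// jQuery wires the request up roughly as:
//   $.ajax({
//     url: 'http://fileserver/files/exists.js',
//     dataType: 'jsonp',
//     jsonpCallback: 'fileStatus'
//   });
```

Because JSONP works by injecting a script tag, it sidesteps the same-origin restriction entirely, but it only helps if the file server can produce that script response.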
Serving a single file from the filesystem is the simplest operation a web server can do. If that is already too much, then all other solutions will be worse. Find out why the server takes so long to serve a simple file and fix that.
Note: I'm assuming that the file is small since you say "test file". If it's a big file, the server will actually send it to the client which will need a lot of resources.
What you can try is to add an ASP page to the web site which runs code on the server that checks whether the file is there and just returns a tiny piece of HTML which you can add to the page with jQuery.load().
I may be miles off base here but... could you not create ONE asynchronous (!) Ajax client request with a HUMONGOUS timeout? Fire it, and wait. You would be invoking a server script that checks every so often, in a loop on the server (sleeping in between), whether the file exists, and does not reply to the Ajax request until the file finally shows up. The server script then replies and exits.
EDIT: Depending on the server-side scripting framework used, you may even get some OS support. You may be able to sleep on a status change in the directory...
