How do I grab a CCTV DVR's broadcast for a custom application - javascript

I have 16 analog cameras feeding into my Defender DVR. I'm able to access the camera feeds locally by going to a specific port and entering login details using an ActiveX control in IE. I can also view them through apps from the Apple App Store on my iPad by simply entering my external IP address, port number, and login credentials.
My question is: if I wanted to create an app similar to those in the App Store, how would I go about communicating with the DVR's stream of videos/images?
I'd imagine I need to poll the IP address to get the data, but I'm not sure what type of connection is needed and what to expect there.
It seems that many DVRs defer to the same apps in the marketplace, so my guess is that they all conform to some standard when outputting the data.
Thank you.

A lot of cameras stream data over HTTP using the Mixed-Replace Content-Type. If you can access your camera in a browser, it is very likely that it uses HTTP.
Assuming this is your case, you'll have to find out what URL your camera uses to serve the stream. So you could:
Try to find a list like this on the internet by Googling your camera model
Inspect the browser traffic yourself (this will require some knowledge of HTML5)
Once you have the URL, you can check whether you're dealing with a Mixed-Replace request. If you have a terminal with curl, you can use something like:
$ curl --head http://user:password@192.168.99.230/video.cgi
My camera returns the following header:
HTTP/1.0 200 OK
Server: alphapd
Date: Thu Jan 9 09:04:59 2014
Pragma: no-cache
Cache-Control: no-cache
Content-Type: multipart/x-mixed-replace;boundary=video boundary--
This means I have a Mixed-Replace response with parts separated by the "--video boundary--" string (see the Content-Type field).
The request body looks like this:
--video boundary--
<metadata>
<image>
--video boundary--
<metadata>
<image>
...
(Neverending request body of real time delivered images)
Now your approach will depend on the application in which you want to embed the stream. In my case, I just needed to put the images on a web page, so Firefox did me a favor and I could integrate it with:
<img src="http://user:password#192.168.99.230/video.cgi">
But you might have to parse and capture each incoming image yourself, depending on your application.
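If you do need to consume the stream programmatically, here is a minimal Node.js sketch of parsing it by hand, assuming the URL and boundary from the example above (a robust parser should read the boundary out of the Content-Type header rather than hard-coding it):

const http = require("http");

const BOUNDARY = Buffer.from("--video boundary--");

http.get("http://user:password@192.168.99.230/video.cgi", res => {
  let buffer = Buffer.alloc(0);
  res.on("data", chunk => {
    buffer = Buffer.concat([buffer, chunk]);
    let idx;
    // Everything before the next boundary marker is one complete part.
    while ((idx = buffer.indexOf(BOUNDARY)) !== -1) {
      const part = buffer.slice(0, idx);
      buffer = buffer.slice(idx + BOUNDARY.length);
      // Each part is <metadata>, a blank line, then the JPEG bytes.
      const headerEnd = part.indexOf("\r\n\r\n");
      if (headerEnd !== -1) {
        const jpeg = part.slice(headerEnd + 4);
        if (jpeg.length) handleFrame(jpeg);
      }
    }
  });
});

function handleFrame(jpeg) {
  // Do whatever your application needs: write to disk, draw to a canvas, etc.
  console.log("got frame,", jpeg.length, "bytes");
}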

There are standards out there, so you just need to look around. Also, don't expect any DVR manufacturer to give you API access; many have tried, and they just don't give it away. Anyway, you don't want to be tied down to a specific DVR implementation. I would advise you to start with the iSpy C# code (http://www.ispyconnect.com), which will give you an idea of how it works in general, since iSpy supports many types of cameras, be they IP or webcam.
Code for analog cameras will be more difficult to find, so good luck.


Sending a POST request to API with localhost

I am sending an image to an API using my localhost address, but for some reason it doesn't identify the image. It works fine if I use links found via Google. The code looks something like this:
unirest.post(requestString)
  .header("X-RapidAPI-Key", API_KEY)
  .field("urls", "http://localhost:4000/uploads/1570544614486-test.jpg")
  .field("album", ALBUM_NAME)
  .field("albumkey", ALBUM_KEY)
  .field("entryid", entryId)
  .end(result => {
    console.log(result.body);
  });
I believe this will work once on a domain but I need it to work now for testing. How can I make this work using my localhost?
You haven't exactly specified which API you're reaching out to, so your mileage may vary with different APIs.
However, based on your error message, I've determined you're trying to use the Lambda Face Recognition and Face Detection API via RapidAPI. The (linked) docs for this web service clearly show that the urls parameter you're attempting to use with your localhost URL above is actually meant to hold a comma-separated list of URLs to publicly accessible image files. The remote API can't possibly resolve localhost in this context, because (a) it can't have any idea what IP localhost should refer to, and (b) it's highly likely that your localhost doesn't respond to HTTP requests from the broader Internet.
Instead, modify your request to use the files parameter (type binary) to upload the raw binary data for your image(s).
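A sketch of that change, using unirest's attach() to read a local file and send it as multipart binary data (the files field name comes from the docs linked above, so double-check it against your endpoint):

const unirest = require("unirest");

unirest.post(requestString)
  .header("X-RapidAPI-Key", API_KEY)
  .attach("files", "./uploads/1570544614486-test.jpg") // read from local disk, sent as binary
  .field("album", ALBUM_NAME)
  .field("albumkey", ALBUM_KEY)
  .field("entryid", entryId)
  .end(result => {
    console.log(result.body);
  });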

Tomcat, service unavailable 503

My webapp uses JSP / JavaScript / Google Visualization, and runs on Tomcat 7 on a 64-bit Windows server with enough resources dedicated to this app. It is still under testing, so I have control over the load.
The problem: when I work from a device on the same network as the server, everything works fine. But when I work from a device on a different network and a request takes a long time (more than 6 minutes), I get a Service Unavailable [503] message after 6 minutes of waiting, while processing on the server continues and completes successfully. I checked the Tomcat logs but I couldn't find anything; everything seems to be working fine. I tried different solutions, but none of them worked for me:
Increase Tomcat's connector timeout.
Increase the Tomcat RAM.
Disable the server firewall
Try different browsers
Adjust the request timeout from the browser.
I experimented by setting Tomcat's Connector properties in conf/server.xml. I played around with all combinations and ranges of connectionTimeout and keepAliveTimeout.
The final configuration is:
<Connector port="80" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
I'm wondering if anybody else has run into such a problem, and how they solved it.
I think your server.xml has wrong data. Try changing the connector port from 80 to 8080; Tomcat's default HTTP port is 8080, and port 80 may already be in use by another service (not sure). Please update as below:
<Connector port="8080" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
503 Service Unavailable
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a
server must use it when becoming overloaded. Some servers may wish
to simply refuse the connection. (See RFC 2616, section 10.5.4, for more information.)
Let me know if you face any issues.

Unable to access httponly flagged cookie on own domain loaded in iframe

I'm making a Chrome extension that injects an iframe into a webpage and shows some stuff.
The content loaded in the iframe is from https://example.com, and I have full control over it. I'm trying to access the cookies of https://example.com from the iframe (which I think should be available) via document.cookie. This does not let me access HttpOnly-flagged cookies, and I don't know the reason for this. After all, this is not cross-domain. Is it?
Here is the code I'm using to get the cookies:
jQuery("#performAction").click(function(e) {
e.preventDefault();
console.log(document.domain); // https://example.com
var cookies = document.cookie;
console.log('cookies', cookies);
var httpFlaggedCookie1 = getCookie("login_sess");
var httpFlaggedCookie2 = getCookie("login_pass");
console.log('httpFlaggedCookie1 ', httpFlaggedCookie1 ); // shows blank
console.log('httpFlaggedCookie2 ', httpFlaggedCookie2 ); // shows blank
if(httpFlaggedCookie2 != "" && httpFlaggedCookie2 != ""){
doSomething();
} else{
somethingElse();
}
});
Any suggestions on what can be done about this?
By default in Chrome, HttpOnly cookies cannot be read or written from JavaScript.
However, since you're writing a Chrome extension, you can use chrome.cookies.get and chrome.cookies.set to read/write them, with the cookies permission declared in manifest.json. Be aware that chrome.cookies can only be accessed from the background page, so you may need to do something with Message Passing.
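A minimal sketch, assuming a Manifest V2 background page, the cookies permission plus a host permission for https://example.com in manifest.json, and a content script asking the background page for the value:

// In the content script:
chrome.runtime.sendMessage({ type: "getCookie", name: "login_sess" }, response => {
  console.log("login_sess =", response && response.value);
});

// In the background page, where chrome.cookies is available:
chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
  if (msg.type === "getCookie") {
    // Unlike document.cookie, chrome.cookies.get can read HttpOnly cookies.
    chrome.cookies.get({ url: "https://example.com", name: msg.name }, cookie => {
      sendResponse({ value: cookie ? cookie.value : null });
    });
    return true; // keep the message channel open for the async sendResponse
  }
});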
Alright folks. I struggled mightily to make HttpOnly cookies show up in iframes after third-party cookies were deprecated. Eventually I was able to solve the issue. Here is what I came up with:
1. Install a service worker whose script is rendered by your application server (e.g. in PHP). In there, you can output the cookies into a closure, so no other scripts or even injected functions can read them. Attempts to load this same URL from other user agents will NOT get the cookies, so it's secure.
2. Yes, service workers are unloaded periodically, but every time one is loaded again, it'll have the latest cookies due to #1.
3. In your server-side response rendering, every time you add a Set-Cookie header, also add a Set-Cookie-JS header with the same content. Make the service worker intercept this response, read that cookie, and update the private object in the closure.
4. In the "fetch" event, add a special request header such as Cookie-JS, passing what would have been passed in the cookie, before sending the request to the server. This way you can send all "HttpOnly" cookies back to the server without the JavaScript being able to see them, even if actual cookies are blocked! (See the sketch after this list.)
5. On your server, process the Cookie-JS header, merge it into your usual cookie mechanism, and then run the rest of your code as usual.
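A minimal sketch of steps 3 and 4 inside the service worker script; the Set-Cookie-JS and Cookie-JS header names are this scheme's own convention, not a standard:

const jsCookies = {}; // private to the worker's closure; page scripts cannot read it

self.addEventListener("fetch", event => {
  // Step 4: copy the request and attach the captured cookies in Cookie-JS.
  // (Navigation requests may need their mode adjusted when re-created.)
  const headers = new Headers(event.request.headers);
  const pairs = Object.entries(jsCookies)
    .map(([name, value]) => name + "=" + value)
    .join("; ");
  if (pairs) headers.set("Cookie-JS", pairs);

  event.respondWith(
    fetch(new Request(event.request, { headers })).then(response => {
      // Step 3: the server mirrors Set-Cookie into Set-Cookie-JS, which,
      // unlike Set-Cookie, is readable from JavaScript here.
      const mirrored = response.headers.get("Set-Cookie-JS");
      if (mirrored) {
        const [name, ...rest] = mirrored.split(";")[0].split("=");
        jsCookies[name.trim()] = rest.join("=");
      }
      return response;
    })
  );
});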
Although this seems secure to me (I'd appreciate it if anyone reported a security flaw!), there is a better mechanism than cookies.
Consider using non-extractable private keys (e.g. ECDSA keys) to sign hashes of payloads, also using a service worker. (For super-large payloads like videos, you may want your hash to sample only a part of the payload.) Have the client generate the key pair when a new session is established, and send the public key along with every request. On the server, store the public key in a session. You should also have a database table with (publicKey, cookieName) as the primary key. You can then look up all the cookies for the user based on their public key, which is secure because the key is non-extractable.
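A sketch of the key generation and signing with the WebCrypto API (inside an async context); extractable: false is what makes the private key impossible to exfiltrate:

const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false, // non-extractable private key
  ["sign", "verify"]
);

// Only the public key can be exported; send it to the server once per session.
const publicKeyJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);

// Sign an outgoing payload (WebCrypto hashes it with SHA-256 for us).
const payload = new TextEncoder().encode("request body to protect");
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keyPair.privateKey,
  payload
);
// Attach `signature` (e.g. base64-encoded) and the public key to the request.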
This scheme is actually more secure than cookies, because cookies are bearer tokens and are sometimes subject to session fixation attacks, or man-in-the-middle attacks (even with https). Request payloads can be forged on the server and the end-user cannot prove they didn’t make that request. But with this second approach, the user’s service worker is signing everything on the client side.
A final note of caution: the way the Web works, you still have to trust the server that hosts the domain of the site you’re on. It could just as easily ship JS code to you one day to sign anything with the private key you generated. But it cannot steal the private key itself, so it can only sign things when you’ve loaded the page. So, technically, if your browser is set to cache a top-level page for “100 years”, and that page contains subresource integrity on each resource it loads, then you can be sure the code won’t change on you. I wish browsers would show some sort of green padlock under these conditions. Even better would be if auditors of websites could specify a hash of such a top-level page, and the browser’s green padlock would link to security reviews published under that hash (on, say, IPFS, or at a Web URL that also has a hash). In short — this way websites could finally ship code you could trust would be immutable for each URL (eg version of an app) and others could publish security audits and other evaluations of such code.
Maybe I should make a browser extension to do just that!

Using MJPEG with basic authentication in a Cordova app

To get a Motion JPEG stream from an IP camera in a native app, I would add a request header containing the credentials to the GET request. In an Ajax call I can also append headers to get a single image.
But to show continuous images, the only way seems to be using
<img src="url_to_mjpeg">
The web UI of the camera successfully does the GET call like this:
1. You enter the UI with a request to index.html, which needs credentials.
2. Any further requests (like the GET request) automatically have the basic authentication injected by the browser.
So I also tried calling another URL of the camera with the authentication header in advance, but this doesn't work in Cordova. Every single request needs manual authentication in the header; nothing is magically added to the header fields.
I think the reason it automatically works in the camera web UI is that the camera's index.html and all further requests come from the same origin, whereas in my Cordova app the UI is coming from file://local somewhere.
Is there a way in JavaScript to call a JPEG stream with basic authentication?
As stated by the Chromium team, embedding credentials in image URLs does not work anymore.
If you want to load a stream such as MJPEG, or an image, from a URL protected by basic auth, use an iframe instead.
<iframe src="your_stream_link"></iframe>
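For single frames (not the continuous stream) you can also fetch with an explicit Authorization header and display the result via a blob URL; the snapshot URL below is hypothetical, but many cameras expose one alongside the MJPEG URL:

async function showFrame() {
  const resp = await fetch("http://192.168.1.10/snapshot.jpg", {
    headers: { "Authorization": "Basic " + btoa("user:password") },
  });
  const blob = await resp.blob();
  document.querySelector("#cam").src = URL.createObjectURL(blob);
}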

Android HTTP GET cookie / javascript issue?

I wrote an Android app that should 'connect' to a (private) forum using HTTP GET (and sometimes POST) requests. The basic idea is as follows:
1. Login page where users submit their credentials. Login is performed by doing an HTTP POST (tried GET too, same result) to the login page of the forum, with the username and password as parameters. The request should return some cookies that I store in a BasicCookieStore.
2. Every page of the forum they want to visit is retrieved using HTTP GET. I parse the HTML source that I obtain and show them only the relevant info. In order to authenticate the users, the same BasicCookieStore that I used for login (step 1) is set as the cookie store for the HttpClient.
This method has been working all the time during my testing, and has worked for my beta testers too. Now that I released the app, it became apparent that many users were having issues, especially on mobile connections (Wifi seems to be no problem).
By logging the HTML source that was returned by all the HTTP GET requests, I have a strong suspicion that the actual login works fine, but somehow the cookies don't get returned or stored or something in that direction. The problem is that the HTML source of the first page they receive should be the list of forums. Users with problems, however, get served a page that basically reads "You must enable Javascript to view this page".
The strange thing is, I don't receive that page when testing, nor do many of my users. Even worse: some users are now reporting it worked fine for them for days or weeks and has now stopped working. Others have the exact opposite: not working for days, suddenly working now. One user reported that he was in Greece for 2 weeks, where it worked flawlessly; then he got back to Germany and it stopped working again.
There seems to be a random component at play here.
I have tried various things, mostly with the way I do the HTTP GET requests. I started out using the normal DefaultHttpClient, with various settings, such as this:
HttpClient httpClient = new DefaultHttpClient();
// Define parameters
HttpParams httpParams = httpClient.getParams();
HttpConnectionParams.setConnectionTimeout(httpParams, TIMEOUT);
HttpConnectionParams.setSoTimeout(httpParams, TIMEOUT);
HttpProtocolParams.setVersion(httpParams, HttpVersion.HTTP_1_1);
// Set cookiestore (getCookieStore returns the same cookiestore)
HttpContext localContext = new BasicHttpContext();
localContext.setAttribute(ClientContext.COOKIE_STORE, getCookieStore());
HttpGet http = new HttpGet(url);
http.addHeader("Accept", ACCEPT_STRING);
http.addHeader("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
// Execute
HttpResponse response = httpClient.execute(http, localContext);
//... Process result (omitted)
Now I have switched to using AndroidHttpClient instead, with the rest of the code basically unchanged, and seem to get the same result.
I have also tried using the AsyncHttpClient library, which works quite differently, but once again the same result. I tried using its PersistentCookieStore as well, and you guessed it - same result.
I am clueless at this point. Am I looking in the wrong direction? The fact that a website would respond with "you need to enable Javascript" for some users but not all seems to indicate an issue with cookies. I don't know how a website determines whether JavaScript is enabled, but surely with an HTTP GET request there is no JavaScript at play. So why do I (and many other users) get to the page without any problems, while others get the 'no javascript' message? The only reason I can think of is cookies, but I have no clue what exactly the problem is.
Any help would be much appreciated!
I doubt the problem is cookies. More likely it's a network configuration problem.
For example, your user might have connected to a wifi hotspot with a captive portal page (which uses javascript to make you sign in before you can use the hotspot). In this case they should first open the browser, try to browse to (e.g.) http://google.com, get redirected, sign in, and then launch your app.
Or, your user might be connecting through a proxy. Many mobile carriers around the world will proxy their users' HTTP connections, sometimes doing horrible things to the content. Switching to HTTPS might help with that.
