Can a browser make an HTTP request to itself? - javascript

It's easy enough to use AJAX from a browser to an external address (i.e. external to the browser, even if it's localhost), but I have a different question.
Is there some kind of object or service that would allow a browser to make (or mock) an HTTP request to itself, e.g. from JavaScript?
E.g. from Firefox I can see the following raw request:
GET /headers.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:49.0) Gecko/20100101 Firefox/49.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0
And by using a PHP script (the aforementioned headers.php in the GET request):
<pre>
<?php
print_r(apache_request_headers());
I can see the following on the page:
[Content-Type] =>
[Content-Length] => 0
[Upgrade-Insecure-Requests] => 1
[User-Agent] => Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:49.0) Gecko/20100101 Firefox/49.0
[Host] => localhost
[Accept-Language] => en-GB,en;q=0.5
[Accept-Encoding] => gzip, deflate
[Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
[Connection] => keep-alive
[Cache-Control] => max-age=0
Can I get the same information via JavaScript, without making a call to a server script?

You can request either a Blob URL or a data URI (see Does Stack Overflow have an "echo page" to test AJAX requests, inside a code snippet?), or use a ServiceWorker to serve a specific Response (see Chrome extension: Block page items before access). An empty string passed to either XMLHttpRequest.open() or fetch() requests the current URL.
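For example, a minimal sketch of those three options (the payloads and logging are just placeholders; assumes a browser with fetch, Blob, and data: URL support):

// 1. A Blob URL lets the page "serve" a response to itself, no server round trip.
const blob = new Blob(['{"hello":"world"}'], { type: 'application/json' });
const blobUrl = URL.createObjectURL(blob);
fetch(blobUrl)
  .then(response => response.json())
  .then(data => console.log('from Blob URL:', data));

// 2. A data URI works the same way, with the body inlined in the URL itself.
fetch('data:text/plain,hello')
  .then(response => response.text())
  .then(text => console.log('from data URI:', text));

// 3. An empty string requests the current URL (this one does hit the network or cache).
fetch('')
  .then(response => response.text())
  .then(html => console.log('current page is', html.length, 'characters'));

None of these expose the full header set the browser would attach to a real network request (which is what the headers.php output above captures); they only let the page answer, or re-request, its own content.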

Related

Flasgger / Swagger - apidocs works with localhost but not Openshift (You need to enable JavaScript to run this app)

I am running a Python Flask server. A co-worker added Flasgger/Swagger support and I can successfully display the API using
http://localhost:5000/apidocs
Similarly, I can get the JSON version of the API:
http://localhost:5000/api_documentation.json
(The Python code configures that filename.)
That same code is deployed in an Openshift project, which uses Traefik to route external requests to the Python Flask server.
https://my-openshift-url-here/apidocs
does not display the API;
it only displays "Powered by Flasgger 0.9.5".
https://my-openshift-url-here/api_documentation.json
works the same as the localhost request.
Traefik keys off of "apidocs" and "api_documentation.json" and routes them directly to the Python Flask server:
apidocs:
  rule: PathPrefix(`/apidocs`)
  entryPoints:
    - web
  middlewares:
    - gzip
    - mysecurity-no-token
  service: my-python-server
apidocumentation:
  rule: PathPrefix(`/api_documentation.json`)
  entryPoints:
    - web
  middlewares:
    - gzip
    - mysecurity-no-token
  service: my-python-server
What I see in my Chrome browser debugger (F12 - Network) for the swagger-ui-bundle.js response is "You need to enable JavaScript to run this app."
Why does this work in Chrome for the localhost version but not when accessing the server deployed on Openshift? Both are being accessed from the same Chrome window - just different tabs.
This is the Headers content of the apidocs request for the localhost version
Request Method: GET
Status Code: 200 OK
Remote Address: 127.0.0.1:5000
Referrer Policy: strict-origin-when-cross-origin
Content-Length: 3041
Content-Type: text/html; charset=utf-8
Date: Thu, 19 Aug 2021 12:08:41 GMT
Server: Werkzeug/2.0.1 Python/3.7.4
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: keep-alive
Host: localhost:5000
Pragma: no-cache
sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"
sec-ch-ua-mobile: ?0
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36
This is the Headers content of the apidocs request for the Openshift version
Request Method: GET
Status Code: 200 OK
Remote Address: 123.456.789.123:443 (obfuscated, of course)
Referrer Policy: origin
content-encoding: gzip
content-length: 1264
content-type: text/html; charset=utf-8
date: Thu, 19 Aug 2021 12:08:59 GMT
server: istio-envoy
vary: Accept-Encoding
x-envoy-upstream-service-time: 28
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: keep-alive
Host: my-openshift-server-url-here
Pragma: no-cache
sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"
sec-ch-ua-mobile: ?0
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36
Turns out it was a Traefik routing issue. After the initial response to the /apidocs request, the page was also making requests to /flasgger_static/foo. I had to add a route for flasgger_static in my Traefik routing table:
flassger:
  rule: PathPrefix(`/flasgger_static`)
  entryPoints:
    - web
  middlewares:
    - gzip
    - mysecurity-no-token
  service: my-python-server

Code is not working when sending request behind "Proxies"

I am sending a request to a website (I am using the request module) and it returns data in the response. Everything works fine until I send the request behind a proxy (even when the proxy is not banned on the site).
My problem is the same as in
How to stop NodeJS "Request" module changes request when using proxy
I tried every solution in the above post, but nothing helped.
These are the headers I am using in the request:
Host: www.somesite.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Upgrade-Insecure-Requests: 1
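For reference, a request-module call of the kind described would look roughly like this (a minimal sketch; the proxy URL and target are placeholders and the real code in question may differ):

const request = require('request');

request({
  url: 'https://www.somesite.com/',
  proxy: 'http://user:pass@myproxy.example:8080',  // placeholder proxy
  gzip: true,                                      // let the module handle Accept-Encoding/decoding
  headers: {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1'
  }
}, (error, response, body) => {
  if (error) return console.error(error);
  console.log(response.statusCode, body.length);
});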

burpsuite intruder and hashed passwords

In order to learn how to use Burp Suite, I am trying to use it to hack into the management GUI on an IP camera of mine. The access credentials are currently admin/admin. From Burp Suite's proxy I select the following POST request for an Intruder attack:
POST /login/ HTTP/1.1
Host: 10.XXX.XXX.173
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://10.XXX.XXX.173/
Content-Type: application/x-www-form-urlencoded
Content-Length: 29
DNT: 1
Connection: close
Upgrade-Insecure-Requests: 1
username=admin&password=admin
When I run a Sniper attack using a list of passwords, including admin, all the tries come back with the exact same 200 success response. Investigating further, I see that after each attack and response, there is a subsequent GET request from the browser that looks like this:
GET /login/?_=1551456400210&_username=admin&_login=true&_signature=§de6af126fa27f887c20ca2de02411aa913815d9b§ HTTP/1.1
Host: 10.XXX.XXX.173
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://10.XXX.XXX.173/
X-Requested-With: XMLHttpRequest
DNT: 1
Connection: close
And this generates a success response in the case of admin/admin and a failure response in other cases. So, it appears the camera is hashing the password and passing the hash in the "signature" element of the GET request. If I use the GET request for the Sniper attack and include the hashed value in the password list, the attack sees the hash of "admin" as being successful.
At this point, it appears that I need a two-part attack: part one gets the hashes of the password list from the GUI, and part two tells me when a hash/password works to unlock the GUI. Am I right? How do I do this in Burp Suite?
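Purely as an illustration of what "part one" could look like: if the signature happened to be an unsalted SHA-1 of the password (an assumption; the camera's real scheme is unknown and may involve the _=... timestamp or a server-supplied nonce), the candidate signatures for an Intruder payload list could be pre-computed offline:

// Hypothetical: assumes signature = SHA-1(password), which may not match the camera.
const crypto = require('crypto');
const fs = require('fs');

const passwords = fs.readFileSync('passwords.txt', 'utf8')
  .split('\n')
  .filter(Boolean);

const signatures = passwords.map(p =>
  crypto.createHash('sha1').update(p).digest('hex')
);

fs.writeFileSync('signatures.txt', signatures.join('\n'));

If the signature instead depends on the timestamp or a nonce, a static payload list will not work, and something like Burp's session handling rules/macros or a custom Intruder payload processor would be needed to regenerate it per request.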

Post request looks like 2 different requests on ESP8266. Is this a Chrome bug?

I am setting up a server on an ESP8266 WiFi module. The basic operation is: you request a URL, the ESP serves that page, which has a form; you fill it in and click submit, and the browser sends a POST request via AJAX. I am not using jQuery, just plain JS. From Chrome dev tools, it looks like all is well.
But on the ESP server side, I noticed I am missing POST data once in a while. After digging deeper, I found this issue.
Ideal result from Chrome on my Windows machine (this works correctly; the POST data comes in as expected):
+IPD,0,507:POST /wifi.htm HTTP/1.1
Host: 192.168.4.1
Connection: keep-alive
Content-Length: 63
Origin: http://192.168.4.1
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36
Content-Type: text/plain;charset=UTF-8
Accept: */*
Referer: http://192.168.4.1/wifi.htm
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
AlexaToolbar-ALX_NS_PH: AlexaToolbar/alx-4.0
ethOrWiFi=1&ewln=1&dhcp=1&ssid=Esensors&key=tgfgfdgfdtrd&auth=4
But in Chrome on my Mac, I see the following result:
+IPD,0,472:POST /wifi.htm HTTP/1.1
Host: 192.168.4.1
Connection: keep-alive
Content-Length: 63
Origin: http://192.168.4.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36
Content-Type: text/plain;charset=UTF-8
Accept: */*
DNT: 1
Referer: http://192.168.4.1/wifi.htm
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,ml;q=0.6
AlexaToolbar-ALX_NS_PH: AlexaToolbar/alx-4.0
+IPD,0,63:ethOrWiFi=1&ewln=1&dhcp=1&ssid=Esensors&key=asdfasdfasdf&auth=4
And I can repeat this. The only difference in each case is that I am using Chrome on Windows as opposed to Chrome on Mac. To double-check, I downloaded the Chrome Canary version and tried it. The first request worked fine; from the second request onwards, it shows this problem. Why is this happening? Any ideas? Maybe my laptop has issues? :)
Here is the Chrome dev tools info from Chrome on Mac (the one with the problem):
**Request Headers:**
POST /wifi.htm HTTP/1.1
Host: 192.168.4.1
Connection: keep-alive
Content-Length: 61
Origin: http://192.168.4.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36
Content-Type: text/plain;charset=UTF-8
Accept: */*
DNT: 1
Referer: http://192.168.4.1/wifi.htm
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,ml;q=0.6
AlexaToolbar-ALX_NS_PH: AlexaToolbar/alx-4.0
**Request Payload**
ethOrWiFi=1&ewln=1&dhcp=1&ssid=Esensors&key=asdfasdfoi&auth=4
+IPD is the AT notification that says data was received from the network. +IPD,0,63: says 63 bytes were received on connection 0, which matches your Content-Length header. Notice that it also appears at the beginning of the header portion of the request.
Your WiFi library on the ESP side is adding that. Here and on line 281 is the source code where it might be happening. There are a couple of variables that affect whether or not the +IPD is added; maybe you have set or inadvertently changed one.
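One implication, consistent with that, is that the headers and the body can arrive as two separate +IPD chunks, so the code reading the request on the ESP side cannot assume a single +IPD carries the whole request; it has to keep buffering until it has Content-Length bytes past the blank line. A conceptual sketch of that buffering (plain JavaScript for illustration only, not ESP firmware):

let buffer = '';

// Call this with the payload of every +IPD chunk; it returns the parsed
// request only once the headers plus the full Content-Length body have arrived.
function onChunk(chunk) {
  buffer += chunk;
  const headerEnd = buffer.indexOf('\r\n\r\n');
  if (headerEnd === -1) return null;                     // headers not complete yet

  const headers = buffer.slice(0, headerEnd);
  const match = headers.match(/Content-Length:\s*(\d+)/i);
  const contentLength = match ? parseInt(match[1], 10) : 0;
  const body = buffer.slice(headerEnd + 4);

  if (body.length < contentLength) return null;          // body not complete yet
  return { headers, body: body.slice(0, contentLength) };
}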

AJAX request fails in Firefox, works in IE (server returns 200 in both cases)

I am testing locally. I have IIS serving JS and HTML on localhost:50972 and Java/Jersey acting as an application server on localhost:8080.
The following AJAX request succeeds in Internet Explorer, but fails in Chrome and Firefox, even though the server shows 200 OK:
public getTest() {
    var settings: JQueryAjaxSettings = {
        url: "http://localhost:8080/getData",
        type: "GET",
        crossDomain: true,
        dataType: "text",
    };
    jQuery.ajax(settings).done(function (o) {
        alert(o);
    }).fail(function (request) {
        alert(request);
    });
}
The code on the Java side looks like this:
@GET
@Path("/getData")
public Response getData() {
    NewCookie cookie = new NewCookie("test", "key:val", "/", null, "comment", 100, false);
    return Response.status(Response.Status.OK).entity("Hello World").cookie(cookie).build();
}
Below are the relevant HTTP Requests/Responses from IE and Firefox:
Internet Explorer Request (Succeeds)
GET http://localhost:8080/getData?_=1451863561652 HTTP/1.1
Referer: http://localhost:50972/
Accept: text/plain, */*; q=0.01
Accept-Language: en-US
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko
Connection: Keep-Alive
DNT: 1
Host: localhost:8080
Firefox Request (Fails)
GET http://localhost:8080/getData?_=1451863686206 HTTP/1.1
Host: localhost:8080
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0
Accept: text/plain, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://localhost:50972/
Origin: http://localhost:50972
Connection: keep-alive
Response from server (sent to both)
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: test=key:val;Version=1;Comment=comment;Path=/
Content-Type: text/plain
Content-Length: 11
Date: Sun, 03 Jan 2016 23:26:07 GMT
Hello World
I have also tried this with the response as {} and dataType: "json" instead of "Hello World" and dataType: "text", but with no change. I have also tried with crossDomain: true and with crossDomain: false. Help?
The observed behavior occurs because Firefox (and Chrome, etc.) correctly treat a different port as a different origin; IE does not.
See MDN - Same Origin Policy:
Two pages have the same origin if the protocol, port (if one is specified), and host are the same for both pages...
...[but] IE doesn't include the port in its same-origin components; therefore http://company.com:81/index.html and http://company.com/index.html are considered to be from the same origin and no restrictions are applied.
Enable CORS on the server for the request to succeed in all browsers.
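Concretely, that means the response from localhost:8080 needs to carry an Access-Control-Allow-Origin header matching the page's origin, along these lines (illustrative only; how you wire the header into Jersey is up to you):

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Access-Control-Allow-Origin: http://localhost:50972
Set-Cookie: test=key:val;Version=1;Comment=comment;Path=/
Content-Type: text/plain
Content-Length: 11

Hello World

If the test cookie also has to be accepted and sent cross-origin, the response additionally needs Access-Control-Allow-Credentials: true, the allowed origin cannot be the * wildcard, and the jQuery call needs xhrFields: { withCredentials: true }.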
