Know the nature of a URL before GETting it - javascript

Is it possible to know the nature of a URL before GETting it?
I have one URL in particular that ends with m3u but is not a simple file I can download: it is actually a radio stream. Since I expect a (finite) file, the GET request never ends. The timeout option does not work in this case (which is expected):
const options = {timeout: 5000};
return HTTP.call('GET', "http://av.rasset.ie/av/live/radio/junior.m3u", options);
A safer solution would be to ask for the type of the response before actually downloading the file.
How can I do that?

I guess you can run a HEAD request first (instead of GET) and inspect the headers. Then, when you make the GET, you will know how to react.
Unfortunately in this particular case HEAD works for the first request (which returns a redirect):
 curl -v -X HEAD http://icecast1.rte.ie/junior http://av.rasset.ie/av/live/radio/junior.m3u
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 104.16.107.29...
* TCP_NODELAY set
* Connected to av.rasset.ie (104.16.107.29) port 80 (#0)
> HEAD /av/live/radio/junior.m3u HTTP/1.1
> Host: av.rasset.ie
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 302 FOUND
< Date: Mon, 13 Nov 2017 11:07:33 GMT
< Content-Type: text/html; charset=utf-8
< Connection: keep-alive
< Set-Cookie: __cfduid=d89353ae357a0452208835b3092f0fbee1510571253; expires=Tue, 13-Nov-18 11:07:33 GMT; path=/; domain=.rasset.ie; HttpOnly
< Location: http://icecast1.rte.ie/junior
< X-Server-Name: djd
< Cache-Control: max-age=0
< Accept-Ranges: bytes
< X-Varnish: 2802121867
< X-Served-By: MISS: mt-www2.rte.ie
< CF-Cache-Status: MISS
< Server: cloudflare-nginx
< CF-RAY: 3bd1449d2764410c-HAM
* no chunk, no close, no size. Assume close to signal end
<
^C
But it fails for the second request (HEAD is probably not supported; arguably it should return 405 rather than 400):
 curl -v -X HEAD http://icecast1.rte.ie/junior
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 89.207.56.171...
* TCP_NODELAY set
* Connected to icecast1.rte.ie (89.207.56.171) port 80 (#0)
> HEAD /junior HTTP/1.1
> Host: icecast1.rte.ie
> User-Agent: curl/7.51.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
< Server: Icecast 2.4.2
< Date: Mon, 13 Nov 2017 11:07:42 GMT
< Content-Type: text/html; charset=utf-8
< Cache-Control: no-cache
< Expires: Mon, 26 Jul 1997 05:00:00 GMT
< Pragma: no-cache
<
<html><head><title>Error 400</title></head><body><b>400 - unknown request</b></body></html>
* Curl_http_done: called premature == 0
* Closing connection 0

Even though in this particular case running HEAD runs into an HTTP parse error while I know the stream is OK (if someone can explain why, I would be grateful), I think Opal gave the general solution:
You can run a HEAD request first

Related

CORS - Duplicate cookie for multiple domains - set-cookie header is ignored by fetch

I want to share a user consent state across multiple domains. I tried to do it by sending multiple fetch requests that contain the value as the body. Each handler should simply accept the request and store the body as the cookie value.
Client-Side:
fetch(data.handler, {
  method: 'POST',
  mode: 'cors',
  cache: 'no-cache',
  credentials: 'include',
  headers: {
    'Content-Type': 'text/plain', // if we use application/json, an OPTIONS request (preflight) will be sent, which causes problems
  },
  redirect: 'follow',
  body: JSON.stringify(cookieValue),
}).then(response => {
  if (response.ok) {
    resolve();
  } else {
    reject();
  }
})
On the server side I send a response with a Set-Cookie header.
This is my response for the CORS domain:
curl 'http://localhost-domain2:8000/handler' -v -X POST \
-H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0' \
-H 'Accept: */*' \
-H 'Accept-Language: de,en;q=0.7,en-US;q=0.3' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Referer: http://localhost-domain1:8000/' \
-H 'Content-Type: text/plain' \
-H 'Origin: http://localhost-domain:8000' \
-H 'Connection: keep-alive' \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
--data-raw '{"my":"values"}'
* Trying 127.0.0.1:8000...
* TCP_NODELAY set
* Connected to localhost-domain2 (127.0.0.1) port 8000 (#0)
> POST /handler HTTP/1.1
> Host: localhost-domain2:8000
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0
> Accept: */*
> Accept-Language: de,en;q=0.7,en-US;q=0.3
> Accept-Encoding: gzip, deflate
> Referer: http://localhost-domain1:8000/
> Content-Type: text/plain
> Origin: http://localhost-domain1:8000
> Connection: keep-alive
> Pragma: no-cache
> Cache-Control: no-cache
> Content-Length: 145
>
* upload completely sent off: 145 out of 145 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.17.8
< Date: Tue, 02 Aug 2022 10:14:49 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< Set-Cookie: gdpr=%7B%22my%22%3A%22values%22%7D; expires=Wed, 02 Aug 2023 10:14:49 GMT; path=/; samesite=lax
< Access-Control-Allow-Origin: http://localhost-domain1:8000
< Access-Control-Allow-Methods: GET, POST, OPTIONS, DELETE, PUT
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Headers: User-Agent,Keep-Alive,Content-Type,Set-Cookie
< Content-Encoding: gzip
I masked some things like the JSON and the domain names, so some content lengths will be wrong, but this is the actual response.
The request is sent and the server does what it should, but the Set-Cookie is not processed, neither by Chrome nor by Firefox. If I visit localhost-domain2:8000, no cookie exists.
So, what combination of fetch parameters, HTTP headers and cookie attributes is required to make the client store the cookie for the other domain?
Edit: Just to clarify: I know I can't share cookies directly and I don't want to; I simply want the cookie to exist for the other domain, just like a single sign-on.
Update
Changing everything to SSL and using "None" for the sameSite attribute does not change anything. Using Strict won't work either.
Update 2
Opening the network request to the second domain from the dev tools in a new window results in the cookie being correctly set, so it is indeed the Fetch API that I have problems with.
UPDATE
Okay, I was curious: it also works with fetch if I do everything else exactly as described below.
Set the cookie exactly the same way on client and server.
Use the following fetch request:
fetch(url, {
  mode: "cors",
  credentials: "include",
  method: "get",
  cache: "no-cache",
  redirect: "follow",
})
The rest (server side, headers) is as described below.
Firefox with cookie protection is still not working...
ORIGINAL ANSWER
The following configuration works:
Instead of using fetch to send a POST request, I create an iframe with the src set to the handler. The cookie value is added as a URL parameter.
To resolve the promise I use iframe.onload and iframe.onerror.
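The iframe approach just described could be sketched like this (the query-parameter name value and the handler URL are assumptions; the cookie is set by the server's response to the iframe navigation):

```javascript
// Build the handler URL with the cookie value passed as a query
// parameter (the parameter name "value" is an assumption).
function buildHandlerUrl(handler, cookieValue) {
  const u = new URL(handler);
  u.searchParams.set('value', JSON.stringify(cookieValue));
  return u.toString();
}

// Browser-side: load the handler in a hidden iframe and settle a
// promise via onload/onerror, as described above.
function sendViaIframe(handler, cookieValue) {
  return new Promise((resolve, reject) => {
    const iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    iframe.onload = () => { iframe.remove(); resolve(); };
    iframe.onerror = () => { iframe.remove(); reject(); };
    iframe.src = buildHandlerUrl(handler, cookieValue);
    document.body.appendChild(iframe);
  });
}
```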
The cookie is set identically in the client and on the server.
JavaScript example:
const secure = window.location.protocol.indexOf('https') > -1;
const sameSite = secure ? 'none' : 'lax';
// so in vanilla JS
document.cookie = 'key=value' + (secure ? '; secure' : '') + '; sameSite=' + sameSite;
The domain field is kept empty on both sides (client, server) and the path is set to '/' in both cases; secure and sameSite are set the same way as on the client...
The HTTP response from the server contains the following headers (besides Set-Cookie):
Access-Control-Allow-Origin: ${HTTP_ORIGIN:-localhost-domain1}
Access-Control-Allow-Methods: GET
Access-Control-Allow-Credentials: true
Cache-Control: no-cache, must-revalidate, no-store
Content-Type: text/html
pragma: no-cache
expires: 0
And an empty HTML5 document to avoid console errors.
I don't know what really caused the issue, but this is how it works for the moment.
Firefox
This won't work in the latest Firefox versions if cookie protection is enabled! It did not prompt me to allow the cookie usage, it just blocked it.
Edit: And it only seems to work via HTTPS.

Set custom headers on a second GET in the browser with Angular

How can I get headers with Angular on a second GET?
In the browser we navigate to:
http://localhost:4200
In the console's Network tab the headers appear like this:
Accept-Ranges: bytes
Access-Control-Allow-Origin: *
Content-Length: 1428
Content-Type: text/html; charset=UTF-8
Date: Thu, 05 Mar 2020 00:34:18 GMT
ETag: W/"54e-4HSDfsd4538gfdGFDgdf"
X-Powered-By: Express
With Angular I can't get those parameters, so I do a second GET with
httpClient.get('http://localhost:4200')
to set headers and then read the header parameters.
But I don't know how to put these parameters in the header. Does anyone have an idea?
I think it could be done with Java, maybe with an @Override that returns headers with custom parameters, but I don't know.
I would be grateful for any guidance.
You need to build your headers and then pass them along to the GET request:
Example:
import { HttpHeaders } from '@angular/common/http';

const headers = new HttpHeaders({ 'Content-Type': 'application/json; charset=utf-8' });
httpClient.get('<url>', { headers });
https://angular.io/guide/http#http-headers
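Since the original goal was to read the response headers, note that HttpClient can also return the full response via its documented observe: 'response' option. A sketch, written as a plain function so the injected HttpClient instance is an explicit assumption:

```javascript
// Ask HttpClient for the whole HttpResponse instead of just the body;
// res.headers then exposes get() and keys() for the response headers.
function getWithHeaders(httpClient, url) {
  return httpClient.get(url, { observe: 'response' });
}

// Usage sketch inside an Angular service or component:
// getWithHeaders(this.httpClient, 'http://localhost:4200')
//   .subscribe((res) => console.log(res.headers.get('ETag')));
```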

Chrome/IE abort POST response

Chrome and Microsoft IE are aborting a POST response but it's working fine in Firefox. When I run a test through Fiddler I notice the POST headers in Chrome/IE do not contain Cache-Control: max-age=0 and Origin: ... but are otherwise the same.
In Chrome when I press the submit button the POST is sent to the server, processed, and the client aborts the response. After a few reposts, the client finally accepts the response; however the server has already processed the request so it results in duplicate info. This never happens on Firefox, it just always accepts the first response.
It seems to only happen if the request is large (i.e. contains a lot more fields for the server to process), leading me to think this has something to do with the time it takes for the server to process the request (in Firefox the request shows as taking about 9 seconds).
Is there something in Firefox that would cause it to wait longer for this response? Or vice-versa, something in IE/Chrome that could be making it end prematurely?
It may not be relevant but this is a Perl Mason site. The response headers in the page which has the form being submitted:
HTTP/1.1 200 OK
Date: Tue, 07 Aug 2018 19:08:57 GMT
Server: Apache
Set-Cookie: ...; path=/
Set-Cookie: TUSKMasonCookie=...; path=/
Expires: Mon, 1 Jan 1990 05:00:00 GMT
Pragma: no-cache
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
It turns out it was JavaScript on the page that was responsible for the reposting. A setTimeout() which recursively called its own function continued to be set even after the form data had been posted within the method.
In Chrome/IE/Edge the form submission would be POSTed and the function would continue to set another timeout calling itself. The subsequent call would again POST and abort the connection waiting on the original.
Firefox however would not repost although it too would continue to set and recall the function.
The fix was to add a flag to track when the POST was submitted and when set, to stop the timeout cycle:
function countdown(secs, starttime) {
  var d = new Date();
  if (!starttime) {
    starttime = d.getTime() / 1000;
  }
  var nowtime = d.getTime() / 1000;
  var timeleft = (starttime + secs) - nowtime;
  timeleft = Math.max(0, timeleft);
  var isposted = false; // <-- Added as fix
  if (timeleft == 0) {
    isposted = true; // <-- Added as fix
    alert("Time is up. Click OK.");
    var frm = document.getElementById("frm");
    frm.submit();
  }
  if (!isposted) { // <-- Added as fix
    timeout = setTimeout(["countdown(", secs, ", ", starttime, ")"].join(""), 1000);
  }
}

Fetch response does not contain Authorization Header sent by Server

I am building an application in which server authenticates client's token and generates an Application token for further use.
I use curl to make sure everything works fine.
curl -XPOST -v -H "X-ID-TOKEN:eyJhbGciOiJSUzI1NiIsImtpZCI6ImIyYmRjZDkyNGZhNWI1ZThhYjkwNTQ3M2ZjZTYxMGU3MWU0MjJlNmQifQ.eyJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vc3RhZ2luZy1wZW5ueXRyYWsiLCJuYW1lIjoiSGFyaXQgSGltYW5zaHUiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDQuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy1fbFhqMk9VbVRuZy9BQUFBQUFBQUFBS
S9BQUFBQUFBQUFDTS9YYU5jMTJadGV5OC9waG90by5qcGciLCJhdWQiOiJzdGFnaW5nLXBlbm55dHJhayIsImF1dGhfdGltZSI6MTUwMTczMTc2MSwidXNlcl9pZCI6InJ4WjZtb240MGhhN1J5SDVpSEFPSHkxN0hrbzEiLCJzdWIiOiJyeFo2bW9uNDBoYTdSeUg1aUhBT0h5MTdIa28xIiwiaWF0IjoxNTAxNzMxNzYyLCJleHAiOjE1MDE3MzUzNjIsImVtYWlsIjoiaGFyaXQuc3Vic2NyaXB0aW9uc0BnbWFpbC5jb20iLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiZmlyZWJhc
2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjEwMDIxNjY5NjgzMjQ3MDQzMTUwNyJdLCJlbWFpbCI6WyJoYXJpdC5zdWJzY3JpcHRpb25zQGdtYWlsLmNvbSJdfSwic2lnbl9pbl9wcm92aWRlciI6Imdvb2dsZS5jb20ifX0.oWWug78iVJITZsJdA7npwjaG_CFnhQahwWCjnkz8Vi2famuTL61s8_Shx4oZVbKzju-L7ebEC4MSOvMc3HeEUwiwt9SunOo8JWfzwgpDbVzFTlnHu5OUeESssniXY4EyAF0uvI6jh1zoEz4SbPO-D87RXMNZYo69c6PFJVDYv--0sm4M7Ajmh7ynMmoEMH0pzjh-7l91yRguO5piQE9GQYwWe9-Jj8YlqWMnMa69M_jMrE14fMCB2mjoa9jJvZR1a-ao8LqO1U1FO64mzgf55yG8OS7aGVDN7gLxk1-RcqLxJogo0BDqsrdDykoeGHb1UflQP7dtazc47r3flELBGw" "http://loc
alhost:8080/login"
* Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
> POST /login HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.43.0
> Accept: */*
> X-ID-TOKEN:eyJhbGciOiJSUzI1NiIsImtpZCI6ImIyYmRjZDkyNGZhNWI1ZThhYjkwNTQ3M2ZjZTYxMGU3MWU0MjJlNmQifQ.eyJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vc3RhZ2luZy1wZW5ueXRyYWsiLCJuYW1lIjoiSGFyaXQgSGltYW5zaHUiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDQuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy1fbFhqMk9VbVRuZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFDTS9YYU5jMTJadGV5OC9waG90by5qcGciLCJhdWQi
OiJzdGFnaW5nLXBlbm55dHJhayIsImF1dGhfdGltZSI6MTUwMTczMTc2MSwidXNlcl9pZCI6InJ4WjZtb240MGhhN1J5SDVpSEFPSHkxN0hrbzEiLCJzdWIiOiJyeFo2bW9uNDBoYTdSeUg1aUhBT0h5MTdIa28xIiwiaWF0IjoxNTAxNzMxNzYyLCJleHAiOjE1MDE3MzUzNjIsImVtYWlsIjoiaGFyaXQuc3Vic2NyaXB0aW9uc0BnbWFpbC5jb20iLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjEwMDIxNjY5
NjgzMjQ3MDQzMTUwNyJdLCJlbWFpbCI6WyJoYXJpdC5zdWJzY3JpcHRpb25zQGdtYWlsLmNvbSJdfSwic2lnbl9pbl9wcm92aWRlciI6Imdvb2dsZS5jb20ifX0.oWWug78iVJITZsJdA7npwjaG_CFnhQahwWCjnkz8Vi2famuTL61s8_Shx4oZVbKzju-L7ebEC4MSOvMc3HeEUwiwt9SunOo8JWfzwgpDbVzFTlnHu5OUeESssniXY4EyAF0uvI6jh1zoEz4SbPO-D87RXMNZYo69c6PFJVDYv--0sm4M7Ajmh7ynMmoEMH0pzjh-7l91yRguO5piQE9GQYwWe9-Jj8YlqWMnMa69M_jMrE14fMCB2mjoa9jJvZR1a-ao8LqO1U1FO64mzgf55yG8OS7aGVDN7gLxk1-RcqLxJogo0BDqsrdDykoeGHb1UflQP7dtazc47r3flELBGw
>
< HTTP/1.1 200
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: origin, content-type, accept, authorization, bearer, x-id-token
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS, HEAD
< Access-Control-Max-Age: 1209600
< Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJyeFo2bW9uNDBoYTdSeUg1aUhBT0h5MTdIa28xIiwiZXhwIjoxNTAyNTgyNDAwfQ.o3aw_ozg813jga6TdCvtV1mMJngO6f4Wgy2dYm4G7O2G6LvYADzIafXJn0Wmvw8-f5scDcmTf6wT_zyMHIDFRg
< Content-Length: 0
< Date: Thu, 03 Aug 2017 03:55:50 GMT
<
* Connection #0 to host localhost left intact
As you can see, the server sends back the Authorization header to the client.
On my Javascript application, my code to interact with server looks like
Api.js
let getAppToken = (idToken) => {
  return fetch("http://localhost:8080/login", {
    method: "POST",
    headers: {
      "X-ID-TOKEN": idToken
    }
  });
};

module.exports = {
  getAppToken: getAppToken,
};
Which is used in my React component as
Login.js
console.log("idToken:", idToken);
getAppToken(idToken)
  .then((response) => {
    response.headers.forEach((val, key) => {
      console.log(key, val);
    });
  });
When I run this React application, the only headers I see are
cache-control no-cache, no-store, max-age=0, must-revalidate
expires 0
pragma no-cache
Why don't I see the remaining headers sent by the server? How can I fix this situation?
The server must be configured to send an Access-Control-Expose-Headers response header that includes "Authorization" in its value if you want the browser to allow your requesting frontend JavaScript code to access the Authorization response header value.
If the response includes no value for the Access-Control-Expose-Headers header, the only response headers that browsers will let you access from client-side JavaScript in your web app are Cache-Control, Content-Language, Content-Type, Expires, Last-Modified and Pragma.
See https://fetch.spec.whatwg.org/#cors-safelisted-response-header-name for the spec on that.
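For example, if the backend were a Node/Express app (an assumption; the original server appears to be a different stack, so adapt the idea to its CORS configuration), the fix could be a middleware like this:

```javascript
// Middleware that tells browsers the Authorization response header may
// be read by client-side JavaScript via response.headers.get(...).
function exposeAuthHeader(req, res, next) {
  res.setHeader('Access-Control-Expose-Headers', 'Authorization');
  next();
}

// Usage sketch (Express): app.use(exposeAuthHeader);
```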

Reading incoming HTTP headers with node.js

As an example, I'm getting a response whose headers look, in part, like this JavaScript object:
status: '200 OK',
'content-encoding': 'gzip'
I can easily read and log the status message with headers.status, but when I try to log the content-encoding (which I need in this particular situation) it errors:
headers.'content-encoding' <- obviously it doesn't like the quotes
headers.content-encoding <- obviously it doesn't like the '-'
How am I supposed to get/read/log its content-encoding value?
JavaScript also supports square-bracket notation for referring to properties, so if headers is an appropriate object, you can use headers['content-encoding'].
JavaScript properties have names as you know. When the name is a legal identifier and you know the literal name you want when you're writing the code, you can use it with dotted notation.
var foo = headers.foo;
When the name isn't a legal identifier, or if you want to determine the name you're looking up at runtime, you can use a string:
var encoding = headers['content-encoding'];
or
var name = 'content-encoding';
var encoding = headers[name];
or even
var x = 'encoding';
var encoding = headers['content-' + x];
As you can see, it doesn't have to be a literal string. This is very handy for general-purpose functions that have to accept a property name as a function argument or similar.
Note that property names are case sensitive.
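One helpful detail here: Node's http module lowercases incoming header names, so a small helper (hypothetical, for illustration) makes the bracket lookup insensitive to how the sender capitalized the header:

```javascript
// Incoming headers in Node are keyed by lowercase names, so normalize
// the requested name before the bracket lookup.
function getHeader(headers, name) {
  return headers[name.toLowerCase()];
}
```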
I think you should install the very good Express framework. It really simplifies Node.js web development.
You can install it using npm:
npm install express
This snippet shows you how to set headers and read headers:
var express = require('express');
var app = express(); // express.createServer() was removed in Express 3+

app.get('/', function (req, res) {
  console.log(req.header('a'));
  res.header('time', '12345');
  res.send('Hello World');
});

app.listen(3000);
Curl from the command line:
$curl http://localhost:3000/ -H "a:3434" -v
* About to connect() to localhost port 3000 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.21.2 (i686-pc-linux-gnu) libcurl/7.21.2 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:3000
> Accept: */*
> a:3434
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< time: 12345
< Content-Type: text/html; charset=utf-8
< Content-Length: 11
< Date: Tue, 28 Dec 2010 13:58:41 GMT
< X-Response-Time: 1ms
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
* Closing connection #0
Hello World
The log output showing the header sent via curl to the node server:
$ node mo.js
3434
