NS_BINDING_ABORTED only on first PUT attempt in Firefox - javascript

In my JS single-page web app I have a reset button whose 'onclick' handler uses vanilla fetch() to PUT an empty JSON array to my API. Both are hosted on the same domain/server. In Firefox (currently 86.0), the first time I push the reset button the call is aborted: the console says NetworkError when attempting to fetch resource while the Network tab shows NS_BINDING_ABORTED in the Transferred column.
When I reload my app (F5) and push the same button again, it works, and it keeps working from then on. Since the same code is executed, the failing and the working calls send the same headers and payload.
Chrome does not show this behavior; there the first call works too.
Even stranger, this first failing PUT call in Firefox seems to only fail once per URL. The web app provides "areas" to users with the area ID in the frontend URL, e.g.
https://example.org/areas/#/myAreaA
and
https://example.org/areas/#/myAreaB
These will PUT to the API, which also has these IDs in their URLs:
https://example.org/api/areas/myAreaA/state/
and
https://example.org/api/areas/myAreaB/state/
For each of these URLs, the first PUT call fails with NS_BINDING_ABORTED but works thereafter. If I copy the URL for such an area into a new tab, or even close and reopen the browser, the error does not appear again. The web app does not use any cookies.
The web app makes a lot of other API calls to the same backend/area ID; none of them shows this behavior. However, this is the only PUT call; all other calls are GET/POST/HEAD/PATCH requests.
What could be the reason for the first PUT failing?

Following "NetworkError when attempting to fetch resource." only on Firefox, I found the problem. It seems that Firefox's default onclick handling interferes with the fetch() call here. As soon as I added
event.preventDefault()
in the onclick-handler before doing the actual fetch(), everything started to work again.
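A minimal sketch of the fix (the button wiring and endpoint are illustrative, not taken from the actual app):

```javascript
// Assumed setup: a reset button whose click handler PUTs an empty JSON
// array. Calling preventDefault() first stops the browser's default click
// handling, which in Firefox was aborting the in-flight request with
// NS_BINDING_ABORTED.
function onResetClick(event) {
  event.preventDefault(); // the crucial line
  return fetch('/api/areas/myAreaA/state/', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify([]), // PUT an empty JSON array
  });
}

// In the page (illustrative):
// document.getElementById('reset').addEventListener('click', onResetClick);
```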

Related

consuming a Rest-API with javascript in the browser

I'm writing a server application in Go that provides a REST API. If the server gets a GET request that does not ask for JSON, it serves an empty HTML page with a JavaScript module in its head. This JavaScript code uses fetch to consume the REST API and then populates document.body accordingly with content fetched from the server. Each "link" in the content triggers further calls to the API and corresponding updates to the content.
So far so good. But I made two irritating observations.
(Obviously) the browser's "back" and "forward" buttons stay inactive, which seems logical since there are no loaded URLs associated with the content changes.
If I come to my REST UI from another page and hit the browser's back button, I get the other page back as expected; but if I then hit the forward button, I see the JSON response from my initial fetch instead of my REST UI content. Reloading my page makes it all good again, but I can't offer that behavior to any user :)
Are there common approaches to dealing with this behavior? E.g. removing the browser controls completely, feeding the browser history "by hand" with JS callbacks, caching directives, ... (I'm inexperienced with JS)
The root of the problem is that I overloaded the response of a GET request on the server side: if the GET request accepts JSON, the server returns JSON; otherwise it returns an HTML page with the JavaScript that consumes the JSON. I.e. the JavaScript fetch for the JSON is the last GET response for a given URL and as such goes into the browser's cache associated with that URL. A solution that works for me is to send headers with the JSON response turning off caching and signalling the browser with the "Vary" header that the response depends on the "Accept" header. Another solution might be to add distinct endpoints to the server for the REST requests.
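The actual server is written in Go; purely as an illustration, the same header fix can be sketched as a Node.js-style handler (route and payload are made up):

```javascript
// Content negotiation on one URL: the JSON branch sends "Vary: Accept" so
// caches know the body depends on the Accept header, plus
// "Cache-Control: no-store" so the JSON response never replaces the HTML
// page in the browser's history/cache for that URL.
function negotiate(req, res) {
  const wantsJson = (req.headers.accept || '').includes('application/json');
  if (wantsJson) {
    res.setHeader('Content-Type', 'application/json');
    res.setHeader('Vary', 'Accept');            // response depends on Accept
    res.setHeader('Cache-Control', 'no-store'); // keep JSON out of the cache
    res.end(JSON.stringify({ items: [] }));
  } else {
    res.setHeader('Content-Type', 'text/html');
    res.end('<!doctype html><html><head><script type="module" src="/app.js"></script></head><body></body></html>');
  }
}

// Wire it up with Node's http module, e.g.:
// require('http').createServer(negotiate).listen(8080);
```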

SoundCloud Api redirect confusion and Audio Api Streams

I am attempting to make a request to the SoundCloud API. Then, when I get the response, I set the stream_url as the source of an <audio> element.
This works:
http://matthiasdv.org/beta/
But not always... When you search for 'Bonobo' for example, you can play the first few tracks without any issue. But when you try to play 'London Grammar - Hey Now (Bonobo remix)' - the 7th result - it won't play. It throws no errors whatsoever.
I've been tinkering with Chrome's dev tools, and under the Network tab I see the requests being made. I found that tracks that DO play have a short request URL, like this:
https://ec-media.sndcdn.com/vR5ukuOzyLbw.128.mp3?f10880d39085a94a0418a7ef69b03d522cd6dfee9399eeb9a522029f6bfab939b9ae57af14bba24e44e1542924c205ad28a52352010cd0e7dd461e9243ab54dc0f0bba897d
And the ones that don't look like this:
https://cf-media.sndcdn.com/8PCswwlkswOd.128.mp3?Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiKjovL2NmLW1lZGlhLnNuZGNkbi5jb20vOFBDc3d3bGtzd09kLjEyOC5tcDMiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE0MzM0Mjc2MDN9fX1dfQ__&Signature=cD-XVhnvQnIATkfrBDDVy0Q7996C8DymwxRLwBBduab0~L0MynF1ftcMky~21T8Q-gCZ2~dMK8dz7uVxvJTIJgXPxEZvhNtbvescMK6iFMg-xSAty-4OhJYjrIZJ2j8NE4uNA4Ml7MWbWcQw4KtUtpZitOQuguS3DPFDII3VF-dvzb2L~xG-G8Uu3uOnI1WhnAAfhf1QWMO7swwB89HtcCiuVBmfluG28ELrJEq-au8mqIMB3sLTno6nUuTtpHXR2ayXBsYcYLLJVXa3Ul8p1rhLS5XWHKWXY8xug4jwey27~C5PVAomK6Z5lJx-mz-0zYs4riUYtl0zACbZ1OfwTQ__&Key-Pair-Id=APKAJAGZ7VMH2PFPW6UQ
Now at first glance I figured it was an encoding issue, but wrapping a quick encodeURI() around the AJAX URL did not work.
Furthermore, I do not understand where these URLs come from. In my code I am directing my AJAX request towards, for example:
https://api.soundcloud.com/tracks/140326936/stream?client_id=5c6ceaa17461a1c79d503b345a26a54e
Thus, the request URL in the GET request (as found under 'Network' in Chrome's dev tools) makes no sense to me. Is SoundCloud redirecting GET requests to a CDN host? One more thing I've noticed is that each time TWO requests are fired instead of one. The first one is always canceled and contains a 'Provisional headers are shown' warning. I believe this is because I am setting crossOrigin = "anonymous"; otherwise certain browsers would not load the content.
What I guess may cause the problem is that when the URL is set as the src attribute of the element, an event listener fires in the dancer.js library, which handles the Audio API and the playback (https://github.com/jsantell/dancer.js/). It may be that encodeURI() is required somewhere in the library.
I decided to ask the question anyhow, because I don't understand how the request URLs above are formed, why two requests instead of one are being fired, and why the first is always cancelled.
Any hints which may solve the playback issue are more than welcome too...
When you run the request for
https://api.soundcloud.com/tracks/140326936/stream?client_id=5c6ceaa17461a1c79d503b345a26a54e
you get an HTTP 302 Found response from the server, which is a URL redirect (http://en.wikipedia.org/wiki/HTTP_302). This causes your browser to load from the new URL that the server returns, hence the two requests you see. The server basically says "yeah, I know where to find that file, ask that guy over there".
The reason why one works and the other doesn't, I'd think, is that https://ec-media.sndcdn.com has the Access-Control headers set while https://cf-media.sndcdn.com doesn't. This is an issue with the server configuration and unfortunately nothing you can control from the client side. I don't know whether it's a deliberate move by SoundCloud or something you could ask them about.

Angularjs long polling

I am trying to perform a simple long-poll request in AngularJS - I make a GET request and it hangs until the server responds. Then I make the request again and wait for the next response, and so on.
However, for some reason the code is quite unreliable and misses around 80% of the responses sent from the server.
Below is my code:
main.messages = [];
...
main.poll = function () {
  $http.get('http://localhost:8080/message')
    .success(function (data) {
      console.log(data);
      main.messages.push(data);
      main.poll();
    })
    .error(...);
};
Is there something obvious that I am missing here?
The server can detect that the browser is connected, and the server does send a response, but the code above never receives it (no console output and no error). I tried making this request with Postman (a Chrome extension) and the long poll worked perfectly there, so I think the problem is somewhere in here.
Update: the problem occurs only in Google Chrome, and only when more than one tab is performing the long poll simultaneously. There is some seemingly random behaviour when creating and closing tabs that long-poll.
I found out what was causing this. Chrome will only long-poll a given URL in one tab at a time. If a user has multiple tabs open requesting the same long poll, Chrome waits for the long poll in the first tab to finish before starting the poll in the second tab.
I think that the browser looks at the long-poll request as a 'server that is not responding'. When you try to make the same request in a new tab, the browser does not actually make that same request again to conserve resources. If you look at the network tab, it will show a pending request. But that's a 'lie', the browser is actually waiting for the server to respond for the first tab's request. Once it gets a response from the server for the first tab's request, only then will it query the server for the second tab's request.
In other words, the browser (Chrome and Opera) will not normally make two long-poll requests to the same endpoint simultaneously - even if these requests are coming from two different tabs.
However, sometimes after a certain amount of time it decides to release the request for the second tab as well, but I was not able to figure out any rule for this. If you have 3 tabs open with the same request, closing the first causes 2 simultaneous requests from the remaining two tabs. But if you have 6 tabs open, closing the first causes only 3 simultaneous requests, not 5. I'm sure there are rules governing this behaviour, but I guess we have to write code assuming that the requests may or may not take place simultaneously, and that the browser may wait for one request to finish before working on the second.
Safari does not have this behaviour - it will make multiple requests via multiple tabs simultaneously. But Chrome and Opera do show this behaviour.
So rather than 'broadcasting' data simultaneously to all connected clients, I am now changing my code to use timestamps to figure out how much data a client needs and then send that data.
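A sketch of that timestamp approach (the field names are illustrative): the client remembers the newest timestamp it has seen and asks only for newer messages, so it no longer matters whether a tab's poll was held back behind another tab's.

```javascript
// Server side: return only messages newer than the client's last-seen
// timestamp, so a delayed poll still catches up on everything it missed.
function messagesSince(allMessages, lastSeen) {
  return allMessages.filter(function (m) { return m.ts > lastSeen; });
}

// Client side: advance the last-seen timestamp after each successful poll,
// then poll again with the new value.
function newestTimestamp(messages, current) {
  return messages.reduce(function (max, m) { return Math.max(max, m.ts); }, current);
}
```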

Any way to change the "Name" or "Initiator", or add new tabs in Chrome Dev Tools' Network view?

I was wondering if there was any way to change the Name or Initiator columns in the Network tab of Chrome's Dev Tools.
The issue is that I'm making a web app that makes tons of POST calls using jQuery. That's all fine and dandy; however, when I have 10+ calls, the Network tab obviously gets flooded with POST entries.
All calls go to the same PHP script, so the Name column is all the same. Also, since I'm using jQuery, the Initiator is set to jQuery. I was wondering if there was any way to customize this view so that I know which script issued the POST without having to open each call and inspect its properties.
It'd even be nice to see a truncated version of the values sent, right in the list view. That way I could look at each call and know exactly which function or script triggered it, or at least have a better idea, rather than seeing 10+ entries all named "xxx.php".
You can add custom columns that show the values of response headers by right-clicking on the table header and selecting Response Headers > Manage Header Columns.
You can also hide columns via this right-click menu.
You can also add a query to the url you are posting to, with information about what function you are calling.
Example:
If you are posting to https://myserver.com/api, it is the last part, api, that is displayed as the Name in the Network tab.
So you can extend that URL to https://myserver.com/api?whatever and you will see that in the Network tab's Name column. The back-end server can, and will, just ignore that extra query in the URL.
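As a sketch (the helper name and query parameter are made up), the query trick can be wrapped so every call tags itself with its caller:

```javascript
// Append an identifying query parameter so the Network tab's Name column
// shows which function issued the request; the server ignores it.
function tracedUrl(url, caller) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'src=' + encodeURIComponent(caller);
}

// Usage with jQuery (illustrative):
// $.post(tracedUrl('xxx.php', 'saveUser'), data);
```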

Twitter widget doesn't always populate

I am displaying a Twitter feed on my website using their feed widget. Sometimes the widget fails to display any information; I figure this is because the API is overloaded. Regardless, is there any known way to display an error message in the event that Twitter can't load my feed? Has anybody else experienced these issues?
First, use a suitable HTTP recording proxy for your OS (Fiddler2 is fantastic if you are on Windows) and Shift+F5 the page until you get the fault.
Filter the log for the hosts widgets.twimg.com and api.twitter.com. This pins down the failure point, because:
1. If the JS (or CSS) request to widgets.twimg.com fails (look for a 404 or truncated text), then the JavaScript failed to fetch. Unlikely, since these files should be static.
2. If the api.twitter.com request is missing, then the JavaScript failed to run.
3. If the api.twitter.com request occurs but there is a failure in the response (a bad response code, or the response looks whack), then the Twitter API is failing to give you the feed.
For detecting case 1 in JavaScript, you can use a timeout to detect the failure to load, and on load check that the script actually arrived. A simple check is that window.twttr exists - not a great test, however, because it is set at the top of the script, so it only confirms that the JavaScript was syntactically valid and started running. (You might need onreadystatechange to detect the load in IE.)
<script src="http://widgets.twimg.com/j/2/widget.js" onload="twitterloaded()"></script>
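A sketch of that check (the function names are made up; as noted, window.twttr only proves the script started executing, not that the feed rendered):

```javascript
// True once widget.js has started executing (it sets window.twttr near the
// top of the file), which rules out a failed script fetch.
function widgetLoaded(win) {
  return Boolean(win && win.twttr);
}

// In the page (illustrative): give the script a few seconds, then fall back.
// setTimeout(function () {
//   if (!widgetLoaded(window)) showFallbackMessage();
// }, 5000);
```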
For detecting case 2, run the page with the debugger.
For case 3, from a quick look at the code, it appears the widget retries requests to the Twitter API (you might want to look at the configuration settings for the API), and there are API variables to check whether everything is running, e.g. TWTR.Widget.isLoaded, _isRunning and _hasOfficiallyStarted.