I'm trying to authorize my application to integrate with Google Drive. The Google documentation provides details for server-based authorization and code samples for various server technologies.
There's also a JavaScript Google API library that has support for authorization. Down in the samples section of the wiki there is a code snippet for creating a config and calling the authorize function. I've altered the scope to the one I believe is required for Drive:
var config = {
    'client_id': 'my_client_ID',
    'scope': 'https://www.googleapis.com/auth/drive.file'
};
gapi.auth.authorize(config, function() {
    console.log(gapi.auth);
});
The callback function is never called (and yes, the Google API library is loaded correctly). Looking at the Java Retrieve and Use OAuth 2.0 Credentials example, the client secret appears to be a parameter; should this go into the config?
Has anyone tried this in JS, for Drive or other Google APIs? Does anyone know the best route for debugging such a problem, i.e. do I just need to step through the library and stop whinging?
Please don't suggest doing the authorization on the server side; our application is entirely client side and I don't want any state on the server (and I understand the token refresh issues this will cause). I am familiar with the API configuration in the Google console and I believe that and the Drive SDK settings are correct.
It is possible to use the Google APIs JavaScript client library with Drive, but you have to be aware that there are some pain points.
There are two main issues currently, both of which have workarounds:
Authorization
First, if you look closely at how Google Drive auth works, you will realize that after a user has installed your Drive application and tries to open a file or create a new file with your application, Drive initiates the OAuth 2.0 authorization flow automatically, with the auth parameters set to response_type=code and access_type=offline.
This basically means that right now Drive apps are forced to use the OAuth 2.0 server-side flow, which is not going to be of any use to the JavaScript client library (which only uses the client-side flow).
The issue is that Drive initiates a server-side OAuth 2.0 flow, while the JavaScript client library initiates a client-side OAuth 2.0 flow.
This can still work; all you have to do is use server-side code to process the authorization code returned after the Drive server-side flow (you need to exchange it for an access token and a refresh token).
That way, only on the first flow will the user be prompted for authorization. After the first time you exchange the authorization code, the auth page will be bypassed automatically.
Server-side samples to do this are available in our documentation.
If you don't process/exchange the auth code on the server-side flow, the user will be prompted for auth every single time they try to use your app from Drive.
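If it helps to picture that exchange, here is a minimal Node.js sketch of the server-side step (the endpoint and parameters are the standard Google OAuth 2.0 token request; the client credentials and redirect URI are placeholders you'd fill in from your own API console project):

var https = require('https');
var querystring = require('querystring');

// Exchange the one-time authorization code for an access token and a
// refresh token. CLIENT_ID, CLIENT_SECRET and REDIRECT_URI are placeholders.
function exchangeAuthCode(code, callback) {
  var body = querystring.stringify({
    code: code,
    client_id: 'CLIENT_ID',
    client_secret: 'CLIENT_SECRET',
    redirect_uri: 'REDIRECT_URI',
    grant_type: 'authorization_code'
  });
  var req = https.request({
    host: 'accounts.google.com',
    path: '/o/oauth2/token',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function(res) {
    var data = '';
    res.on('data', function(chunk) { data += chunk; });
    res.on('end', function() {
      // On success the response JSON contains access_token,
      // refresh_token and expires_in.
      callback(null, JSON.parse(data));
    });
  });
  req.on('error', callback);
  req.end(body);
}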
Handling file content
The second issue is that uploading and accessing the actual Drive file content is not made easy by our JavaScript client library. You can still do it, but you will have to use custom JavaScript code.
Reading the file content
When a file's metadata (a file object) is retrieved, it contains a downloadUrl attribute which points to the actual file content. It is now possible to download the file using a CORS request, and the simplest way to auth is to pass the OAuth 2 access token in a URL param. So just append &access_token=... to the downloadUrl and fetch the file using XHR or by forwarding the user to the URL.
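For example, a small sketch of the XHR route (assuming a file object from a files.get response and an already-authorized gapi session):

// Fetch the file content over CORS; `file` is assumed to be the
// metadata object returned by gapi.client.drive.files.get.
function downloadFileContent(file, callback) {
  var accessToken = gapi.auth.getToken().access_token;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', file.downloadUrl + '&access_token=' + encodeURIComponent(accessToken));
  xhr.onload = function() {
    callback(xhr.responseText);
  };
  xhr.send();
}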
Uploading file content
UPDATE 2: The upload endpoints now support CORS.
~~UPDATE: The upload endpoints, unlike the rest of the Drive API, do not support CORS, so you'll have to use the trick below for now:~~
Uploading a file is tricky because it's not built into the JavaScript client lib and you can't entirely do it with HTTP as described in this response, because we don't allow cross-domain requests on these API endpoints. So you do have to take advantage of the iframe proxy used by our JavaScript client library and use it to send a constructed multipart request to the Drive SDK. Thanks to @Alain, we have a sample of how to do that below:
/**
 * Insert new file.
 *
 * @param {File} fileData File object to read data from.
 * @param {Function} callback Callback function to call when the request is complete.
 */
function insertFileData(fileData, callback) {
  const boundary = '-------314159265358979323846';
  const delimiter = "\r\n--" + boundary + "\r\n";
  const close_delim = "\r\n--" + boundary + "--";

  var reader = new FileReader();
  reader.readAsBinaryString(fileData);
  reader.onload = function(e) {
    var contentType = fileData.type || 'application/octet-stream';
    var metadata = {
      'title': fileData.fileName,
      'mimeType': contentType
    };

    var base64Data = btoa(reader.result);
    var multipartRequestBody =
        delimiter +
        'Content-Type: application/json\r\n\r\n' +
        JSON.stringify(metadata) +
        delimiter +
        'Content-Type: ' + contentType + '\r\n' +
        'Content-Transfer-Encoding: base64\r\n' +
        '\r\n' +
        base64Data +
        close_delim;

    var request = gapi.client.request({
        'path': '/upload/drive/v2/files',
        'method': 'POST',
        'params': {'uploadType': 'multipart'},
        'headers': {
          'Content-Type': 'multipart/mixed; boundary="' + boundary + '"'
        },
        'body': multipartRequestBody});
    if (!callback) {
      callback = function(file) {
        console.log(file);
      };
    }
    request.execute(callback);
  };
}
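A hypothetical way to wire this up (the input element id is an assumption for illustration):

// Upload the first file selected in an <input type="file" id="file-input">.
document.getElementById('file-input').addEventListener('change', function(e) {
  insertFileData(e.target.files[0], function(file) {
    console.log('Uploaded file id: ' + file.id);
  });
});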
To improve all this, in the future we might:
Let developers choose which OAuth 2.0 flow they want to use (server-side or client-side) or let the developer handle the OAuth flow entirely.
Allow CORS on the /upload/... endpoints
Allow CORS on the exportLinks for native gDocs
Make it easier to upload files using our JavaScript client library.
No promises at this point though :)
I did it. Here's my code:
<!DOCTYPE html>
<html>
<head>
  <meta charset='utf-8' />
  <style>
    p {
      font-family: Tahoma;
    }
  </style>
</head>
<body>
  <!-- Add a button for the user to click to initiate the auth sequence -->
  <button id="authorize-button" style="visibility: hidden">Authorize</button>
  <script type="text/javascript">
    var clientId = '######';
    var apiKey = 'aaaaaaaaaaaaaaaaaaa';
    // To enter one or more authentication scopes, refer to the documentation for the API.
    var scopes = 'https://www.googleapis.com/auth/drive';

    // Use a button to handle authentication the first time.
    function handleClientLoad() {
      gapi.client.setApiKey(apiKey);
      window.setTimeout(checkAuth, 1);
    }

    function checkAuth() {
      gapi.auth.authorize({client_id: clientId, scope: scopes, immediate: true}, handleAuthResult);
    }

    function handleAuthResult(authResult) {
      var authorizeButton = document.getElementById('authorize-button');
      if (authResult && !authResult.error) {
        authorizeButton.style.visibility = 'hidden';
        makeApiCall();
      } else {
        authorizeButton.style.visibility = '';
        authorizeButton.onclick = handleAuthClick;
      }
    }

    function handleAuthClick(event) {
      gapi.auth.authorize({client_id: clientId, scope: scopes, immediate: false}, handleAuthResult);
      return false;
    }

    // Load the API and make an API call. Display the results on the screen.
    function makeApiCall() {
      gapi.client.load('drive', 'v2', function() {
        var request = gapi.client.drive.files.list({'maxResults': 5});
        request.execute(function(resp) {
          for (var i = 0; i < resp.items.length; i++) {
            var titulo = resp.items[i].title;
            var fechaUpd = resp.items[i].modifiedDate;
            var userUpd = resp.items[i].lastModifyingUserName;
            var fileInfo = document.createElement('li');
            fileInfo.appendChild(document.createTextNode('TITLE: ' + titulo + ' - LAST MODIF: ' + fechaUpd + ' - BY: ' + userUpd));
            document.getElementById('content').appendChild(fileInfo);
          }
        });
      });
    }
  </script>
  <script src="https://apis.google.com/js/client.js?onload=handleClientLoad"></script>
  <p><b>These are 5 files from your GDrive :)</b></p>
  <div id="content"></div>
</body>
</html>
You only have to change:
var clientId = '######';
var apiKey = 'aaaaaaaaaaaaaaaaaaa';
to your client ID and API key from your Google API Console :)
Of course you have to create your project in the Google API Console, activate the Drive API, and activate Google Accounts auth in OAuth 2.0 (really easy!)
PS: it won't work locally on your PC; it will work on some hosting, and you must provide its URL in the project console :)
I'm trying to implement authorization through Google Auth and the JavaScript SDK, following the example given by the Google documentation:
<!DOCTYPE html>
<html>
<head>
  <script src="https://accounts.google.com/gsi/client" onload="initClient()" async defer></script>
</head>
<body>
  <script>
    var client;
    function initClient() {
      client = google.accounts.oauth2.initCodeClient({
        client_id: 'YOUR_CLIENT_ID',
        scope: 'https://www.googleapis.com/auth/calendar.readonly',
        ux_mode: 'popup',
        callback: (response) => {
          var code_receiver_uri = 'YOUR_AUTHORIZATION_CODE_ENDPOINT_URI';
          // Send auth code to your backend platform
          const xhr = new XMLHttpRequest();
          xhr.open('POST', code_receiver_uri, true);
          xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
          xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
          xhr.onload = function() {
            console.log('Signed in as: ' + xhr.responseText);
          };
          xhr.send('code=' + code);
          // After receipt, the code is exchanged for an access token and
          // refresh token, and the platform then updates this web app
          // running in the user's browser with the requested calendar info.
        },
      });
    }
    function getAuthCode() {
      // Request authorization code and obtain user consent
      client.requestCode();
    }
  </script>
  <button onclick="getAuthCode();">Load Your Calendar</button>
</body>
</html>
However there are two things I don't get in this example:
What is the AUTHORIZATION_CODE_ENDPOINT_URI?
What is the variable named 'code' sent in the callback function? It is not assigned anywhere in this snippet.
Initializing the client works fine; I get the pop-up window with the required scope. However, I cannot figure out how to make the second part work, and it seems to be fairly new, so not much information about this is available.
YOUR_AUTHORIZATION_CODE_ENDPOINT_URI is where your identity is verified and where the code is being sent. See here for more information on the Authorization Endpoint.
As for code, it's not this:
xhr.send('code=' + code);
It's this:
xhr.send('code=' + response.code);
We receive code as a response. That's how we get the value. See this line here.
callback: (response)
code is a part of the Google OAuth 2.0 process. It refers to the Authorization Code that Google sends you once you have verified your identity. Once you have received your code you can exchange it for a token that can be used to access Google APIs.
Any time the token expires, the application has to request a new one via the authorization code. This provides a degree of separation between you and your passwords and the Google APIs.
See here for more info on Authorization codes.
See here for more info on Google using OAuth2.
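To make the endpoint side concrete, here is a minimal sketch of what YOUR_AUTHORIZATION_CODE_ENDPOINT_URI might do, using Express and google-auth-library (both are assumptions; any backend stack that can make the token request works):

const express = require('express');
const { OAuth2Client } = require('google-auth-library');

const app = express();
app.use(express.urlencoded({ extended: false }));

// For codes obtained with ux_mode: 'popup', the redirect_uri used in the
// exchange is typically the literal string 'postmessage' (verify for your setup).
const client = new OAuth2Client('YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET', 'postmessage');

app.post('/oauth2/code', async (req, res) => {
  // Exchange the authorization code for access and refresh tokens.
  const { tokens } = await client.getToken(req.body.code);
  // Store tokens.refresh_token server-side, then respond to the web app.
  res.json({ signedIn: true });
});

app.listen(3000);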
Hope that helps.
I'm trying to migrate my authentication method from Power BI Master User to service principal.
With the master user I'm using MSAL, with an authentication flow like below:
login to AAD --> request an AAD token --> import the pbix file with the REST API using the AAD token as credential
This is the code:
$(document).ready(function () {
    myMSALObj.loginPopup(requestObj).then(function (loginResponse) {
        acquireTokenPopup();
    });
    Msal.UserAgentApplication
});

function acquireTokenPopup() {
    myMSALObj.acquireTokenSilent(requestObj).then(function (tokenResponse) {
        AADToken = tokenResponse.accessToken;
        importPBIX(AADToken);
    });
}

function importPBIX(accessToken) {
    xmlHttp.open("GET", "./importPBIX?accessToken=" + accessToken + "&pbixTemplate=" + pbixTemplate, true);
    //the rest of import process//
}
So there are two questions:
1. What kind of flow would it be if I use a service principal instead?
In my head, and from the info I read in the Microsoft documentation, it would be simpler, like:
request a token using the application secret key --> import the pbix file with the REST API using the token
Is this correct?
2. What kind of code can I use to do this in JavaScript? I think MSAL can't do a token request using a service principal. I would appreciate any info or tutorial for this.
Best,
What kind of flow would it be if I use a service principal instead? In my head, and from the info I read in the Microsoft documentation, it would be simpler, like: request a token using the application secret key --> import the pbix file with the REST API using the token. Is this correct?
According to my research, if you want to use the service principal to get an Azure AD access token, you can use the client credentials grant flow:
The client application authenticates to the Azure AD token issuance endpoint and requests an access token.
The Azure AD token issuance endpoint issues the access token.
The access token is used to authenticate to the secured resource.
Data from the secured resource is returned to the client application.
Regarding how to get an access token, please refer to the following steps:
Register Azure AD application
Configure API permissions
Get access token
POST https://login.microsoftonline.com/<tenant id>/oauth2/token
Content-Type: application/x-www-form-urlencoded
grant_type=client_credentials
&client_id=<>
&client_secret=<>
&resource=https://analysis.windows.net/powerbi/api
2. What kind of code can I use to do this in JavaScript? I think MSAL can't do a token request using a service principal. I would appreciate any info or tutorial for this.
If you want to implement the client credentials grant flow with an SDK, you can use adal-node. For more details, please refer to https://www.npmjs.com/package/adal-node.
For example
var AuthenticationContext = require('adal-node').AuthenticationContext;

var authorityHostUrl = 'https://login.microsoftonline.com';
var tenant = 'myTenant.onmicrosoft.com'; // AAD Tenant name.
var authorityUrl = authorityHostUrl + '/' + tenant;
var applicationId = 'yourApplicationIdHere'; // Application Id of app registered under AAD.
var clientSecret = 'yourAADIssuedClientSecretHere'; // Secret generated for app. Read this from an environment variable.
var resource = 'https://analysis.windows.net/powerbi/api'; // URI that identifies the resource for which the token is valid (Power BI, per the request above).

var context = new AuthenticationContext(authorityUrl);

context.acquireTokenWithClientCredentials(resource, applicationId, clientSecret, function(err, tokenResponse) {
    if (err) {
        console.log('well that didn\'t work: ' + err.stack);
    } else {
        console.log(tokenResponse);
    }
});
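Once you have tokenResponse.accessToken, the import itself is just a REST call with a bearer token. A rough sketch using the request package (an assumption; any HTTP client works, and the group id, dataset name and .pbix buffer are placeholders; see the Power BI Imports API docs for the exact multipart format):

var request = require('request');

// Import a .pbix into a workspace using the service principal's token.
function importPBIX(accessToken, groupId, datasetName, pbixBuffer) {
  request.post({
    url: 'https://api.powerbi.com/v1.0/myorg/groups/' + groupId +
         '/imports?datasetDisplayName=' + encodeURIComponent(datasetName),
    headers: { Authorization: 'Bearer ' + accessToken },
    formData: {
      file: {
        value: pbixBuffer,
        options: { filename: datasetName + '.pbix' }
      }
    }
  }, function (err, response, body) {
    if (err) { return console.error(err); }
    console.log(body);
  });
}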
Thanks to Jim's answer, I tweaked my code a little bit and the token authentication process went smoothly.
As my app uses JavaScript on the front-end and Python as its back-end, I decided to do the process in Python and used the Python MSAL library instead.
The code looks like this:
from msal import ConfidentialClientApplication

authority_host_uri = 'https://login.microsoftonline.com'
tenant = 'myTenantId'
authority_uri = authority_host_uri + '/' + tenant
client_id = 'myClienId'
client_secret = 'myClientSecretKey'
config = {"scope": ["https://analysis.windows.net/powerbi/api/.default"]}

app = ConfidentialClientApplication(client_id, client_credential=client_secret, authority=authority_uri)
token = app.acquire_token_for_client(scopes=config['scope'])
Once again, thanks to Jim for helping me on this one.
Best,
I am stuck with the uploading of large blobs to Windows Azure Blob Storage.
I followed this link and, locally, everything goes fine.
Then I loaded my website, fully coded in HTML5 + JS, and the file upload feature does not work anymore. When I try to upload the first chunk, the browser stops the request because of CORS.
I followed this link, hoping to solve the issue, and the CORS rule in Windows Azure Blob Storage now says:
Allowed origins: *
Allowed methods: Get, Put
Allowed headers: *
Exposed headers: x-ms-*
Max age (seconds): 3600
So I think that every request, from any origin, to my BlobService should not be rejected due to CORS. Am I wrong?
To describe my setup:
I have a Windows Azure Mobile Service which generates a SAS for me. I wonder why the SAS baseUrl is http instead of https.
I have a website, protected by a certificate, that runs using the https protocol. Everything goes on https.
The website queries the service and it successfully gets a valid SAS.
If I use the SAS as is, with the url with the http protocol, the browser blocks the request because it is an "Active Mixed Content" (an http request issued by a https web site).
If I change the SAS url, forcing it to use the https protocol, the browser blocks the request because of the CORS.
My Azure Mobile Service Back-end is in NodeJS and I have not found a way to let it create an https SAS.
The code which uploads a chunk of a file is the following (JS):
function uploadSlicedChunk(data) {
    // Block id padding
    var num = '' + UploadManager.blockIdCounter;
    while (num.length < 10) {
        num = '0' + num;
    }
    var blockId = btoa("block-" + num);

    // URI and data creation (the substr juggling inserts an "s" after
    // "http" to force the SAS URL onto https)
    var uri = UploadManager.submitUri.substr(0, 4) + "s" + UploadManager.submitUri.substr(4, UploadManager.submitUri.length) + "&comp=block&blockId=" + blockId;
    var requestData = new Uint8Array(data);

    // AJAX options
    var ajaxOption = {
        url: uri,
        type: "PUT",
        data: requestData,
        processData: false,
        beforeSend: function (xhr) {
            xhr.setRequestHeader('x-ms-blob-type', 'BlockBlob');
            xhr.setRequestHeader('Content-Length', requestData.length);
        },
        success: function (data, status) {
            // Upload successful -> update tracking variables.
            UploadManager.blockIds.push(blockId);
            UploadManager.blockIdCounter += 1;
            UploadManager.bytesSent += requestData.length;
            UploadManager.currentFilePointer += UploadManager.maxBlockSize;
            UploadManager.totalRemainingBytes -= UploadManager.maxBlockSize;
            if (UploadManager.totalRemainingBytes < UploadManager.maxBlockSize) {
                UploadManager.maxBlockSize = UploadManager.totalRemainingBytes;
            }
            // Prepare next upload.
            var percentComplete = ((parseFloat(UploadManager.bytesSent) / parseFloat(UploadManager.fileLen)) * 100).toFixed(2);
            $("#content-saving-logger").append(percentComplete + " %<br />");
            if (UploadManager.totalRemainingBytes > 0) {
                var fileContent = UploadManager.file.slice(UploadManager.currentFilePointer,
                    UploadManager.currentFilePointer + UploadManager.maxBlockSize);
                UploadManager.reader.readAsArrayBuffer(fileContent);
            } else {
                UploadManager.commitBlob();
            }
        },
        error: function (jqXHR, status, error) {
            alert(status + ", " + error + ": " + jqXHR.responseText);
        }
    };
    $.ajax(ajaxOption);
}
I do not know what to do now to solve the issue... The only solution I have found is to prepare a service on my Azure Mobile Services which receives the chunks and uploads them to Blob Storage, making it a sort of proxy, and at the end of the process it commits the blob. But basically that will make the service slower and it requires a lot more work... while this solution should work!
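For reference, the commit at the end of the process (UploadManager.commitBlob above) boils down to a Put Block List request; a minimal sketch of it against the same SAS URI (the XML body is Azure's documented block list format):

// Commit the uploaded blocks with Put Block List, listing every block
// id in upload order; reuses the UploadManager state from above.
function commitBlockList() {
    var uri = UploadManager.submitUri + '&comp=blocklist';
    var body = '<?xml version="1.0" encoding="utf-8"?><BlockList>';
    for (var i = 0; i < UploadManager.blockIds.length; i++) {
        body += '<Latest>' + UploadManager.blockIds[i] + '</Latest>';
    }
    body += '</BlockList>';
    $.ajax({
        url: uri,
        type: 'PUT',
        data: body,
        processData: false,
        beforeSend: function (xhr) {
            xhr.setRequestHeader('x-ms-blob-content-type', UploadManager.file.type);
        },
        success: function () {
            $("#content-saving-logger").append("Blob committed.<br />");
        }
    });
}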
Thank you for your patience and for your help,
Ric.
Has anyone successfully used the AWS SDK to generate signed URLs to objects in an S3 bucket which also work over CloudFront? I'm using the JavaScript AWS SDK and it's really simple to generate signed URLs via the S3 links. I just created a private bucket and use the following code to generate the URL:
var AWS = require('aws-sdk')
  , s3 = new AWS.S3()
  , params = {Bucket: 'my-bucket', Key: 'path/to/key', Expiration: 20}

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url)
})
This works great, but I also want to expose a CloudFront URL to my users so they can get the increased download speeds of using the CDN. I set up a CloudFront distribution, which modified the bucket policy to allow access. However, after doing this, any file could be accessed via the CloudFront URL and Amazon appeared to ignore the signature in my link. After reading some more on this, I've seen that people generate a .pem file to get signed URLs working with CloudFront, but why is this not necessary for S3? It seems like the getSignedUrl method simply does the signing with the AWS Secret Key and AWS Access Key. Has anyone gotten a setup like this working before?
Update:
After further research, it appears that CloudFront handles URL signatures completely differently from S3 [link]. However, I'm still unclear as to how to create a signed CloudFront URL using JavaScript.
Update: I moved the signing functionality from the example code below into the aws-cloudfront-sign package on NPM. That way you can just require this package and call getSignedUrl().
After some further investigation, I found a solution which is sort of a combo between this answer and a method I found in the Boto library. It is true that S3 URL signatures are handled differently than CloudFront URL signatures. If you just need to sign an S3 link, then the example code in my initial question will work just fine for you. However, it gets a little more complicated if you want to generate signed URLs which utilize your CloudFront distribution. This is because CloudFront URL signatures are not currently supported in the AWS SDK, so you have to create the signature on your own. In case you also need to do this, here are the basic steps. I'll assume you already have an S3 bucket set up:
Configure CloudFront
1. Create a CloudFront distribution
2. Configure your origin with the following settings:
   Origin Domain Name: {your-s3-bucket}
   Restrict Bucket Access: Yes
   Grant Read Permissions on Bucket: Yes, Update Bucket Policy
3. Create a CloudFront key pair. You should be able to do this here.
Create Signed CloudFront URL
To create a signed CloudFront URL, you just need to sign your policy using RSA-SHA1 and include it as a query param. You can find more on custom policies here, but I've included a basic one in the sample code below that should get you up and running. The sample code is for Node.js but the process could be applied to any language.
var crypto = require('crypto')
  , fs = require('fs')
  , util = require('util')
  , moment = require('moment')
  , urlParse = require('url')
  , cloudfrontAccessKey = '<your-cloudfront-key-pair-id>'
  , expiration = moment().add(30, 'seconds').unix() // epoch expiration time

// Define your policy.
var policy = {
  'Statement': [{
    'Resource': 'http://<your-cloudfront-domain-name>/path/to/object',
    'Condition': {
      'DateLessThan': {'AWS:EpochTime': expiration}
    }
  }]
}

// Now that you have your policy defined you can sign it like this:
var sign = crypto.createSign('RSA-SHA1')
  , pem = fs.readFileSync('<path-to-cloudfront-private-key>')
  , key = pem.toString('ascii')

sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature
]
var signedUrl = util.format('%s?%s', urlParse.format(url), params.join('&'))
return signedUrl
For my code to work with Jason Sims's code, I also had to convert the policy to base64 and add it to the final signedUrl, like this:
sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')
var policy_64 = new Buffer(JSON.stringify(policy)).toString('base64'); // ADDED

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature,
  'Policy=' + policy_64 // ADDED
]
AWS includes some built-in classes and structures to assist in the creation of signed URLs and cookies for CloudFront. I utilized these alongside the excellent answer by Jason Sims to get it working in a slightly different pattern (which appears to be very similar to the NPM package he created).
Namely, the AWS.CloudFront.Signer type, which abstracts the process of creating signed URLs and cookies:
export class Signer {
  /**
   * A signer object can be used to generate signed URLs and cookies for granting access to content on restricted CloudFront distributions.
   *
   * @param {string} keyPairId - The ID of the CloudFront key pair being used.
   * @param {string} privateKey - A private key in RSA format.
   */
  constructor(keyPairId: string, privateKey: string);
  ....
}
The options either take a CloudFront JSON policy string, or, without a policy, a URL and an expiration time.
export interface SignerOptionsWithPolicy {
  /**
   * A CloudFront JSON policy. Required unless you pass in a url and an expiry time.
   */
  policy: string;
}

export interface SignerOptionsWithoutPolicy {
  /**
   * The URL to which the signature will grant access. Required unless you pass in a full policy.
   */
  url: string
  /**
   * A Unix UTC timestamp indicating when the signature should expire. Required unless you pass in a full policy.
   */
  expires: number
}
Sample implementation:
import aws, { CloudFront } from 'aws-sdk';

export async function getSignedUrl() {
  // e.g. https://abc.cloudfront.net/my-resource.jpg
  const url = <cloud front url/resource>;

  // Create signer object - requires a key pair id and private key value
  const signer = new CloudFront.Signer(<public-key-id>, <private-key>);

  // Set up expiration time (one hour in the future, in this case)
  const expiration = new Date();
  expiration.setTime(expiration.getTime() + 1000 * 60 * 60);
  const expirationEpoch = expiration.valueOf();

  // Set options (without policy in this example, but a JSON policy string can be substituted)
  const options = {
    url: url,
    expires: expirationEpoch
  };

  return new Promise((resolve, reject) => {
    // Call getSignedUrl passing in options, to be handled either by callback or synchronously without callback
    signer.getSignedUrl(options, (err, url) => {
      if (err) {
        console.error(err.stack);
        return reject(err);
      }
      resolve(url);
    });
  });
}
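Hypothetical usage of the helper above:

getSignedUrl()
  .then((signedUrl) => console.log('Signed URL:', signedUrl))
  .catch((err) => console.error('Signing failed:', err));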
I'm in the process of designing an API in PHP that will use OAuth 2.0. My end goal is to build a front-end application in JavaScript (using AngularJS) that accesses this API directly. I know that traditionally there's no way to secure transactions in JavaScript, and so directly accessing an API isn't feasible. The front-end would need to communicate with server code that in turn communicates with the API directly. However, in researching OAuth2 it looks as if the User-Agent Flow is designed to help in this situation.
What I need help with is implementing the OAuth2 User-Agent Flow in JavaScript (particularly AngularJS if possible, as that's what I'm using for my front-end). I haven't been able to find any examples or tutorials that do this. I really have no idea where to start and don't want to read through the entire OAuth2 spec without at least seeing an example of what I'll be looking at doing. So any examples, tutorials, links, etc. would be greatly appreciated.
The Implicit Grant flow (the one you're referring to as User-Agent Flow) is exactly the way to go:
The implicit grant is a simplified authorization code flow optimized for clients implemented in a browser using a scripting language such as JavaScript.
To understand the flow, the documentation from Google for client-side applications is a really good place to start. Note that they recommend you to take an additional token validation step to avoid confused deputy problems.
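As a sketch of that validation step (the endpoint is Google's v1 tokeninfo service; the token should only be trusted if its audience matches your own client id, otherwise another app could replay tokens issued to it):

// Validate an access token against Google's tokeninfo endpoint.
function validateToken(token, clientId, callback) {
  $.getJSON('https://www.googleapis.com/oauth2/v1/tokeninfo',
    { access_token: token },
    function (info) {
      // Only accept tokens that were issued to our own application.
      callback(info.audience === clientId);
    });
}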
Here is a short example implementation of the flow using the SoundCloud API and jQuery, taken from this answer:
<script type="text/javascript" charset="utf-8">
  $(function () {
    var extractToken = function(hash) {
      var match = hash.match(/access_token=([\w-]+)/);
      return !!match && match[1];
    };

    var CLIENT_ID = YOUR_CLIENT_ID;
    var AUTHORIZATION_ENDPOINT = "https://soundcloud.com/connect";
    var RESOURCE_ENDPOINT = "https://api.soundcloud.com/me";

    var token = extractToken(document.location.hash);
    if (token) {
      $('div.authenticated').show();
      $('span.token').text(token);

      $.ajax({
        url: RESOURCE_ENDPOINT
        , beforeSend: function (xhr) {
          xhr.setRequestHeader('Authorization', "OAuth " + token);
          xhr.setRequestHeader('Accept', "application/json");
        }
        , success: function (response) {
          var container = $('span.user');
          if (response) {
            container.text(response.username);
          } else {
            container.text("An error occurred.");
          }
        }
      });
    } else {
      $('div.authenticate').show();

      var authUrl = AUTHORIZATION_ENDPOINT +
        "?response_type=token" +
        "&client_id=" + CLIENT_ID +
        "&redirect_uri=" + window.location;

      $("a.connect").attr("href", authUrl);
    }
  });
</script>
<style>
  .hidden {
    display: none;
  }
</style>

<div class="authenticate hidden">
  <a class="connect" href="">Connect</a>
</div>

<div class="authenticated hidden">
  <p>
    You are using token
    <span class="token">[no token]</span>.
  </p>
  <p>
    Your SoundCloud username is
    <span class="user">[no username]</span>.
  </p>
</div>
For sending XMLHttpRequests (what the ajax() function does in jQuery) using AngularJS, refer to their documentation of the $http service.
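A rough AngularJS equivalent of the $.ajax call above might look like this, with $http injected into your controller ($scope.username is an assumed binding for illustration):

// Fetch the protected resource with the token, AngularJS-style.
$http.get(RESOURCE_ENDPOINT, {
  headers: {
    'Authorization': 'OAuth ' + token,
    'Accept': 'application/json'
  }
}).success(function (response) {
  $scope.username = response.username;
});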
If you want to preserve state, when sending the user to the authorization endpoint, check out the state parameter.
Here's an example of the Authorization Code Grant approach to get a token from an OAuth server. I used jQuery ($) for some operations.
First, redirect the user to the authorization page:
var authServerUri = "http://your-oauth2-server.com/authorize",
    authParams = {
      response_type: "code",
      client_id: this.model.get("clientId"),
      redirect_uri: this.model.get("redirectUri"),
      scope: this.model.get("scope"),
      state: this.model.get("state")
    };

// Redirect to the authorization page.
var replacementUri = authServerUri + "?" + $.param(authParams);
window.location.replace(replacementUri);
When the user has granted authorization and is redirected back, parse the query string to get the code:
var searchQueryString = window.location.search;
if (searchQueryString.charAt(0) === "?") {
    searchQueryString = searchQueryString.substring(1);
}
// $.deparam.fragment comes from the jQuery BBQ plugin.
var searchParameters = $.deparam.fragment(searchQueryString);
if ("code" in searchParameters) {
    // TODO: construct a call like in the previous step using $.ajax() to get the token.
}
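The TODO call might look like the following (the token endpoint URL is a placeholder, and sending a client secret from the browser is only acceptable if your OAuth server treats this as a public client):

// Exchange the authorization code for a token.
$.ajax({
    url: "http://your-oauth2-server.com/token",
    type: "POST",
    data: {
        grant_type: "authorization_code",
        code: searchParameters.code,
        redirect_uri: "YOUR_REDIRECT_URI",
        client_id: "YOUR_CLIENT_ID"
    },
    success: function (response) {
        // response.access_token can now be used in an Authorization header.
        console.log(response.access_token);
    }
});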
You could implement the Resource Owner Password Credentials Grant in the same manner using jQuery or plain XMLHttpRequest, without making any redirects, because on each redirect you'll lose the state of your application.
For me, I used HTML5 local storage to persist the state of my application for data which was not likely to pose a security threat.