Linkey update w/c 18/02/2013

It’s been a long time since I last gave an update on Linkey, so I’ll try to summarise what has been happening recently.

In late January, Chris Brown, the manager of the Access and Identity Management (AIM) programme, visited us here in Lincoln. We spent a few hours looking over the case study that I’ve been working on, and Joss and I discussed how the work we’re doing on Linkey fits into the ecosystem that LNCD have developed. A number of actions came out of the meeting:

  1. Get in touch with Chris’s contacts at Kent and Newcastle to discuss their AIM-related projects and what we can learn from them.
  2. Get in touch with contacts at Warwick University, who have a number of APIs and tools secured with OAuth 1.0. If possible, I would like to include their work in the case study.
  3. Speak to JISC InfoNet to see if a toolkit could be created out of the case study.

I’ve spent a few months trying to find an appropriate conference to attend so that I can engage with the AIM community and discuss the project and its outputs. After much searching, I’m happy to say that I’m going to the 16th Internet Identity Workshop in Mountain View, California. Being in the heart of Silicon Valley, I’m really looking forward to meeting developers and evangelists from companies such as Google, Facebook and Microsoft, all of whom are trying to be “our” online identity provider.

As part of my research for the case study I have been looking into the many use-cases of OAuth. Recently I’ve been experimenting with securing an RDF SPARQL server so that it only responds with triples that an access token (sent in an HTTP header) is allowed to access. I will blog about this once I’ve got a bit further with it.

Since January I’ve been working on the OAuth code, which has had some great contributions from developers in the US. It is now finished and running in production in several places (more on this in another blog post). Now that the code is complete, I can start developing a plugin for the SAML bearer grant.

Finally, the Linkey project will officially finish on the 25th June.

Life, Identity and Everything – a chat with Tim Bray and Breno de Medeiros from Google

Last night I took part in the Life, Identity, and Everything Google developer chat with Tim Bray and Breno de Medeiros.

Tim Bray is the Developer Advocate, and Breno de Medeiros is the tech lead, in the group at Google that does authentication and authorization APIs; specifically, those involving OAuth and OpenID. Breno also has his name on the front of a few of the OAuth RFCs. We’re going to talk for a VERY few (less than 10) minutes on why OAuth is a good idea, and a couple of things we’re working on right now to help do away with passwords. After that, ask us anything.

Here is the video of the chat:

To summarise the event: they said that Google is committed to OAuth 2.0 and OpenID. They intend for all Google APIs to support OAuth 2.0, and they believe it has helped Android developers because support for the OAuth dance is baked into the Android APIs, making it very simple to integrate with.

They also feel that Mozilla Persona is a promising (but complicated) protocol; however they feel that on mobile devices it will cause problems in terms of the user flow.

I asked two questions which Tim and Breno answered:

“What do you think the future of OAuth is? Eran Hammer publicly quit as the editor, claiming the v2.0 specification is ‘more complex, less interoperable, less useful, more incomplete, and most importantly, less secure’. Will there be OAuth 3?”

“Could a /.well-known/oauth file published at each provider’s endpoint help with interoperability? This file would describe whether their implementation responds with JSON or text, which grants are supported, etc.”
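To illustrate the idea, a hypothetical /.well-known/oauth discovery document might look something like the following. All of the field names and values here are invented for the sake of the example; nothing like this has been standardised:

```json
{
  "authorization_endpoint": "/oauth/authorize",
  "token_endpoint": "/oauth/token",
  "response_formats": ["json"],
  "grants_supported": [
    "authorization_code",
    "client_credentials",
    "refresh_token"
  ]
}
```

A client could fetch this file once and configure itself automatically, rather than each developer reading the provider’s documentation to discover endpoints and supported grants.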

Hawk: a new HTTP authentication scheme

Eran Hammer (formerly editor of the OAuth specifications) has introduced a new HTTP authentication scheme called Hawk.

Hawk is described as an HTTP authentication scheme that uses a message authentication code (MAC) algorithm to provide partial cryptographic verification of HTTP requests. It is related to his other new project, Oz, a web authorisation protocol promoted as an alternative to OAuth. Eran has made it clear that he doesn’t want to write a new specification after leaving the OAuth working group, so he intends to describe Hawk (and Oz) through code and leave it to someone else to write a formal specification.

What is Hawk?

An overview of Hawk is provided in the README file in the repository:

Hawk is an HTTP authentication scheme providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host.

Similar to the HTTP Basic access authentication scheme, the Hawk scheme utilizes a set of client credentials which include an identifier and key. However, in contrast with the Basic scheme, the key is never included in authenticated requests but is used to calculate a request MAC value which is included instead.

The Hawk scheme requires the establishment of a shared symmetric key between the client and the server, which is beyond the scope of this module. Typically, the shared credentials are established via an initial TLS-protected phase or derived from some other shared confidential information available to both the client and the server.

The primary design goals of this mechanism are to:

  • simplify and improve HTTP authentication for services that are unwilling or unable to employ TLS for every request,
  • secure the shared credentials against leakage when sent over a secure channel to the wrong server (e.g., when the client uses some form of dynamic configuration to determine where to send an authenticated request), and
  • mitigate the exposure of credentials sent to a malicious server over an unauthenticated secure channel due to client failure to validate the server’s identity as part of its TLS handshake.

Unlike the HTTP Digest authentication scheme, Hawk provides limited protection against replay attacks which does not require prior interaction with the server. Instead, the client provides a timestamp which the server can use to prevent replay attacks outside a narrow time window. Also unlike Digest, this mechanism is not intended to protect the key itself (user’s password in Digest) because the client and server both have access to the key material in the clear.

To clarify, a MAC (Message Authentication Code) is a cryptographic string sent alongside a message (such as an HTTP request) to detect tampering and forgery. The sender and the receiver use the same shared secret to generate and verify the MAC. The OAuth 1.0 specification used MAC signatures, but these were removed in OAuth 2.0.
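As a rough sketch of the mechanics (this is not Hawk’s actual normalized-string format, just the general HMAC idea), generating and verifying a MAC over a message in PHP could look like this:

```php
<?php
// Generate a base64-encoded HMAC-SHA-256 over the message using the shared secret
function generateMac(string $message, string $secret): string {
    // hash_hmac with $binary = true returns the raw digest, which we then base64-encode
    return base64_encode(hash_hmac('sha256', $message, $secret, true));
}

// The receiver recomputes the MAC and compares it in constant time
function verifyMac(string $message, string $secret, string $mac): bool {
    return hash_equals(generateMac($message, $secret), $mac);
}

// Using the key from the protocol example above
$secret = 'werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn';
$mac = generateMac("GET /resource/1?b=1&a=2", $secret);

var_dump(verifyMac("GET /resource/1?b=1&a=2", $secret, $mac)); // bool(true)
var_dump(verifyMac("GET /resource/1?b=1&a=3", $secret, $mac)); // tampered, so bool(false)
```

Because the secret itself never travels with the request, an eavesdropper who sees the MAC cannot forge a different request without knowing the key.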

Protocol Example

To demonstrate how Hawk authentication works there is a protocol example:

The client attempts to access a protected resource without authentication, sending the following HTTP request to
the resource server:

GET /resource/1?b=1&a=2 HTTP/1.1

The resource server returns the following authentication challenge:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Hawk

The client has previously obtained a set of Hawk credentials for accessing resources on the “”
server. The Hawk credentials issued to the client include the following attributes:

  • Key identifier: dh37fgj492je
  • Key: werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn
  • Algorithm: hmac-sha-256

The client generates the authentication header by calculating a timestamp (e.g. the number of seconds since January 1,
1970 00:00:00 GMT) and constructs the normalized request string (newline separated values):


The request MAC is calculated using the specified algorithm “hmac-sha-256” and the key over the normalized request string.
The result is base64-encoded to produce the request MAC:


The client includes the Hawk key identifier, timestamp, and request MAC with the request using the HTTP “Authorization”
request header field:

GET /resource/1?b=1&a=2 HTTP/1.1
Authorization: Hawk id="dh37fgj492je", ts="1353832234", ext="some-app-data", mac="/uYWR6W5vTbY3WKUAN6fa+7p1t+1Yl6hFxKeMLfR6kk="

The server validates the request by calculating the request MAC again based on the request received and verifies the validity and scope of the Hawk credentials. If valid, the server responds with the requested resource.

Security Considerations

The README also covers a number of security considerations, which can be seen here

Hawk vs. OAuth

The Hawk documentation says that it can be used for client-to-server authentication, but I think it would also make an easier-to-implement authentication protocol than OAuth for machine-to-machine authentication.

For delegating access to protected resources, the documentation recommends Oz as an alternative to OAuth. I will review Oz in another post.

PHP Implementation

I’ve had a go at implementing Hawk in PHP –

Client Usage

Assume you’re hitting up the following endpoint:

And the API server has given you the following credentials:

  • Key – ghU3QVGgXM
  • Secret – 5jNP12yT17Hx5Md3DCZ5pGI5sui82efX

To generate the header run the following:

$key = 'ghU3QVGgXM';
$secret = '5jNP12yT17Hx5Md3DCZ5pGI5sui82efX';
$hawk = Hawk::generateHeader($key, $secret, array(
    'host'    =>  '', // you must set this
    'port'    =>  443, // you must set this
    'path'    =>  '/user/123',
    'method'  =>  'GET' // could be POST/DELETE/etc
));

You can also pass in additional application specific data with an ext key in the array.

Once you’ve got the Hawk string include it in your HTTP request as an Authorization header.
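For example, attaching the header to an outgoing request with a stream context might look something like this ($hawk stands in for the string returned by Hawk::generateHeader(), and the endpoint URL is illustrative):

```php
<?php
// $hawk stands in for the string returned by Hawk::generateHeader() above
$hawk = 'Hawk id="ghU3QVGgXM", ts="1353832234", mac="..."';

// Build a stream context that sends the Hawk string as the Authorization header
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => 'Authorization: ' . $hawk,
    ),
));

// Perform the authenticated request (illustrative URL, commented out here)
// $response = file_get_contents('https://api.example.com/user/123', false, $context);
```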

Server Usage

On your API endpoint, if the incoming request is missing an Authorization header then return the following status line and header:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Hawk
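As a minimal sketch of that check (the function name is illustrative, not part of the Hawk code above), the decision could be factored out so it is easy to test:

```php
<?php
// Return the 401 challenge lines if the request has no Authorization header,
// or null if the header is present and normal verification should continue
function challengeIfMissingAuth(array $requestHeaders): ?array {
    if (!isset($requestHeaders['Authorization'])) {
        // No credentials supplied: challenge the client with the Hawk scheme
        return array('HTTP/1.1 401 Unauthorized', 'WWW-Authenticate: Hawk');
    }
    return null; // header present: carry on and verify it
}

// With no Authorization header, the client gets the challenge
print_r(challengeIfMissingAuth(array()));
```

The returned lines can then be emitted with PHP’s header() function before exiting.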

If the request does contain a Hawk authorization header then process it like so:

$hawk = ''; // the authorisation header

// First parse the header to get the parts from the string
$hawk_parts = Hawk::parseHeader($hawk);

// Then with your own function, get the secret for the key from the database
$secret = getSecret($hawk_parts['id']);

// Now validate the request
$valid = Hawk::verifyHeader($hawk, array(
        'host'    =>  '',
        'port'    =>  443,
        'path'    =>  '/user/123',
        'method'  =>  'GET'
    ), $secret); // returns true if the request is valid, otherwise false

Update 26th November

Last week, after finally getting access to one of the new UAG servers, I began working through the Microsoft Forefront UAG 2010 Administrator’s Handbook, which details how to configure the software and hook different applications up to it. I’m not quite in a position to write anything more detailed at the moment, but hopefully later in the week there will be more to say.

OpenAthens LA is being installed by ICT. This will improve the flow of metadata between the university and services. It will also reduce the number of authentication screens for users.

Last week I started writing the case study about institutional uses of OAuth. I’ve written about 1800 words so far and I anticipate writing in the region of 5000-6000 words. I’m intending to have a first draft finished by the end of January, when Chris Brown, the programme manager for JISC AIM, will be visiting us for the day to look over it and help us plan the last few months of the project.

This week I’m going to review Hawk, a new HTTP authentication scheme by Eran Hammer. It is the basis of another proposal, Oz, which Eran is putting forward as an alternative to OAuth.

Update 15th November

It has been a busy few weeks, culminating in a work trip to the USA last week. The trip was quite beneficial: being stuck on a plane for 9 hours each way, or completely disconnected from the world 8,300ft up in the Colorado Rockies in a 1970s cabin with no phone signal, TV or Internet, allows for lots of thinking.

I’ve come up with a list of things that I want to achieve for Linkey by the end of Semester 1 (end of January ’13):

  • Complete the first draft of a case study looking at institutional uses of OAuth.
  • Invite our program manager, Chris Brown, for a site visit and spend the day looking over the case study and planning the last few months of the project.
  • Formally release the code I have been working on, including documentation, unit tests and tutorials.
  • Look at developing a .well-known standard for describing OAuth endpoints and enabling discovery (more on this in a future post).
  • Work with my colleagues in ICT to start implementing a joint OAuth and Microsoft UAG web single sign-on implementation.

I’ve been asked to work with the Orbital team to develop an authentication endpoint that will allow University of Lincoln staff and Siemens staff to securely sign in to applications developed by that project, with each member of staff using their own institutional/company credentials.