OAuth2 Sender Constraint Support: DPoP and MTLS with Brian Campbell

This is a podcast episode titled, OAuth2 Sender Constraint Support: DPoP and MTLS with Brian Campbell. The summary for this episode is: On the first episode of Identity, Unlocked, host Vittorio Bertocci, Principal Architect at Auth0, is joined by Brian Campbell. Brian joins the show to discuss sender constraint and associated specifications. Like this episode? Be sure to leave a five-star review and share Identity, Unlocked with your community! You can connect with Vittorio on Twitter at @vibronet, Brian at @__b_c, or Auth0 at @auth0.
What is sender constraint?
01:12 MIN
What are the current standards with sender constraint?
00:46 MIN
Campbell explains mutual TLS profile for OAuth.
00:44 MIN
Campbell explains DPoP.
01:09 MIN
If DPoP blossoms like some anticipate, is there any reason to continue with mutual TLS?
00:44 MIN
Campbell's opinion on how DPoP can be rolled out.
00:52 MIN

Vittorio: Buongiorno everybody and welcome. This is Identity, Unlocked and I'm your host Vittorio Bertocci. Identity, Unlocked is the podcast that discusses identity specs and trends from a developer perspective. Identity, Unlocked is powered by Auth0. In this episode, we focus on sender constraint and associated specifications. Our esteemed guest today is Brian Campbell, Distinguished Engineer at Ping Identity and author of many IETF specs. Welcome, Brian.

Brian: Thank you, Vittorio. Pleasure to be here.

Vittorio: Thank you for joining me today. Can we start with how you ended up working in Identity?

Brian: Sure, we can try. It was sort of by accident. I graduated from college with a bachelor of arts in computer science, which is as self-contradictory as it sounds, right around the time of the dot-com bust, so things were a little bit interesting. I bounced around from a few different developer jobs and was at a place for a while that I was happy with, but I was growing sort of impatient with the work and wanted to look for something different. Ping Identity was early stage, just about to get some funding, and a mentor of mine, a former mentor that I had worked with as an intern a few years prior, was there, and mostly I just wanted to follow him and go work with somebody that I liked and had learned a lot from previously. So not thinking a whole lot about the actual company or the industry, I followed a guy that I respected and wanted to learn some more from over to Ping. And 16 and a half years later, I find myself still working for Ping. And while I still liked that guy, he left less than a year after I joined. So my reasons for being there maybe weren't really focused on Identity, but in the long time I've been there since, Identity's become the main focus of my work.

Vittorio: Wow. That's great. And I can see the bait and switch of a journey there. When I moved from Italy, the guy that hired me within one month changed roles and, whup, disappeared, and all my furniture was already on a ship coming through-

Brian: Right, right.

Vittorio: But that's great. That's great. I'm so glad it happened, because your contribution in this space has been, as we'll see, really, really important. One thing that I like to ask all our guests, given that we are all a bit of silver foxes here, and especially me, is: do you remember how we met? We go really a long way back, right? Like, how did we first collide?

Brian: So to be honest, I don't remember exactly. You have always been at least in my view, a fixture of the Identity industry, as long back as I can think, working in this field, I've known who you were and over time we seem to have just become colleagues and friends, but I can't pinpoint the exact time. I'd be guessing it was probably at a Cloud Identity Summit somewhere maybe early teens, but I can't say for sure. Do you know?

Vittorio: I have a super vague feeling that that's exactly how it went. Like, you were always around in this space and I think it just happened very organically. The first meaningful interaction that I recall was during either sender constraints or token exchange. Might've been token exchange, when we were discussing that the act-as semantics were the opposite of WS-Trust. That's the first time that I remember that we had a discussion and exchanged opinions, but I'm sure that we'd been rubbing shoulders for many years.

Brian: I think you're right. I think that was the first professional interaction, but I think we rubbed shoulders at least at conferences or on Twitter before that. I guess I just have to say I was pleased and flattered that I ended up in the acknowledgements of one of your books. It was sort of my "I have arrived" moment, and I was especially appreciative that you didn't thank me so much for my contribution to the industry or the technology, but more for my trash talking. So I really did appreciate that. I thought you captured it perfectly, so thank you.

Vittorio: Fantastic. All right, great. So that's a great introduction. Let's jump into the meat of the discussion today. And the topic of today is sender constraints. So what is this about? What is the problem that we are trying to solve with sender constraint?

Brian: Sure. So in typical OAuth, the OAuth we all know and love and probably use almost every day, the tokens that fly around are what are called bearer tokens. What that means is that whoever has the token can use it, and that makes things simple. It's a nice simplicity feature of the protocol, but it also means that if a token falls into the wrong hands somehow, and we've seen numerous published attacks that have managed to compromise OAuth implementations and deployments, so it does happen, in any of those cases where a token is lost or stolen or somehow falls into the wrong hands, that token is as good to the attacker as it is to the person or system to which it was intentionally issued. And so what sender constraining does is try to add a significant additional layer of security by constraining the legitimate sender of that token to the client or software to whom it was issued, meaning that it constrains who's allowed to use it. When you think about the term, it's sort of confusing. But again, sender constraint is constraining the sender, who is in turn allowed to send the token and use it. And typically this is done by the issuer of the token, the authorization server, embedding some kind of reference to a public key inside, or referenced by, the token itself, which is basically saying: whoever sends me this token must also demonstrate in some fashion possession of the private key corresponding to the public key within this token. And if they can't do it, then they are not allowed to use the token. It's illegitimate for it to be sent by somebody that doesn't possess this key.
And so with sender constrained tokens, the client itself typically shows the authorization server its key, somehow proves possession of the key to the authorization server, the authorization server issues the token with a reference to the public key in it, and when the client turns around to use the token, it then has to do something to prove that it possesses the private key associated with the public key in that token. And if it can't do that, then the token is considered invalid and rejected. And this ensures that if a token is stolen by whatever means, as long as that attack vector doesn't also compromise the associated key pair, whoever steals the token is unable to use it, because they cannot simultaneously demonstrate proof of possession of the corresponding key. And typically bearer tokens fly around, they're sent in the clear, whereas a key pair like this is typically much better protected on the client; oftentimes the private key can or will be stored within secure hardware so it can never even physically leave the device. So really, in summary of the long rambling statement, the idea is to bind a token to a particular key and only allow its use by somebody that can prove possession of that key, thus keeping it from being used by malicious actors. And you've heard me say proof of possession a lot, and that's why sometimes you'll also hear sender constrained tokens called PoP tokens, which is for proof of possession, or sometimes also holder-of-key tokens.
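The key binding Brian describes is typically carried in the JWT confirmation claim (`cnf`, RFC 7800); DPoP later used the `jkt` member, a JWK thumbprint per RFC 7638, as the binding value. A minimal sketch, with placeholder key coordinates rather than a real key:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint of an EC public JWK: base64url-encoded SHA-256
    over the canonical JSON of the required members in lexicographic order."""
    canonical = json.dumps(
        {k: jwk[k] for k in ("crv", "kty", "x", "y")},
        separators=(",", ":"), sort_keys=True,
    ).encode()
    return base64.urlsafe_b64encode(hashlib.sha256(canonical).digest()).rstrip(b"=").decode()

# Hypothetical client public key; x and y are placeholders, not real coordinates.
client_jwk = {"kty": "EC", "crv": "P-256", "x": "<x-coordinate>", "y": "<y-coordinate>"}

# Access token payload with the binding: whoever presents this token must
# also prove possession of the private key matching the thumbprint in cnf.jkt.
access_token_claims = {
    "iss": "https://as.example.com",  # hypothetical authorization server
    "sub": "user123",
    "exp": 1700000000,
    "cnf": {"jkt": jwk_thumbprint(client_jwk)},
}
```

A resource server that finds a `cnf` claim in the token knows it must reject the request unless the caller also demonstrates possession of the matching key.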

Vittorio: And that terminology comes from SAML, right?

Brian: Holder of key, I think, goes back to SAML and maybe even WS-*, although I don't know the exact origins of it, but yeah, holder of key you see a lot in SAML, and sender constraint was maybe not coined, but popularized in some of the OAuth work early on, and you hear PoP a lot as well, probably because it's convenient and kind of fun to say.

Vittorio: Makes sense. That's fascinating. This is great. Okay. It looks like it eliminates one entire class of man-in-the-middle attacks, because with bearer tokens people always worry about that. At this point, if a key never really leaves the client, then you achieve enforcement of the sender. It sounds fantastic.

Brian: I do want to say, maybe, that the man in the middle, or depending on the nature of the man in the middle, is oftentimes difficult to protect against even with certain proof of possession mechanisms. It depends on the specific characteristics of the proof of possession mechanism and the other protections in place, particularly the security of the TLS layer between the two, but depending on what you mean by man in the middle, that may or may not pan out to be true. Really, I like to think about PoP and sender constraint as this generalized security property that an illegitimate acquirer of a token would not be able to use it regardless of how they acquired it. But unfortunately, there are a few little caveats on exactly how strong those protections are depending on the other factors in play, and generally the expectation is that server-side authenticated HTTPS or TLS is working and not broken, to ensure the other guarantees made available by this stuff. So sender constraint is not in any way a replacement for HTTPS; it still uses, or is sent over, HTTPS, but provides significant additional security benefits, mostly against the possibility of a token being stolen or leaked via means other than that secure channel.

Vittorio: That makes sense. Great. That's the theory and it's a great segue to what's actually happening right now in terms of what specifications are out there and what's your involvement with it? I think that there are two specifications that are often coming together and sometimes even pitted against each other in terms of the options like which one to use. Can you tell us a bit about this?

Brian: Sure. But maybe I should mention a little bit of the prior art and history as well. I mentioned earlier, and you said, that's the theory and it's great. It turns out, though, that the actual piece of proving possession can be rather cumbersome and difficult. And so there are two sort of current active specifications in the space, but there's a somewhat long history of standards and attempts at standards before. And they've run into various problems, largely because it's just a hard, hard problem space. There was some early sort of pseudo HTTP signature work, where a signature over the HTTP message would be used as a proof mechanism. There was work around a family of specifications conveniently called token binding that has sort of fallen by the wayside due to lack of adoption, but there's a lot of history and lessons learned from those things. Currently, there are what I think of as the two main... and as you mentioned, sometimes competing specifications, because they accomplish the same thing in different ways, or at least largely the same thing. The first one is the mutual TLS profile for OAuth, and it does a number of things, including some client authentication options and other things. But for the purposes of this discussion, it's largely about binding an access token to the client certificate used in a mutually authenticated TLS connection over HTTPS. And that document has gone through the IETF, through the OAuth working group, and was published earlier this year as an RFC. Strangely, I'm going to have to look it up. Yeah, it's RFC 8705.

Vittorio: And you are an author on it, right?

Brian: I am an author on it. Yes. I was interested in the work. There were a lot of fairly time-constrained and serious requirements coming out of open banking type and PSD2 type deployments, particularly in Europe but also other parts of the world, where they wanted to, based on governmental regulations and requirements, make available banking APIs, open them up to clients, and do so in a standardized way that didn't involve password sharing or screen scraping. But they needed something stronger than just bearer tokens, and given the ubiquity of TLS, and even though it's sometimes a major pain in the butt, if I can say that here, to deploy mutual TLS, it is a fairly well deployed and stable technology. So the MTLS profile of OAuth was sort of born out of the more or less immediate need for some of those deployments to have something stronger, and thus works through requiring some sort of mutually authenticated TLS connection between the two components, both between the client and the authorization server as well as the client and the resource server, and then binding the issued access token to the client certificate, actually to a hash of the certificate that was presented by the client. And thus, if somebody were to somehow get hold of an access token, they couldn't use it, because they don't have the client certificate that would be needed to establish the connection.
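The certificate hash Brian mentions is, per RFC 8705, the base64url-encoded SHA-256 of the certificate's DER encoding, carried in the token's `cnf` claim as `x5t#S256`. A minimal sketch, using placeholder bytes in place of a real certificate:

```python
import base64
import hashlib

def cert_thumbprint(cert_der: bytes) -> str:
    """RFC 8705 binding value: base64url(SHA-256(DER certificate)), unpadded."""
    return base64.urlsafe_b64encode(hashlib.sha256(cert_der).digest()).rstrip(b"=").decode()

# Placeholder bytes standing in for the DER encoding of the client certificate.
client_cert_der = b"<DER bytes of the client certificate>"

# The authorization server embeds the hash in the issued access token...
token_claims = {"cnf": {"x5t#S256": cert_thumbprint(client_cert_der)}}

# ...and the resource server later checks that the certificate presented on
# its own mutual-TLS connection hashes to the same value before accepting.
presented_cert_der = client_cert_der  # same cert used on the RS connection
assert token_claims["cnf"]["x5t#S256"] == cert_thumbprint(presented_cert_der)
```

A stolen token fails this check because the thief cannot complete a TLS handshake with the bound certificate's private key.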

Vittorio: So to summarize, this spec was urgently needed. It relies on well-established technologies such as client TLS, so existing stuff, as opposed to the token binding you mentioned earlier, which was dreaming big but was using things that didn't exist yet. But those existing technologies happen to be, as you colorfully put it, a pain in the butt to deploy.

Brian: Correct. And actually, the development of that standard largely happened during a time when people, myself included, were still very hopeful about the future of token binding. So it was viewed, at least by me and I think others, as sort of a stop gap, a you-can-do-this-now solution that would be useful, but hopefully just hold us over until token binding could be more broadly deployed and we could rely on that.

Vittorio: So all of your usability chips were bet on token binding, but token binding didn't come, and so I guess that's how we get to the DPoP part of the story.

Brian: Exactly right. So the other specification, which is not actually a specification; it is a working group draft in the OAuth working group of the IETF, which means it's been accepted as something that the group is working on, actively working on, and hoping to move towards becoming an RFC, but that doesn't always happen. It's not a guarantee; it is just a draft, currently in the process of being worked on. But what DPoP is, and it's a sort of acronym, a backronym, meant to stand for Demonstrating Proof of Possession at the application layer. And DPoP works differently than the mutual TLS stuff, by trying to do some limited signature work at the HTTP application layer to do that proof of possession in a way that is easier to do and to deploy than token binding was. Token binding crossed a lot of layers and required pretty deep integration, and oftentimes API hooks into the TLS layer that weren't even available. DPoP aims to avoid that kind of thing by being something that uses relatively well known technologies, such as JWT, and places them at the HTTP layer, where the average developer has access to these places in the stack and access to libraries that can implement J-W-T, or JWT. But then it also doesn't come with the same kind of baggage of difficulty in deployment and maintenance of mutual TLS, as well as the difficulty and potential usability problems of trying to use mutual TLS from the browser. And if you've ever gotten one of those please-select-your-certificate browser notification prompts... I think even people like you and I that work in this industry are often confused and baffled by those, and if you think about what the average type of browser user would make of a certificate selection popup, it's really not intuitive at all. And it's something that the browsers haven't invested money in because it's not used, and because it's not used, they don't invest in it, and it's a very, very difficult user experience.
So DPoP was an idea sort of born out of the need for something to do sender constrained tokens that could be deployed in a relatively easy fashion. None of this stuff is easy, but something done at the application layer is a lot easier than mutual TLS or something like token binding. So in JavaScript from a browser, or from whatever sort of platform and language coding environment you have on the server side, it should be relatively easy and possible to implement this stuff, both client and server side, in whatever environment you have. And hopefully something relatively simple and deployable is something that we could get out and get working for people in the real world.

Vittorio: That's great. So now you already know what I'm going to ask, but I'm going to ask it anyway.

Brian: Do I?

Vittorio: Yes. So we have two specifications that both give your resource calls the sender constraint property: MTLS and DPoP. MTLS today is already an RFC, already a full standard, but it suffers from challenging deployment practices, whereas DPoP doesn't. Now, let's assume DPoP follows the arc of the IETF and blossoms into an RFC of its own. Would I have any reason to do MTLS at all? Can I put all of my eggs in the DPoP basket?

Brian: I think timing maybe is one. As of today, as you said, MTLS is real. It has an RFC number. It is demonstrably interoperable across deployments, and because it's an RFC, and RFCs never change, there's no risk of breaking changes being introduced in the protocol. That said, it is relatively niche, and as we've established, has some real difficulties in deployment. So I think it's a viable option for certain near term high security needs, and it can be deployed, it can work. I don't want to overstate the difficulties, but it certainly has its drawbacks and its difficulties. And we are in the position, unfortunately, of having sort of these two different ways to accomplish the same things, but they are different enough that I think both the technical differences and the timeline of development differences justify having two. I know you might use a different word, but it's where we're at. And I think, given that the one is already standard and the other one is in development, the need still persists, and it's going to take longer to develop a new sort of application level protocol than it would be to reuse the existing stuff of MTLS. So DPoP offers a lot of promise, I think, and relative simplicity and broad deployability, but it does have the potential risk of changes being introduced in that arc of the IETF process, as you mentioned, which does bring some risks to developers. There's always a risk in developing against draft standards that things may change down the line. But as we work on it and refine it, hopefully that risk will narrow as we move forward and narrow in on the specifics. And I do think it's relatively mature as-is right now, so I know of at least a few implementations and deployments that are actively playing with it and deploying it now and having some success with it. So I think I've stumbled around and kind of avoided your question at large, but...

Vittorio: No, I think you gave a good answer, which is very conservative, and I'll just decorate it a bit and see what's your reaction. I think that you brought up a good point, which is that this is work in progress, and so if people base an implementation on work in progress, they should be prepared to have things break. But at the same time, you also said this thing is reasonably mature, and so I will bring up a historical example: token exchange, which was not done for a long time, and yet a number of vendors picked a draft, implemented the draft, and got good business value out of implementing the scenario that this thing enabled. Sure, they didn't have any chance of interoperating with anyone else, but if people stayed within their walled garden, they were pretty well off. So yeah, you're right to warn people working against drafts, but at the same time, when I hear from you that this is fairly mature, then I am hopeful that people can already start to take advantage of this, knowing that they might not interoperate.

Brian: Yeah, that's a good example. And I think there have been other examples of things that were implemented against early drafts. In fact, the MTLS work did go into deployment in some of the open banking environments prior to actually being ratified as an RFC. There were a number of deployments of the original OAuth specification that picked a draft number and went against it. So there is risk, and there's also reward, and I would also say that those implementations that come out early not only derive some business value from it, but there's also an opportunity to feed really valuable information from the implementation perspective back into the standard and improve it based on that experience. So I don't mean to push people away from implementation, I just want to be realistic about the potential risks and benefits of doing so prior to standardization.

Vittorio: And that's very fair. Now, just before we close: if someone owns an authorization server, or an SDK, or a client SDK, DPoP touches all of those, right? So, like, in order to close the scenario, you need to have all of those parts participating. What's your take on that?

Brian: Yes and no. And it's something that's being discussed right now, specifically how to ensure that DPoP can be rolled out in phased type environments where not each individual component fully supports it, avoiding sort of big bang rollout type deployments and allowing for phased stuff where different components pick up support one at a time. In general, to get the benefits of it, yes, they all need to be updated. And so that comes down to a few things; it's really not that significant on each piece. Basically, what the client needs to do is, on each request, both to the authorization server as well as the resource server, it creates and signs a little JWT that contains its public key and a little bit of information about the HTTPS request that it's going to be sent with, and then sends that as an HTTP header. And that's enough information to basically prove possession of its private key with respect to that particular request. So the request URI and method are there, as well as an identifier and some time bounding. And so if somehow that particular, what's called the DPoP proof, leaks, it couldn't be reused, or could only be reused within a very, very tight time window. But that's all there is to it from the client perspective: just sending this additional header with a signed message proving possession of the key. On the authorization server side, it needs to verify and validate all that, and if all of that is well and good, then encode a hash of the public key into the access token that it issues and send that back to the client, along with an indicator that it is in fact a DPoP token rather than a standard bearer token. And then, to use the token, the client sends that token to the resource server, the token itself, but it also sends one of those DPoP proof signature JWTs that I was talking about. So it sends a proof showing that it has access to the key, and sends the token, which is bound to the key, with it.
And the resource server both needs to do its normal validation and checking of the access token, but also validate that signature of the proof and make sure it matches up to the key in the token itself. So ideally all three components are updated at the same time to get the full benefits out of all this, but a client could send a DPoP header to an authorization server that doesn't yet support this, and it would just be ignored because it's an unknown header. Or, if you've got the authorization server and the client working, you could issue DPoP bound tokens, but sending them to a regular old resource server that accepts bearer tokens almost certainly will work, and that gives you an opportunity for sort of a phased, or even mixed token style, deployment over time, which hopefully will help ease the burden of rollout and allow for things to be rolled out piecemeal, but still eventually get to a nice end state.
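The client-side steps Brian walks through can be sketched concretely. Per the DPoP draft, the proof is a JWT of type `dpop+jwt` carrying the public key in its header and the HTTP method (`htm`), URI (`htu`), a unique id (`jti`), and a timestamp (`iat`) in its claims. The sketch below builds the unsigned parts with the standard library only; the key coordinates and URL are placeholders, and the actual signature step (done with the private key matching the header's `jwk`) is elided to stay dependency-free:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Header: the proof is typed dpop+jwt and carries the client's public key.
header = {
    "typ": "dpop+jwt",
    "alg": "ES256",
    "jwk": {"kty": "EC", "crv": "P-256", "x": "<x>", "y": "<y>"},  # placeholder key
}

# Claims: method, URI, unique id, and timestamp together limit any leaked
# proof to one endpoint and a very tight replay window.
claims = {
    "htm": "POST",
    "htu": "https://api.example.com/resource",  # hypothetical resource URI
    "jti": str(uuid.uuid4()),
    "iat": int(time.time()),
}

signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
# A real client signs signing_input with its private key; omitted here.
dpop_proof = signing_input + "." + "<signature>"

# The proof travels in its own header, alongside the DPoP-bound access token.
request_headers = {
    "Authorization": "DPoP <access-token>",
    "DPoP": dpop_proof,
}
```

The resource server reverses this: it validates the proof signature against the `jwk` in the proof header, checks `htm`/`htu`/`iat`, and confirms that the key's hash matches the `cnf` value inside the access token.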

Vittorio: That's great. What about refresh tokens? Can I bind refresh tokens?

Brian: Oh, an excellent question. So in general, from the very base OAuth standard, a refresh token is bound to the client to whom it was issued, and so if that client is a so-called confidential client and has some form of client authentication, that token is already effectively sender constrained. It's constrained to only be used by that client, and that client must always authenticate itself to do anything. So in order to use the refresh token, that client has to authenticate, and that in effect, I'm talking in circles, but that constrains the token to the client. Now, there is one area that's been popular in OAuth that doesn't have such a sender constraint, which is public clients. Clients running as mobile native apps or in-browser JavaScript style apps are typically public clients, which means they have a client ID but no associated credentials. And that means that those refresh tokens, if they're issued to that kind of client, are effectively bearer tokens. They are sender constrained in that they're required to be presented by that same client ID, but that client ID doesn't have any form of authentication, so there's no real protection there. What DPoP does is it offers DPoP binding of refresh tokens only for those public clients. So for refresh tokens that would otherwise be unconstrained, it adds this additional layer of constraining the refresh token and binding it to the DPoP public key. So it fills in the sender constraint gap for refresh tokens, exactly in the small area where it's not currently possible.
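For a public client that sent a DPoP proof with its token request, the token endpoint response signals the binding via the `token_type` value. A hypothetical response, with placeholder token strings:

```python
import json

# Hypothetical token endpoint response for a *public* client that included a
# DPoP proof: both tokens come back bound to the client's DPoP key, and the
# token_type tells the client the access token must be sent with a DPoP
# proof rather than as a plain bearer token.
token_response = json.loads("""{
  "access_token": "<dpop-bound-access-token>",
  "token_type": "DPoP",
  "expires_in": 3600,
  "refresh_token": "<dpop-bound-refresh-token>"
}""")

# To redeem the refresh token, the public client must present a fresh DPoP
# proof signed with the same key, which is the sender constraint in action.
is_dpop_bound = token_response["token_type"] == "DPoP"
```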

Vittorio: Wonderful. Thank you. That's perfect. So that basically means that we are now granting to public clients the same powers that before we had only for confidential clients. We allow public clients to have protected use of refresh tokens, which is fantastic.

Brian: It's a nice addition. Yes.

Vittorio: Very nice. Wonderful. Brian, thank you so much for your time. This was incredibly interesting. I hope that you'll come back on the show in the future because you have your hands in so many different jars and I'm hoping to extract at the same level of wisdom also for other areas.

Brian: Well, thank you for having me. It's been a pleasure and I'd be happy to come back sometime.

Vittorio: Wonderful. I'll take you up on your word for that. Thanks everyone for tuning in and until next time. Thanks everyone for listening. Subscribe to our podcast on your favorite app or at identityunlocked.com. Until next time, I'm Vittorio Bertocci and this is Identity, Unlocked. Music for this podcast composed and performed by Marcelo Woloski. Identity, Unlocked is powered by Auth0.



The mechanism described by OAuth2 for using access tokens to access resources, as defined in the bearer token usage specification (RFC6750), simply entails attaching a token to the request to the API. The approach is extremely simple to implement, but token leaks can have disastrous consequences: nothing stops an attacker from using a stolen bearer token to successfully access a resource.
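RFC 6750 usage amounts to a single header. A minimal sketch with a placeholder token and a hypothetical API endpoint; note that any party holding the token string could make the identical call:

```python
import urllib.request

# RFC 6750 bearer usage: the client simply attaches the token to the request.
# Whoever holds this string can replay it, which is the risk sender
# constraint techniques are designed to eliminate.
access_token = "<access-token>"  # placeholder value
req = urllib.request.Request(
    "https://api.example.com/resource",  # hypothetical API endpoint
    headers={"Authorization": f"Bearer {access_token}"},
)
# urllib.request.urlopen(req)  # not executed here: the endpoint is illustrative
```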

“Sender constraint” indicates a series of techniques that bind tokens to a particular sender, for example by forcing the token to travel on a specific channel or requiring proof of knowledge of a given key. The goal is to guarantee that only the legitimate sender can successfully use a token to access resources, thus making it impossible for an attacker to access a resource using leaked tokens.

There are two different specifications in the OAuth 2 family offering viable sender constraint capabilities today: OAuth 2.0 Mutual TLS Client Authentication and Certificate-Bound Access Tokens (MTLS, RFC8705) and OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP).

MTLS is robust and stable, but not easy to implement in various important scenarios. For a while the identity community worked on an alternative, a set of specifications under the general token binding moniker (main one: RFC8471); however, support from key industry players disappeared or never materialized, making token binding non-viable.

DPoP quickly emerged as an easy-to-implement alternative that could fill that gap, and although it is still in draft state, it is already very popular and making quick progress.

Brian touches on all those specs during the episode, walking us through the trajectory that led us to today’s situation and taking the time to dig deeper on the trade-offs, strengths and attention points of the various approaches. The episode concludes with important practical considerations to keep in mind when planning an implementation strategy for sender constraint in your solutions today.

Make sure to subscribe and join us for the next episode, where Aaron Parecki (Senior Security Architect) talks about what's new with OAuth 2.1.

Music composed and performed by Marcelo Woloski.

Today's Host

Vittorio Bertocci | Principal Architect, Auth0

Today's Guest

Brian Campbell | Distinguished Engineer, Ping Identity