Daniel Fett on privacy-preserving measures and SD-JWT

This is a podcast episode titled, Daniel Fett on privacy-preserving measures and SD-JWT. The summary for this episode is: In this episode, Dr. Daniel Fett, expert cryptographer, returns to the show to discuss the landscape of privacy-preserving measures (such as selective disclosure, zero-knowledge proofs or ZKP, etc.) that are emerging to augment existing technologies and enable new scenarios. The discussion gets very concrete when Daniel describes selective disclosure JWT, or SD-JWT, a new IETF specification he is coauthoring that offers a simple and easy-to-adopt approach to produce JWTs capable of supporting selective disclosure. Here at Identity, Unlocked, we are huge fans of this new specification, and we hope this episode will help you get started!

Buongiorno everybody and welcome. This is Identity Unlocked and I'm your host, Vittorio Bertocci. Identity Unlocked is the podcast that discusses identity specifications and trends from a developer perspective. Identity Unlocked is powered by Auth0 in partnership with the OpenID Foundation and IDPro. In this episode I wanted to talk about SD-JWT, or selective disclosure JWT, which is a very promising new specification that was just adopted as a work item by the OAuth working group at the IETF. But in fact, I wanted to take this opportunity to zoom out a bit and discuss many of the things that are hard or impossible to do with the technologies that we are used to today. Like JWT itself, OpenID Connect, the various flows that we use for obtaining identity: they have really good properties which allowed us to bring identity to the place it is today. But they do have some intrinsic limitations, and regardless of whether those limitations are actually a big deal or not, it doesn't matter; I just wanted to flesh those out and clarify them a bit. And for today's topic I could not think of anyone better than Dr. Daniel Fett, security specialist at yes.com and an old acquaintance of the show, because we already had the honor to have Daniel on the show to explain the Security BCP to us a couple of seasons ago. And Daniel also happens to be one of the core authors of the SD-JWT specification. So I'm really looking forward to tapping his brain. Welcome, Daniel.

Daniel Fett: Thank you Vittorio. Thank you for inviting me again. Obviously the last episode wasn't too bad, so happy to be here again.

Vittorio Bertocci: The last episode was fantastic. Thank you for being willing to come back to the show. And given that we already had you on the show and already had the pleasure to hear your story, this time we can go straight into the topic. I would love to start from the big picture. If we zoom out a bit and think about traditional protocols and how they work, can we touch on the things that we know might be interesting but are very hard, or maybe sometimes impossible, to do with a traditional approach? What do you think?

Daniel Fett: So in the traditional approach, let's take OpenID Connect as an example. From a high level it's a relatively simple protocol. You have the relying party that wants to get some data about the user, and then you have the issuer, or the OpenID provider, and the OpenID provider has data about the user. Essentially, the relying party wants to get some kind of document saying: okay, this is the user, these are some attributes or claims about the user, and this is signed by the issuer, the OpenID provider. And so the relying party knows who the user is. So it's really simple; it's really just sending data from A to B. And this is obviously very successful, so it's being used a lot on the internet, almost everywhere where you log in and so on. Usually successful, but there are some scenarios that you cannot cover with that, and we have seen more and more of those in the last years. Usually what you cannot do is decouple these two steps: the provider saying, hey, this is the user, and handing out some kind of document; and the relying party getting this document. This is usually one step in OpenID Connect and in similar protocols as well. And we have seen some instances where it would be really useful to decouple these two steps. For example, say you have your smartphone and you want to put your driver's license on your smartphone, and then at a later point in time present this driver's license to some relying party, or verifier as they are often called, to show that you actually have a driver's license, or maybe to prove your age or something. With OpenID Connect, you cannot easily do this, because you would need to be online all the time: in the moment you want to present your credential, you would also need to talk to the OpenID provider to get the credential. So it's not decoupled.
And I think this is really the big picture: there are many very good use cases where you want to decouple these two steps.

Vittorio Bertocci: So just to summarize, basically the problem that you identified is that in OpenID Connect, in order to mimic what happens in real life when you present your driver's license, you must have line of sight to an active provider that is ready to serve you this document in real time, at the moment you need it. So the offline scenarios are harder to achieve. And I would also add, probably for the joy of our privacy advocates listening, that in real life when you present the driver's license, the Department of Motor Vehicles doesn't know to whom or when you are actually doing this presentation. Whereas in the scenario that you described, it looks like the provider will know where you are going.

Daniel Fett: Exactly. So maybe you have used your Microsoft, Facebook, or GitHub account somewhere online to log in. Then of course GitHub, Microsoft, or Facebook knows where you logged in. And you don't always want that. Especially when we are talking about important and universally used documents like the driver's license or your passport, you don't want everybody to know that you've just used your driver's license to buy alcohol, for example. So there's also a very strong privacy point in decoupling this.

Vittorio Bertocci: And I like how you put it, which is typical of the scientist that you are rather than the zealot: you don't always want that. It's easy for people to get polarized and say, okay, given that this is privacy preserving, I always want to hide where I'm going. In fact, as practitioners, we know that there are a number of situations in which we want business rules to run at the identity provider, and those business rules require knowing where you are going so that they can decide what goes inside the document. So there are times in which you want to do that and, just as you pointed out, there are times in which you do not. What we are thinking of here is to extend the things that we can do, not substitute them. Would you say that's a fair clarification?

Daniel Fett: Exactly. I mean, the success of OpenID Connect and other similar protocols shows that they're clearly suitable for a lot of use cases, but there are some use cases where you value privacy more than the relative simplicity of OpenID Connect, or some other advantages of this coupled approach. If you want to decouple these things, and we'll get to that later, you also have to jump through quite a number of hoops to get some of the properties that you may take for granted in OpenID Connect. So it's also about the effort required to issue credentials or to run such a protocol; that also differs a lot between OpenID Connect and the decoupled stuff we'll get to in a moment, I guess.

Vittorio Bertocci: That makes a lot of sense, but I guess if a scenario requires it, then it will be all worth it. Before we dig right into this, there is another aspect which you hear about very often when these scenarios are discussed, and that is the notion that with the classic stuff it's all or nothing. Let's say that once you get the document you described, your only choice is to present it in its entirety, because it's signed. Instead, there are scenarios in which you might want to disclose less than that. Do you want to expand a bit on that particular aspect?

Daniel Fett: Yes, and I think that's probably the most important aspect. When you're using OpenID Connect, because you're talking to the OpenID provider and the relying party almost at the same time, it's clear that the OpenID provider can issue this document, which could be an ID token for example, exactly for the use case. So the OpenID provider can say: okay, I will include the name, a unique user identifier, and maybe the age, but this relying party doesn't need to know the address, this relying party doesn't need to know the nationality, for example. So this is really simple in OpenID Connect: you can define the claims that are needed for the use case. Now, if you have a decoupled flow, where the issuance happens before the presentation, the issuance happens once and then the same credential is presented to many relying parties, or verifiers as they are then called. You don't know upfront which details you want to release to each of the relying parties. So going back to the driver's license example: you get the driver's license and then you want to present it, maybe to prove your age, and then of course only your age is relevant. Maybe at a different place you want to present it to show that you're living somewhere, or just that you're allowed to drive the vehicle. Then you need other data from the same credential, but you don't need that data to prove that you're above a certain age. And you cannot create a credential that only contains your age, or your name and age but not your nationality, because you want to have all of that included in the same credential. That's one of these things that are really easy to do in OpenID Connect and harder to do when you do decoupled things.

Vittorio Bertocci: Makes sense. The keywords that are often heard in this context are selective disclosure, and the other buzzword is zero-knowledge proofs. Can you tell our listeners what they should think when they hear those two terms?

Daniel Fett: Yes. So selective disclosure means that you have the credential, the driver's license for example, and when talking to a verifier you can release only parts of that credential. So you, being the holder of that credential, can say: okay, this verifier only gets, say, my address; this other verifier only gets my name; and a third verifier gets my name and address. That's selective disclosure on a high level. You want to be able to strip out everything from the credential that's not relevant to the use case, for privacy reasons of course. And this also means that maybe you can use different credentials: in one instance you can use your driver's license to prove your age, and in another instance you could use your passport, but in both cases the relying party only gets the relevant data and doesn't learn anything about you that it's not supposed to learn. So that's selective disclosure. The other thing is zero-knowledge proofs. These two are often connected because, as you will see, they are somewhat similar in what they achieve. A zero-knowledge proof means that you, having the credential, can prove something that is contained in the credential, and therefore comes from the issuer, without telling the verifier more than it needs to know in a very strict sense. So you can prove that you are above a certain age, for example 21, without releasing your actual age to the verifier. There's a mathematical proof protocol going on between you, having the credential, and the verifier, where the verifier at the end of the protocol will learn that you are above 21 without having learned that you are, say, 23 or 64; it doesn't matter, you only release this property, that you are above 21, using that proof.

Vittorio Bertocci: Fantastic, thank you. So, bouncing it back at you to make sure I understood. In the case of selective disclosure, I have a document with a list of attributes and I can choose the subset of attributes that I share. So in the example of age, if I have my birth date along with lots of other attributes, I can share just my birth date. Whereas with zero-knowledge proofs, I can go even farther: not only can I select the fact that I only want to talk generically about age, but I can prove that I am above a certain threshold rather than actually revealing the attribute that I have in my document, which in this case is my birth date. Would you say that's a fair characterization?

Daniel Fett: That's very fair, and in a sense the zero-knowledge proof can also be used for selective disclosure, because it also means that you only release parts of the credential. But the term zero-knowledge proof refers to a special cryptographic technique to achieve that on a very, very fine-grained level.

Vittorio Bertocci: That makes a lot of sense. Now, I have a pet peeve about this, because we like to use case scenarios to clarify and make things easier for people to understand, but the funny part is that for zero-knowledge proofs, pretty much the only case I hear talked about is age, and if you want to prove properties about other attributes, it seems a bit harder. For example, I might have, I don't know, an address, and I could prove that this address contains a certain city rather than disclosing the entire thing. But in general it seems like age is the main scenario. I heard others saying, I can prove that I voted without disclosing what I voted for. I just wanted to highlight the fact that it's an incredibly powerful thing, but it also can be pretty exotic. Not everything is age. So I'm still curious to see some of the use cases where the zero-knowledge proof, which is incredibly powerful but potentially expensive, is really necessary, let's say.

Daniel Fett: Yeah, I agree. It's a very powerful tool, a very powerful hammer, but not everything is a nail, and the one biggest nail seems to be the age thing. Oftentimes when you have a concrete problem, selective disclosure already goes a very long way towards solving it. There are also techniques where you can do a poor man's zero-knowledge proof using selective disclosure, when you expand claims. Not going into the details here, but yeah, I agree.

Vittorio Bertocci: Yeah, that's really interesting. Okay, so before we dig farther into one specific flavor of selective disclosure, which is the title of the episode, I was curious to talk a bit about the magic which can be used to make those scenarios possible, and what the options are out there. I know that in parallel the OpenID Foundation, the IETF, and a number of other entities have been working on this problem, and so there might be keywords floating around. Can you give a high level of what the things are that are used to make those new properties viable, and some keywords that people might have heard? For example, JSON-LD is one of the things that comes to mind: what is it, who does it, all of that stuff.

Daniel Fett: Yeah. First of all, many of these approaches are much older than the SD-JWT we'll be talking about later on, and a lot of work went into them. It's a complex landscape, so oftentimes you can combine parts of one thing with another thing, and so on. Just on a very high level: one very popular mechanism you already mentioned, LD Proofs, or JSON-LD, and the other one, AnonCreds, is also very popular. Let's start with AnonCreds. AnonCreds is a format that is based on so-called CL signatures, and CL signatures are essentially a signing algorithm which allows you to do things like selective disclosure. So as you can see from what I just said, the credential format is very closely tied to how this all is signed and how selective disclosure is being done. This is one of the formats. The other format is W3C Verifiable Credentials based on JSON-LD. JSON-LD is a specific JSON format for so-called linked data, where you can essentially link data from multiple documents. So it is a very specific syntax for JSON. On top of that you can do so-called LD Proofs, where you can issue a credential in this format and then later on prove that this was issued by a certain issuer, using algorithms called BBS or BBS+, which are the equivalent of the CL signatures in this format; a lot of crypto, again, doing selective disclosure, and I think also zero-knowledge proofs, I'm a bit fuzzy there, but again, very closely tied to the credential format being used. There are also other formats which in principle allow you to decouple this. Another format would be a classic: you probably know X.509. You can just essentially create an X.509 document signed by the issuer and then present that document somewhere. That should also be on the list, but it doesn't necessarily support selective disclosure and zero-knowledge proofs and so on. So it's a format to do that, but yeah.

Vittorio Bertocci: Yeah, not very fashionable.

Daniel Fett: Not very fashionable. Yeah. There's an ISO effort called mobile driver's license, obviously connected to the driver's license example that we just talked about.

Vittorio Bertocci: Yeah. We had Andrew Hughes on the show and we did one show on the mobile driver's license, which is an instance of the more generic category that you are expanding on here.

Daniel Fett: Exactly. So they're also working on that. And then there's SD-JWT, that we are working on, and this is actually the youngest of all the mentioned formats.

Vittorio Bertocci: Thank you for making that list. I'm sure that, just like me, a lot of people are confused by all the options, and it was helpful that you made that list. Now, you mentioned a number of times different algorithms and crypto, and I guess different key formats. So all those things probably require crypto stacks that are, let's say, non-traditional. If I go on any operating system, which out of the box has APIs that help me do RS256 or similar, will I find the necessary algorithms and key formats, or do I need to pull in some extra libraries that teach my system how to do the new crypto?

Daniel Fett: That is exactly the problem that we are seeing in this space. In many instances, not all of those that I mentioned, but in many of them, the features like selective disclosure or zero-knowledge proofs are enabled by advanced crypto: cutting-edge cryptographic algorithms that were developed often specifically for that purpose. Which also means that you have to create the credential in this specific format and you have to verify it using a specific algorithm. And of course you as a holder, or your software managing your credentials, which we then call the wallet, the wallet also has to know this crypto stuff. What we see in the space is that this stuff is hard, which often means that you have one or two implementations of a specific algorithm. So not really a diversity, not really a choice between languages and frameworks and so on, because this stuff is just hard to implement. I mean, cryptography has a tradition of being hard to implement, but oftentimes today you don't have to implement it yourself. We all know that you're not supposed to write your own AES or RSA implementation, for very good reasons. Somebody has to understand this stuff, implement this stuff, and somebody should audit this stuff as well. And as long as that's not happening, nobody knows whether it's secure or not. So the more advanced you get with the cryptography, the harder it is to implement; and while you get a lot of nice features, like very good selective disclosure properties and the cool zero-knowledge proofs and so on, this stuff is really hard to implement, and we only see a couple of implementations, and that's not good for the ecosystem.

Vittorio Bertocci: I love this, because I think it's a perfect segue into the main meat of the episode, which is SD-JWT. One thing that I really love about this thing that you and Kristina came up with is its simplicity. Because again, as identity experts and privacy advocates, we look at this landscape, we look at those properties, and we say: oh, absolutely, those properties are very important. But in fact the reality is that we don't know what the market really wants and what people really want. Back in the day, and now I'm going to date myself, back in the WS-* days in which everything was message-based security, we had this non-repudiation property, which was enabled by all the message-based security that we were using, and which you don't achieve if you use SSL and bearer tokens. And we thought: of course, who doesn't want non-repudiation? It turns out that very few people wanted it, and the vast majority of people preferred the simplicity of SSL. And so WS-* basically died in a fire, and SSL and bearer tokens flourished. And now that's what we have. So the thing I love about your idea is that it's so simple that I believe we really have a shot as an industry to implement it and to actually put it in the hands of people, without the challenges that you described, as in: I don't have the right crypto stack on this platform. Actually, I'll stop blabbering and I'll let you describe the idea and the mechanics of this new spec.

Daniel Fett: Yeah, thank you. One thing I'd like to add: it's not only about having to implement this yourself, but also sometimes about being able to implement something at all. Because depending on your use case, you might want to have your keys stored in hardware, protected maybe by biometrics and so on. And this can be really hard when your keys are in an unusual format or use different algorithms and so on. So if you want to run this stuff on hardware, you need hardware support, and then it's best if you have simple mechanisms with traditional crypto. Also, sometimes we have seen that government authorities don't even allow the advanced crypto algorithms; they want something very, very well tested, well audited. So this can be a real implementation blocker as well. But getting to SD-JWT: as you said, we wanted to create something simple, something that's easy to understand, where you read maybe two pages of the spec and then you have a very good idea of what's happening, and hopefully you need to read the spec only twice or so to implement it. That's roughly what we are aiming for. And I think there are two important parts to this. The first one is that we decided not to use advanced crypto, but a hash-based approach; I guess we'll get to that in a moment. The other thing is that we wanted to have something that connects well into the OpenID Connect world and the data formats people already know. And when you look at the data formats people are using, they prefer plain text formats where you can just see the stuff that's going on. We see that everywhere in our industry. Of course a binary format can be very space efficient, can have cool features and so on, but we see people sticking to plain text things. It's always humans developing this stuff, and humans love to see what they're doing, even if it's not relevant to the product shipped to the user. So SD-JWT is based on JWT, and JWT is a very easy to understand format.
At least most parts of it you can just open up in an editor and read what's going on. And that's what we tapped into. We wanted to have a format that works well with OIDC, that's based on JWT, because JWTs just work and they are used a lot. And then with crypto that is easy to understand for anybody who has some experience or basic knowledge.

Vittorio Bertocci: Fantastic. So as the name implies, the property that you are aiming to achieve is selective disclosure. How does it work?

Daniel Fett: So imagine that you get an ID token from your issuer. The ID token contains all the claims, and when you send it to a verifier, the verifier will look at the claims and see everything, but can also verify everything, because it's signed by the issuer. That's not selective disclosure. Now, what the issuer can do, instead of putting the clear text values into the token, is put the hash of each individual claim, instead of the claim value, into the token. And then the holder of that credential, when it sends the credential to the verifier, sends the credential plus those plain text values that it wants to release. So for example, when you get the hash of my given name, it's just some hash, and you cannot easily go back from the hash to the plain text value. But if I say, okay, my given name is Daniel, then you can just hash my given name, get to the same hash, and that hash is signed by the issuer, so you can verify it. Good thing. Now, there's one extra step you need to take. The problem is that the verifier of course sees the whole token, including the things it's not supposed to learn; of those it only sees the hashes. As I said, it's not easy to go back from the hash to the plain text value, but it's possible, especially if there is a limited number of possibilities for what the plain text value could be. For example, if you hash true or false and you get one hash, you just have to try: was it true or was it false that was hashed? It will always produce the same hash, of course. If you have a birth date, it's not very hard to just iterate through all the possible birth dates and check the hash. So you need to do something against guessing attacks, and what you usually do is to salt the hash.
That means that the issuer, when creating the token, hashes not only the given name but a salt value, which is just a random string, together with the given name. Now the verifier gets the thing, and for those claims it's supposed to learn, it gets not only the given name, for example, but also the salt value. So it can do the same calculation over the salt value and the given name, get to the same hash, and that hash is signed by the issuer, and so on. But for all the claims it's not supposed to learn, it will get the hash value but not the salt value. And the salt value contains enough entropy that it's almost impossible to guess. So the verifier cannot just guess values; even if it's just true or false, it would need to guess the salt value as well. And that's practically impossible. So the salt protects against guessing attacks.
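The salted-hash construction described above can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not the exact SD-JWT wire format; the helper names and the simplified payload layout are inventions for this example.

```python
# Illustrative sketch of the salted-hash idea: the issuer hashes each
# claim together with a fresh random salt and signs only the digests.
# Helper names are invented; this is not the exact SD-JWT wire format.
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim_name: str, claim_value) -> tuple[str, str]:
    # The salt is a random string with enough entropy that a verifier
    # cannot brute-force low-entropy values (true/false, birth dates).
    salt = b64url(secrets.token_bytes(16))
    disclosure = b64url(json.dumps([salt, claim_name, claim_value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

claims = {"given_name": "Daniel", "birthdate": "1990-01-01"}
disclosures = {}                   # kept by the holder, released selectively
payload = {"iss": "https://issuer.example", "_sd": []}
for name, value in claims.items():
    disclosure, digest = make_disclosure(name, value)
    disclosures[name] = disclosure
    payload["_sd"].append(digest)  # only digests go into the signed JWT
```

Without the matching salt, even a claim whose value is just true or false cannot be recovered from its digest; with the disclosure in hand, the verifier recomputes the hash and checks it against the signed list.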

Vittorio Bertocci: That's such a simple and clever idea. So again, let me, as usual, summarize just to make sure that I got it. You get a JWT which looks very similar to the one that we normally get, but in the list of claims, instead of having human-readable values, you have something that looks like garbage, and that garbage is just the hash of the values. And then separately, the holder, what we call the client nowadays, well, not in OpenID Connect...

Daniel Fett: Or the wallet, in this case.

Vittorio Bertocci: Right. Yeah, I was trying very hard not to say wallet, but let's say the entity, the user, using whatever software is necessary to do this trick, receives both of these lists: the signed list of nonsensical values, with the types of all the values; but you also get another list with all the salts, which are necessary not to reverse the hash, but to calculate the hash so that you can check the value. And then, when it's time to present this stuff to a verifier, you send the signed list with all the values, but those values are all opaque, so the receiver, the verifier, cannot do anything with them, apart from the ones for which you choose to reveal the content. And you do so by including the salt of the corresponding claim type, so that if I have my complete passport and I want to disclose only the name, I'll send the entire passport with all the redacted values, plus the salt of just the name. And then the verifier will still be able to check the signature, so that this passport is actually coming from the issuer they expect, and then extract only the values for which it got the salt. So, as the name implies, selective disclosure. Is that a fair summary?

Daniel Fett: Exactly, exactly. And that's what's called the salted-hash approach. We're not the first ones; we didn't come up with that approach. For example, it's also used in the mobile driver's license. But this is, as far as I know, the first time that this has been used to create credentials based on the popular JWT format.
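The presentation and verification steps just summarized can be sketched as follows; again, this is a hedged illustration of the salted-hash approach rather than the exact SD-JWT format, with invented helper names and with the issuer's signature check elided.

```python
# Sketch of presentation and verification in the salted-hash approach.
# The issuer's signature over the payload is elided; helper names are
# invented and this is not the exact SD-JWT wire format.
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def disclose(claim_name: str, claim_value) -> tuple[str, str]:
    salt = b64url(secrets.token_bytes(16))
    d = b64url(json.dumps([salt, claim_name, claim_value]).encode())
    return d, b64url(hashlib.sha256(d.encode()).digest())

# Issuance: the signed payload carries only digests.
d_name, h_name = disclose("given_name", "Daniel")
d_birth, h_birth = disclose("birthdate", "1990-01-01")
signed_payload = {"_sd": [h_name, h_birth]}

# Presentation: the holder forwards the payload plus only the
# disclosures it chooses to release; here, just the given name.
presentation = {"payload": signed_payload, "disclosures": [d_name]}

def verify(presentation: dict) -> dict:
    # After checking the issuer's signature (not shown), match each
    # disclosure's digest against the signed digest list and recover
    # the plain text claims; unreleased claims stay opaque digests.
    recovered = {}
    for d in presentation["disclosures"]:
        digest = b64url(hashlib.sha256(d.encode()).digest())
        if digest not in presentation["payload"]["_sd"]:
            raise ValueError("disclosure not covered by issuer signature")
        salt, name, value = json.loads(b64url_decode(d))
        recovered[name] = value
    return recovered

claims = verify(presentation)  # {"given_name": "Daniel"}; birthdate stays hidden
```

Note how what drops out of `verify` is an ordinary dictionary of claims, much like the body of an ID token, which is exactly the design goal Daniel describes next.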

Vittorio Bertocci: Fantastic. And I have to say that this has probably been one of the fastest-accepted specs in the working group. When this was presented in Philadelphia last month, after your presentation there was a call for saying: what do you guys think? Pretty much all the hands went up, as in: yeah, this is great, this is great. Let's add a couple of details. One thing that I can think of which looks different from the traditional stuff is that now that list of hash values has no audience, right? Normally we get an ID token where the token says: this is for client X. Whereas here you have no audience.

Daniel Fett: Exactly. It's not a drop-in replacement for the ID token, because of those things. The issuer will create the credential and send it to the holder, or to the user, and of course the issuer doesn't know where this will be presented. So there's no audience claim in there.

Vittorio Bertocci: That makes sense. And in order to pull off the trick that you described, I guess that this time we do need to say the word wallet. Normally the client would just be a pipe: it hits the resource, it gets redirected, it renders HTML, and it doesn't need to be particularly smart. But here you need to save this stuff somewhere, you need to decide what to disclose, and you actually need to pick and choose, to work with the format. So it's a new thing, right?

Daniel Fett: Exactly. It's not an ID token, it doesn't follow that format, and there needs to be some knowledge at the client or wallet to do something with this. But what we aimed for is that this format can essentially be distilled into something that looks like an ID token at the verifier. This is called SD-JWT, not SD ID Token: we want to have a mechanism that works on any JWT. At the end of the day, when the verifier gets the credential plus whatever claims were released to it, and has done all the checks, so whether the released salted and plain text values match, and whether the thing is signed, and so on, what drops out of that verification looks very much like an ID token; maybe missing something, or slightly different, but it's very similar. At that point you can feed this into the algorithms that would normally take the data out of an ID token. That was one of the design choices that we made. So this is really a JWT thing. It does have some different formats, especially on the way between the issuer and the verifier, but at the end of the day, what you get out of it is very similar.

Vittorio Bertocci: That's a great point. I think that there is a bit of a bias when we talk about this thing, induced by the paradigmatic scenario that we use to explain it, the driver's license, which suggests that it's something that you want to be able to reuse across multiple scenarios. And so people often mention the wallet, this hypothetical piece of software that runs on the client, which takes care of saving and using those credentials. But, correct me if I'm wrong, given that this thing works at the JWT level, it's a lower-level thing. So technically I could have my mobile app decide to use this format for its own purposes, and it doesn't require calling anything external like a wallet. If I'm using a library, an SDK, which is capable of using this format, technically my app could just use this format without necessarily relying on an external app, or what we call a wallet. Would you say that's fair?

Daniel Fett: Absolutely. Absolutely. Yeah. Because of this, I also imagine we will see use cases for this format that we don't imagine today. Wherever JWTs are used, you can use this format. That's also why we brought it to the IETF, to the same working group where JWT was standardized, because we hope there will be other use cases, maybe completely beyond identity. Let's see.

Vittorio Bertocci: Yeah, and I love that this is simple enough that you can actually do this. The investment people have to make to play with this stuff is relatively low: there is no advanced crypto stack you need to bring, you just need a library that supports the format. But now I want to add a tiny bit of complication, which is something that happens often in this space. On top of all the flows you described for selective disclosure, there is this thing in which the holder has its own keys, and whenever it does the dance you described, the presentation, it also uses its own key on top of everything else, so that the verifier can actually verify that the holder is the entity to which that credential was issued. Do you want to expand a bit on how that works?

Daniel Fett: Yeah, that's a property that's important in some use cases, not in all of them, but it can be important that, as you said, the verifier wants to know that whoever presented this is actually the entity it was issued to. And the flow is actually quite simple. The issuer includes information about the holder's public key in the credential, so it is signed over by the issuer, of course, and the verifier gets this reference to a key held by the holder in the document. Then whatever the holder sends to the verifier will of course be signed using that key, and there can also be some transaction-specific data, nonces or something, in there. This means that in the transaction, the verifier can be sure: okay, beyond the signature by the issuer, this was additionally signed by the holder with the key attested by the issuer. And that can be quite important, because depending on your use case and the ecosystem you're in, this can also mean that the issuer has, for example, made sure that this credential is bound to your hardware. So maybe the issuer used some attestation framework to ensure that the key included in the credential is hardware-bound, and then the verifier, again depending on the ecosystem, use case, and so on, can be sure that this not only comes from the holder, but that whatever key it was signed with is hardware-bound at the holder. So the likelihood that this has been copied to a different device is rather low.
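
[Editor's note: the key-binding flow Daniel walks through can be sketched as below. This is a heavily simplified illustration: an HMAC over a shared key stands in for the asymmetric sign/verify the spec actually uses, so here the "verifier" holds the same key material, whereas in reality it would only need the public key referenced by the credential's confirmation (`cnf`) claim. Claim names like `cnf`, `nonce`, `aud`, and `iat` mirror the spec, but everything else is a made-up stand-in.]

```python
import hashlib
import hmac
import json
import secrets
import time

# Stand-in for the holder's key pair: in a real deployment this would be
# an asymmetric private key, ideally hardware-bound, and the credential
# would carry the matching public key
holder_key = secrets.token_bytes(32)
key_thumbprint = hashlib.sha256(holder_key).hexdigest()

# Issuer: embeds a reference to the holder's key inside the credential
# it signs, attesting that this key belongs to this holder
credential_claims = {"given_name": "Erika", "cnf": {"kid": key_thumbprint}}

# Verifier: supplies a fresh nonce so the presentation cannot be replayed
nonce = secrets.token_hex(16)

# Holder: signs transaction-specific data (nonce, audience, timestamp)
# with the key the issuer attested
kb_payload = json.dumps({"nonce": nonce, "aud": "https://verifier.example",
                         "iat": int(time.time())}).encode("utf-8")
kb_signature = hmac.new(holder_key, kb_payload, hashlib.sha256).digest()

def verify_key_binding(payload, signature, key, cnf):
    # Verifier: the key used for the presentation must be the one the
    # issuer attested in the credential...
    assert hashlib.sha256(key).hexdigest() == cnf["kid"]
    # ...and the signature over the transaction data must check out
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

assert verify_key_binding(kb_payload, kb_signature, holder_key,
                          credential_claims["cnf"])
```

The design choice Daniel highlights is visible in `verify_key_binding`: two independent checks, one tying the presentation key back to the issuer's attestation, one proving the presenter controls that key right now, for this nonce.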

Vittorio Bertocci: See, I love this spec for so many reasons, but one of them is that we are able to talk about these particular scenarios and mechanisms in a very use-case-oriented way. Because the thing you described, the holder being able to use a key to secure a presentation, is the quintessential SSI scenario. It's the scenario that SSI folks often present, in which we say: here, as a user, you want to have complete control over your keys, and your keys might be kept in a ledger and you use them to prove things. But here we came to that scenario from a different angle, as in: we want to be able to use a key in the context of this presentation because of, for example, what you just said, tying this to the hardware, without necessarily having that other part as the highest-order bit. Which is not always very intuitive, because in the case of the driving license, at the end what makes or breaks the scenario is whether the issuer actually issued the document to you and what the issuer says. It doesn't really matter all that much that you have control over the key you use for protecting your presentation if the issuer, for example, says your driving license has been revoked. So I love that SD-JWT gives us the opportunity to explore the use of this key as a pure capability, without necessarily coloring it with any particular scenario. And on that note, we are almost out of time. As you can hear, I'd love to speak for hours about this, but unfortunately this is the time we have. So, if you were to issue a call to action: it's very early days, but this thing already works; it's not like there is anything missing. If people want to achieve the good properties you described, they can implement SD-JWT as it exists today and already do it. So what would you want to see from the community as action on this new spec?

Daniel Fett: So we have the spec and it's in very good shape, and we do have reference implementations. We would be happy to see people actually using this. We have four implementations: one is the implementation we actually used to create all the examples in the spec, so we had running code as one of the first things we did, plus three other independent implementations. So I would love to see people using this, giving us feedback, and also identifying the use cases they'll be using it in. We are thinking about adding some features; if you go to the GitHub page where we have the spec, you will see the open issues, but it's really not much. We are still thinking about tweaking the spec a bit, but it's in very good shape, and I would like to get feedback on how people use it, what they like, what they don't like about it, and yeah, see it used in the first use cases.

Vittorio Bertocci: Wonderful, fantastic. And of course we'll add all the links in the description of the episode. So Daniel, thanks again for being a guest and for going into these very interesting, very important topics with us.

Daniel Fett: Thank you. Thank you for having me. It was a pleasure. I could have talked another hour more or so about this.

Vittorio Bertocci: Great. So maybe once we have more news about this we can have another episode. I'm pretty positive that this scenario will grow in importance, and that pragmatic initiatives like this one, where there is actual code and the rubber hits the road, will grow in importance as people move from the hype phase to the actual let's-see-what-can-be-done-here phase. Thanks everyone for listening. Subscribe to our podcast on your favorite app or at identityunlocked.com. Until next time, I'm Vittorio Bertocci and this is Identity Unlocked. Music for this podcast is composed and performed by Marcelo Woloski. Identity Unlocked is powered by Auth0 in partnership with the OpenID Foundation and IDPro.



Today's Host


Vittorio Bertocci | Principal Architect, Auth0

Today's Guests


Daniel Fett | Security Specialist