Exploring Financial-Grade API (FAPI) with Torsten
Vittorio Bertocci: Buongiorno everybody and welcome. This is Identity, Unlocked, and I'm your host, Vittorio Bertocci. Identity, Unlocked is the podcast that discusses identity specifications and trends from a developer perspective. Identity, Unlocked is powered by Auth0. This season is sponsored by the OpenID Foundation. In this episode, we focus on Financial-grade API, better known as FAPI. Our esteemed guest today is Torsten Lodderstedt, and I apologize for whatever way I butchered your last name, CTO at yes.com and all-star contributor to the IETF and the OpenID Foundation. Welcome, Torsten.
Torsten Lodderstedt: Yeah. Welcome, Vittorio, and thank you for having me here. You did a great job in pronouncing my name. I have heard worse than that, so thank you.
Vittorio Bertocci: Wonderful. Thank you. And thanks for joining me today. As it is tradition, can we start with how you ended up working in identity?
Torsten Lodderstedt: Yeah, sure. To start with, my background is in software engineering and software architecture. After my studies, I worked as an IT consultant. And after quite some time, I think 10 or 11 years, in 2007 I was hired for a project at Deutsche Telekom's product development unit in Germany to help them develop and operate their identity management system. I didn't have a real clue about identity, but I had some security background at that time, which helped. The identity group there does consumer identity management, and most people don't know that even though Deutsche Telekom is one of Germany's biggest landline and mobile operators, they also have a huge set of services, and they've got a central user identity management system with a web SSO experience and so on. Quite interesting stuff, high volume, high scale. At the time I joined, web SSO was based on proprietary protocols; they had done experiments with SAML and Liberty Alliance, but their proprietary protocol seemed to better fit their expectations. It was simpler. One of the first things I did with Deutsche Telekom was design their token service, because at that time, around 2007, '08, '09, they built the first third-party APIs for developers, and with the advent of the iPhone, mobile applications needed backing by APIs, so we built a security token service. Conceptually, it was based on Kerberos, because I had Kerberos experience from a previous project, and I really liked the concept of self-contained tickets/tokens and audience-restricted tokens/tickets.
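(A minimal sketch of the pattern Torsten describes, in Python with the PyJWT library: a self-contained, audience-restricted token can be validated entirely locally, with no callback to the token service. The issuer, audience, and key handling here are illustrative assumptions, not details of any actual deployment.)

```python
# Minimal sketch: validating a self-contained, audience-restricted token.
# Illustrative only; issuer, audience, and key handling are assumptions.
import jwt  # PyJWT

def validate_access_token(token: str, public_key: str) -> dict:
    # The signature check is what makes the token self-contained: the
    # resource server can trust its contents without calling the STS,
    # which is where the scalability and performance advantage comes from.
    claims = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        issuer="https://sts.example.com",
        # Audience restriction: the token is only valid at this API, so a
        # token stolen from one service cannot be replayed at another.
        audience="https://api.example.com",
    )
    return claims
```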
Vittorio Bertocci: You didn't do WS-Trust? You did your own thing?
Torsten Lodderstedt: No. We did our own thing. It was actually based on SOAP, but we used SAML as the token format at that time. I think it was a bit simpler, because WS-Trust, especially the token service part, is a really complex thing. Really generic. It was really complex. Yeah.
Vittorio Bertocci: Oh, yeah.
Torsten Lodderstedt: From time to time, I'm still kidding with my colleagues about that time.
Vittorio Bertocci: You know my license plate on my car is still WS-Star.
Torsten Lodderstedt: Oh, really?
Vittorio Bertocci: Oh, yeah.
Torsten Lodderstedt: Okay.
Vittorio Bertocci: To remind myself of those times.
Torsten Lodderstedt: The fact that we based that concept on Kerberos might also explain why I still am a real fan of self-contained access tokens, because we learned that those give you incredible advantages in terms of scalability and performance. Also audience-restricted access tokens; I've always built systems that use that pattern. We then over time adopted OAuth 1.0. I didn't like OAuth 1.0 very much, because of the complexity caused by the application-level signature. Developers really struggled to implement it correctly. The problem always was, if it goes wrong, the mechanism just tells you, well, something is wrong, but not what. That also explains why, for all of my professional career, I have tried to circumvent application-level signatures. Yeah. But in the end, we adopted OAuth 1.0, though we used it as a refresh token mechanism in addition to a proprietary mechanism. I think around the end of 2009, one of our architects approached me and told me, "Hey, Torsten. There's a new specification that looks like something we do." I took a look at it, and it was the OAuth WRAP specification. I don't know whether you remember that one?
Vittorio Bertocci: Oh, absolutely. I interviewed Dick last season, and we mentioned OAuth WRAP very briefly.
Torsten Lodderstedt: Yeah. We realized, I mean, that's very similar to what we do. Yeah, let's go and contribute, and potentially we can learn something. So in the beginning of 2010, I joined the OAuth Working Group, and yeah, I learned a lot since then, and I also started to contribute. My first contribution was the OAuth security threat model and security considerations for OAuth 2.0, which was also the basis for the security considerations in RFC 6749. Since then, I authored several other drafts and contributed to OpenID Connect as well. In 2017, I joined yes.com. The focus of my standardization work, and also my work, changed a bit, because now I'm working more in the context of identity assurance, high-level identity assurance, and, clearly, financial-grade APIs.
Vittorio Bertocci: That's fantastic, which is a great, great segue to the star of the day, which is FAPI. Can we start with what FAPI is? And why now? Assume that I know nothing, which is kind of true, and tell me at a high level what FAPI stands for.
Torsten Lodderstedt: Okay, Vittorio. I'm trying my best to explain it. FAPI, first of all, is what we call a security profile. I would say it's a security and interoperability profile for OAuth, mainly intended to be used for open banking scenarios. In that context, we have also defined and incubated new specifications as they were needed. Open banking itself, what does the term stand for? Open banking basically means that the financial institution you're banking with allows you to use the data it holds, your accounts and transactions, as well as the capability for payments, with third-party applications. It opens up all those capabilities and assets, and this clearly means there is a need for APIs, and APIs these days typically also means OAuth. What we had to learn when we started to work on open banking is that there were two challenges with OAuth when it comes to open banking. The first challenge is security. Traditional OAuth, and when I say traditional OAuth, I mean RFC 6749, something like that.
Vittorio Bertocci: The core specification.
Torsten Lodderstedt: The core specification, and the way it's used today, had some security issues, and they were discovered around 2015, '16. I think you talked about those issues with Daniel Fett in one of the sessions, the mix-up attack, for example, code replay, and so on. You have to make sure, in an open banking scenario even more than in other scenarios, that these attacks are coped with, because otherwise people can access your account data, which is very sensitive, and potentially attackers could initiate payments on your behalf. I think that's something you don't want.
Vittorio Bertocci: It doesn't sound good at all.
Torsten Lodderstedt: Yeah, exactly. And that's why we had to cope with the security problem. The other aspect is that OAuth is a framework, which means it's a tool set. You can build great solutions based on it, and they all look similar, but they are not the same. Meaning, if you have two different OAuth deployments, it's very likely that they do not work the same way, so you have to adjust your code in order to make it work for those two different deployments. And now put that in the open banking scenario. In the European Union, open banking took a huge leap forward with the so-called Payment Services Directive 2, which was put into effect in 2018. Under this directive, 6,000 banks are becoming API providers. 6,000 banks. Just imagine the situation if those 6,000 banks do completely different things. No one can really afford to integrate with all of them, and that's why interoperability is a really, really important aspect. And that's why we have FAPI.
Vittorio Bertocci: This is a point that I believe is important to stress. OAuth is nice and super useful, but it is underspecified. If A and B both use OAuth, that alone is no guarantee that A and B can talk to each other out of the box. So part of what you're doing with FAPI is to guarantee that if A and B both support FAPI, then their ability to interoperate out of the box increases. Is that a fair summary?
Torsten Lodderstedt: Yeah, that's a great summary. Yeah.
Vittorio Bertocci: Fantastic. That's great. Double-clicking on that: in terms of concrete steps, what do we find inside this FAPI thing?
Torsten Lodderstedt: I would like to mention that we've got two versions of FAPI. We have FAPI version 1, whose development started around 2016, and we now have a new version under development, which is called FAPI 2.0, the next evolutionary step. I'm going to explain FAPI 1.0 to start with. The approach taken is very different between the two versions. In 2016, as I said, there were security analyses that showed there were issues in the OAuth protocol, and at that time the FAPI Working Group decided to patch those issues using existing OpenID Connect mechanisms. The rationale at the time was: there are products in the market, so for the sake of time, let's use what's already there to patch those holes and build a security profile based on that. That explains how v1 works. For example, it uses the ID token as a detached signature to protect the authorization response, just to give an example.
Vittorio Bertocci: Right. And so just to clarify, because a lot of people see the difference between OAuth and OpenID Connect as a blurry thing. OpenID Connect has extra mechanisms, like the use of nonces and similar, that help protect message exchanges, which OAuth doesn't have. When you say that you use what's already there, are you saying that you use some of those OpenID mechanisms in scenarios that are more typically OAuth, like calling APIs?
Torsten Lodderstedt: Yeah, that's correct. I mean, OpenID and the OpenID Foundation are focused around identity, right? Building protocols to allow parties to exchange identity data, whereas OAuth is about API authorization. The FAPI Working Group is very special, because we're building profiles for API authorization, but as you pointed out correctly, in the first version we use native OpenID Connect mechanisms to reinforce the API authorization.
Vittorio Bertocci: Great. And you mentioned, ID token and detached signature. Can you spend a few moments to expand on what a detached signature is?
Torsten Lodderstedt: Yeah, sure. OpenID Connect has a special response type. The response type is the mechanism in OAuth to specify what comes back from the authorization endpoint. What people typically use these days is the authorization code, which is the response type "code". In FAPI version 1, we use the response type "code id_token", which causes the authorization server to also add a JWT, the ID token, to the response. That JWT typically contains identity data about the user. But when it is sent through the front channel, it also contains hashes of other parameters of the response, especially a hash of the code, a hash of the state, and so on. If you put this all together, this means the ID token, which is one of the response parameters, is a signed object that includes references to other request or response parameters. And that's why it is a detached signature. It's a bit complex, but in the end it prevents injection attacks. For example, if an attacker tries to inject a code that does not belong to that response, the application can detect that.
Vittorio Bertocci: Now, it is complicated, but you explained it really nicely. My ID token comes down, and it contains the hash of the code. If anyone messed with the code, then when I check the hash that is inside of the signed token, I see a discrepancy and I can detect that someone injected a code, whereas without that ID token with the signature, I wouldn't have been able to.
Torsten Lodderstedt: Exactly. That was one of the problems in OAuth at that time: there was no mechanism natively built into OAuth to detect this kind of attack. There is another attack, the mix-up attack, and the ID token also helps to detect this attack. As a countermeasure, I need to understand which authorization server sent the authorization response. The ID token has a claim which is called issuer, "iss", and that claim is used to determine which authorization server sent the response. Both together, the detached signature and the issuer claim, help to get rid of a lot of attack angles that existed at that time with traditional OAuth.
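(To make the detached-signature check concrete, here is a hedged Python sketch of how a client might validate the c_hash, s_hash, and iss claims. Per OpenID Connect, c_hash is the base64url encoding of the left half of the SHA-256 hash of the code when the ID token is signed with RS256; it assumes the ID token's own signature has already been verified.)

```python
# Sketch: detecting code injection via the ID token's c_hash claim, state
# tampering via s_hash, and mix-up attacks via the iss claim.
# Assumes the ID token's signature has already been verified.
import base64
import hashlib

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def check_detached_signature(id_token_claims: dict, code: str,
                             state: str, expected_issuer: str) -> None:
    # c_hash: base64url of the left half of SHA-256(code), for RS256.
    digest = hashlib.sha256(code.encode("ascii")).digest()
    if id_token_claims.get("c_hash") != b64url(digest[: len(digest) // 2]):
        raise ValueError("authorization code was injected or tampered with")
    # s_hash: same construction over the state value.
    digest = hashlib.sha256(state.encode("ascii")).digest()
    if id_token_claims.get("s_hash") != b64url(digest[: len(digest) // 2]):
        raise ValueError("state was tampered with")
    # iss: pins the expected authorization server (mix-up defense).
    if id_token_claims.get("iss") != expected_issuer:
        raise ValueError("response came from an unexpected authorization server")
```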
Vittorio Bertocci: Nice. Very nice. Great. As a casual observer, if I open fapi.openid.net, I see that it looks like v1 is subdivided into some macro areas. Can you tell me a bit more about what those areas are?
Torsten Lodderstedt: Yeah, sure. We have two different profiles for different security levels. One is what we used to call the read profile, and on the website it's still called read, and the other one is read/write. Keep in mind that we initially focused on protecting financial APIs; it's now called financial-grade API because the mechanisms can be used in other contexts as well, such as eHealth and so on. But going back to the profiles, the assumption was that accessing read-only APIs requires less security than accessing read/write APIs, such as APIs for initiating payments. The read profile in v1 is a really, really basic profile. It uses PKCE for code replay detection. It uses exact redirect URI matching for preventing leakage and impersonation. It recommends the use of OIDC or OAuth metadata as a robust mechanism to determine all the endpoints. The read/write profile is much more comprehensive: it is restricted to confidential clients, and it uses signed request objects to prevent tampering with the request.
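(As a quick aside, the PKCE mechanism the read profile relies on is easy to sketch; per RFC 7636, the client derives an S256 code challenge from a random verifier. This is an illustration, not production code.)

```python
# Sketch of PKCE (RFC 7636): the client sends code_challenge with the
# authorization request and code_verifier with the token request, so a
# stolen or replayed code is useless without the original verifier.
import base64
import hashlib
import secrets

code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode("ascii")).digest())
    .rstrip(b"=")
    .decode()
)
# Authorization request carries: code_challenge, code_challenge_method=S256
# Token request carries: code_verifier (the AS recomputes and compares)
```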
Vittorio Bertocci: A signed request object is not something that is very common, and it's not part of the core, so what is it?
Torsten Lodderstedt: In traditional OAuth, all the request parameters are sent as URI query parameters. They are just added to the URL and sent to the authorization endpoint. This means an attacker that poses as a user of the application can modify those strings. For example, swap scope values, or inject scope values that refer to payments of another person. In order to prevent that, the signed request object puts all that data in a JSON object, which is a JWT, and that JWT is signed and then sent over the wire through the browser instead of the unsigned data.
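(A hedged sketch of building such a signed request object with Python and PyJWT; the client ID, endpoints, and claim values are illustrative assumptions.)

```python
# Sketch: moving authorization request parameters into a signed JWT
# (a "request object") so the browser can't tamper with them.
import time
import jwt  # PyJWT

def build_request_object(private_key_pem: str) -> str:
    claims = {
        # The same parameters that would otherwise be query strings:
        "iss": "my-client-id",
        "aud": "https://as.example.com",     # illustrative AS identifier
        "response_type": "code id_token",
        "client_id": "my-client-id",
        "redirect_uri": "https://client.example.com/cb",
        "scope": "openid payments",
        "state": "af0ifjsldkj",
        "nonce": "n-0S6_WzA2Mj",
        "exp": int(time.time()) + 300,
    }
    # Signed with the client's private key; the AS verifies it against the
    # client's registered public key, so swapping a scope value or account
    # number in transit breaks the signature.
    return jwt.encode(claims, private_key_pem, algorithm="RS256")

# The browser then only carries: ...?client_id=my-client-id&request=<JWT>
```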
Vittorio Bertocci: Great. This is all in its own specification right?
Torsten Lodderstedt: When we started, the signed request object was part of the OpenID Connect core specification. Meanwhile, the authors, Nat Sakimura and John Bradley, brought this part to the IETF. It is a draft; I think it's shortly before publication at the IETF. It's called JWT-Secured Authorization Request, or JAR.
Vittorio Bertocci: JAR. Okay, great. We'll add a link in the show notes to make sure that people who want to know more about how this works can find it. That's great. Fantastic. This is one more manifestation of the higher security that is offered by this profile. You can't mess with the request, because now the parameters are signed, so if you try to alter the scope, the signature doesn't match and the authorization server knows. Fantastic. Great. Please continue. Sorry for the interruption.
Torsten Lodderstedt: No, no. No problem. I could talk about FAPI for hours and cannot stop talking, so it's good if you interrupt me.
Vittorio Bertocci: No, no. It's good. All the things that we find in the read/write profile, those are really useful things.
Torsten Lodderstedt: Let's assume the request has hit the authorization server; on the way back, we also want to detect and prevent modifications, tampering, and injections. And that's why we used the aforementioned ID token as a detached signature. In the same way as we protect the request with a signature, we also protect the response with a signature. We've got two options for that. Either the application uses the ID token from OpenID Connect, or there's another mechanism, which is called JARM, which stands for JWT Secured Authorization Response Mode.
Vittorio Bertocci: All right. Where does JARM live? Is it in OpenID or is it in IETF?
Torsten Lodderstedt: It lives in the FAPI Working Group. It lives in the FAPI Working Group, and we haven't brought it to the IETF yet. We are considering that, but for the time being, it lives in FAPI and is part of the read/write profile. It's a bit simpler than the ID token because, in the same way as JAR puts the request in a JWT, JARM puts the response in a JWT. That's basically it. There are no hashes to be calculated. You just take the JWT and put all the response parameters in it. It's part of our attempt, over the evolution of the security profile, to make things simpler for developers, which ultimately ended up with v2, which is much simpler than what we see today in v1.
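(A minimal sketch of what consuming a JARM response might look like in Python: the authorization server returns a single JWT whose claims are the response parameters. Issuer, audience, and key handling are illustrative assumptions.)

```python
# Sketch: unpacking a JARM (JWT Secured Authorization Response Mode)
# response. The whole response is one signed JWT; no hashes needed.
import jwt  # PyJWT

def parse_jarm_response(response_jwt: str, as_public_key: str) -> dict:
    claims = jwt.decode(
        response_jwt,
        as_public_key,
        algorithms=["RS256"],
        audience="my-client-id",          # the client is the audience
        issuer="https://as.example.com",  # pins the AS (mix-up defense)
    )
    # The usual response parameters are simply claims in the JWT:
    return {"code": claims["code"], "state": claims["state"]}
```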
Vittorio Bertocci: That makes a lot of sense. I know that in these contexts, you also have other measures like sender constraining, or private key JWT. Can you mention some of that as well?
Torsten Lodderstedt: Yeah, sure. As I mentioned, the read/write profile is restricted to confidential clients, which makes a lot of sense, because in those security-sensitive scenarios, you want to be damn sure that you're talking to the right client. In that context, FAPI read/write also recommends or requires the client to use public key based cryptography for authentication. You either use MTLS, or you use private key JWT to authenticate, which in turn also means secrets cannot leak at the AS, which gives you a kind of non-repudiation functionality. Also, based on public key crypto, the access tokens that are being issued are bound to the public and private key under the control of the client, which is called sender-constrained access tokens. It's a really nice feature, because it means that if an access token is used to request a certain resource, the sender, or the client, needs to demonstrate possession of the private key towards the resource server. If it is unable to demonstrate that possession, the resource server will just refuse to process the request, which means that if an access token leaks, an attacker cannot use and abuse that access token without also getting access to the private key, which is well protected at the client.
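(As a hedged illustration of the private key JWT option, here is roughly what client authentication at the token endpoint looks like per RFC 7523; the endpoint URL and key handling are assumptions.)

```python
# Sketch: authenticating to the token endpoint with private_key_jwt.
# The secret never travels; only a short-lived signed assertion does.
import time
import uuid
import jwt       # PyJWT
import requests

TOKEN_ENDPOINT = "https://as.example.com/token"  # illustrative

def token_request(code: str, private_key_pem: str) -> dict:
    assertion = jwt.encode(
        {
            "iss": "my-client-id",
            "sub": "my-client-id",
            "aud": TOKEN_ENDPOINT,     # audience-restricted assertion
            "jti": str(uuid.uuid4()),  # one-time use
            "exp": int(time.time()) + 60,
        },
        private_key_pem,
        algorithm="RS256",
    )
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://client.example.com/cb",
        "client_assertion_type":
            "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
    })
    return resp.json()
```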
Vittorio Bertocci: That's really powerful. This is one of the things that high-risk customers always ask for, and one of the reasons why a lot of people looked at OAuth with suspicion: it didn't have this feature. But now, thanks to the mechanism you described, we do. We did explore sender constraining in general with Brian Campbell in the very first episode of the show. For people that are interested in a deeper look at this aspect, I'd encourage them to check out that episode. But just to do my basic summary, the idea is that we are restricted to confidential clients, hence confidential clients must have credentials associated. The traditional, grandfathered clients would normally just use a string, a shared secret, but here we raise the bar by saying no: in order to authenticate yourself as a client to an authorization server, you must use public key cryptography. That comes with a lot of guarantees, as in the key never really travels, but only the effect of having used the key, and then all the nice potential consequences, such as binding the key to the access tokens as well. Fantastic. That sounds really, really powerful. I heard in some discussions that there is this new thing, DPoP, which we also mentioned with Brian, which is another way of binding tokens to a client. Do you think it's possible that in the future, that might be one mechanism that FAPI also recognizes alongside MTLS?
Torsten Lodderstedt: Yeah, definitely. Definitely. I mean, we were struggling for a couple of years to really come up with a suitable, simple, and broadly supported mechanism for sender constraining. There was never a discussion about whether it's needed, right? But you might remember token binding back in the day; we all bet on token binding when FAPI 1.0 was designed. Now, you didn't bet on that?
Vittorio Bertocci: Well, I was famous at Microsoft for being a downer, because every time everyone was betting on token binding, I was always the one to say, "Guys, you are expecting way too many planets to align, and it won't happen." Don't tell John Bradley, he is really emotionally invested in token binding, but I never believed it would happen. Whereas the first time I heard about the idea of DPoP, it got my interest right away. I was like, "Oh, yeah. This is it. This is going to be it."
Torsten Lodderstedt: Yeah. I mean, even though John was a real evangelist of token binding, he also was part of the team that did the MTLS specification. At the time I used to refer to that as poor man's token binding, but it works, right? That's the difference. There are not so many parameters that need to have exactly the right value to make it happen. But we also need to admit that MTLS is, for some deployments, really hard to use. We use it at yes.com and we are really, really happy with it, but DPoP is a good complement. We have a discussion in the FAPI Working Group about adopting DPoP as part of FAPI 2.0. I think it makes a lot of sense to have private key JWT as the authentication mechanism with an application-level signature, and to have DPoP as an alternative to MTLS for doing sender constraining. Yeah. That makes a lot of sense.
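(For the curious, here is a hedged sketch of a DPoP proof as defined by the DPoP specification: a fresh JWT per request, signed with the client's key and bound to the HTTP method and URI. Key material and helper names are illustrative.)

```python
# Sketch: constructing a DPoP proof JWT. Each request carries a fresh
# proof signed with the client's key, binding the access token to that
# key, the HTTP method, and the request URI.
import time
import uuid
import jwt  # PyJWT

def make_dpop_proof(method: str, url: str, private_key_pem: str,
                    public_jwk: dict) -> str:
    return jwt.encode(
        {
            "jti": str(uuid.uuid4()),  # unique per request
            "htm": method,             # bound HTTP method
            "htu": url,                # bound request URI
            "iat": int(time.time()),
        },
        private_key_pem,
        algorithm="RS256",
        # The public key rides along in the header so the server can
        # check the proof and compare it to the key bound to the token.
        headers={"typ": "dpop+jwt", "jwk": public_jwk},
    )

# Sent as the DPoP header alongside "Authorization: DPoP <access_token>".
```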
Vittorio Bertocci: That's great. That's fantastic to hear. There was this third component of FAPI, which has this nice acronym, CIBA. Can you tell me a bit more about what CIBA is, how it came to be, all of that?
Torsten Lodderstedt: Yeah, sure. CIBA stands for Client Initiated Backchannel Authentication.
Vittorio Bertocci: Very impressive.
Torsten Lodderstedt: Yeah. CIBA goes way back to the MODRNA Working Group at the OpenID Foundation, which initially was set up to provide mobile network operators that want to become identity providers with the respective specifications. That was in a project together with the GSMA called Mobile Connect. CIBA was designed to address use cases that are different from the usual web redirect based flow. For example, let's imagine you're calling the call center of your mobile operator or your financial institution, and the agent wants to authenticate you. That's a scenario where you obviously can't use the browser flow. Instead, the agent could initiate a flow, and then you get a callback on your own device in a mobile app, you see a consent screen, and you can confirm or refuse that request. That's basically what CIBA does. There are other scenarios, and that's the reason why the FAPI Working Group did a profile of the CIBA spec: in the financial area, there are scenarios like point-of-sale payments that you could conduct with your smartphone. The transaction is initiated on a POS terminal, and then you get a notification on your mobile phone and can approve that transaction. Or kiosk scenarios: ATM-like machines where you initiate a transaction, but you want to conduct the authentication on your device, because you do not trust that this kiosk really conducts the transaction well, that it's not somehow hacked or something like that. Those are the scenarios CIBA is built for. Yeah, it's a great complementary grant type to the usual code grant type for special scenarios. People need to keep in mind that the security characteristics of CIBA are not as good as those of the OAuth code flow, simply because you lose the binding to a certain session on a device; you have a split-device scenario, right? The so-called consumption device and an authentication device. That's what people need to keep in mind. That's one of the reasons why there are some mechanics in the protocol to present binding information to the user in order to prevent phishing. That's one area where the FAPI Working Group added extensions to the CIBA protocol to make sure that CIBA is not used for phishing attacks.
Vittorio Bertocci: Another peculiarity that I know stimulates the imagination of people is that CIBA added an extra endpoint to an authorization server, right?
Torsten Lodderstedt: It did, yeah.
Vittorio Bertocci: What is it? And what does it do?
Torsten Lodderstedt: The new endpoint is used to initiate a CIBA transaction, because what basically happens is that the client somehow kicks off the transaction with the authorization server, and in a later step it either gets an access token or an error. But between those events, time elapses, right? This is one event, then something happens somewhere at the device of the user, and then either you get a message back, an event back, or you're polling for the result. This is by no means a blocking request, because if it were a blocking request, you would potentially need to wait for minutes for it to succeed. That's why you need two different interactions with the AS. The second interaction with the AS is a standard token request, because in the end it results in an access token being issued. That makes a lot of sense; it's just a new grant type. The first one is special, because it has special parameters that are only used for CIBA, for example, an identifier for the user. Since this is a backend request and we have a split-device scenario, you need to somehow identify the user towards the AS, because the AS has no mechanism to ask the user for a username, which is quite different from the code flow. In the code flow, you don't need to know who is going to log into the AS, because the client just redirects and the AS will find out. In CIBA this is different, so you need this parameter, and you need other parameters, and from a design perspective, it does not make sense to overload an existing endpoint. Because which endpoint should have been used? The token endpoint? Well, the token endpoint typically gives you an access token. Here you don't want to get an access token, you want to get a handle for the transaction. The authorization endpoint? Well, the authorization endpoint expects a redirect URI, a nonce, a code challenge. Those are parameters that are required for securing the flow in the browser. They are not needed for a CIBA back-channel request. That's the reason why the decision was made to make a new endpoint.
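(A hedged sketch of the two interactions Torsten describes, in CIBA's polling mode: one request to the backchannel authentication endpoint, then repeated standard token requests with the CIBA grant type. The endpoint path, the hint values, and the omission of client authentication are simplifying assumptions.)

```python
# Sketch of the CIBA flow (polling mode): kick off the transaction at
# the backchannel authentication endpoint, then poll the token endpoint.
import time
import requests

AS_BASE = "https://as.example.com"  # illustrative

# Step 1: the client initiates; the user is identified via a hint, since
# there is no browser redirect through which the AS could ask.
init = requests.post(f"{AS_BASE}/bc-authorize", data={
    "client_id": "my-client-id",
    "scope": "openid payments",
    "login_hint": "user@example.com",
    "binding_message": "X4F-99",  # shown on both devices to counter phishing
}).json()
auth_req_id = init["auth_req_id"]  # handle for the pending transaction

# Step 2: poll the standard token endpoint with the CIBA grant type
# until the user approves on their authentication device.
while True:
    token = requests.post(f"{AS_BASE}/token", data={
        "grant_type": "urn:openid:params:grant-type:ciba",
        "auth_req_id": auth_req_id,
        "client_id": "my-client-id",
    })
    body = token.json()
    if token.status_code == 200:
        break  # body now contains the access token
    if body.get("error") != "authorization_pending":
        raise RuntimeError(body)
    time.sleep(init.get("interval", 5))
```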
Vittorio Bertocci: It makes complete sense, and I think it was the right decision. Yeah. Overloading is stuff we don't need. It would have been messy for no reason.
Torsten Lodderstedt: I mean, I was part of that discussion initially, and John Bradley was as well, and we were really concerned about using the authorization endpoint. I pointed out: what do we do with the redirect URI? That doesn't make sense at all. I mean, people are afraid of introducing new endpoints, I understand that, but this is the way extensibility works in OAuth, and it doesn't hurt.
Vittorio Bertocci: No, absolutely. And also, it's just cleaner, speaking as someone who owns a product that needs to do a job. The least amount of overloading means that it's more maintainable and clearer. It's easier to untangle the various code paths. I think it was a good decision. On that note, so far you described things through the lens of v1, but you mentioned that you guys are already working on v2, right?
Torsten Lodderstedt: That's correct. Yeah, that's correct. We worked on it basically in '16, '17, '18, and also helped different organizations in the open banking space to adopt it, namely Open Banking in the UK, for example, and others, such as the CDR in Australia. In 2019, we leaned back and did an analysis, because around that time there were a lot of existing implementations of different competing standards in the European Union. We tried to gather: what are they doing, and what can we learn from that? I gave a presentation about that at Identiverse last year. The result was, first of all, that there was a gap in FAPI 1.0, because the security was okay, but we learned that, especially in open banking, the authorization requests typically contain very complex data. In traditional OAuth, you've got simple scope values: read, write, email, something like that. In open banking, the relying party or the client typically asks for access to a number of accounts, and they really give the numbers of those accounts. They ask for read access to the balance of one account and read access to the transactions of another account. When it comes to payments, it's even more complex, because you have to say: okay, what's the beneficiary? What's the currency? What's the value? What's the reason? And so on. That is quite complex, but it is required by regulation. What we learned is that none of the implementations we analyzed uses scopes for that. Which is not a surprise, because if you want to encode all that in a simple string, you're going to go mad. All of them use JSON of one kind or the other; some use resources that are lodged with the AS, others just send the JSON in the authorization request. And we thought, okay, if we really want to come up with interoperability for the authorization, we need to also define mechanisms for conveying this kind of rich authorization data. That was the first learning. The second learning was that, in the meantime, the OAuth Working Group had started to work on the OAuth Security Best Current Practice. The OAuth Working Group tried to find a way to cope with the security issues that had been found in 2015, '16 in a way that's native to OAuth, right? We wanted to include those simple, native OAuth mechanisms in FAPI as well. And the third learning was that the profile we had developed was useful and interesting for people in sectors other than the financial industry, namely eHealth, eGovernment, and so on. The bottom line is, we thought it would be a good idea to develop a v2 of FAPI that is simpler to use for developers and more comprehensive, by also covering complex authorization transactions and grant management, the lifecycle of grants. What we did is we took the baseline profile, the former read profile, and made it a single profile that fulfills all the requirements of the usual or typical open banking application. And we removed the signed request object that we talked about earlier, and replaced it with a simpler mechanism, which is called pushed authorization requests. What's the difference? If you send a traditional authorization request to the authorization server, you just add URI query parameters. Simple. A pushed authorization request uses exactly the same encoding, but sends the request to a backend interface over a TLS-protected connection. That's really simple to implement, much simpler than a signature.
It uses the same encoding, but the security effect is dramatic, because there is no way to tamper with the request content in the front channel, since the content is not going through the front channel. You just get back a request URI, basically a random identifier, and you refer to that data package with it in the front-channel request. That was the first significant change: we got rid of the signed request object and replaced it with something that is much simpler to use. You can use it with your Postman. You don't need to have a crypto library.
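(A hedged sketch of a pushed authorization request: the same parameter encoding as a regular authorization request, pushed over the back channel, with only an opaque request URI going through the browser. URLs are illustrative and client authentication is omitted for brevity.)

```python
# Sketch: pushed authorization request (PAR). The parameters never
# travel through the browser, so nothing there can be tampered with.
import requests

AS_BASE = "https://as.example.com"  # illustrative

# Step 1 (back channel): push the authorization request parameters.
# Client authentication happens here, before any user interaction.
par = requests.post(f"{AS_BASE}/par", data={
    "client_id": "my-client-id",
    "response_type": "code",
    "redirect_uri": "https://client.example.com/cb",
    "scope": "openid payments",
    "code_challenge": "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM",
    "code_challenge_method": "S256",
}).json()

# Step 2 (front channel): the browser only carries an opaque reference.
authorize_url = (
    f"{AS_BASE}/authorize"
    f"?client_id=my-client-id&request_uri={par['request_uri']}"
)
```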
Vittorio Bertocci: Very nice.
Torsten Lodderstedt: Yeah. It also gives you another feature that most people overlook. You can authenticate the client before the user interaction starts, which means you can be really sure, in the user consent, that you're talking to the real client, because you already authenticated the client.
Vittorio Bertocci: As part of that first step. It makes a lot-
Torsten Lodderstedt: Exactly. I haven't experienced that very often in my career, that you replace something complex with something simpler that is also more powerful.
Vittorio Bertocci: My favorite metaphor for that kind of stuff is Roman numerals versus Arabic numerals. As an Italian, I had to play with Roman numerals in school, and trying to do operations with Roman numerals is so hard, whereas it's super easy with positional notation. But anyway, this is super interesting. I think we might consider having another episode specifically about v2. But unfortunately, our time is running out. Before we part ways, I wanted to ask you: what do you think the call to action for listeners should be? What should people do with v1? What should people do with v2? If I want to implement it, what are the things that can help me? All of that stuff.
Torsten Lodderstedt: Yeah, first of all, just to complete that: what we also added is a mechanism for JSON-based authorization request data, which allows us to really close that gap. When it comes to the different versions: if you're looking for a security profile, then one option is to use v1, which is really mature and supported by a lot of products. We have a conformance test suite, so most of those products are also certified, which is really great. It has been adopted not only by OB UK, but also by FDX and the CDR in Australia, which is a really great success. On the other hand, v2 is much simpler to use, as we just discussed. From my perspective, the people responsible for making the decision in a certain deployment or scheme should take a look at both and make a decision based on the pros and cons that I just illustrated. However, when it comes to more complex authorization requests and grant management, v1 doesn't have a solution, so you would need to implement a custom solution. If you want to have a standard solution, take a look at what v2 provides you with. At yes.com we are just adopting it, because we are not only running identity services but also use OAuth for authorization. We first implemented the solution v1-style, with a proprietary solution for the authorization request data, and then one year ago, when we saw the first version of the FAPI 2.0 baseline, we decided: well, this is so much simpler, let's go for it. I'm so happy we made that decision, because now we have the product in the market, it's super easy, and you can implement it on top of existing products. That's my take on v1 versus v2. Whatever people do, whether they pick v1 or v2, they can be sure that everything in the profile is based on the collective experience of a lot of really, really bright people. That's very important, I think.
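(To give a feel for the JSON-based authorization request data Torsten mentions, here is an illustrative payload in the style of the Rich Authorization Requests work; all field values are invented.)

```python
# Sketch: the kind of structured, JSON-based authorization data that
# simple scope strings can't express. The shape follows the Rich
# Authorization Requests examples; all values are invented.
authorization_details = [
    {
        "type": "account_information",
        "actions": ["read_balances"],
        "locations": ["https://bank.example.com/accounts"],
        "iban": "DE40100100103307118608",
    },
    {
        "type": "payment_initiation",
        "instructedAmount": {"currency": "EUR", "amount": "123.50"},
        "creditorName": "Merchant A",
        "creditorAccount": {"iban": "DE02100100109307118603"},
        "remittanceInformationUnstructured": "Ref Number Merchant",
    },
]
# Sent (for example) as a parameter of a pushed authorization request.
```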
Vittorio Bertocci: Yeah, that really makes a big difference. I guess, if I can try once again to summarize: if people need to interoperate with an existing system, then they should probably look at what that system supports, but if instead, as you described, you own your API, you are exposing it rather than consuming it, then you are in a position to pick the latest and greatest, which is of course v2 rather than v1. I guess that other people that own and expose APIs can follow your advice as well.
Torsten Lodderstedt: Yeah. In the end, it's the people that own the API that can make that decision, right? Yeah.
Vittorio Bertocci: All right. Fantastic. Thank you so much, Torsten, for your time. It was really, really interesting, and there is so much to unpack in there. I am pretty sure that we will ask you to come back, because you appear to be working on all the most interesting things that are going on right now. Expect one extra email from me, not too long from now. Thanks again.
Torsten Lodderstedt: Yeah. Thank you very much for having me here. It was really a pleasure to discuss with you and yeah, I'm waiting for your email.
Vittorio Bertocci: Thank you. Thanks everyone for tuning in. Until next time. The OpenID Foundation is a proud sponsor of the Identity, Unlocked podcast. Since its formation in 2007, the Foundation has been committed to promoting, protecting, and advancing the OpenID community and technologies. Please consider joining the Foundation and contributing to current Working Groups. To learn more about the OIDF, please visit www.openid.net. Thanks, everyone, for listening. Subscribe to our podcast on your favorite app or at identityunlocked.com. Until next time, I'm Vittorio Bertocci, and this is Identity, Unlocked. Music for this podcast composed and performed by Marcelo Woloski. Identity, Unlocked is powered by Auth0. Copyright 2020, Auth0 Inc., all rights reserved.
DESCRIPTION
In this episode of Identity, Unlocked, principal architect at Auth0 and podcast host Vittorio Bertocci interviews Torsten Lodderstedt. Torsten is the CTO of yes.com and an all-star contributor to the IETF and the OpenID Foundation. The interview centers on Torsten's work in the Financial-grade API (FAPI) Working Group.
FAPI is a security and interoperability profile for OAuth, and it was originally intended for use in open banking scenarios. Torsten explains how FAPI navigates two challenge areas of using OAuth in open banking, what one may find within the FAPI working group initiatives, and the differences between FAPI versions 1 and 2. Further, Torsten delves into some specific macro areas of FAPI, and discusses JARM (JWT Secured Authorization Response Mode). He details cryptography measures such as MTLS and their relation to FAPI, his thoughts on the future of FAPI, prominent features in the specifications (such as CIBA, or Client Initiated Backchannel Authentication), and helps listeners interested in FAPI to determine what version might best suit them. Of course, if listeners have to integrate with another system, then they must see what that system can support. But for the listener who owns their own API, Torsten’s general recommendation is to consider FAPI version 2!
To learn more about the FAPI working group, how to participate, and information about the specification, visit https://openid.net/wg/fapi
To learn more about OpenID Foundation’s Global Open Banking initiatives, visit https://fapi.openid.net
Season 2 of Identity, Unlocked is sponsored by the OpenID Foundation.
Like this episode? Be sure to leave a five-star review and share Identity, Unlocked with your community! You can connect with Vittorio on Twitter at @vibronet, Torsten at @tlodderstedt, or Auth0 at @auth0.
Music composed and performed by Marcelo Woloski.