Sharing data between entities such that no one person or firm can see another's information is a critical business challenge. An example is a bank that operates an electronic market but wants to assure its customers that it does not see their data. Another example is an insurer that needs to share information about fraudulent claims without breaching confidentiality rules.
Traditionally, data is only encrypted at rest and in transit, but not while it is being processed.
In an interview with Brian Pereira, Editor-in-Chief of CISO MAG, Richard Gendal Brown, CTO of R3, explains how his company has overcome this challenge through its confidential computing platform. R3 is a leading authority on distributed ledger systems and architectures. Earlier this year, R3 launched Conclave, a confidential computing platform that secures sensitive business data while it's being used.
Previously, Brown was the Executive Architect for Banking and Financial Markets industry Innovation at IBM UK. His previous roles with the company, for whom he worked for almost fifteen years, included Lead Account Architect for a global Investment Banking client and a consultant for IBM software products. Brown is a Chartered Engineer, holds an MBA with distinction from Warwick Business School and a first-class degree in Mathematics from Trinity College, Cambridge.
Edited excerpts from the interview follow:
What are the key challenges with data that enterprises across the world are grappling with right now?
If we think about our lives as individuals, when we interact online with companies or third-party services through our browsers, we feel secure because we've all been taught to look for that padlock and expect the TLS connection to be in place. That's the result of 10-20 years of consumer education. What I think people have missed, or never really thought about, in the consumer realm is that this security promise is limited. It promises that you really are talking to who you think you are. It says Facebook.com and there's a padlock, so you know you really are communicating with Facebook. But it says nothing at all about what they can do with your information when they receive it. You know you've sent your information to that third party, but the technology gives you no protection at all when it comes to what they can do with your information.
And then, if we move to the corporate realm, where you asked your question, businesses have the same problem as well. If I am a bank, for example, I have an obligation to the government or the regulator to scan my customers' transactions for signs of fraud, financial crime, and money laundering. I have that obligation but, of course, many frauds are committed by sophisticated individuals or criminals, who spread the fraudulent activity across multiple banks to disguise it.
As a bank, if all you have is the view of your own customers' transactions, you don't see enough to be able to spot these bad things and to protect your customers. And so, what banks would like to do is share that data with some central processor or analytics firm, for example, who can absorb that data from multiple firms and then analyze it to spot the patterns that would otherwise be invisible. But, of course, the same problem I just outlined in the consumer realm applies in the corporate realm as well, the second that high-value, critical, and sensitive customer data is sent from a bank to another firm. They know for sure that it has got to the other firm; they know it has been transported safely. But they know nothing about what happens to it at the other end. All they have is contracts and reputation, and maybe GDPR in the European Union.
Those are soft reputational and legal protections; they're not technological protections.
We see companies sharing data with other firms, either for analytics purposes or maybe because that firm could process it for them or could help them match trades. And in doing so, there's a risk, because they can't technologically control how that data is used. In some cases, they simply don't risk sending it in the first place, because what might happen if it were lost is just too awful to contemplate.
There are lots of situations where a company really would like to share data with other firms for legitimate and valuable reasons. But until now, there's been no good way to technologically control what happens to that data once it leaves your premises.
But aren’t regulations like GDPR, which imposes stiff fines like 4% of global turnover, supposed to take care of that problem?
Oh, for sure! The intensity of those regulations is entirely to the benefit of consumers. But think about it from the firm's perspective. Let's imagine you are a well-run, ethical firm that wants to comply with GDPR (say a bank or an insurer that holds customer data as a legitimate part of its business), and you want to share that information with other firms. GDPR rightly imposes legitimate constraints on what you can and can't do. But when you look in the technological toolbox for ways to comply, that toolbox is lacking: there's nothing there until we get to confidential computing, which we will come to in a moment.
What do you have? You have audit, you have contracts, you have the legal system. But there is nothing technological you can do; there's no padlock, no box you could put that data in, to control what happens to it when it arrives at the other firm. And so it's really hard to comply with these regulations, even when you are a legitimate, honest, ethical firm trying your best to do so.
How does a Confidential Computing Platform help to solve this challenge?
The microprocessor and cryptography communities have been working to solve this problem for decades. There are software techniques such as zero-knowledge proofs, homomorphic or fully homomorphic encryption, and secure multi-party computation (MPC). These are various cryptographic techniques that attempt to solve parts of these problems. There is also an approach based on hardware, known as confidential computing. Intel has a variant called Software Guard Extensions (SGX), AMD has a variant, and so do Arm and IBM on their mainframe systems. They all work slightly differently.
So, what do they offer?
Imagine you are a service provider. Perhaps you designed an algorithm to spot patterns of financial fraud across different transactions for different banks. It would help drive down crime in the market if you were able to process and analyze lots of bank customers’ transactions. It would be a valuable and important service. So, your challenge now, as a service provider, is that you need to persuade the banks to send these transactions to you, and to believe it would be safe to do so.
What confidential computing gives you, as a service provider, is a new capability that you didn't have before. You can show the banks the blueprint of your algorithm; you can show them your code. They can review it to confirm that your algorithm looks legitimate and does what it says it does: it doesn't steal the customer data and it doesn't inadvertently leak the information. Their auditors or their chief security officers can validate that the algorithm is legitimate.
Confidential computing allows you to cryptographically prove to those banks that this is the algorithm that is running, that it is what will process their data, and that no other algorithm will be able to access it.
It's as if you, as a service provider, are giving your customers an X-ray view into your systems. They can now see what algorithm is running and satisfy themselves that it is legitimate and safe. They can then encrypt their data with a key that only that algorithm knows and send it to you. Now we have seemingly achieved the impossible: you can aggregate data from multiple sources, with each of those data providers safe in the knowledge that the only thing your system can do is execute the logic that you've promised and nothing else. Not even you, your employees, or your data center operators can subvert the execution or see what is happening. Part of this is a promise coming from the underlying hardware: it allows the chip on which the code is running to remotely convince third parties that this is the algorithm that's running, and that this is the only thing that can see your data.
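The attestation-then-release flow Brown describes can be sketched conceptually. The toy Python fragment below is not Conclave's actual API and all names in it are illustrative assumptions; it simulates the core decision with a SHA-256 hash standing in for the hardware's "measurement" of the enclave code. The data provider releases its data only when the measurement reported by the remote chip matches the hash of the audited source it reviewed:

```python
import hashlib

def measure(code: bytes) -> str:
    # At load time, the hardware computes a cryptographic hash (a
    # "measurement") of the enclave code; SHA-256 simulates that here.
    return hashlib.sha256(code).hexdigest()

# The source code the bank's auditors reviewed and approved (illustrative).
APPROVED_CODE = b"def analyze(txns): return [t for t in txns if suspicious(t)]"
EXPECTED_MEASUREMENT = measure(APPROVED_CODE)

def bank_releases_data(reported_measurement: str) -> bool:
    # The bank compares the measurement in the remote attestation report
    # against the hash of the audited code; only a match releases data.
    return reported_measurement == EXPECTED_MEASUREMENT

# An honest service provider loads exactly the audited code:
assert bank_releases_data(measure(APPROVED_CODE))

# Any tampering changes the measurement, so no data is sent:
assert not bank_releases_data(measure(b"def analyze(txns): exfiltrate(txns)"))
```

In a real SGX-style deployment the attestation report is signed by the hardware vendor's key chain rather than trusted directly, and the data would then be encrypted to a key held only inside the attested enclave; this sketch shows only the measurement-comparison decision.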
About the Interviewer