Episode 6
ZK, TEEs and Verifiable Compute w/ Vanishree Rao (Fermah)
December 4, 2025 • 1:13:05
Host
Rex Kirshner
Guest
Vanishree Rao
About This Episode
Rex talks with cryptographer Vanishree Rao, founder of Fermah, about what verifiable compute actually unlocks—and why the real story isn’t “ZK eats the world” just yet. They get into ZK vs trusted execution environments (TEEs), how oracles really fail in practice, why restaking got ahead of real demand, and what a universal proof market like Fermah is doing differently. It’s a grounded look at the compute layer under rollups, oracles, and AI agents—and where the next wave of applications might actually come from.
Transcript
Rex (00:34.028)
Vanishree, welcome to the Signaling Theory Podcast.
Vanishree Rao (Fermah) (00:45.525)
Amazing. Great to be here, Rex.
Rex (00:48.058)
Awesome. So I cannot wait for this conversation about ZK and distributed proving and all of the things that really underlie the next generation of blockchain and distributed compute. But before we jump right into it, can you just give a really brief introduction to who you are, what you're working on, and what brings you to this conversation?
Vanishree Rao (Fermah) (01:09.067)
Yeah, sounds good. My name is Vanishree. I'm the founder of Fermah. Fermah is a universal proof market. More about my intro: my background is in cryptography, a PhD in cryptography focused on ZK, MPC, etc. I've architected blockchains, including the Mina protocol. And now we are focused on...
Rex (01:28.182)
Awesome. All right, great. So let's just jump right in. The first thing I've been cooking on and can't wait to ask someone who's arms deep in the ZK space is, frankly, the big hairy question that's in front of us. For years now, I have been so bullish on this idea that
ZK provides verifiable compute. And we can unpack the different types of ZK, the zero-knowledge properties versus the verifiability properties, and so on. But at the end of the day, I have always been bullish on this idea that ZK allows us to essentially compress computation, project it into a blockchain, and then verify it, so that you can effectively
project arbitrarily complex compute into a blockchain. And I thought that was gonna unlock the next cycle, the next whole craze of blockchain, because for the first time ever, we were not gonna be limited to these incredibly computationally simple things like balance changes or maybe x times y equals k computations. We could do real modern computations.
And I'm looking around here at the end of 2025, and I'll just be frank with you: aside from ZK getting better and better at the base case of proving blockchain transactions, I don't really see any of this next-generation unlocking that I thought we'd see once these primitives became so locked in and so performant. So I guess my question for you, to kind of put a bow on this, is:
verifiable compute seems like such a strong value prop, but frankly, where are the applications?
Vanishree Rao (Fermah) (03:25.28)
Well, that's a great question. I think it hits at what the right other piece of the puzzle is. ZK is one piece of the puzzle. This other piece is probably not everything in the world, but there are some things, and your question is: what are those things?
So it comes down to effort versus value, right? You want your effort to be this much and your value to be this much; only then does the thing make sense. When your effort becomes this much and your value is only this much, then that effort does not make sense. And ZK is effortful: it takes time to prove.
There have been advancements in proving speed and proving cost. With those advancements, the effort goes lower and lower, and things that were out of reach also become amenable to ZK. Now, what are the use cases where it does make sense? It's a product question, right? Somebody comes with a product, somebody comes to achieve something
on chain, and they come with a preset UX and a preset set of constraints for the product to be successful. Proving time adds to the user experience; proving cost adds to the user experience. Given those, ZK may or may not make sense, right? It makes a great amount of sense in the rollup case, and in use cases where you have a
bunch of micro-transactions and on-chain work is expensive, because every node in the validator set pretty much does the same amount of work. Batching micro-transactions and proving them makes a lot of sense, because even when the prover's effort is high, the effective effort of the network decreases: everybody doesn't need to redo the verification of every single one of those micro-transactions.
Vanishree Rao (Fermah) (05:48.98)
There is one person who does some extra effort to prove, and everybody else only needs to do a constant, tiny amount of effort to verify that proof. So it makes sense there. I think this is the use case that makes the most sense to me. I do not believe that ZK is the right tool for AI. I do not believe that ZK is the right
tool for off-chain data access, for example, which is a very recurrent operation in pretty much all useful apps. They need price data. They need to know the health of some credit that has been given to a person. All these things are off-chain data access, or accessing beyond the local state of the application.
For all these things, ZK is not the right solution. Why? Because it's either too slow, too expensive, or just overkill. There are other solutions that fit. You come with a product question; it's a constraint system, and you need to solve that constraint system. The solution of that constraint system is not ZK at that point. The solution, I think, is TEEs.
Why? Because a TEE gives you enough security for those use cases, and it hits the constraints on time and cost perfectly. Take off-chain data access, for example. Anyone who is running oracle networks, I think they'll go a long way if they adopt TEEs. There are so many gaps in current oracle networks, but
they can actually solve a lot by deploying the machines that do off-chain data access, URL access, in a TEE, giving attestations, and having those verified. That level of verifiability, TEE-attestation-level verifiability, is enough for these things. All these attacks that are happening on TEEs are mostly around the privacy of TEEs. They're not around the
Vanishree Rao (Fermah) (08:14.205)
integrity guarantees of TEEs. So this is why I do believe that TEEs are a good solution. All in all, to summarize my answer: I described this mental model of effort versus value, and I think it keeps changing. I do think that ZK makes sense for some use cases and TEEs might make sense for others. There are also other modes of verification; ZK is only one way of verifying.
Verifiable computation, verifying the execution of computation on chain, is important, and ZK is only one way to get it. And it's great; it works great for some things.
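To make the effort-versus-value point about batching concrete, here is a rough back-of-the-envelope model in Python. The constants (validator count, proving overhead, verification cost) are invented for illustration and are not benchmarks of any real prover:

```python
# Illustrative model of when batching + proving reduces total network effort.
# All costs are in arbitrary units; the constants are made up for illustration.

def network_effort_naive(num_validators: int, num_txs: int, exec_cost: float) -> float:
    """Every validator re-executes every transaction."""
    return num_validators * num_txs * exec_cost

def network_effort_zk(num_validators: int, num_txs: int, exec_cost: float,
                      proving_overhead: float, verify_cost: float) -> float:
    """One prover executes and proves the batch; everyone else checks one proof."""
    prover = num_txs * exec_cost * proving_overhead  # proving costs far more than execution
    verifiers = num_validators * verify_cost          # constant, tiny per-validator check
    return prover + verifiers

naive = network_effort_naive(num_validators=10_000, num_txs=1_000, exec_cost=1.0)
zk = network_effort_zk(10_000, 1_000, 1.0, proving_overhead=500, verify_cost=5.0)
print(naive, zk, zk < naive)  # 10000000.0 550000.0 True
```

Even with a 500x proving overhead for the single prover, the network's total effort drops by roughly 18x here, which is the amortization Vanishree describes; for a single cheap operation (one data fetch), the same overhead dominates and the trade goes the other way.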
Rex (08:52.8)
Yeah, no, I mean, I think that is definitely the right way to set the stage for the conversation: at the end of the day, ZK is just one of multiple tools that can provide the same value, which is taking the output of a computation and verifying that that output came from running the computation, without manipulating it or in some way affecting
what the outcome is. You know the code that's supposed to be run, you know what the answer was, and you can verify, without having to actually be the operator, that that answer came from that code. And that can happen through ZK, which happens entirely through software, or it can happen through TEEs, which is partially through software but also has a hardware component. And in this entire
world of verifiable compute, I think one of the worst things that happened to ZK is the name zero knowledge, because zero knowledge almost has nothing to do with verifiable compute. It just kind of comes out of the same research, or, I don't know, maybe you as a researcher might push back a little bit on that. But the point is that what we're really interested in in the blockchain space is less about how we
use ZK, or even TEEs, for the privacy- and secrecy-enforcing aspect, and much more about, as I like to say, how we allow resource-constrained systems, of which blockchains are the ultimate resource-constrained systems, to run arbitrarily complex compute without actually running it themselves. And I think that's
what we're talking about here with verifiable compute. So I'd love to hear if you have any more riffs on that, and I want to bring the conversation eventually back to what the actual applications are that would use this. But let's spend a few cycles on ZK versus TEEs. Because, look, I'll be honest with you: when I look at TEEs, I think this is
Rex (11:13.846)
almost anathema to blockchains. Like, let's go into a space that is using cryptography to say that we don't need to trust anyone, and then add in this explicitly, it's in the name, right? Trusted execution environments. Add explicit trust back into the system, and then have some cryptography, some encryption, to try to remove that trust again. So, yeah,
respond.
Vanishree Rao (Fermah) (11:44.605)
No, I think that's a great point to unearth. I think there is a narrative around cryptography being the holy grail and TEEs being awful, and there is so much about it that we
take at face value: cryptography as a godsend and TEEs as untouchable. Why? I think it's
because of all the privacy hacks, because of all the ways people have figured out to break the privacy of TEEs. But it still remains the case that a TEE gives you that level of security for integrity. Just like ZK, as you explained really well, gives you two things, privacy and integrity, a TEE also gives you two things:
privacy and integrity. The privacy of TEEs is questionable. The integrity of TEEs is not as strong as ZK's. However, TEEs come at low cost and low latency, and there are applications where low cost and low latency matter relative to the value you're protecting. So here's another model I wanted to bring up, right, Rex? The model is:
the amount of money you need to spend to break a fort has to be more than the amount of money it's securing. Only then will no one break it.
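The fort model boils down to a single inequality between attack cost and value secured. As a trivial sketch in code (all numbers are invented for illustration):

```python
# The "fort" model: a rational attacker only attacks when the cost of breaking
# the system is below the value it secures. Numbers below are invented.

def is_rational_attack(cost_to_break: float, value_secured: float) -> bool:
    """True when a profit-motivated attacker would bother attacking."""
    return cost_to_break < value_secured

print(is_rational_attack(cost_to_break=10_000_000, value_secured=1_000_000))   # False
print(is_rational_attack(cost_to_break=10_000_000, value_secured=50_000_000))  # True
```

As Rex notes next, this only models profit-motivated attackers; vandalism falls outside it unless you can price the attacker's non-monetary payoff.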
Rex (13:37.966)
I mean a lot of people just like to watch the world burn.
Vanishree Rao (Fermah) (13:41.907)
Okay, there is that too. Well, if you can associate a value with what they get out of doing that, if we can quantify that, the model still holds. So, let's see.
ZK and TEEs are very comparable, because they offer similar things. But a TEE offers one more thing: you can associate a notion of identity with the source. You can see that in a network of AI agents around the world, one of the things that will be very important in the future is having a notion of identity for an AI agent. Why? Because it'll help
you have SLAs with an AI agent. You can say: put up money, give me some economic security, and if you misbehave, I'll slash you. And on the positive side, they can also build reputation. It's all tied to identity, and TEEs give you that identity. TEEs don't give it at the base level; you add the notion of identity on top, but still,
TEEs give you that. The point I'm trying to get out is that we should not completely set TEEs aside, because there are use cases where they are useful, where they actually hit all the constraints.
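The stake-and-slash SLA idea for identified agents can be sketched as a toy model. Everything here is hypothetical: the identity string, the 50% slash fraction, and the one-point reputation reward are invented parameters, not any real protocol's rules:

```python
# Toy stake-and-slash SLA: an agent with a stable identity posts a stake,
# gets slashed on SLA violations, and accrues reputation for good work.
# All names, amounts, and rules here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Agent:
    identity: str          # e.g. an identity tied to a TEE attestation key
    stake: float
    reputation: int = 0

@dataclass
class SLAMarket:
    slash_fraction: float = 0.5
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.identity] = agent

    def report(self, identity: str, met_sla: bool) -> None:
        agent = self.agents[identity]
        if met_sla:
            agent.reputation += 1                     # positive side: build reputation
        else:
            agent.stake *= (1 - self.slash_fraction)  # economic penalty for misbehavior

market = SLAMarket()
market.register(Agent(identity="agent-key-abc", stake=100.0))
market.report("agent-key-abc", met_sla=True)
market.report("agent-key-abc", met_sla=False)
print(market.agents["agent-key-abc"].stake)       # 50.0
print(market.agents["agent-key-abc"].reputation)  # 1
```

The point of the sketch is the dependency Vanishree names: both the slash and the reputation score are only meaningful if the identity is stable, which is what a TEE-backed key can provide.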
Rex (15:31.406)
So, and again, maybe I shouldn't belabor the point, but just from first principles, right? When I think of what the value of a blockchain is, and we'll just use Ethereum here, it is the idea that I, as a relatively non-technical person, can participate directly in the network, and by participating directly in the network, I am
both contributing to and guaranteeing the credible neutrality of the network. And that's the fundamental purpose of every blockchain, whether or not they're even trying to achieve it, right? If you're not doing that, you're essentially creating financial applications on AWS and then wrapping them in some sort of blockchain wrapper for VC or regulatory reasons. You have a...
Vanishree Rao (Fermah) (16:20.562)
That's a great point. Yes, that's a great point. So you're right that we shouldn't have one TEE that is doing everything. We should still have permissionless access to the service, the blockchain service, and we should have
a convincing model that tells us that, despite these limitations of TEEs, the system is trustless. Let's get back to this, but there is one point about cryptography, which is that you're still relying on an assumption. You're relying on the assumption that nobody has figured out how to factor a product of prime numbers. There is an assumption that
people don't have anything better than very inefficient algorithmic solutions for finding the discrete log of a number in a large field. There are these assumptions. We don't know; maybe somebody has a solution. We just don't know, right? The same is true for TEEs. Nobody has figured out, beyond a certain level, how to break the integrity of TEEs. That's an assumption.
It's all about the history. There isn't something so special about the math here. It is just that, over history, people have tried and haven't found a solution. It's the same thing. At a basic, core level, it's the same thing. And now, having said that, coming back to ZK...
Sorry, I lost that. What was that? Oh no, I remembered. Sorry. So with TEEs, you're not letting go of permissionless access to something. You're not letting go of the level of security you can give to a system
Rex (18:17.42)
No, no, no worries. I'll pick us back up here.
Vanishree Rao (Fermah) (18:45.203)
for certain applications. You're not letting go of it as long as you architect the system correctly. You're 100% right: you should definitely not say, okay, here is a TEE, it'll give you enough security, we have deployed it on AWS, and that's it. I agree with you; that is against the direction we are all going. Instead, think about something like this. A thought experiment.
Think of a network of validators where they also have TEEs for certain use cases, and those TEEs run certain operations. Now, something that runs for too long does not need to run on chain.
This is exactly why people are using ZK: something that is too expensive does not need to run on chain; you create a ZK proof. What I'm saying is, do the same thing with TEEs. Something that runs too long, put it in a TEE and get an attestation. People will reach consensus on the attestation being correct. So there is still that level of security. Not one TEE: a network of TEEs, a network of validators, doing consensus on execution. But you reduce the overall effort of the network
by putting some operations inside a TEE.
Rex (20:15.03)
I hear you. I think so, because, you know, I have this very strong belief that at the end of the day, it is really, really hard to get people to run validators, whatever kind of validators they are. I'm an Ethereum home staker, and even though it's not that big a deal, it just sucks, you know? There's really no upside to it.
You kind of have to be a fanatic who really believes in crypto and the credibly-neutral idea. Ethereum struggles so much with just maintaining this validator set, and really nobody else is able to maintain any sort of credibly neutral, let alone home-staker, validator set. I really have this theory that when we look forward 10 or 20 or 50 years, we're not going to have all of these networks of people running nodes. We're going to have
one network of people running nodes. And why I kind of fell in love with the EigenLayer idea when it first came out, and why I'm a little bit disappointed at the turns it's taken, is because what I understood EigenLayer to be saying is: we understand that long-term there's going to be one network of computers that run distributed software, and we're going to basically use the Ethereum validator set
and allow them to opt into more and more services based on the amount of compute they bring to the table. And so in that future, I can see this world where some of the Ethereum validators may have enough money to buy specialized hardware, like TEEs, that can run these
highly computationally intense things off chain and then put the results back on chain using attestations. And maybe that's how these things work, and that's fine. But to me, the best-case scenario is that TEEs are really just a stopgap solution while the performance and cost of ZK isn't quite there, and long-term the ideal case is that ZK is
Rex (22:36.236)
you know, just very minimal overhead, can be run on basically any computer. We see the extended-Moore's-law-style improvements we've been getting on ZK. In that world, do you think TEEs really have a role, a world where ZK doesn't require that much overhead? Or do you still see that there are fundamentally different things that ZK will be good for
and TEEs will be good for, and that that's not gonna change over the long term?
Vanishree Rao (Fermah) (23:12.132)
I think ZK will start eating more and more, and then it will stop. With ZK advancing, it is eating more and more of the computation that it can handle, but then it'll stop. The reason it'll stop is that I think there is a fundamental limitation on what you can do with ZK optimization. These are complexity-theoretic limits.
Rex (23:16.141)
Mm-hmm.
Vanishree Rao (Fermah) (23:41.307)
I don't think we know the exact limits, or that we have reached the optimal limits of ZK, but there is a limit. And the reason there is a limit is in how we get the benefit of ZK. We get it by saying: say Rex does some computation,
and I need to verify it. How do I really reduce my work? By making Rex do a little bit more work, to make his explanation super easy for me. You have to do that little bit more work; in complexity theory, you have to do that extra work. And that extra work might make sense for certain things and just not make sense for other things.
Off-chain data access is such a good example. It is such a simple operation that ZK-proving the entire thing can be too much, even as proving costs get super close to the computation itself. That's what I think:
there are limits on how far you can go.
Rex (24:58.093)
Yeah.
Rex (25:01.666)
Yeah, I hear that. I think the counter to that would be: look back at what they were saying the limitations of compute would be in the nineties. The one thing we're good at is destroying theoretical limitations and getting better and better, to the point where, sure, maybe those limitations exist, but because we can process things so fast, they are not important enough to block
progress forward. And I guess the thesis that I want you to ultimately respond to is this: I understand the concept that maybe TEEs are the appropriate way to implement a lot of this functionality we care about in the short term, and then in the long term we can go back and replace that with ZK cryptography. And one, I would like to hear if that's,
for many of these things, the path you see us going down. And if so, my question is: why are we bothering to do the TEEs in the short term? Frankly, I just look around this crypto space and we talk about things like crypto-economic security, and then we don't actually value it. We talk about
how important it is to have X amount of capital staked in EigenLayer, and then, okay, there's never been a slashing event. These concepts that seem so important for adding levels of security turn out to not actually be important today. And so I guess what I'm nervous about is: are we going down this trusted, centralized route and building in
vendor lock-in or incumbent lock-in with TEEs as a stopgap solution, when really that's going to cause us problems down the road, for value that is really unclear today?
Vanishree Rao (Fermah) (26:58.546)
Hmm.
Vanishree Rao (Fermah) (27:07.858)
There are situations that have happened in just the recent past that make me think that having TEE-based security is worth it for
these products, which don't even have any level of security beyond some degree of reputation. Think of oracle networks. What do they do, typical oracle networks? You ask for a price feed. They have a chosen set of maybe 10 machines. They take a median and give it to you. Now, what is the guarantee that they will
actually give us the real value and not a stale value? What kind of SLA will they give us if something goes bad? What kind of resolution are they going to provide? How do I even know that this is exactly the time it was accessed? How do I know
that this was even accessed by a machine that is...
What if it is making up a little bit of volatility by itself? It has some volatility model, creates volatility within that band every minute, and gives me approximate values that way, without actually doing the work. We don't know. So Moonwell recently lost a million dollars.
Vanishree Rao (Fermah) (29:07.825)
Stream Finance lost a ton of value recently, ninety-something million dollars of user funds, right?
These are not one-off things. A million is significant; upwards of 90 million is significant. And these are not one-off situations; they happen over and over again. This off-chain reliance, where there is no clarity on what kind of security the off-chain networks are giving, that is what I'm pointing at. And note that these things are not run by validators; they are run by off-chain networks,
and those don't provide any decent level of security. This is why these hacks keep happening. Things break not because somebody built a consensus protocol and had some chain-quality parameter delta wrongly set, and that's why the chain failed. No, we barely hear about anything like that. All we hear is: a bot mishandled something, an oracle went down.
It's always the off-chain thing, because chains are built to be secure. They're unfortunately made to rely on off-chain networks, and off-chain networks don't give that level of security. They can get a lot farther by adopting TEEs and by having validators do that work. And why is it good for validators? You made a great point: it's very hard to get people to run validators, because there is OPEX and
CAPEX, and the revenue they're getting is not matching. You can increase revenue opportunities by having them do more operations, and you can also get that level of on-chain security for those operations. On the other hand, for these
Vanishree Rao (Fermah) (31:15.857)
long-running tasks, we are of course not going to ask them to have a machine sitting aside all the time for something that comes only every now and then. So there are two questions. One: is this direction good for the space? And two: can we architect a solution where it makes sense for the parties involved? So those are the two questions, right?
Rex (31:40.526)
Mm-hmm.
Vanishree Rao (Fermah) (31:43.25)
With respect to restaking protocols, they have demand and supply. If they increase the supply so much that they don't have enough demand, of course it doesn't make sense for them. So it's about maintaining this balance: a high amount of money inside restaking does not give these people enough value if there isn't demand.
So it's the product's job to keep this aligned at every single point in time, to increase the utilization rate of every machine involved, so that they are actually making money at any given point in time, really making money with respect to OPEX and CAPEX. And I do think there is a separation of sorts between these two.
This is why I think there is a direction we can go, and the product has to figure this out.
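The OPEX/CAPEX point can be made concrete with a toy break-even model. All figures here are invented for illustration; they are not Fermah's, EigenLayer's, or any operator's real numbers:

```python
# Toy economics for a machine in a proving/restaking market: it is only worth
# running if utilization-driven revenue covers OPEX plus amortized CAPEX.
# All numbers below are invented for illustration.

def monthly_profit(utilization: float, revenue_at_full_util: float,
                   opex: float, capex: float, amortization_months: int) -> float:
    """Profit per month at a given utilization rate (0.0 to 1.0)."""
    revenue = utilization * revenue_at_full_util
    return revenue - opex - capex / amortization_months

# Low utilization (too much supply, not enough demand): the operator loses money.
print(monthly_profit(0.10, revenue_at_full_util=2_000, opex=300,
                     capex=12_000, amortization_months=24))  # -600.0
# High utilization on the same machine: now it is profitable.
print(monthly_profit(0.80, 2_000, 300, 12_000, 24))          # 800.0
```

This is the alignment problem Vanishree describes: the fixed costs don't change with demand, so a market that over-recruits supply pushes every operator's utilization, and therefore profit, below break-even.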
Rex (32:49.492)
Mm-hmm. Yeah. So, okay, before we move on to the restaking supply-and-demand piece, which I think is important: I think it's impossible not to acknowledge that EigenLayer, and then all the follow-ons, got way overhyped. Everyone wanted to put their capital in, and then there wasn't enough demand to pay real yield against that capital, and that has caused the kind of bubble-like nature around it.
So we'll get there in a second. But just on this example you used of an oracle network: when I look at an oracle network, it is totally true that today, how it works is, off chain, a single computer will bundle up a bunch of inputs, maybe from third parties, maybe from their own work, whatever, right? They'll bundle up a bunch of inputs,
then they'll find some way to consolidate all the inputs into one thing, whether it's the median or the average or whatever. And then they take that value and post it on chain, right? And you are correct that you have no idea how they did that.
So anyway, that's the whole flow. Then I think what you're saying is that we can introduce a TEE into this architecture by saying: okay, oracle endpoint, we want you to have a TEE, so that we know you're actually taking in all the inputs from all the places you say you're taking them in from, and you're running them against the code that you said you're gonna be running them against. And because you used the correct chip that
gives us an attestation, and you've given us a copy of the source code, we are able to verify that, based on the inputs you got, you actually ran this code, you got this output, and now it's on chain.
Vanishree Rao (Fermah) (34:48.237)
Exactly.
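The flow Rex just described can be sketched as a mock in Python. To be clear about the assumptions: a real TEE attestation is a hardware-signed quote over a measurement of the enclave code, verified against the chip vendor's keys; here a plain hash stands in for that signature, so this only shows the shape of the check, not real attestation security:

```python
# Toy mock of an attested oracle: the "enclave" binds its code measurement,
# inputs, and output into a report, and the verifier recomputes the binding.
# A SHA-256 digest stands in for a vendor-signed attestation quote.

import hashlib
import json
import statistics

# Measurement (hash) of the enclave code the verifier expects to be running.
CODE_HASH = hashlib.sha256(b"median-oracle-v1").hexdigest()

def enclave_run(price_inputs: list) -> dict:
    """Inside the 'enclave': aggregate the inputs and emit an attested report."""
    output = statistics.median(price_inputs)
    payload = {"code": CODE_HASH, "inputs": price_inputs, "output": output}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "attestation": digest}

def verify(report: dict) -> bool:
    """Verifier side: check that the report binds code, inputs, and output."""
    payload = {k: report[k] for k in ("code", "inputs", "output")}
    expected = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return report["attestation"] == expected and report["code"] == CODE_HASH

report = enclave_run([100.0, 101.5, 99.8, 100.2, 250.0])
print(verify(report))     # True: untampered report checks out
report["output"] = 250.0  # tamper with the posted value after the fact
print(verify(report))     # False: the attestation no longer matches
```

In the real setting, the signature (not a recomputable hash) is what stops the operator from tampering and re-binding: only genuine hardware running the measured code can produce a valid quote, which is the "correct chip plus published source code" property Rex describes.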
Rex (34:49.496)
But I guess my question to you is, what's the value of that? Because at the end of the day, all of these hacks or all of these problems, really, aren't they upstream problems? How often is there an issue that someone actually hacks into the code of the Oracle network and then is able to manipulate the outputs and put in bad data that leads to losses? That's not really what's happening.
Vanishree Rao (Fermah) (35:21.336)
I think it is about knowing...
integrity even before you use the info. Let's say you get some info and you use that info, right? You get info from these articles. Let's say they look at, let's not consider price feed because it can be very time sensitive and there are applications that you may not even have time to verify this. Instead, let's think of a use case where you're a prediction market.
Rex (35:29.838)
Mm-hmm.
Vanishree Rao (Fermah) (35:55.493)
where you want to see who won an election. And there is an oracle that goes and fetches this info. Now, the Venezuela election happened, and there was an issue with how Polymarket resolved the market because of this off-chain reliance on an external oracle network. Now,
why this can be significant: this issue of relying on external networks creates a bigger issue of misaligned incentives between your core protocol and the external protocol. What is the goal of the UMA protocol? Make money.
What is the goal of Polymarket in this situation? Resolve the market correctly.
And there is a possibility of them clashing, is what I'm saying. They are working together, but there is a possibility of them conflicting. Exactly that situation was created in the Polymarket and UMA Protocol collaboration at that time.
What I'm trying to say is that when this off-chain data access is not time-sensitive, you can take that input from a TEE. Instead of relying on an external oracle as-is, change the oracle network, change UMA protocol's architecture, to have these machines running in
Vanishree Rao (Fermah) (37:55.544)
a TEE, actually doing the access, and then giving an attestation, and then Polymarket using it. That already changes the game. You already reduce the occurrence of such fiascos. Because...
Rex (38:08.214)
Why? Because in this example, what you're kind of gesturing at is that the inputs to the UMA protocol might want to give a false answer, because they're getting paid to give a false answer, right? And so in that case, it doesn't really matter if it goes through a TEE or not; they can still run a bad answer.
Vanishree Rao (Fermah) (38:23.065)
Yeah. Yeah.
Vanishree Rao (Fermah) (38:31.471)
Polymarket can verify, and Polymarket can verify before taking this input. UMA gave input earlier, and they're giving input now, but without a TEE, Polymarket didn't have a way to verify. Now they have a way to verify before accepting it.
Rex (38:48.28)
But they're really only able to verify that the answer came from the UMA network, right?
Vanishree Rao (Fermah) (38:54.007)
No, you can already verify via the attestation. Great question; it gets at the actual value proposition of this direction, right? The attestation says that the code you ran actually accessed this URL, which is an official place that announces who won. So that's where they're accessing. These oracles need to access that URL to figure that out,
and that's where they accessed.
Rex (39:26.604)
I guess in that case though, if there is an official URL where you can access this information, why does Polymarket need UMA at all? Why don't they just access it?
Vanishree Rao (Fermah) (39:34.288)
You hit the biggest gap of current blockchains all at once: they're all synchronous. They cannot do off-chain access themselves. That is exactly why we needed a band-aid in the name of oracles, and it became a huge industry. Why? Because it's such a repeated operation. This is why we have oracles: chains just can't.
Rex (39:58.253)
Mm-hmm.
Rex (40:01.678)
So I guess what you're saying is that the smart contract that Polymarket deployed is not able to verify that whatever result UMA is putting down came from a specific location; that's not verifiable.
Vanishree Rao (Fermah) (40:21.645)
Because of a problem in UMA. There is zero problem in Polymarket; Polymarket has its own architecture. But whatever the UMA protocol is doing, they don't have any guarantees that they're doing the work correctly.
Rex (40:25.027)
Mm-hmm.
Rex (40:35.598)
Yeah, no, no. Yeah, that makes sense. I think, you know, another way people have solved it before is that Chainlink tries to solve it by staking LINK tokens, and then it's a slashable offense if they can attribute malicious intent. So I think that is probably a good segue to talking about why restaking is so important in this, because verifiability by itself
doesn't mean much, right? You need to be able to verify, and if you verify that something was incorrect, there need to be consequences, or else what are we even doing here? One consequence is, well, you can just reject the answer. But I think what we've been seeing with restaking is: hey, let's put some economic value behind your promises, and then if the promises are verifiably incorrect, we
have something to punish you with. So one, do you think that's a good explanation of how all these things work together? And two, I'd love to get your take on the evolution of the restaking space. As it exists in 2025, do you think we're delivering against the promises we had when these ideas were so new?
What's the lay of the land of restaking and verifiable compute in 2025?
Vanishree Rao (Fermah) (42:07.587)
I think there is a big gap between restaking and verifiable compute. I would consider those to be two separate things. There might be a way to go from restaking to verifiable compute, but here is the more fundamental relationship: between verifiable compute and economic security. You can get economic security through restaking, or just directly.
It comes down to how you design the mechanism of your market, and that design can be strengthened in some cases by having the people who deliver the work put up some stake.
In my view, there is a strong correlation between verifiable compute and economic security, where economic security is one of the tools to make this work. It's just one of the tools — one component in the mechanism of an entire market. Restaking is completely aside from that; I think of it as a different narrative.
Rex (43:30.594)
Sure. I take your point that economic security is really the tie to verifiable compute, while restaking is just one type of cryptoeconomic security — there are others, right? Like insurance; off the top of my head, insurance is what I thought of. So I guess, when you think about verifiable compute, how much do you think —
Vanishree Rao (Fermah) (43:38.499)
Yeah. Yeah.
Rex (43:59.927)
do we need both sides of this coin, both the verifiable compute and the economic security, to build systems that provide real value above and beyond what we had with prior technologies? Or do you see verifiable compute as most of the prize here, with economic security really only important for super-high-value, maybe financial, types of applications?
I guess, how much do you understand these to be two things locked into one story, versus two things that just have a relationship?
Vanishree Rao (Fermah) (44:37.782)
I see. I think of this as assurance. Think of assurance as a box: if you want to take somebody's service, you need to see the box filled. And there are different ways of filling the box. One is to fill the whole thing with ZK — you don't need any economic security at all. The fact that ZK
has the soundness property means it doesn't matter how the work was done: as long as I can verify the proof, I really don't care how the work was done. So ZK is that assurance; you don't need economic security there. The other extreme is: I have no idea how to verify the compute you're doing. No idea. Please put money behind it.
That's the other extreme. And there are middle grounds. Like we talked about TEEs — maybe there is not enough confidence in just a TEE, so put a little bit of money behind it too. You can associate some probability with which it can get broken, and you fill part of the box with money — cryptoeconomic security.
Another way is doing the same work over and over: giving it to multiple randomly chosen people and comparing. I think Hyperbolic has a protocol called Proof of Sampling, and their core idea is to verify whether an AI computation was correct. It's hard to ZK-prove, and hard to run full-blown consensus on, because having everybody do the same work is expensive. So
they make a random choice of different people, those people do the work, and then you match the results and see whether they are the same. You can do that. I think my mental model, the way I like to say this, is: look at this as a quiver of arrows you can use depending on the situation. Maybe you need two of them, maybe one. They are all arrows —
Vanishree Rao (Fermah) (47:04.65)
economic security, ZK, TEEs, repetition, consensus, KYC — where the person doing the work puts not money but reputation behind it. These are all different ways of achieving assurance that compute is correct, and we can pick and choose depending on whatever is required, right?
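The repetition arrow in that quiver can be sketched in a few lines. This is in the spirit of the random-sampling idea described above, not Hyperbolic's actual protocol; the worker functions and parameters are invented for illustration. The same job goes to a few randomly sampled workers, and the majority answer is accepted.

```python
import random
from collections import Counter

def run_with_sampling(job, workers, k=3, rng=random):
    """Dispatch `job` to k workers sampled at random; return the majority
    result, or None if no answer reaches a strict majority."""
    sample = rng.sample(workers, k)
    results = [w(job) for w in sample]
    answer, votes = Counter(results).most_common(1)[0]
    return answer if votes > k // 2 else None

honest = lambda job: job * job   # correct compute: square the input
cheat = lambda job: 0            # lazy worker returns a constant

# With one cheater among five workers, any 3-worker sample still has an
# honest majority, so the correct answer always wins the vote.
workers = [honest, honest, honest, honest, cheat]
```

The design trade-off is exactly the one named above: no proof system and no full consensus, just a tunable probability of catching a cheater that grows with the sample size `k`.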
Rex (47:27.938)
Yeah, I think everything in computer science falls into this Lego metaphor — we build blocks, and that's what abstraction is: the ability to stack blocks on each other. So I'll use this as a pivot to talk about what you're building with Fermah, the positions you take, and how this space is going to evolve. Feel free to correct my understanding, but
both the vibe I'm getting from what you're building and from this conversation is that you see verifiability of compute — especially with TEEs — as one of these Lego blocks that will become more and more important as we build more modern applications. And I'd love to hear what that means to you, whether that's more applications that are able to access digital
Vanishree Rao (Fermah) (48:22.284)
Yeah.
Rex (48:26.846)
economies natively, or as we build more applications that leverage AI, or any of these things. And what you're really focused on is building that verifiable compute layer so that people can bring their own opinions about things like economic security or consensus or all these other puzzle pieces to build the applications they want. I just asked you in a very leading way, but I'd love for you to
talk through what you're building with Fermah and how it's positioned to be a big player in this next generation of compute.
Vanishree Rao (Fermah) (49:05.902)
We started off with a proof market, where the idea was that we would build an open market with provers on one side of a double-sided marketplace and, on the other side, people who want proofs to be done. We built a mechanism that matches the seekers — we call the people who seek proofs "seekers" — with provers,
in a way that is optimal for those machines to be on our network. This has been our main focus. We are expanding, and the way we are expanding is not yet public; in a lot of our conversations I piece it out here and there, mainly because of our belief in how this
area should progress. And it should progress toward compute that gives integrity: how do you achieve integrity of computation that is run for a person, whether it's on-chain, pushed off-chain, or off-chain and then brought on-chain, with attestation in the whole mix?
Where should we go? We talked about it, and we are expanding from this focus on the proof market to beyond. More about Fermah: we have ZKsync, Scroll, and these amazing ZK projects as our core customers. And one of the coolest parts about Fermah is that,
like you were saying, ZK is progressing at a huge pace. Matter Labs came up with an unbelievable zkVM proof system called Airbender. We started using Airbender to submit proofs to ETHProofs, and the proving time is incredible — real-time proving, already hit. Now, to keep pace with this advancement, and for a user
Vanishree Rao (Fermah) (51:35.0)
to not be married to the current state of a proof system, you need to be able to adapt and upgrade to newer proof systems. That's exactly what Fermah offers. Fermah is a one-stop infrastructure for ZK proving, where you can keep up with the advancements of the ZK space and not be tied to one specific zkVM vendor.
If you switch from one zkVM vendor to another and use their own networks, you have to change your entire experience: the way things get priced, the way things get allocated — everything changes, not to mention the integration cost. With us, a universal proof market, you get a mechanism design that checks off all the desirable properties, and on top of it
you are not tied to a proof system. You can choose any proof system you want, and as proof systems change, you can integrate a new one very, very quickly. I'll take the liberty of sharing a quote from the Matter Labs team — they said, "Damn, you guys integrated so quickly, and we are so impressed." They built Airbender, we integrated it, and it was so quick.
We have built tooling that makes integration with any proof system quick for any customer — even a complex proof system can be done in a matter of hours. That's how we did it. So that is our focus, all in all: the proof market, and then expansion.
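The vendor-neutrality idea above can be sketched as a pluggable interface: application code targets one proving API, and swapping zkVM backends becomes a one-line change instead of a re-integration. The backends here are mocks with invented names — real adapters for, say, a RISC Zero- or Airbender-style prover would sit behind the same interface, and this is not Fermah's actual API.

```python
from abc import ABC, abstractmethod

class ProofBackend(ABC):
    """One interface for any proof system the application might use."""
    name: str

    @abstractmethod
    def prove(self, program: str, inputs: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, program: str, inputs: bytes, proof: bytes) -> bool: ...

class MockZkvmA(ProofBackend):
    name = "zkvm-a"
    def prove(self, program, inputs):
        return b"A:" + program.encode() + b":" + inputs   # stand-in "proof"
    def verify(self, program, inputs, proof):
        return proof == self.prove(program, inputs)

class MockZkvmB(ProofBackend):
    name = "zkvm-b"                                       # different format,
    def prove(self, program, inputs):                     # same interface
        return b"B|" + inputs + b"|" + program.encode()
    def verify(self, program, inputs, proof):
        return proof == self.prove(program, inputs)

def settle_block(backend: ProofBackend, block: bytes) -> bool:
    """Application code: identical regardless of which backend is plugged in."""
    proof = backend.prove("state-transition", block)
    return backend.verify("state-transition", block, proof)
```

Upgrading to a faster proof system then means registering a new `ProofBackend` implementation, while pricing and allocation stay with the market layer.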
Rex (53:24.556)
Yeah, that makes a lot of sense. And I'd love to hear you talk through one way you could architect a system. Let's say I want to create a brand-new oracle network and I want it to be ZK-proven, and let's say I've found this beautiful zkVM implementation — what is it, a16z's Jolt, right? — and I love it.
I believe everything's open source, but either way I have access to the code and can build my oracle network. One option would be: I compile the prover, put it on AWS, and just run my own prover. And then if I want to change from Jolt to, I don't know, RISC Zero — Boundless is RISC Zero's product —
I can do the same thing: I just run my own prover and control the whole stack. So I don't need to worry about the switching costs, the pricing changes, the whole model behind switching between VMs, because I control the whole stack.
If you're talking to a builder thinking in that frame, how would you talk them through why it makes much more sense to go with a prover network, or with Fermah specifically?
Vanishree Rao (Fermah) (55:02.474)
Yeah, I'll play out how it happened with ZKsync. They were pretty much one of the first, if not the first, projects to embrace proof markets as a category and be a paying customer, and they were using GCP before working with us. The reason they switched: you pay in three ways.
If you're working with GCP or AWS, the first is cost, for a very simple reason — AWS and GCP love high gross profit margins. We already cut the cost down, because on the supply side, the machines we have are owned by people whose hardware is lying idle, and they want to make some money off of it. To help them do that, we hit a sweet spot that works for them and works for
the other side of the marketplace, the seekers. So, cost. The second is the whole effort of scaling. You need to figure that out with AWS, and you probably need a DevOps team — you need to keep cost down and scale properly, and you don't have consistent demand all the time. So there is that. The third is:
how do you build it out? It's not just "put it on AWS" with one machine, one prover. Airbender is coming, but if you look at the current version of ZKsync's prover, it has a really complex workflow. It has multiple levels, and each level has multiple circuits — I think up to 40,000 circuit instances in total,
20,000 to 40,000 depending on the transactions in your block. Orchestrating that whole thing is more work. And if you change the proof system, it has a different workflow, and you have to orchestrate and build all of that out again. So the third is upgradability — upgradability of proof systems. These three things are
Vanishree Rao (Fermah) (57:33.484)
what people pay for. And that's why it's good to use a universal proof market — that's why a universal proof market is necessary.
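The multi-level workflow mentioned above is essentially a proof aggregation tree: many leaf "circuit" proofs are folded pairwise into aggregation proofs until a single root proof remains. The sketch below shows only the orchestration shape — hashing stands in for actual recursive proving, and all names are hypothetical, not ZKsync's prover internals.

```python
import hashlib

def leaf_proof(circuit_output: bytes) -> bytes:
    """Level 0: 'prove' one circuit instance (hash stands in for a SNARK)."""
    return hashlib.sha256(b"leaf:" + circuit_output).digest()

def aggregate(left: bytes, right: bytes) -> bytes:
    """Fold two child proofs into one aggregation proof."""
    return hashlib.sha256(b"agg:" + left + right).digest()

def orchestrate(circuit_outputs: list[bytes]) -> bytes:
    """Build the full tree: leaves first, then pairwise levels up to a single
    root. In production each call is a job scheduled onto some machine."""
    level = [leaf_proof(c) for c in circuit_outputs]
    while len(level) > 1:
        if len(level) % 2:                       # odd count: carry the last
            level.append(level[-1])              # one up by duplicating it
        level = [aggregate(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]                              # single root proof

# Tens of thousands of leaves collapse into one verifiable root.
root = orchestrate([f"circuit-{i}".encode() for i in range(20_000)])
```

Changing the proof system changes `leaf_proof` and `aggregate` as well as the tree shape, which is exactly the re-orchestration cost described above.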
Rex (57:41.679)
So those are all very good reasons to say: don't manage your proving yourself, go with a third party. But what I'm missing is why go with a proof network, versus a third party who is specialized in this but running a centralized prover on your behalf?
In general, for blockchains — what's the purpose of having a blockchain network? There's something about uptime and reliability, but really why we do it is to create trustlessness, where you don't need to trust anyone because you can verify yourself. The amazing thing about ZK, or even TEEs, is that we don't need to worry about that. It's in the nature of
the math that you can verify it whether a decentralized network produced it or a single unit did. So I'd love for you to pick apart why, as a builder, it makes sense to build on top of a decentralized proving network as opposed to a centralized prover.
Vanishree Rao (Fermah) (58:45.686)
Yeah.
Vanishree Rao (Fermah) (58:58.496)
Yeah, we could be centralized too — you can call us centralized. I think you've hit it: you're attacking this wrong narrative around prover networks, that the point is being a decentralized network of provers. It does not add value.
Decentralization is actually costly — somebody needs to pay for it, usually the person requesting proofs, and it's suboptimal. That's not the direction we're going. What we are saying is: we have this network, and we need machines to prove. If we go with one operator, it probably isn't reliable — if they go down, everything goes down entirely. We need, let's say,
fifty machines: one person brings five, another brings two, another brings more. That's how we orchestrate, and we get that level of reliability. And the goal of every market is: you have demand and supply, and your supply has to hover right above demand, at any given point in time, right? That's exactly what we're doing. What we are not
interested in is putting every single part of this transaction — between us, the matchmaker, and the prover — on chain. It's wasteful, because ZKsync and Scroll and all these other parties are happy as long as they receive the proof on time, reliably. That's all. Putting this on chain is not going to buy anything; it's unnecessary decentralization.
You can actually build mechanisms for this. There are mechanism-design researchers, and they have a vocabulary of different tricks for building mechanisms. There is a class of mechanisms that helps you achieve decentralization, and in those mechanisms, the pricing
Vanishree Rao (Fermah) (01:01:26.695)
is always suboptimal — you have to pay up. And that is not the class we picked from. With our mechanism, we focused on making it optimal for the seeker on cost and time, and optimal for the prover in their utilization rate, while maintaining the supply hovering right above demand.
These were exactly the constraints when we designed our mechanism. As for "decentralized prover network" — you're 100% right, I completely agree with you. It's unnecessary, and it just goes against your ability to build a good product.
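The off-chain matchmaking described above can be sketched as a small scheduler: seekers post jobs with deadlines, and the matchmaker assigns each job to the cheapest prover that can still meet the deadline, which keeps utilization high without putting anything on chain. All numbers, field names, and the greedy policy are invented for illustration — this is not Fermah's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class Prover:
    name: str
    price: float          # cost charged per job
    speed: float          # seconds needed per job
    busy_until: float = 0.0

@dataclass
class Job:
    job_id: str
    deadline: float       # absolute time by which the proof is needed

def match(jobs: list[Job], provers: list[Prover], now: float = 0.0) -> dict:
    """Greedy matcher: earliest-deadline job first, cheapest feasible prover.
    A job maps to None when no prover can finish it in time."""
    assignments = {}
    for job in sorted(jobs, key=lambda j: j.deadline):
        feasible = [p for p in provers
                    if max(now, p.busy_until) + p.speed <= job.deadline]
        if not feasible:
            assignments[job.job_id] = None
            continue
        best = min(feasible, key=lambda p: p.price)
        best.busy_until = max(now, best.busy_until) + best.speed
        assignments[job.job_id] = best.name
    return assignments

provers = [Prover("whale", price=5.0, speed=10.0),
           Prover("small", price=3.0, speed=30.0)]
jobs = [Job("blk-1", deadline=25.0), Job("blk-2", deadline=25.0)]
result = match(jobs, provers)
```

Note the trade-off the sketch makes visible: the cheap-but-slow prover never wins tight deadlines, so the matcher effectively prices time as well as cost — the two axes Vanishree says the mechanism optimizes for the seeker.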
Rex (01:02:18.38)
Yeah, what's funny is when you said AWS — any of these mega cloud computing companies, hyperscalers or whatever we're calling them, of course they're in the business of making money and have huge margins. And the funny thing is that when you look at decentralized compute, the margins aren't that big, but you still end up having to pay for decentralization anyway, directly or indirectly.
So I was definitely curious how you can get better pricing in a decentralized model, and it sounds like what you're saying is that for what you're building, decentralization is not critical. And so —
Vanishree Rao (Fermah) (01:03:13.885)
Yes.
Rex (01:03:15.818)
Maybe what it sounds like you're saying is that Fermah goes and finds operators — and those could be highly centralized, or for their own business reasons super decentralized; that's not really important to you. What's important is: do they have the technical ability to run your kit, which lets anyone submit proof requests, have the proofs generated,
and get back their result, with you sitting in the middle as the coordinating layer between these entities.
Vanishree Rao (Fermah) (01:03:52.245)
Exactly,
Rex (01:03:54.617)
And can you talk through a little bit who these entities are? Who is sitting around with enough technical expertise and compute, interested in doing this, who hasn't already purchased that compute for something else they're doing?
Vanishree Rao (Fermah) (01:04:11.411)
Yeah — many of them have had experience mining.
That seems like not a thing they can do anymore, so they have the machines. I think they've tried to be part of AI networks.
Our goal is a lean operation. Even on our testnet we have kept it lean, and on mainnet we will keep it lean too, where the supply only hovers just above demand. That way you increase the utilization rate, and that hits right for these people in how we are approaching this. These are the folks, this is our offering, and they respond well to it.
Rex (01:05:13.324)
Yeah, that totally makes sense to me. I have to ask, since you're in contact with these kinds of people — I've seen these stories too, especially of Bitcoin miners who are realizing mining might not be profitable and are shifting to try to be AI data centers. I kind of thought the whole thing about Bitcoin was that everyone was so heavily invested in ASICs that weren't really useful for anything else. So
on the one hand, I understand they might have a lot of extra power and energy access, but in order to service AI, or proof generation, or any of these things, are they having to go buy new hardware, or is there some way to repurpose actual mining hardware for the types of things you need?
Vanishree Rao (Fermah) (01:06:07.562)
I think they probably are not using the same hardware. We have a variety of hardware on our testnet to see how different kinds of jobs can be orchestrated at the same time. It's a good question — I don't know if they had to buy it new. I should ask. I'll ask and let you know.
Rex (01:06:34.222)
Okay
Vanishree Rao (Fermah) (01:06:36.584)
Why do they have that many machines? I know there are whales — there are people who have a lot of these machines in China. A lot. They are the ones bringing a big amount of power to our network. I should ask why they have that much.
Rex (01:07:01.486)
Yeah. Well, when you find out, I'd love to have you back on the pod to talk it through. But before I let you go, on the way out: we started this conversation on applications and products, and I don't want to take us right back there, but I'd love to walk out on what you're bullish about
right now — what you're bullish about proving and verifiable compute really unlocking in the next phase of this story. That can be anything, from "I believe verifiable compute has a huge role to play in AI," because what we're learning from AI is that we have no ability to track where anything comes from,
to its direct applications in blockchain — and there are so many more places where these concepts are becoming more relevant every day. So as you're building these networks, seeing real usage, and trying to balance supply and demand, what are you starting to get excited about that you think will unlock the next chapter of this story?
Vanishree Rao (Fermah) (01:08:04.063)
Yeah.
Vanishree Rao (Fermah) (01:08:19.24)
Right. So crypto has seen this category of super-high-PMF applications — perp DEXes, prediction markets, et cetera. That, I think, is one category. And I think there is a huge category that we haven't explored at all. And that is — imagine your apps. But before I say that,
let's look at Web2. In Web2, AI has taken over every single app. Every single one. Why has it not taken over Web3 apps? Here is why: it can't, technically, because blockchains don't allow it to happen. We are solving, and trying to solve, more fundamental problems in Web3, asking:
can we do training in a decentralized manner? Can we do inference in a decentralized manner? These are so fundamental. I believe these are great to attack, they are so hard, and it'll be awesome to see them happen. But here is a mental model. I think it's attributed to Warren Buffett —
I'd have to Google it — he was saying something to this effect, and here is an implication of it:
the person who invented refrigeration had some direct impact. They made some money, right? But —
Vanishree Rao (Fermah) (01:10:02.377)
Coca-Cola is the one who used refrigeration and created an empire.
Rex (01:10:08.653)
Mm-hmm.
Vanishree Rao (Fermah) (01:10:12.061)
What we are trying to do is reinvent refrigeration in a better way — decentralized inference, decentralized training, et cetera — which is great, because it's a necessity. It solves some of the problems of doing things in a centralized manner, and those problems will get bigger and bigger as the world progresses in the AI direction. So these are great to solve. But there is a flip side,
which is largely undiscovered and largely not looked at. And that is building entire applications — the Coca-Colas that can use AI the way Coca-Cola used refrigeration, right? That I'm super excited about. Can we have apps beyond perp DEXes and transfer of money? Blockchains are built for those, and you can get great PMF there, of course.
But how can we go beyond that? That I'm super, super excited about. Global AI is on track to bring multiple trillions — I think maybe 15 trillion, I remember a credible analyst saying somewhere — to global GDP.
And what is crypto's share of it? A rounding error. It's just a rounding error. There is a reason: we are not built to capture it. That's what I'm super excited about. And to go there, you need verifiability. The good thing about centralized AI is that there is reputation behind it. The problem with decentralized AI is that without reputation, it doesn't work.
Rex (01:11:44.365)
Mm-hmm.
Vanishree Rao (Fermah) (01:12:08.551)
You need a notion of reputation. You need a notion of who you're talking to. That's verifiability, and it connects to the larger notion of verifiability of compute we've been talking about. So that's the bigger picture that excites me about AI, and the lower-level
requirement is building that infrastructure out. That, I'm super excited about.
Rex (01:12:42.476)
Yeah, I think your story about refrigeration is right: excellent technology, but what's interesting is not just the technology — it's what things were not possible before that technology existed. All of us nerds here, we like the technology; we're interested in how things actually work. But the story of how the world changes is not "what is the technology," it's —
Vanishree Rao (Fermah) (01:12:56.872)
Yes.
Rex (01:13:11.022)
what does that technology unlock? And I have to say, with ZK I was kind of expecting, by 2025 or 2026, to know what those experiences were. But we're still early, and things are still changing very rapidly. So time will tell what those things are.
Vanishree Rao (Fermah) (01:13:13.449)
Thank
Vanishree Rao (Fermah) (01:13:35.081)
Maybe the ZK, the privacy part will take over though. Maybe.
Rex (01:13:36.855)
Awesome.
Rex (01:13:40.192)
Yeah. And of course there's so much application there. How many times has everything from Social Security numbers to WhatsApp accounts been hacked? Everything is a nightmare. If nothing else, if we could just use ZK to fix the concept of a password — my God. So, yes.
Vanishree Rao (Fermah) (01:13:46.149)
yeah.
Vanishree Rao (Fermah) (01:13:54.697)
Yes.
Vanishree Rao (Fermah) (01:14:02.633)
Yes, different life, yeah.
Rex (01:14:07.502)
And maybe that'll tie into the story of verifiable compute — maybe those are like siblings telling different stories. We will see. All right, Vanishree, thank you so much. Before I let you go, can you share with the audience, if they've been inspired by this conversation or want to learn more, what's the best way to find you, and what's the best way to learn more about Fermah?
Vanishree Rao (Fermah) (01:14:15.731)
Mm-hmm. We'll see.
Vanishree Rao (Fermah) (01:14:30.089)
I am Vanishree underscore r-a-o on Twitter. And actually, you know, we are building this SDK. If you don't mind me adding this plug: we have a core group of developers trying out the SDK, and we want to get more people to try it. I will give a link
to you — if you can, please add it next to this episode. Awesome.
Rex (01:15:03.884)
Of course, of course. All right, Vanishree, thank you so much, and have a good rest of your day.
Vanishree Rao (Fermah) (01:15:09.79)
Thank you. This was amazing, Rex.