qBitTensor Labs Live — September 18, 2025
Subnet 63 stability improvements, a 3x speedup in peaked circuit generation through a novel obfuscation-based reuse technique, Shor's challenge timeline, and dispatches from Quantum World Congress including insights on national quantum strategies and the race toward 50 logical qubits.
Stability and Fairness on Subnet 63
We shipped a round of stability updates targeting the miner experience on Subnet 63. The headline change is an initial handshake mechanism: when a new miner registers, the validator now reaches out to ask what difficulty level they want before issuing a circuit. Previously, new miners would receive a minimum-difficulty circuit and then wait through increasingly long revisit windows — sometimes 20-plus hours — before getting the difficulty they actually wanted. The immunity period has also been extended to give miners more time to land initial solutions.
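The handshake flow can be sketched roughly as follows. This is a hypothetical illustration only: the function, class names, and difficulty defaults are assumptions for explanation, not the actual Subnet 63 validator API.

```python
# Hypothetical sketch of the registration handshake; names, defaults, and the
# difficulty range are illustrative, not the shipped Subnet 63 validator code.

from dataclasses import dataclass

@dataclass
class MinerRegistration:
    uid: int
    desired_difficulty: int  # requested circuit size in qubits

def first_circuit_difficulty(reg: MinerRegistration,
                             validator_min: int = 12,
                             validator_max: int = 39) -> int:
    """Old behavior: every new miner got validator_min and then waited a full
    revisit window for anything harder. New behavior: honor the miner's
    requested difficulty, clamped to what this validator supports."""
    return max(validator_min, min(reg.desired_difficulty, validator_max))
```

Under this sketch, a miner that registers asking for 39 qubits is served a 39-qubit circuit on the first visit instead of grinding through a 20-plus-hour revisit window at minimum difficulty.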
We also tackled a persistent out-of-memory issue during peaked circuit generation at higher qubit counts (37-39). When generation crashed, the NVIDIA driver would hold onto GPU memory even after the process died, causing downstream failures for other miners. A new step-down mechanism ensures that if a validator repeatedly fails at a given qubit size, it permanently drops to a lower maximum so every miner gets served a circuit rather than being skipped.
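The step-down logic might look like this minimal sketch. The failure threshold, the floor, and all names here are assumptions for illustration, not the shipped validator code.

```python
# Minimal sketch of the step-down mechanism; the failure threshold, the floor,
# and the class name are assumptions, not the shipped validator code.

class GenerationCeiling:
    def __init__(self, start: int = 39, floor: int = 30, max_failures: int = 3):
        self.ceiling = start          # largest circuit this validator will try
        self.floor = floor
        self.max_failures = max_failures
        self.failures = 0

    def record_failure(self) -> None:
        """Repeated OOM crashes at the current size permanently lower the cap."""
        self.failures += 1
        if self.failures >= self.max_failures and self.ceiling > self.floor:
            self.ceiling -= 1         # permanent: the ceiling is never raised
            self.failures = 0

    def record_success(self) -> None:
        self.failures = 0

    def size_to_serve(self, requested: int) -> int:
        """Serve every miner a circuit instead of skipping them: clamp the
        request to what this validator can reliably generate."""
        return min(requested, self.ceiling)
```

The key design choice is that the cap only ever moves down, so a validator with weaker hardware converges on a size it can always deliver rather than intermittently skipping miners.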
Peaked Circuit Generation: 3x Faster via Obfuscation Reuse
Will (BongoCatFeynman) presented a novel approach to speeding up peaked circuit generation. Rather than generating every circuit from scratch, we now reuse previously generated circuits by applying randomized obfuscations — inserting pairs of random single-qubit unitaries and their adjoints around two-qubit gates, then adding randomized bit flips to the output layer. The result is a distinct circuit that preserves the peaking property but can be produced at least three times faster. This replaces the earlier circuit-stitching approach, which turned out to be too easy to break. The code was published on a public branch and went live the following Monday.
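The core algebraic trick, inserting a random single-qubit unitary and its adjoint and then folding each into a neighboring gate, can be demonstrated with a small NumPy sketch. This is a toy illustration of the identity involved, not the production generator.

```python
import numpy as np

def haar_unitary(n: int, rng) -> np.ndarray:
    """Haar-random n x n unitary via QR of a complex Gaussian matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))       # fix column phases

rng = np.random.default_rng(42)

# Two adjacent two-qubit gates acting on the same pair of wires (4x4 unitaries).
g1 = haar_unitary(4, rng)
g2 = haar_unitary(4, rng)

# Insert a random single-qubit gate U and its adjoint on one wire between them,
# then fold each into the neighboring two-qubit gate.
u = np.kron(haar_unitary(2, rng), np.eye(2))
g1_obf = u @ g1                      # new, distinct gate
g2_obf = g2 @ u.conj().T             # new, distinct gate

# The individual gates look different, but U-dagger cancels U, so the overall
# unitary -- and hence the peaked output distribution -- is unchanged.
assert not np.allclose(g1_obf, g1)
assert np.allclose(g2_obf @ g1_obf, g2 @ g1)
```

Because the adjoint pair multiplies to the identity, the obfuscated circuit implements exactly the same overall unitary while presenting an entirely different gate list to the miner.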
Shor's Challenge on the Horizon
The technical paper for the Shor's challenge is complete, the generation code is written and tested, and integration into the subnet is ready. We have been holding off on launch to avoid introducing too much change at once while peaked circuits are being stabilized. Our best estimate is a Shor's launch in two to three weeks, with the option to release the paper earlier if there is enough community interest.
Open Quantum Waitlist and End-to-End Progress
The Open Quantum waitlist is now live — prospective users can sign up and share their background, experience level, and intended use cases. Behind the scenes, we are working toward a first end-to-end test that routes a request from the web portal through to an actual quantum computer and back. That milestone was expected by the end of the following week, though a production-ready launch still requires significant surrounding work.
Dispatches from Quantum World Congress
Bob reported live from Quantum World Congress, where several themes stood out. The CTA president framed quantum as "the technology that will shape the next decade." Quantinuum's CEO committed to delivering 50 logical qubits commercially this year — a threshold where real-world problems start becoming solvable on the DARPA scatterplot. Naftali Bennett, former prime minister of Israel, delivered a striking warning: Adi Shamir (the "S" in RSA) told him that once quantum computers scale, RSA encryption "might as well be plain text." Bennett also highlighted nation-states stockpiling encrypted data today at near-zero cost, waiting for the ability to decrypt it. Forty-one countries now have quantum policies, up from twelve a year ago. And Quantum Rings, our partner, appeared on NVIDIA's deck at the event.
Subnet Merging and Winner-Take-All
We addressed two recurring community topics. On subnet merging: we find the technology interesting but are not taking on that additional risk right now given the number of moving pieces across SN48, SN63, and Open Quantum. Letting other subnets go first will help de-risk the decision if we revisit it later. On winner-take-all scoring for SN63: the team is unanimously in favor of the concept. We see it fitting naturally into Phase 3 of the roadmap, possibly earlier, and are exploring variations such as "winner takes most" with smaller offset pools to help miners cover costs while competing.
Hello everybody and welcome to qBitTensor Labs Live. Thanks for joining us again on this Thursday. I'm on the East Coast today, so it's noon for me; usually we do this at 10 o'clock Mountain time. But yeah, we're happy to have everybody with us. As we dive into things, we'll start off with the usual disclaimer: we're going to say a whole bunch of forward-looking stuff. I think last time everybody did a great job of interpreting the information, right? And not weaponizing it. But as a recap, in case anybody new is here: these are all ideas, not promises. This isn't investment or legal advice, even though some of the quotes we're sharing today from World Congress are actually about investment strategies. So I'm just repeating: we're not financial advisors here.
We love that you guys have been taking the information from these streams and using it. Keep doing it the way you have been, and don't use it out of context. And if you're still here, we assume you're on board with us. So let's go deeper into what we've got going on at qBitTensor Labs. In the usual format, we'll start off with some tech support. We have SureShot, Ryan, on the line, who's going to take us through some stability updates on Subnet 63 and a little bit of an update on Grafana. I'm also going to talk about the theme of fairness after Ryan takes us through those stability updates.
Next, we'll give you some looks into R&D. We have BongoCatFeynman, Will, who's going to take us through some really cool enhancements to peaked circuits. Last time we said we're not done making this better yet, and we found a cool way to make it even better. So Will's going to take us through that. I'll give a status on Shor's, the next circuit challenge. We'll talk about the Open Quantum waitlist.
And then during the community section of the meeting, I'll give some updates from Quantum World Congress, where I am today. I'll talk a little bit about sentiment, and there have been a couple of big themes everybody's been requesting we talk about more: the idea of subnet merging, which came up on Novelty Search last week, and the idea of winner-take-all. We'll spend a little bit of time on those. So we'll start off with tech support and the topic of stability. Before I hand it off to SureShot, I'll just say that everything we cover here has already been posted in notes on Discord. If you're a miner, you're on Discord, I'm sure of it, so this will just be a double click on the posts we already put out there. Ryan, take it away.
Yeah, so with this update we're really trying to focus on stability and a better experience for the miners. One of the features we added was the initial handshake; we'll talk a little bit about that. I also want to note that the average validator revisit time has increased significantly. The majority of miners out there are hitting the cap for peaked circuits, and we're seeing 20-plus hours on some validators in terms of revisit time.
We'll have additional updates regarding this, but previously, upon registration, a miner would be handed a minimum-difficulty circuit. And with the revisit time that long, you wouldn't get the difficulty you actually wanted for that whole period.
So we've decided to do an initial handshake. That way we can reach out to the miner and say, hey, what do you want? And when we come around and give them a circuit for the first time, it will actually be the difficulty they desired. So also — go ahead.
Nice. Just to make sure I understand that one right, I'll repeat it back to you more simply and you can tell me if I got it or not. Before this update, when a new miner joined, did it essentially inherit whatever settings the previous miner with that UID had?
No, it would actually go to the minimum difficulty. Upon a registration, we would detect that and it would be reset. That's what would happen.
Got it. So then you're running a circuit that's going to get a way lower score, and the chances of dereg while you're doing that are pretty high, right?
Right. Yeah, so also you may have noticed due to the bot on the Discord chat, but the immunity period for the subnet has also been increased to allow more time for initial solutions to be accepted. It's not set in stone where we have it. We may adjust it in the future, but we feel that a longer period will help people out more at this point.
So: validator out-of-memory. I've been doing a bunch of work on this. We've encountered behavior on unlucky circuits where generation causes an out-of-memory error at 39 qubits, sometimes even at 37 or 38.
When the generation process crashes, the NVIDIA driver still holds onto GPU memory even though the process is dead. The only way to recover from this is to actually reload the driver. A side effect is that we would succeed on circuit sizes that fit into the memory space that was still free, and then fail on circuits that needed the extra space. Another challenge is that each validator has hardware that performs differently, and that's really outside of our control. We believe it's still important to keep the concept of certificates for aligning trust between validators. The goal is that a validator performing much more poorly than another can still end up producing the same overall score for a miner.
Got it.
Yeah, and when a validator did crash generating a circuit, it would just move on to the next UID. So we've implemented step-down functionality. We've had retry functionality in place ever since we sped up peaked generation, but we enhanced it even more: if a validator is continuously failing at a certain qubit count, it will permanently get stepped down to a lower maximum. That way it succeeds in serving all UIDs and everyone gets a fair shot.
Great. So on this one: say I, as a miner, request a 39-qubit circuit. In the old code base, before this update, if generation crashed with one of these out-of-memory exceptions, I'd get no circuit; the validator just moves on to the next miner. And so a miner doing maybe 36 qubits, which is way, way easier to compute than 39, maybe ends up getting a circuit and getting scored because of that.
Right. Right.
And is beating me, right, which is a pretty bad result. So this ensures the validator doesn't move past a miner until it gets them a circuit.
Yep. Yep.
Right, yeah, it's a pretty tough problem to solve because of how NVIDIA handles this behavior. The driver just holds onto the memory; there's nothing you can do about it except reload the driver. It's pretty complex to work around.
Totally got it. Well, and that plays really well into the topic of fairness, which I'll hit after you give us some Grafana updates.
Yeah, I just wanted to touch on this real quick, because we haven't released a Grafana dashboard yet. Things looked promising initially, but we hit interoperability issues with OpenTelemetry counters. OpenTelemetry is the library we use to publish the information to Grafana, and we're finding that Grafana interprets the counters in ways that aren't intended, so we weren't getting the right data on the dashboard; a lot of the time we were getting no data at all. We were also having issues with how they track billing metrics, with our bill, and with what budget we have for inserting metrics into Grafana and using their stack. Unfortunately, we're probably going to have to find an alternative solution to get the community the data they need. I think we're at a pretty good place where we could swap in another system and dashboard pretty easily; it's just going to take a little bit of time. So I apologize for the setback on the metrics information, but we'll get there.
Yeah, and I'll repeat that. I know how important data is to everybody, especially when you're a miner trying to figure out what's going on around you; a lot of these metrics are hugely valuable. Even in its broken state, Grafana was actually fairly valuable to me. But I do appreciate that if we put it out in the state it's in, it would be almost as frustrating as it is valuable. So I'm really looking forward to getting a different solution in place.
Cool, well, the last item I have is fairness. I've seen a lot of comments where a miner is trying to do a very good job, sees somebody else getting scored higher, and jumps right to: there must be some exploit in the system, somebody's cheating and getting an unfair shake. Well, are people trying to cheat? A hundred percent. Absolutely. It kind of drives me nuts, but it's the nature of the thing: there's incentive to be made mining, so people are going to try to find ways to take a disproportionate amount of that incentive. Are people succeeding at cheating? For the most part, no. We did have one instance where somebody successfully executed a man-in-the-middle attack, but it was reported by the miners — we always look into things when miners report them — we found it to be credible, and we patched it very quickly. That was in the very early days, in the basic subnet code. And we're always monitoring this ourselves. There's always that initial reaction when somebody reports that they think there's an exploit, but we always take that stuff seriously and we always take a look at it.
And in almost all cases, this turns out to just be undesirable behavior. Take the out-of-memory exception Ryan was talking about in peaked circuit generation: unless you're debugging the code, it's hard to see that, okay, I requested a 39-qubit circuit, it crashed, and because of the way the retry logic worked, the validator moved on to the next UID. That's why miners asking for larger circuits were temporarily working at a disadvantage to ones doing smaller circuits. But all of these behaviors, when they do turn out to be undesirable, are happening fairly: they're happening the way the code is written, and people aren't exploiting the system. So what I'll say is, definitely keep an eye out if you're a miner, and report stuff. We'll always look into it.
But so far, I'm excited to say that with the exception of that one man-in-the-middle attack early on, all of the other reports have turned out to be unrelated issues that have since been fixed. All right, cool. Anyway, I get a lot more excited talking about R&D, and I'm really excited about the faster peaked updates we have. So I'll hand it over to BongoCatFeynman, AKA Will, who will take us through that in a little more detail. Before I do, I'll say that the public branch for this was published last night, so if anybody is interested in getting in there and poking around, take a look. Will, I'm not sure if you have the planned release date; if you don't hit that, I'll hit it after you're done presenting.
I actually don't, so probably that would be good.
Perfect. So the intention is to make that branch live on Monday. So look forward to you guys enjoying the benefit of what Will is about to present.
All right, awesome. Yeah, so we're continually working on making peaked circuit generation faster. It's been a continual thorn in our side, so obviously we're putting in effort to make it better. Previously we had talked about this idea of circuit stitching; that was the original approach we were considering to speed up peaked circuit generation, and it was covered in the last qBitTensor Labs Live.
It turned out to be too easy to break, so we had to scrap it. What we had talked about then was classical compiler-style optimizations, CSE and the like, to speed things up, and we have SureShot Ryan to thank for a lot of that. That was a sizable improvement, but of course we'd still like to go faster, for all the reasons outlined earlier.
And what we've come up with is one of those ideas that's so simple we're kind of kicking ourselves for not thinking of it sooner. Those tend to be the most beautiful solutions. The idea is to reuse existing peaked circuits that have already been generated. I'll go over the high-level bullet points about what this gets us, and then on the next slide I'll give a brief overview of exactly how it works. Validators are continually generating peaked circuits, and each one only gets used once. They're really hard to generate, so it would be nice to reuse some of the work that's already been done. What we've done is come up with a set of randomized obfuscations that can be easily applied to already-generated circuits.
The idea is that these would be made public somewhere, to make things as transparent as possible while keeping everything workable. These will not be the set of public circuits already on GitHub; it'll be more of a rotating pool of circuits that have already been generated — maybe across validators, maybe within validators; that's something we still have to decide. From initial testing, we've found that this process of reusing already-generated circuits is at least three times faster.
Of course, work is continuing, and I have a suspicion we'll be able to get that number a bit higher in the future. But the main benefit of all this is that it lets us homogenize and reduce resource requirements for validators. That's a big thing that has been going on in the background since day one, since we initially put out the recommended hardware for validators. What this will do is let us run a background loop that still generates peaked circuits to keep things fresh, store those circuits, and then reuse them on lighter hardware to generate new peaked circuits faster for miners to work on.
So here's an abstract technical diagram of how exactly this works. Starting on the left, we have a bare peaked circuit that we've generated somehow. We proceed to the middle box by adding single-qubit gates on either side of every two-qubit gate — the white blocks. All of the red and blue blocks are single-qubit gates, and the idea is to add them in pairs: random unitaries together with their adjoints. We then multiply them into the two-qubit unitaries to produce a different set of distinct unitaries that still performs the same peaking. Then, on the output layer, right before the measurements — right before the final output of the circuit — we add randomized bit flips to randomly select a different bit string for the peak. We think this is definitely a much faster way to produce a peaked circuit given an initial one. The way we have it right now, it's quite secure: we think there's no way to break this obfuscation or fudge the outputs. So we're pretty excited that this will be good for speeding up the overall revisit time validators can give miners.
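The output-layer bit flips can be illustrated with a toy NumPy sketch. The sizes and the stand-in distribution below are made up for illustration; the point is only that flipping output bits XOR-relabels the measurement outcomes, moving the peak to a new bit string without changing its height.

```python
import numpy as np

# Toy illustration of the output-layer bit flips: an X gate before measurement
# XOR-relabels each outcome, so the peak moves to a new bit string while its
# probability is preserved. (Sizes and the stand-in distribution are made up.)

n = 3
rng = np.random.default_rng(7)
probs = rng.dirichlet(np.ones(2 ** n))  # stand-in for a peaked output distribution
probs[0b101] += 1.0
probs /= probs.sum()                    # peak sits at bit string 101

mask = 0b011                            # randomized flips on qubits 0 and 1
relabeled = np.empty_like(probs)
for s in range(2 ** n):
    relabeled[s ^ mask] = probs[s]      # measurement outcome s becomes s ^ mask

assert np.argmax(relabeled) == 0b101 ^ mask      # peak moved to 110
assert np.isclose(relabeled.max(), probs.max())  # same peak height
```

Since the flips are just a permutation of the computational basis, the circuit stays peaked with the same probability mass; only the location of the peak changes.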
Awesome. This code is so exciting. And what's really cool about this is like the last approach that you had taken to try to solve this was like fundamentally different, right? Like it was a radically different approach. This is completely different, but it's so cool how like, you can work so hard on something, bring something almost all the way to launch and then realize there's a problem with it. And then somehow go all the way back to the drawing board and invent a whole new thing that I don't think I've ever heard of. Like, I think this is completely novel, right?
Probably for this problem, yeah.
Yeah, well, anyway, it's quite crazy. In fact, one thing we should do that we haven't really talked about: I know Sean, one of the original authors on the peaked circuit paper, was working on that tiling approach. We should get back to him and share this concept, because I think he was facing a bit of criticism over how hard peaked circuits are to generate, and this could be helpful to him too.
Yeah, I don't know that that was kind of the complete scope of all the criticism he was facing, but I think he would be interested in this for sure. Yeah.
Cool, awesome. OK, nice job on this. And again, this code is — well, it's not live yet; it's public on a published branch and goes live on Monday, so we should see a 3x improvement in peaked circuit generation on Monday. Awesome. OK, cool, so I'll give you the status update on Shor's.
So when I look at the list of tasks that go into launching a new challenge — and this is an oversimplified list — there are a number of things we have to accomplish. The technical paper has actually been done for a number of weeks, so we're ready there. The generation code is done, tested, and delivered, and we've even integrated it into the subnet. The public branch is ready to work from when we decide to, and we can pick a date and launch it. But we've been postponing this because we don't want to introduce too much change simultaneously, and getting peaked circuits right is foundational: we want one really good, very reliable challenge working for everybody first.
So the gist is that after we ensure stability with peaked circuits — and we're hoping we're really, really close — we'll move to the Shor's release. There are two big things we still want to look at with peaked circuits. One is that we got this 3x improvement, but, as Will alluded to, there's still a chance we can get an even better multiple by adjusting one of the foundational steps in that process. The other is the question of scale. Right now we're at 39 qubits very reliably, and sometimes validators can do 40-qubit generation, but we also want to raise the ceiling on what can be requested, and we have a line of sight to that. So we're going to spend at least another week on peaked circuits. What I will say on Shor's is that we have the paper ready, and my best guess is that we can probably launch Shor's in about two to three weeks.
If people want the paper sooner, we can probably put the paper out sooner. Generally speaking, like our tradition has sort of been to put the paper out a few days before we launch it. But if there was enough interest from the market, you know, hit us up on Discord, hit us up on LinkedIn. And, you know, if there's enough of a pull for that, we could definitely put the paper out a couple of weeks ahead of the launch instead.
Cool. The next update is the Open Quantum waitlist. We actually have a ton of really good stuff going on with Open Quantum, but it would be strange to go into all the nitty-gritty of what's being implemented where. So I wanted to share at least one thing that launched: on Open Quantum today, you can click the join-the-waitlist button. The information goes into a customer database, and we can start to learn from people: who are you, what are you familiar with, what kind of things do you like to do with quantum computing, what's your experience level. So yeah, we're excited that update is out. By the end of next week, we hope to have an end-to-end test on Open Quantum that goes from a request coming in on the web portal all the way through to a quantum computer executing a job and back. Now, that sounds like we're really close — and that would actually be the first major end-to-end — but there's still a lot of work to do in the surrounding ecosystem to make all of that tidy and clean. So I don't want people to get too excited and feel like we're launching any day, because there's plenty of work to do there, but we're making really good progress on it.
All right, on the community front: I'm at Quantum World Congress. I was lucky enough to find one of those tiny phone booths open, so I'm in a little three-foot-by-three-foot phone booth right now. I want to share a bunch of what I've heard here and give my take on what it means. The first slide I'll share is from Kinsey Fabrizio, president of the Consumer Technology Association — essentially the organization behind the giant consumer electronics show in Las Vegas. One of her big statements, which I thought was important because she's not from quantum, was: if AI is the technology shaping this decade, quantum is the technology that will shape the next. And at the last consumer electronics show they actually had a quantum track, which is pretty mind-blowing to think about. You can see the slide she showed at the event — my text is covering part of it, but in the top left there's a giant mock chandelier from a quantum computer drawing a lot of interest at the show.
This was another interesting one, on AI and quantum. Dr. Charles Tahan is a partner at Microsoft and was formerly an associate director at the Office of Science and Technology Policy under the previous administration. He highlighted the intersection of AI and quantum, which I thought was cool. There are a lot of ways AI is already benefiting quantum: we're using machine learning to figure out how to send the right pulses, how to make code more efficient, and so on. But he also suggested there are a lot of ways quantum will help AI. One exciting example of how that can happen: the CEO of Quantinuum called out that they're currently using quantum computers to generate data — many-body system data, the kind used to train AI models in quantum chemistry or quantum biology — and synthesizing it very efficiently. Essentially, his claim is that today they're generating data that trains AI models to do things they could never be trained to do before, which is an exciting concept considering how immature quantum computers really are in the grand scheme of things.
All right, this bit I thought was interesting — and again, I'll reiterate that I'm not giving financial advice. One of the highlights of the show was comparing this event in 2024 versus 2025: you can see what the publicly traded companies were worth a year ago and what they're worth today, and it's quite incredible. The presenter highlighted that if you had invested $100,000 across the four publicly traded companies a year ago, it would be worth $1.7 million today, which is fricking bonkers. I've talked to a lot of the guys from these companies, and they're riding sky high. It was also interesting because the former prime minister of Israel — who happens to be a big tech head — presented as an investor and said this essentially provides a roadmap for where you can make a lot of money as a venture capitalist, because you've seen how these companies can skyrocket: jump in with the smaller companies that are becoming available now.
On that note, one of the most interesting sessions was the interview with the former prime minister of Israel, Naftali Bennett. What he said was fascinating. He started with a fairly high-level framing and then said a bunch of quite controversial stuff, which I'll read back to you even though I didn't put it on slides. He said: as the head of a nation, we once said AI would never happen — and that's totally true; we had AI winters where investment disappeared for entire decades. Now, he said, I'm using it. And when I ask what the impact of quantum computing will be when it scales, I hear something very scary, and I have no choice but to invest. It's not a yes-or-no decision; you absolutely must invest.
He continued on to say he had talked to Adi Shamir, the "S" in RSA, whose claim was that when quantum computers become available, RSA encryption might as well just be plain text. Which is the boldest statement I've heard, especially coming from literally one of the named inventors of RSA. Then Bennett continued: we know bad actors are storing huge warehouses full of encrypted data today, at virtually zero cost; they've been collecting it for years and are just waiting to decrypt it. Which is spooky. He then went even further down the dystopian rat hole and pointed out that in 1519, when the Spaniards met the Aztecs, there were about a hundred Spaniards while the Aztecs had millions of people behind them — and the Spaniards dominated because they had weapons, tactics, strategies, and materials the other side couldn't even imagine. So you start to get into this concept that technology matters, especially when it comes to who controls power. He said the winning nations are going to have batteries that are 20 times more efficient, logistics that are 10 times more efficient, materials that are five times stronger.
And he called for the need to control and contain quantum technology the way we should have controlled and contained nuclear technology, which I found quite fascinating. Continuing on, we also heard from the current director of OSTP — basically the science and technology advisor to the president — who said it's a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global quantum computing dominance. An interesting statement to hear so close to Naftali Bennett's comments.
Okay, so on the more positive side, let's lighten the mood a little bit. One of the really interesting numbers here is 50 logical qubits. The CEO of Quantinuum went on stage and promised, in front of hundreds of people, that this year they're going to have what he called the industry's first 50-logical-qubit commercial system. Actually, Atom Computing also has a 50-logical-qubit system that's available today, although maybe not commercially available, so maybe that's the distinction. But 50 logical qubits is a really important number. If you remember this slide from novelty search, it came from a DARPA presentation last year: a scatterplot of problems that quantum computers are going to solve, with the number of qubits on the horizontal axis and the number of operations on the vertical axis. We showed that quantum computers are getting better and problems are getting easier, which is really exciting. For the purposes of this, I'll stop being quite so designer-friendly and show you the full-color version so you can see the problems and map them to the key. Because this is a logarithmic scale, 50 qubits lands right here. That's where we start to intersect with real problems. And at 100, entire categories of problems are behind us. So it's an extremely exciting time.
What's even more exciting, I think, is that companies like IonQ are projecting that by 2027 they'll have 800 logical qubits. Now, I think that's a really bold claim, a little bit bullish, but it's what they're presenting. And 800 takes you to here. Essentially, entire classes of problems that we can't computationally solve today become totally solvable with quantum computers if they can achieve those numbers, which is really quite exciting.
Okay, so in addition to all of that public broadcast content, there were also some sessions under Chatham House rules, which essentially means you can attend the meetings but it's a media-free zone: you're not allowed to attribute anything that's said to any person, so no photos and no sources. Some of my interesting takeaways from those sessions: quantum supremacy has been the term for a long time, but people are starting to say quantum advantage instead, since the term has come under a bit of scrutiny. More notably, instead of talking about achieving quantum supremacy as a technology, there was a really big change this year in that people were talking about quantum supremacy as a descriptor of nations, which is a scary trend, in my opinion. It was also called out that 41 countries have quantum policies now, whereas 12 months ago it was 12. So a lot of countries are starting to think about what technology they want to control and what they want to let out.
And there was this statement from some of the smaller countries that, no matter how big you are, no single country holds the entire value chain for all the different quantum computing modalities. So there was a call that we must work together. There was also a highlight that the technology life cycle of this stuff is way longer than any political life cycle, and it was called out that achieving quantum computing is an eye-wateringly expensive investment. And then it was pointed out that the biggest single customer of quantum is the US government, which feels like a no-brainer, but hey.
So we were super excited, and I know this is kind of a qBitTensor Labs thing, but our partner, Quantum Rings, was on NVIDIA's deck, which was pretty excellent to see. It's nice to get some recognition for the cool work that we do. Overall, it's just been a really incredible event. The momentum behind quantum is clearly tremendous. This industry is about to explode. It's really quite crazy.
All right, so jumping into sentiment. One of the things I do is keep somebody on the team always on the lookout for how people are feeling. Right now it feels like the dTAO investors are feeling pretty good: extremely comfortable with these quantum stocks, "playing chess, not checkers." People like the idea of bringing outsiders in for free compute and getting them participating through innovation awards. "I don't even invest in other subnets." That's fun.
People loved the interviews, and there were kind words about the podcasts. In general, the dTAO investors seem to be feeling pretty bullish. Some of it is also hilarious. This one says, "I need to finish the interview, it was super late for me and I passed out listening to Bob's voice." So I appreciate it. This is my favorite, and I think that's Quantum Tangerine judging from the icon: "If quantum subnets make me rich, I'm getting the open quantum logo tattooed on my ass." I love it. We'll see if we can achieve that for you. And then this one, which I hope is mostly in jest: "63 holders have Stockholm syndrome at this point." At least you're still with us. We'll try to make it worth your while.
Okay, on the miner front, though, up until we released those changes there was starting to be a little bit of frustration, so I'm really grateful to Certain Tangled, ShorShot, and the whole team that has been working on fixing those issues. People were asking about getting higher than 39 qubits, and saying, hey, I'm getting deregistered while trying to do good work, so my only option is to stop mining. The good news is that after we released the updates, we had about 24 hours of nobody posting on Discord. I was a little worried at first: is this a good thing or a bad thing? It turns out it was a good thing. People felt like the latest updates were really giving them what they needed, and Timo, who had posted earlier that he was going to hang it up for a little while, came back. So we're thrilled those updates worked well for you. We're going to have another cool set of updates coming out to speed up the revisit period even further as we increase generation.
All right, so the two big themes of good, valid feedback coming from you all right now are subnet merging and winner take all. I want to talk a little bit about each of those, and then we'll wrap up and I'll get back to the Quantum World Congress.
So, on subnet merging. There's a lot going on right now. Think about what the team is juggling: we're getting the phase one roadmap of 63 going right, we've got the peaked circuit work we talked about, and we're launching Shor's, which we also talked about. We also need to get to phases two and three, which is this bridges of quantum concept, itself a fairly big, high-risk change for us. And we're in the process of finishing and launching 48, which is not just a subnet. It's a two-sided marketplace where validators run little web servers, and a full centralized open quantum stack has to launch at the same time.
So my position on subnet merging is this: I'm really grateful that the technology has been made available, I think it's a really cool idea, and I'm certainly not ruling it out. But what I am saying is that we're not taking on that additional risk right now. There are just too many moving pieces; we've filled our plate with risk. The other upside of not doing it now is that it gives somebody else the opportunity to make the first moves, harden it, and figure out what works and what doesn't, which de-risks the future if we do decide to revisit it and take it on. But thanks for pulling our attention to it. It's really cool to see the thought that has gone into it, and at some point it might be a really strong consideration.
Okay, and the last one was winner take all. Pulling back to last week's slides, if you remember, we said that quantum compute, subnet 48, aims to be the Shoals of quantum, and that quantum innovate, subnet 63, is going to continue to evolve to look more and more like the Ridges of quantum. In that vein, we highlighted what we liked about Ridges, and why we would want to be the Ridges of quantum. The big emphasis was bringing outsiders in, which really means US dollars instead of crypto wallets, problems that are more meaningful rather than solving problems for their own sake, and open-sourcing the code. With those things in place, we can start to attract pedigree and incentivize quantum experts to come in.
Okay, so to achieve those goals, none of those things dictate that we have to be winner take all, but we still love the idea. There are zero people on the team who don't like it. We totally see it coming into play by phase three; it feels like a slam dunk. And we aren't ruling it out sooner: it could come into play in phase two, or even in phase one. It also may not be winner take all, right? It may be winner take most, where we set aside award pools but keep some smaller way for people to offset a portion of their costs, because a lot of these miners will incur costs trying to innovate.
We love the idea, and we see it the same way you all do. We're definitely not saying no to it at this point. We think it'll be a huge thing on Subnet 63.
Cool, so with that said, I think that's the end of our content for today. I'll conclude by giving my gratitude to everybody and to all of our partners out there: whether you're a miner participating by cranking through these problems and pushing the limits of what you can simulate, a dTAO investor who continues to show faith in our vision and is starting to see the opportunity here the same way we do, or just a general TAO or quantum enthusiast, we're really grateful that you're here with us. And with that, I think we'll wrap. Thank you, everybody.