qBitTensor Labs Live — October 2, 2025
Peaked circuit generation sees a 5-10x volume increase alongside reduced revisit times, Shor's challenge details and launch timeline revealed, Open Quantum platform progress with 150k+ lines of backend code, and a deep dive into market modeling and halving preparedness.
Peaked Circuit Volume and Stability Gains
We reported a major leap forward in peaked circuit generation volume and overall subnet stability. A code enhancement, combined with two additional validators coming online, drove a 5-10x increase in the number of peaked circuits being delivered to miners. Revisit times dropped from days to hours, and for the first time since launch, the team felt comfortably ahead of miner demand rather than constantly playing catch-up. An additional hotfix addressed performance issues that surfaced after the initial scaling update.
Scaling Peaked Circuits Beyond 39 Qubits
Will explained the recycling technique that made the volume increase possible: rather than generating every circuit from scratch, validators now recycle pre-existing tensor networks and convert them into many distinct circuits that can be delivered much faster. Looking ahead, the next frontier is scale rather than volume. Miners are currently competing on speed at the same difficulty level, and GPU memory remains the binding constraint at higher qubit counts. Validators can occasionally generate 40- or 41-qubit circuits on H100 hardware, but failures are common enough that a cap is enforced. The team outlined plans to find innovative approaches that let difficulty keep climbing without being bottlenecked by validator hardware costs.
Shor's Challenge Architecture and Launch Timeline
Rob, the architect behind the Shor's challenge and primary author of its technical paper, joined the broadcast for the first time to walk through the algorithm and challenge design. He explained that Shor's algorithm factors semi-prime numbers by finding the period of a modular exponentiation function via quantum Fourier transform, and that the challenge focuses specifically on the order-finding step. Unlike peaked circuits, miners will submit raw measurement data rather than a final answer, and Rob introduced deliberately embedded features in the output data as a proof-of-work verification mechanism. We announced that the public branch would be published on October 9 with a target launch of October 13 to give miners time to prepare.
Open Quantum Platform Progress
We demonstrated the Open Quantum platform end-to-end for the first time using real, implemented code rather than mockups. The demo showed single sign-on via GitHub and Google, user-scoped job queries, circuit upload with metadata, job submission that routes through validators to quantum computers, and job management including cancellation. The backend API alone had surpassed 150,000 lines of code. The remaining work centers on credit management, e-commerce integration, and building Python framework integrations so users can submit jobs directly from their development workflows without touching the web UI.
Halving Strategy and Emission Management
We addressed a community question about the December halving event and its potential impact on the subnet. Our approach is to launch Open Quantum while continuing to burn a substantial portion of miner emission -- starting at roughly 80% -- to create a healthy queue for quantum compute resources rather than offering zero wait times that would be unsustainable at scale. Leading up to the halving, we plan to bring the burn rate down to approximately 50%, which provides significant flexibility to absorb the halving's dollar-denominated impact on quantum computer operating costs. This gradual approach also benefits investors by avoiding the sudden tokenomics shift that typically occurs when a subnet flips from burning all emission to distributing it.
Market Sizing and Value Creation Model
We shared a detailed market model for Open Quantum based on publicly available Qiskit user milestones cross-referenced with Unitary Fund survey data on quantum framework market share. The analysis estimated a lower bound of roughly 750 million dollars and an upper bound of 1.3 billion dollars for the addressable market, with projected first-year capture of 22 to 40 million dollars under modest market share assumptions. These figures assume quantum computers in their current form with no technological advancement, and the growth rates used were drawn from the relatively quiet 2022-2023 period, making the estimates conservative by design.
Hello everybody and welcome to qBitTensor Labs Live. Today I am excited to be in our actual office space. Normally I take this meeting from my home office or on the road because I have awesome internet there. Our office is in the Colorado Quantum Incubator, associated with the University of Colorado Boulder, and I can actually see the IT gurus working on the internet connectivity right now. So if we end up dropping or the quality suffers, I apologize in advance. All right, let's jump right into it.
All right, so as always, the information being presented here is more information than we would usually put out. If you've been with us before, you know that these are ideas we're sharing, not promises. We never give investment advice, but today we will be talking a little bit more about dollars and tokens than we usually do. So again, I just want to emphasize: this is not investment or legal advice. We appreciate how all of you have treated this content so far, and if you agree with that, stay on; if not, drop off. All right, so today we're going to talk a little bit about tech support, which is, surprisingly, mostly a really good news story. So I'll be excited to hand it over to ShorShot Ryan when we get to that. Ryan is one of my most trusted cynics when it comes to this stuff, so I'll be excited to hear his takes.
When we get to R&D, we'll talk about some of the work we have to do to continue to scale peaked generation. We also have a new quantum geek whom you all have not met before, but whom we've known for a long time, Rob, who will be talking about Shor's. We'll also look ahead at how Subnet 63 starts to prepare for the transition from the current world we're in into the ridges of quantum that we've talked about, share some pretty good updates for Open Quantum, and then community. For the community section, there were a lot of good questions submitted, so we're going to spend a little time there. This looks a little light, but I actually think it's going to be some thick content today. So without further ado, we'll jump right in. Yeah, on the tech support side, Ryan, I'll hand it to you.
Yeah, so as you may have seen in the chat, we've had an update to the validator -- a couple of updates, actually, because there was an additional hotfix for performance issues that surfaced after the original peaked scaling update. Basically, on the stability side, we had a major code enhancement and, like I said, we've been able to improve performance, and we also had two additional validators come online recently. So that should also improve how many circuits people are receiving. Because of this, we have a massive increase in the number of peaked circuits generated, maybe even an overwhelming one for some people. Revisit times have gone down from days to hours, and circuit volume has seen an over 5x increase as well.
Yeah, and I think that 5x increase is even a low estimate, right? Like, what kind of ranges are you seeing in your data?
Yeah, I mean, I think it could probably be between 5 and 10 times the increase, but yeah.
So with all that increase in circuit generation, are any of the miners starting to squeal that it's too much yet, or are people handling it pretty well?
I haven't heard a lot of chatter. It would be nice to kind of get some feedback from the community and see how things are going with them.
Yeah, awesome. Well, nice work on that. And so, yeah, with that, I think this is actually probably the first time since we've launched where I don't feel like we're sort of behind the eight ball in terms of trying to keep up with miners. So knock on wood. Hopefully we can keep that lead, but things are looking really solid. But I think the next big thing then is getting to scaling peaks. So do you want to talk a little bit kind of about that.
Yeah, sure. I think my controller is messed up, so if you could hit next, that'd be great.
Yeah, no worries, we'll do this old school, no worries.
Yeah, sorry. I think that's actually my slide. All right. That was weird; I went the opposite direction. Just to recap for anybody who missed it: the main improvement in the most recent update has been to introduce this method of recycling pre-existing circuits. These things are really hard to generate in a completely fresh way. So what we've come up with is a nice, simple solution: recycle the large tensor networks we're generating and convert them into many different kinds of circuits that then get sent out to miners much faster. So far, we don't think anybody has broken this, which is really great.
And it's ended up being pretty great for the health of the subnet, right? We've dramatically increased the speed of circuit delivery to miners, and it's been really great to see things running smoothly. So far, nobody has complained, which is good. The main thing we want to think about looking forward is how to anticipate miner innovation, because while this has dramatically improved the current performance of the subnet, eventually miners are going to innovate and start asking for much larger circuits, more frequently. And that's something we need to stay on top of.
Yeah, and so with that, Ryan, what are we kind of looking at next?
Yeah, if I could actually get the next slide, I think. Yeah. I had the same problem as Will where the slide controller made it go backwards, and that confused the heck out of me and tripped me up. But yeah, no worries. Technology happens. So that last part was volume. Next is scale.
Yeah, just give me a shout out anytime, brother.
Today, miners are competing by solving the same level of difficulty faster than others, especially now that we've decreased the revisit times and given people more circuits to solve. So it's really a game of speed at this point. Scaling will enable them to take on harder challenges, and increasing the difficulty by one doubles a miner's work. We are currently bound by GPU memory when doing that generation. So our goal is to leverage the work we did to increase quantity; there's a path now to increase scale as well. We just need to start implementing that.
Yeah, okay, that sounds awesome. And with that GPU memory bound, it's been super harsh trying to push beyond where we're at -- 39 qubits right now?
Right, yeah, and you know, there is some capability to generate 40 and 41 with an H100, but it rarely succeeds. Currently, if a validator cannot get above a certain qubit level, then they're actually capped at that level, and they won't go above it because of failures that may occur. And so it's really a challenge of kind of like, you know, fighting between what hardware validators choose, what we think we need, costs of running these validators by the validator groups and how much they want to spend. What we want to do is come up with an innovative way that we could still keep increasing the difficulty and provide value to our miners.
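To give a feel for why each extra qubit is so painful, here's a back-of-the-envelope sketch. This is our illustration under an assumed rough 2^n scaling of simulation cost in the qubit count n, not the subnet's actual cost model or the validators' tensor-network internals:

```python
# Assumed 2^n scaling (illustrative only, not the subnet's real cost model):
# each additional qubit roughly doubles the work and memory, which is why
# a jump from 39 to 40 or 41 qubits is so hard to sustain on fixed hardware.
def relative_cost(n_qubits: int, baseline: int = 39) -> float:
    """Work/memory at n_qubits relative to the baseline, assuming 2^n scaling."""
    return 2.0 ** (n_qubits - baseline)

for n in (39, 40, 41):
    print(f"{n} qubits: {relative_cost(n):.0f}x the 39-qubit cost")
```

Under that assumption, 41 qubits already means roughly 4x the 39-qubit footprint, which matches the "occasionally succeeds, usually fails" behavior described for H100-class validators.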
Cool, that's awesome. And just to look back on this for a second, I'll go way off script: it was actually pretty mind-blowing when people started doing 37. Now people are like, give me 40, right? Which is pretty breathtaking, actually. It's pretty good progress.
Absolutely.
All right, so continuing on though, peaked is just one of the challenges we offer. I think a lot of people saw the announcement, but in qBitTensor Labs Live two weeks ago we sort of said, hey, if you guys want to see the paper a little earlier than we usually put it out, we'd be happy to do that. It was only on 8.21, on novelty search, that we first teased that Shor's was going to be a thing, and now it's out there live. And we actually have the architect behind this challenge and the primary author of the technical paper, Rob, on the call with us today. So, Rob, I'll hand it over to you to talk about Shor's.
Well, hi everybody. It's good to meet you.
Hey, Rob, actually, let me go off script for a second. Nobody's met you before. You should give a quick little recap, introduce yourself.
Yeah. I'm Rob. I have a PhD in physics, I've been interested in quantum algorithm development, and I joined Quantum Rings about a year ago. I do the algorithm development -- I've spent a lot of time with Shor's -- and I'm also interested in other types of algorithms, like financial algorithms, which maybe we'll see what we can work in. But that's just a quick introduction. Let me introduce you to the algorithm; that's probably more what you're interested in.
So, the Shor's challenge. Shor's algorithm is exciting because it attacks factoring problems, the kind of hard math that underpins modern cryptography. It's important because one of the really big exciting things that got quantum computers on the map is this very algorithm: it's predicted that it will eventually break RSA encryption. It can't do that now, but what you can do now is run it in simulation. And so that's what we're getting at.
So the goal of Shor's is to factor a semi-prime number into two prime numbers by doing order finding, and this challenge is going to focus on the order finding, which I'll explain more in a bit. The structure I'll also explain; I have a diagram, which is easier to look at than this short paragraph. And then the output: one of the important things is that Shor's is a factoring algorithm, so you take your semi-prime and you factor out two primes. But the quantum part really stops at the period finding, and that's where the challenge is going to be. We're going to generate the data, and it's going to be different from the peaked circuit, because instead of sending the final answer, you're going to send your raw data. So let me show you what the circuit looks like.
Our slide remote control backfires again. Sorry everybody, do the opposite of what you think.
Okay, yeah, and it looks like I got this in the wrong order, so I'll have to remember that. But I wanted to show you guys Shor's circuit, because this is really the way to understand Shor's. The generating function, I don't know, can you see my cursor?
No, no, we can't.
Alright, so the generating function is at the bottom. This f of x is equal to a to the x mod n. That is really the key function in Shor's. And a mod function is periodic, like a sine function. And so you can imagine if this was f of x is equal to sine x, it would be a periodic function. If you want to find the period of a periodic function, you do a Fourier transformation. And that's what Shor's does.
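To make the periodicity concrete, here's a purely classical, brute-force illustration of the period (the "order") that the quantum Fourier transform extracts. The numbers below are toy values chosen for illustration, not anything from the challenge itself:

```python
# Classical brute-force view of the function at the heart of Shor's:
# f(x) = a^x mod N repeats with period r, the order of a mod N. The quantum
# circuit finds r via a Fourier transform; here we just iterate to show
# what "period" means. N = 15, a = 7 are toy values.
def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n); assumes gcd(a, n) == 1."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

# f(x) = 7^x mod 15 cycles 7, 4, 13, 1, 7, 4, 13, 1, ...
print(order(7, 15))  # 4
```

Of course, this loop takes exponentially many steps for cryptographically sized N; the whole point of the quantum circuit is to find r without iterating.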
So the register on the top left is your x input value. If you were going to put in 1, the bottom qubit would be 1 and the other two would be 0. If you were going to put in 2, the middle one would be 1 and the other two would be 0. That's where you put in your x value. Now, the way Shor's works is you put these Hadamards on, which -- I don't know if you guys have been following how quantum gates work -- put the register in superposition so that you're looking at all of the x values at once. So if you imagine looking at sine x, it's like you have your entire x axis in view at once while doing the Fourier transformation.
And then at the bottom, what you're doing is representing your a values, and this is -- my text here is not great, but it should be a to the 2 to the 0, a to the 2 to the 1. So this is where you're creating that a to the x part of it. That is what Shor's circuit looks like. And then how to go from the period to the primes takes a little bit of math that I won't get into here, but it's easy to find, and you can certainly ask us and we'd help you. That's not where the challenge is focused, though; that's just for your own learning about Shor's.
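The period-to-primes step alluded to above is short enough to sketch. This is the standard textbook post-processing, not the challenge's grading code: given the order r of a mod N (with r even), the factors fall out of gcd(a^(r/2) ± 1, N). Toy values again:

```python
# Standard classical post-processing for Shor's (textbook math, not the
# challenge's verification code): with the order r of a mod N known and r
# even, gcd(a^(r/2) +/- 1, N) yields the two prime factors of N.
from math import gcd

def factors_from_order(a: int, r: int, n: int) -> tuple[int, int]:
    """Recover the prime factors of n from the order r of a mod n."""
    assert r % 2 == 0, "odd order: rerun with a different a"
    half = pow(a, r // 2, n)  # a^(r/2) mod n via modular exponentiation
    return gcd(half - 1, n), gcd(half + 1, n)

# N = 15, a = 7, r = 4: 7^2 mod 15 = 4, so gcd(3, 15) = 3 and gcd(5, 15) = 5.
print(factors_from_order(7, 4, 15))  # (3, 5)
```

Real instances also need the edge-case checks (an odd r, or a^(r/2) = -1 mod N, means retrying with a different a), which is part of the "little bit of math" Rob mentions.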
The challenge, like I said, is the order finding. We're going to give you the circuit, a QASM circuit just like you're used to getting, and you can run it on a CPU, GPU, or QPU, however you want to run it. You won't submit the final answer; you'll submit your raw measurement data, and we'll analyze that raw data.
The difficulty, similar to the peaked circuit, grows exponentially, or at least geometrically -- I think exponentially -- as you go up in level. And it's going to be quoted in levels of difficulty rather than in qubits. The reason for that is the registers: if you have two registers, they grow together, so you can't just add one qubit; you've got to add two at a time or three at a time. So we're doing it in levels of difficulty. And just to be fair to you, I've put features on the raw data as verifications that it's been run, to make it more of a proof of work. So the output data you get is not going to be a pure signal; it's going to have some features added to it. So anyway, that is the challenge.
That's really cool. And that whole adding of features is a cool security mechanism you've added. One of the questions that came up in the pre-meeting Q&A was around some of the fingerprinting stuff we've talked about in the past, so I'll comment on that. This is a Shor's-specific fingerprint that Rob is introducing to basically ensure that real quantum simulation occurs. Really cool.
All right, so as we continue on, that takes us through most of the technical part, but to give kind of like full visibility into where Shor's really is, we had previously shared a version of this kind of high level status that showed that most of the work was done. We're putting dates on it now. We will publish the public branch that shows Shor's in its full implementation on October 9th. I believe that is a Thursday, but don't quote me on that. And the aim is to give miners time to go in and actually start working on that, get that implemented. And then the intention is to launch it on Monday, October 13, US time. So if you're overseas, that'll be kind of in your afternoon, evening. But hopefully that should give all miners a chance to onboard and get ready for the challenge. So we're super excited to see how everybody does with it.
All right. Looking ahead. So we heard the team talk about the peaked circuit, which has been extremely reliable and very hard to solve with anything other than a real quantum simulation, and some of the tuning they've done to improve how many get generated. We'll continue to work on scaling those up even further. Rob shared Shor's, this new challenge, which is super exciting because it's literally the algorithm that triggered governments to start massively funding quantum computing. So our miners will get some hands-on experience with that.
But as we look ahead, I just want to touch on the 63 roadmap, and then we'll transition into Open Quantum before getting into more of the community topics. We talked about this in the qBitTensor Labs Live on September 4th. Basically, quantum computing you can look at as analogous to the shoots of quantum, and quantum innovation you can look at as eventually analogous to the ridges of quantum.
And if we kind of say like, okay, well, we just described these problems where you give me a problem, I execute it and I deliver the answer, and you tell me how I did and I get emissions, and ridges is much more focused on delivering innovation on algorithms through source code, how do you sort of make that transition?
Today, the subnet kind of looks like this, again, with these challenges to cut your teeth on. But we're gonna be expanding that. And so, we will continue to operate what we originally called the phase one challenges for a period of time, but we're really looking to start to transition more and more over time to these much bigger things that are really focused on advancing some of the biggest challenges that face quantum computing. And in doing that, to create a wake of quantum IP that can either be commercialized or licensed.
And again, some of the big things, if you haven't heard, the intention is to do that in USD, where these other challenges to cut your teeth on happen in Bittensor directly with alpha and tau, and to really try to focus on bringing people from outside Bittensor in to do that. Now, as we do that, and as we prepare for that, we need to sort of begin to align incentives.
One thing we quietly did a few weeks ago -- we announced it, so it wasn't done opaquely, but we didn't draw a lot of attention to it -- was to start collecting a small portion of miner emission into an innovation pool. Right now, that's at about 10%. A big part of that is starting to fill the pools that will be used for incentivization in this next phase. There will be a slow increase as we go. We just want to highlight why we're doing it, and in future weeks we'll talk a lot more about the actual structure of how the prizes and incentives work in that new realm.
But I just kind of wanted to give a heads up now that we'll slowly be turning that dial and we'll do it in a way that's very thoughtful to the miners who are on the current challenges, but in a way where it starts to again populate those award pools that we'll be using in the next phase. Also, I suppose, while we don't usually comment on tokenomics, there probably is like a tokenomics impact of that change also.
Okay, so the big vision though is to get both the quantum compute and these quantum challenges live on Open Quantum. And so without further ado, let's talk more about Open Quantum. All right, so this was kind of like the format that we used to talk about the status of Shor's. And it's just not deep enough to give you kind of any level of detail. So what we'll do is we'll go kind of a level deeper than that.
If you look at this kind of in-progress stage, there are a number of different elements. First, we have to develop the subnet code. And the subnet code has made actually tremendous progress. So validators, the validator code is written, the miner code is written, job management scoring, like a lot of the key things are done. This has all been run on local net. But we're at the point where we're about to start getting things onto Testnet. And we're actually just about to do our first end-to-end dry run where things go all the way from Web UI onto physical quantum computer, which is pretty exciting progress.
But that subnet code is just what it takes to run most subnets. We actually have this whole Open Quantum platform that needs to be built on top of that. And so the Open Quantum API has made a huge amount of progress also. Really the key things remaining in here are a little bit of business logic and then we haven't implemented like the credit management or e-commerce yet, but most of the rest of it is implemented. The website has been wired into most of that and we'll show you what that looks like here in a minute. And the piece that we really need to get to once we start to wrap up the remaining API work is to focus in on like integration into the quantum frameworks, which is, you know, sort of like instead of users using our website, actually installing Python libraries that wire into their workflows and just let them execute circuits right away.
And so a few weeks ago, we had shared a video that was like a mock-up. Now I'm going to share the exact same workflow, but actually implemented. So you can see one click, single sign-on to actually get into the account with GitHub and Google. You can see that we're actually querying jobs based on the user account. You can create those jobs using the systems that the backend says are available. You can go in and upload your QASM files, give it all the necessary metadata that you're looking for. Now, again, the e-commerce and the credits are mocked up still, so you'll probably see some funny numbers on those, but don't be distracted by that. You submit your job and that job ends up going all the way down to where the validators then pick it up, transfer it into the miners that wrap quantum computers, and the job goes to the quantum computer, which is pretty intense.
Now, you can also from here see that filtering is working, but you can also cancel your jobs and manage them if you decide that you had set some parameter wrong or whatever. And so, yeah, I guess the gist is that -- I can't remember when we showed this, whether it was four weeks ago or two weeks ago, the mockup -- but pretty mind blowing that the team has taken this all the way down to implementation. That's all wired up, I mean, hats off.
In fact, I saw something recently where we sort of talked about all of those different major components, which all have smaller components within them. But I think it was like -- and Ryan, it was like 150,000 lines of code so far in just the API project. Is that right? Am I thinking about that right?
Yeah, in our full stack area for just the backend APIs and services.
Yeah, wild, wild progress. So nice job so far, guys. And yeah, we'll be super excited to get that launched and going. All right, so hey, as we go, let's talk community. And so we spent a lot of time last time we talked with sort of like the positive hype, because we're coming off of Quantum World Congress.
And one thing I really like, you know, I think you guys are pretty aware that this always happens, but it seems like especially relevant in deep tech like quantum, where you'll sort of have like two pretty polarized sides of things. You'll have like a lot of people out there that are hyping things pretty significantly in the positive direction, but then you also have a lot of people that are like hyping things pretty negatively. And so I kind of just wanted to talk a little bit about that and just share what we should do with that type of information and how we sort of like separate the noise from the signal.
Now this is one example that one of the guys on the team shot over to me. I don't know if you guys are aware of Martin. Whether it's quantum or anything, I think Martin typically operated more in the pharmaceuticals area. But he represents sort of the archetype of the person who essentially makes all their money shorting technologies. And so the way his MO goes is you take a short position on something and then you go onto all the socials and you just blast it any opportunity you can.
And, you know, that is one archetype. There's also, I think, a more interesting archetype, which is something this user posted. They sort of said, hey, there's a lot of hype around 48 and 63, and I'd encourage everyone to listen to the actual scientists before investing. I think they're saying don't invest, but actually I agree with the first part: you should listen to the science and then make a choice.
But there are very well respected physicists out there who are also pretty skeptical. And that's actually like really normal for deep tech because essentially we live in a world where science still needs to happen. And so there's a lot of really cool theory and discovery that's gone out there. And in order to actually harden those things, you have to have the hard skeptics that will also beat the hell out of these ideas.
And so yeah, in this particular video, this was actually a physicist taking a negative view on quantum, and I think she had a lot of really fair points. My takeaways were significantly different from hers, but she raised a lot of valid perspectives. So what I would say is: do your homework. Get to know it, listen to the good sides, listen to the bad sides. And if you have any questions, hit us up; we'd love to talk through what's real, what's not real, what physicists are certain about, and what they're speculative about, because those things do all get blended together in an interesting way.
But yeah, at the same time those types of things are happening, there are lots of major positives: HSBC just did some awesome work showing a quantum algorithm doing algorithmic trading, we continue to see governments prioritizing quantum along with AI and basically no other technology areas, which is pretty crazy, and lots of other exciting things are coming from the market. So obviously lots of positive hype, which really does outshine the negative, I think.
All right, so Q&A, you guys put a lot of questions in and we're gonna go a lot deeper than we usually go. So I'm gonna just repeat that all this stuff is ideas, not promises. I'm gonna say some things about financials. Most of them are models and not actuals. There's no investment advice in here, but let's answer some of the questions you're looking for.
All right, so we'll start with kind of a softball. Jeremy, let's go. I think this is Quantum Tangerine -- maybe he's trying to get rid of that name, I don't know; if so, I'll stop calling it Quantum Tangerine. I thought it was a cool name. But this was a really solid question: okay, free quantum computing for all is a cool user acquisition strategy, but what happens when you just start getting bloated with spam? Super legit question.
So there's a lot of ways we have talked about handling that. There's sort of like the pay to play, where you say, either as a one-time fee when you create your account, or as a recurring fee on some periodic basis, or even per circuit, create some sort of nominal fee to disincentivize people from submitting dumb stuff. Now, there are a lot of students in this field, a lot of academics in this field, and so the pay to play option is a real option, and we definitely won't rule it out, but it's also not a preferred option, because we really are focused on trying to democratize access and allow anybody to innovate on the space. But that is one option.
The other option is a very simple per-user throttle, where you basically say, for any given account, you can only submit one job per period n. That would work. What is most common in scientific computing is dynamic throttling, which essentially deprioritizes where you fall in the queue based on how much usage you've accumulated. This is the model most universities use in their scientific computing clusters, and it would also be a very good model.
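As a sketch of that dynamic-throttling idea (our illustration, not Open Quantum's implementation), a fair-share queue is only a few lines: each user accrues usage as their jobs run, and the pending job whose owner has consumed the least goes next:

```python
# Fair-share queue sketch (illustrative, not Open Quantum's code): users
# accrue usage as their jobs run; the pending job whose owner has used the
# least runs next, with arrival order breaking ties. This is the pattern
# used by fair-share schedulers on university compute clusters.
class FairShareQueue:
    def __init__(self):
        self._usage = {}   # user -> accumulated cost
        self._queue = []   # (seq, user, job) in arrival order
        self._seq = 0

    def submit(self, user: str, job: str) -> None:
        self._queue.append((self._seq, user, job))
        self._seq += 1

    def pop(self, cost: float = 1.0):
        # Lowest current usage wins; earlier arrival breaks ties.
        entry = min(self._queue,
                    key=lambda t: (self._usage.get(t[1], 0.0), t[0]))
        self._queue.remove(entry)
        _, user, job = entry
        self._usage[user] = self._usage.get(user, 0.0) + cost
        return user, job

q = FairShareQueue()
q.submit("alice", "job1"); q.submit("alice", "job2"); q.submit("bob", "job1")
# A heavy user's second job is deprioritized behind a light user's first:
print([q.pop() for _ in range(3)])
# [('alice', 'job1'), ('bob', 'job1'), ('alice', 'job2')]
```

A production version would decay usage over time and weight cost by circuit size, but the deprioritization mechanic is the same.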
With all of these things, these are all actually very easy to implement. And so for launch, the intention really is to focus on what you highlighted there, Jeremy, which is user acquisition, getting more users, more data into the system. And so at launch, we will make it open, but we'll have these essentially ready to be able to limit spammers as we go.
Oh, you know, there was one other option we had talked about that we didn't put in here, which was basically an MFA-type solution: when you submit a circuit to be executed, it sends you a text message, and you need to follow a link in that message. That basically prevents people from just creating fake accounts and spamming you.
All right, halving. I think this one just came in this morning, maybe early this morning, but this is a super relevant question, and so we'll start to share some pretty interesting details on the way that we're looking at this. Basically, the question said: hey, in December, there's going to be this halving event. It seems like it's going to be pretty disruptive. How are you going to handle it? There are some details that are probably a little bit incorrect in the post, but the principle of the thing is totally right.
So today, pre-launch, we're taking the miner emission and we're burning it. Obviously we don't have miners mining yet, so it's good for everybody to just burn that emission. Most subnets, when they launch, stop burning that emission and start feeding all of it out to the miners. Our aim is actually going to be to throttle that and slowly turn the dial up.
And so if you imagine at launch, trying to continue to burn something like 80% of emission, one of the key things here is that the miner emission is essentially going to define the operational budget for the quantum computers. And so at some point, once demand is high enough, even if we have 100% of the emission going into it, there will be queues. There will be people lined up to execute their circuits.
To limit the supply of quantum computers that are available, so we don't create a zero wait time for our users, which would be unrealistic to maintain as we scale, we'll be burning substantial amounts of alpha at launch to create a queue.
Now, over time, leading up to the halving, our intention is to get to burning about 50% of the emission. The point is that all of these quantum computers are being paid for in US dollars. People were bickering about whether it's the alpha that's getting cut or the tau that's getting cut, but the net net is that, at least in the short term, the US dollar budget will be impacted by the halving.
So by keeping the burn at about 50% up until the halving, that gives us a huge amount of flexibility to manage the impacts without actually affecting our users' experience. Now, the other thing (again, alphanomics, tokenomics, whatever you want to call it): as I understand it, the launch of a subnet often has a pretty significant impact on investors, because you go from burning all of this miner emission to distributing all of it, and that changes the tokenomics quite a bit.
By actually turning that dial slowly, I think we also create a much more comfortable environment for investors because the tokenomics math just looks a little nicer. Now, that said, everyone has a plan until you get punched in the face. And once we get going and we launch, we'll see how things go. Those numbers probably won't be the exact numbers, but take them as the current guidance.
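The flexibility described above comes down to simple arithmetic: if half the emission is being burned before the halving, then after the emission is cut in half, relaxing the burn can keep the US-dollar operational budget flat. This is a sketch with made-up numbers; the emission rate and token price are placeholders, not actual subnet parameters.

```python
def miner_budget_usd(emission_tokens, token_price_usd, burn_fraction):
    """USD operational budget left for miners after burning
    burn_fraction of the daily emission (illustrative only)."""
    return emission_tokens * token_price_usd * (1.0 - burn_fraction)

emission = 1000.0   # tokens/day (assumed placeholder)
price = 2.0         # USD per token (assumed placeholder)

# Pre-halving: burn 50%, so half the emission funds the quantum hardware.
before = miner_budget_usd(emission, price, burn_fraction=0.50)

# Post-halving the emission is cut in half; dropping the burn toward 0%
# keeps the USD budget flat, absorbing the halving without touching users.
after = miner_budget_usd(emission / 2, price, burn_fraction=0.0)

assert before == after  # same operational budget on both sides of the halving
```

In practice the dial would move gradually rather than jumping from 50% to 0%, and the token price is itself a moving target, but the buffer mechanism is the same.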
All right, value creation. So some good questions around what is the actual value. And so I'd love to spend just a couple of minutes sort of modeling that. Now, again, this is model only, but this is one of the two top models that we use in terms of estimating market.
So if you look at Qiskit users, Qiskit is IBM's quantum framework. They've had a couple of events where they released user counts at milestones. We also have a survey from the exact same time period from the Unitary Fund that showed what the market share of Qiskit users was to the total universe of quantum users from a survey. And so we can use that number to essentially forecast out what the actual estimated lower bounds of the number of quantum developers who are playing in the space are. And it's a pretty big, I mean, it's a pretty reasonably big number.
We can also apply the compound annual growth rate from these periods to forecast where that trend lands in the future. Now keep in mind that that growth rate is from 2022 to 2023, which was actually not the most popular time for quantum. 2023 to 2024 and 2024 to 2025 have actually been a lot bigger, and so these numbers probably under-represent what the market is.
So then you can say, okay, well, what is that market worth? Now we have to start getting into assumptions, which is the catch with modeling. So consider the error bars on this to be 100%, right? Because until we start actually seeing business, these are all just assumptions.
But if we assume that 15% of the community is hobbyists, 3% are professionals, and that leaves 82% for what I call freeloaders, right? They're not going to pay for anything anytime soon. Or maybe on the upper bound, we give those first two slightly higher percentages. And then we say, okay, if you're a hobbyist, maybe you're doing 50 circuits a year on a real quantum computer, and maybe 10% of the time you're paying a couple of bucks to jump the line, and maybe 50% of the time you're paying a couple of bucks to do bigger executions.
But if you're a professional user, you're actually doing a significant amount of work, probably in the low hundreds of executions per week, and you're doing these as private executions 100% of the time, because you don't want your circuits being shipped out to a universe of miners, right? That carries a much higher price tag.
And if you map those numbers in, it creates a pretty compelling market, right? So I divided everything by a million so that these numbers wouldn't take up very much space. You start off with something like a $750 million market on the lower estimate and a $1.3 billion market on the upper estimate.
And if you say, well, let's assume we can get some modest share, which again, with a free quantum compute offering, we should be able to attract a substantial portion of the market. So these also feel like they're sandbagging a little bit. That creates capture for Open Quantum that would look like something like $22 million to $40 million in the first year with a pathway leading up to billions of dollars.
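The segment model walked through above is straightforward to reproduce as arithmetic: split the developer base into hobbyists, professionals, and freeloaders, assign each segment an execution volume, paying rate, and price, and sum. Every specific number below (developer count, prices, frequencies) is an illustrative placeholder, not the actual figures behind the $750M-$1.3B estimates.

```python
def segment_revenue(developers, share, runs_per_year, pay_rate, price_usd):
    """Annual revenue from one user segment: paying executions x price.
    All inputs are assumptions for illustration."""
    return developers * share * runs_per_year * pay_rate * price_usd

developers = 500_000  # assumed lower-bound count of quantum developers

# Hobbyists: 15% of the base, ~50 circuits/year; 10% of runs pay ~$2 to
# jump the queue, 50% pay ~$2 for bigger executions.
hobbyist = (
    segment_revenue(developers, 0.15, 50, 0.10, 2.0)
    + segment_revenue(developers, 0.15, 50, 0.50, 2.0)
)

# Professionals: 3% of the base, low hundreds of runs/week (here 150/wk),
# always private executions at a higher assumed price point.
professional = segment_revenue(developers, 0.03, 150 * 52, 1.0, 5.0)

# Freeloaders (the remaining 82%) contribute zero revenue by definition.
total = hobbyist + professional
```

Even with deliberately conservative placeholders, the professional segment dominates the total, which matches the intuition in the talk: a small slice of always-paying private users drives most of the market.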
And by the way, when we're talking about the anti-hype, this is for just quantum computers as they are. This isn't for quantum computers that are actually solving the world's biggest problems. This is literally for giving developers access to these quantum computers in their current form today with no technological advancement.
And so, yeah, we're pretty excited about the market. We absolutely think it's real. Last, there were some questions about HHL and proof of compute. And so I know I do have Rob and I do have Will on. So I might actually ping you guys for some opinions on that.
Before I do, the first question was from the community, and it said: hey, will you tackle HHL? I'm impressed that you're asking about HHL, because when you hear people talking about quantum algorithms, they very rarely talk about HHL. But my starting point is, regardless of HHL, we are trying to get subnet 63 migrating from these sort of toy problems into much bigger, meaningful problems. And so whether or not we would do an additional challenge probably depends a lot on how Shor's and peaked consume the attention of the mining community.
But Rob, I'd love your take on just HHL, kind of a few thoughts about that algorithm in general, if you're available.
Yeah, I'm here. So I think HHL is an interesting algorithm. It's one of the earlier algorithms that was developed. One of the things I find interesting is that even though quantum computing has been around for a while now -- I can't remember whose slide noted when Peter Shor came out with Shor's algorithm -- there are not that many different algorithms that have been developed. HHL is essentially a quantum algorithm for matrix math, and I've tried to use it before. One of its limitations is that it requires a sparse matrix, and that restricts the problem set it can be applied to. So the way I think of it is that it's not a general-purpose matrix math algorithm but more of a special-case one.
Yeah, yeah, I think that's a really good perspective. The other thing I would say, because I tend to be less in the weeds than you guys sometimes are, is that it's also harder to talk about. From a marketing perspective: Shor's, what's that going to do? It's going to break encryption. Okay. QAOA, what's that going to do? Any quadratic unconstrained binary optimization problem, right? So there are ways to map a lot of the other quantum algorithms onto really big, exciting things that the world wants to talk about. And HHL, I think it's practical for a subset of problems, but it's lower down in the stack.
So totally a legit question, but it probably wouldn't make it to the top of the list, by my take. But I'd love to hear more on that. And then Will, the second one in here was about fingerprinting. I'll share just a quick thought on it, and then maybe I'll have you go a little bit techier on me.
But basically, this was saying: hey, with this whole proof-of-compute thing, if you're doing closed loop with the quantum computers, do you really need to tackle that right now? When do you need to tackle it? Only when there are QPUs everywhere? The key distinction I would share, and then maybe I can get some updates from you on whether there's any evolved thinking here, is that one of the key enablers is actually letting miners participate through HPC simulations in Open Quantum, the really broad open network of miners.
Because with the quantum computers today, we don't need to do verification; we can verify those companies directly. And fingerprinting wouldn't necessarily work there anyway, because the hardware itself is going to be fairly noisy and error-prone, so the fingerprint would be even harder to detect. So the initial target for fingerprinting was really proof of compute for HPC simulation. But Will, I'd love any updated thinking you have on that one since the original statement we shared back during the novelty search.
Yeah, well, I would say that probably my thinking hasn't changed overall too much, which is to say that, of course, this is a very important problem. I'm particularly tickled by the thought of QPUs becoming so commonplace that we need to have some sort of protocol to verify that they're running on quantum hardware, having worked on quantum hardware myself.
But yeah, certainly this is a very important thing to think about in the near term for HPC on Open Quantum. And as you know, it's something I have been thinking about, because it's an interesting problem in its own right: it has very strong analogs to quantum error correction, which is a different take on the same idea of imprinting something in the circuit. In the case of error correction, it's being able to detect where something has gone wrong, to tell exactly where in the circuit it went wrong, and how to correct it. And so that's one of the main inspirations for what I'm thinking about right now for Open Quantum.
Yeah, awesome. Thanks for that. I will say, Will and I were talking just this week, and there was this question of: hey, should I be working on the next challenge for 63, or should I be working on fingerprinting? And the answer was fingerprinting. It's time. It's going to unlock a lot for us. So yeah, cool question, Jeremy, and we'll have more updates on that to come.
Anyway, I think that takes us through it. Oh, no, we do have a quick community sentiment item. This is sort of a standing item: I had somebody on the team go grab a bunch of sentiment, and it looks good. People are very positive. Even Neuromancer, who we mentioned earlier, was saying nice things, I think. There might have been sarcasm I wasn't picking up on, but I feel like Neuromancer was giving us some kudos.
People have been giving us shout-outs for the peaked circuits being generated a lot more aggressively, and the dTAO investors seem to be pretty bullish. So yeah, right now we're feeling pretty stoked about things. And again, as always, we really appreciate all of you and your participation in the system. We appreciate the continued engagement in the communities, both pointing out hiccups we might face if we don't think about them in advance and calling out the things we're doing right or wrong. So keep up all the engagement, and we'll look forward to seeing you soon. All right, back to work. We'll see you.