Customer X: Hi, D-Wave? So, I hear that you have this computer that can be used to solve computationally hard problems. Oh, yes, sorry, should have said a quantum computer, my bad. Well, you know we've got this hard computational problem, [Editor: problem description deleted to protect identity of involved company.] So what do you think, can you solve this problem for me? Great! Let me put you in contact with my technical guy. Yes, I'll wire the money to your account today.
Months later.
Customer X: Hi D-Wave, thanks for all your help with getting us set up to use your machine to solve these hard computational problems. We ran the adiabatic algorithm a few times, but it doesn't seem to be working. Do you have any suggestions? Oh, try a different adiabatic annealing schedule, okay, I'll pass this on to my technical guy. Thanks for your help. Is it still raining in Vancouver?
A day later.
Customer X: So we tried a new annealing schedule, but it didn't seem to help. Well it helped on a few of our instances, but not all of them. Any suggestions? Okay I can hold. [Celine Dion music ensues for twenty minutes.] Right. Your tech guys suggest this particular annealing schedule. Great, we'll try that! How's the rain?
An hour later.
Customer X: Well okay, so we tried that one and again it got a few more answers correct, but now it doesn't work on the other instances. Can you tell me where that annealing schedule came from? Oh, I understand company secret. Okay can you send me another annealing schedule? Rain again? Sheesh, Noah would have loved Vancouver.
Days later, many annealing strategies shown not to work.
Customer X: So, um, I guess I should have asked this when we started, but what understanding do you have about the speed-ups guaranteed by your machine? I mean, certainly you have at least some evidence that the machine will be able to solve the instances that matter, right? Or at least tell me if my instances will be sped up on your computer? Hello? Hello?
[This blog post brought to you by the letter R and the quote "For now the adiabatic quantum optimizers have the upper hand."]
It's a pretty safe assumption that anyone trying to build a quantum computer in today's economic and scientific climate won't be "guarantees and safe bets" types of folks ;)
Indeed! But I wonder how this meshes with trying to sell the end product to a group that isn't quite so open to uncertainty? (Or at least that's the point I was trying, but failing, to convey.)
Anybody who would place bets on any type of quantum computer (at least at its current stage) is going to be culturally very similar to the people who are trying to build them in the first place. No-one in their right mind asks for guarantees. What they want is to understand and quantify the risks vs. the potential pay-off.
I understand that those placing bets (by which I assume you mean investors) would have a high risk profile and not insist on guarantees. But why would a customer be willing to take these same risks?
"No-one in their right mind asks for guarantees. What they want are to understand and quantify the risks vs. the potential pay-off."
But this is exactly what I don't understand. If you are a customer with a hard computational problem, and you dump a bunch of money buying the product with only a hazy idea of whether your instances will be sped up, well, then you have to be crazy to do this. There has to be at least some evidence that the product will produce some speedup. So while it's fun to bash academics for their desire for evidence, I don't quite understand why customers wouldn't be just as demanding.
No, I mean customers. The early-stage customers for these machines are not primarily interested in the "computing" aspect; they tend to be more interested in the "quantum" part. There is a large market for scientific test equipment, which is realistically what these things are right now. These machines can help answer some of the most important unresolved questions in all of science, in particular in understanding the foundations of quantum mechanics via experimental probes of open quantum systems.
Which machine do you think is a better probe of fundamental physics: a 128 qubit quantum computer or the LHC?
D-Wave's new business plan is to build scientific instruments?
Comparing the LHC and a quantum computer for fundamental physics is like comparing PCR and X-ray crystallography for biology.
Geordie: The LHC wins hands down compared to the adiabatic devices you are concerned with. Simply put, we expect quantum mechanics to correctly predict the behaviour of quantum computers. Even if this fails, it will be in the limit of large coherent states, which, as I understand it, do not exist within your devices.
The LHC is a completely different beast, as we know the standard model isn't the full story, and so we expect the LHC to produce cool new results.
I look at a question like Geordie's about the LHC vs a large adiabatic quantum computer as a question of how many orders of magnitude into a new regime a device pushes.
For the LHC this is clearly energy, what particles are smashed together, etc. For the AQC, it's not clear what this number is. It can't just be the # of qubits, as coherence is not maintained across all of them. So the question: what is the proper metric for testing the "quantumness" of a device?
I would have thought distance from a separable state would be a good start.
Distance to separable seems an odd test. I mean, quantum theory could break down only for separable states :)
It could indeed, but then having a bigger quantum computer doesn't really help you, does it? It seems to me that the obvious regime in which quantum mechanics has not fully been tested is that of large-scale coherences, which would seem to suggest large-scale entangled states as the obvious place to look. Long time scales for maintaining coherence are another, but since it seems to me that physics should be invariant in time, I feel we are less likely to get interesting results simply by keeping an electron in a superposition for a really long time.
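To make "distance from a separable state" concrete, one standard choice (by no means the only one; the geometric measure of entanglement is another common option) is the trace-norm distance to the set SEP of separable states:

    \[
      D_{\mathrm{SEP}}(\rho) \;=\; \min_{\sigma \in \mathrm{SEP}} \tfrac{1}{2}\,\|\rho - \sigma\|_1,
      \qquad \|X\|_1 = \mathrm{Tr}\sqrt{X^{\dagger} X}.
    \]

This distance vanishes exactly on separable states, so a device that demonstrably produces states far from SEP is operating in the large-scale-entanglement regime being discussed here.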
>Simply put, we expect quantum mechanics to correctly predict the behaviour of quantum computers.
Why? Shouldn't we expect quantum computers to correctly predict the behaviour of quantum mechanics? What do you guys have against the real world? ;)
>Even if this fails, it will be in the limit of large coherent states, which, as I understand it, do not exist within your devices.
Assuming that large coherent states can exist at all in any open quantum system.
Sorry - just thought I'd stir things up with a nice experimental spoon.
Suz: I have nothing against the real world. Quantum mechanics is a well defined mathematical model of how the world works. What an experiment does is measure how the world really works. When this matches the predictions of QM, that is good evidence that QM is a good model; but when there is a discrepancy, we aren't so much learning something new about QM as learning that the model is failing for some reason. It may simply be that we have made some invalid approximations, since actually determining the wavefunction is too complicated. But in the case where an experiment deviates from what quantum mechanics actually predicts, then we aren't learning something new about quantum mechanics per se, but rather learning that it is not a correct model for how the universe operates. Hence I would only expect quantum computers to predict the behaviour of quantum mechanics if quantum mechanics is indeed a correct model for the computer. In which case, we haven't really learned anything fundamentally new about physics, but just solved a potentially computationally hard problem.
> Assuming that large coherent states can exist at all in any open quantum system.
Well, the whole point of building a quantum computer is surely not to simulate open quantum systems, but rather to approximate closed quantum systems.
> Well, the whole point of building a quantum computer is surely not to simulate open quantum systems, but rather to approximate closed quantum systems.
Hi Joe, why do you think this?
Geordie: A couple of reasons.
1) Closed quantum systems can efficiently simulate open quantum systems (via unitary dilation; see the sketch below), but the converse is very unlikely to be true in general.
2) Most of the quantum algorithms we know of require substantial entanglement to gain any speed-up, and there are no-go theorems for speed-ups from separable pure systems.
3) A general-purpose quantum computer must be able to simulate another (noiseless) QC efficiently. This is only something we know how to do with close-to-pure systems.
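For readers who want (1) spelled out, the standard fact behind it is the unitary (Stinespring) dilation: any open-system evolution, i.e. any completely positive trace-preserving map \mathcal{E} on a d-dimensional system, can be reproduced by a unitary acting on the system plus an ancilla "environment" (dimension d^2 suffices), with the ancilla then discarded:

    \[
      \mathcal{E}(\rho) \;=\; \mathrm{Tr}_E\!\left[\, U \left( \rho \otimes |0\rangle\langle 0|_E \right) U^{\dagger} \,\right].
    \]

So a closed-system (unitary) quantum computer with modest ancilla overhead can simulate open dynamics, while no analogous construction is known in the reverse direction.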
Dave: Re: Customers would be crazy
Putting scientific equipment aside, let's imagine that there is a 'first' customer out there. What would they look like?
A) They probably have a very large and profitable business, with costs on the order of 5e8+ USD/year. Those costs are probably already being optimized by the best classical and heuristic algorithms money can buy.
B) They probably have competitors who are doing exactly the same.
C) Since they're large and profitable, they probably have a division that's concerned with their long-term outlook. Not a 6-month outlook. A 10-year outlook.
D) You work in said division. Your company is going to be paying 5e9+ USD over the next 10 years to cover their (growing) costs. A disruptive decrease in your costs might be a 3% advantage over your competitors averaged over that period. Where would you look for solutions? How much would you be looking to spend in year 1? How much would you be looking to spend in year 10?
How crazy does that seem now?
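Putting rough numbers on Murray's point D above (the 5e8 USD/year cost base and 3% advantage are his hypotheticals; the 5% annual cost growth is my own placeholder for his "(growing) costs"), a minimal back-of-envelope sketch in Python:

    # Back-of-envelope for Murray's scenario D.
    annual_cost = 5e8   # USD/year starting cost base (Murray's figure)
    growth = 0.05       # assumed annual cost growth (placeholder, not his)
    years = 10          # the 10-year outlook from C
    advantage = 0.03    # 3% average cost advantage from D

    total_cost = sum(annual_cost * (1 + growth) ** t for t in range(years))
    savings = advantage * total_cost
    print(f"10-year costs: ~{total_cost:.1e} USD")  # ~6.3e9, i.e. Murray's 5e9+
    print(f"3% advantage:  ~{savings:.1e} USD")     # ~1.9e8 potentially at stake

On those assumptions there is order 2e8 USD on the table over the decade, which frames how much a rational customer might spend on a year-1 experiment.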
Hey Murray,
"A) They probably have a very large and profitable business, with costs on the order of 5e8+ USD/year. Those costs are probably already being optimized by the best classical and heuristic algorithms money can buy."
Okay, I'll make that assumption, but if by "cost" you mean what usually appears in financials as the cost of revenue, then I'm not necessarily going to give you all of that cost as being related to classical or heuristic algorithm optimization. Clearly this varies by company, but many costs are not amenable to just coming up with a better algorithm. Finally, you should probably also include in the analysis not just "cost" but also how the algorithms are related to "profit."
"B) They probably have competitors who are doing exactly the same."
Accepted.
"C) Since they're large and profitable, they probably have a division that's concerned with their long-term outlook. Not a 6month outlook. A 10 year outlook."
Okay, sure. But you've limited the field considerably, I think, by requiring companies big and profitable enough to have a 10-year outlook, right?
"D) You work in said division. Your company is going to be paying 5e9+ USD over the next 10 years to cover their (growing) costs. A disruptive decrease in your costs might be 3% advantage over your competitors averaged over that period. Where would you look for solutions? How much would you be looking to spend in year 1? How much would you be looking to spend in year 10?"
Two things: (1) It's not clear to me, as I said above, that the advantages of a D-Wave computer would stretch across the entire cost you're talking about. It seems to me that you are missing a multiplier here. Now there are obviously exceptions to that: places where the business profits are directly tied to optimization which could (possibly) be improved by a better optimizer. For example, I might guess that there are places in finance where this could be true, and certainly any company that relies on heavy data mining involving hard optimization, etc. (2) I think that the risk you are taking can only be considered in the context of the other risks the company faces. Is eking out a small optimization really where the company can best spend its money?
I assume, of course, that the VCs who have invested in D-Wave have done all of these calculations a lot better than I could ever imagine. I would like to assume that the fact that they got funding should at least move my prior in the direction of thinking that this calculation works out. Of course when I say this to people in Seattle they laugh: the reputation for due diligence seems to vary by orders of magnitude across the VC community :)
I guess what interests me more is the conversation the company has to have to convince that first customer to sign on. Would I, as a company, for example, be satisfied with seeing adiabatic speedups for contrived problems? Would I be satisfied with average speedups on small instances? How would I know whether there is a zero, 10^(-5), 1, or 100 percent chance that my optimization problems will be sped up on the machine? The quote that set off this little ditty was Geordie indicating that quantum optimizers have the upper hand... I have a hard time seeing this in the context of a real-world customer's needs. But maybe it's just because I woke up on the skeptical side of the bed this morning :)
You've pointed out some realistic complications to my simple case, but having only scratched the surface, I find it compelling that we've already uncovered a glimmer of hope for the effort. And hard-working, industrious types can turn a glimmer of hope into diamond earrings.
I will say that it seems to me most companies of 1000 employees or more must have at least one person in the R&D or executive ranks who is looking at their 10-year outlook. Unless their strategy is to not be around that long. Anything less seems like driving on the highway while watching only the bumper of the car ahead of you.
Also worth noting: meeting with customers about purchasing a quantum computer seems like it might be of interest to you... maybe a potential future career path to consider ;-)