
Suggestion: I've found that doing a graduate CS degree really helped me build a theoretical foundation, which has made me much better at my work. Example: a much deeper understanding of the HTTP protocol and the client-server model, and how they relate to the world of networking and computing overall.


Graham Hancock talks about this on several of his Joe Rogan episodes.


Graham Hancock always gave me a strong pseudoscience vibe. Do you think he should be taken seriously?


Absolutely not. He's not Von Daniken crazy, but he's potentially more insidious. He's extremely careful to caveat and disguise his crazy under a legitimate-looking veneer. His books are extensively annotated and referenced, he talks like a legitimate archaeologist, and he makes sure that the more outlandish and conspiracy-theory ideas are there by implication rather than overt claims. He works very hard to make it difficult to dismiss him out of hand and to keep a seat at the legitimate debate table.

However, he does make sure the more out-there crazy gets referenced, and he carefully avoids saying anything, or citing any counter-evidence, that would discredit those ideas. So he avoids being categorised with the more extreme crazies, while assiduously making it clear he is their friend. After all, he needs to sell his books.

Having said that, his stuff, as with Von Daniken's, really is a lot of fun. I occasionally check it out just to keep current on state-of-the-art crazy. Just bear in mind there's a lot of flim-flam in there, and he will avoid referencing any evidence that debunks his claims, no matter how much of it there is. You will not get the whole story from him. For example, he will write a book on a topic from the point of view of, say, 20 years before the book's development and publication, ignore any inconvenient research later than that, and probably also ignore some relevant research and counter-arguments from before that as well.


I don't have any skin in the game, but I like listening to Graham Hancock's theories. They're entertaining, just like Time Team [0], which is genuine archaeology.

[0] https://www.youtube.com/channel/UCvmEISc6e4tLwn8TyS14ncw (possibly UK only)


Ignoring conflicting evidence, i.e., confirmation bias, sounds like just about every human I've ever seen. Especially scientists with their pet theories. I suppose it comes down to how people weight the evidence; they're definitely biased toward weightings that support their favorite theory.

Just as I'm sure you are with your "Hancock is BS" theory.

Is it really true that he's only been peddling unsubstantiated stuff tho?

Didn't he propose that the Younger Dryas ice age was caused by an impact, a claim later supported by more evidence, including nanodiamonds? I know the cause remains unsettled, but even if you read Wikipedia, which starts out saying the evidence for that is misinterpreted, it goes on to detail a bunch of solid-sounding evidence. That there's contention and ambiguity is not unusual for science, particularly such a speculative science as archaeology. But his proposal, at least, neither sounds crazy nor appears to be unsubstantiated, as you claim.


He definitely does engage in legitimate debate about reasonable theories and weighing plausible evidence. On issues like that he can be a reasonable contributor to the public debate. That's fine. The question is to what extent that is cover to get himself a seat at the public debate table and lend credibility to the woo-woo stuff, which is what sells his books.

He doesn't make money from suggesting there might have been an asteroid impact that affected the climate at a given point in history. He gets paid for writing books about "Meetings with the ancient teachers of mankind".


Just because someone has some highly imaginative hopes and theories that you may personally find distasteful does not a crank make, nor does it tarnish other stuff they've done, of course.

Unless you're comfortable with Newton being a crank for his alchemy, Oppenheimer being a crank for his mysticism, Turing being a crank for his desires, Galileo for his heliocentrism, Harvard's Avi Loeb for his derelict alien ship, the Chinese being cranks for TCM, and so many others.

It's funny how rational skepticism and "crank calling" (shaming?) rub shoulders so closely with bigotry that they're almost indistinguishable. But I suppose our beloved skeptics are apt to ignore that inconvenient interpretation. I just think that the propensity to think outside the box, and courageously push against the boundaries of (sometimes merely culturally normative, as in the case of TCM) orthodoxy, while also keeping connected to truth, is possibly one of the foundations of scientific genius, and seems to be something to be encouraged. At the very least, it's benign. I'm not quite sure why people seem to be so terrified of scientists who dare stand outside the herd and propose new ideas.

Don't we have enough groupthink everywhere else (politics, think tanks, political science, education, religion)? Can't we have one place to celebrate dangerous ideas? Why wouldn't science be the perfect place for that?

I think it's possible your Hancockyness is a little overblown. Perhaps he's not quite the crank you think, tho he may be the crank you need.


Newton was a crank w/r/t his alchemy. Likewise, Pauling with his vitamin C. Doesn't invalidate any of their other work, but it does mean we shouldn't just take their authority as relevant in areas outside their core competencies.

W/r/t Turing, a person's sexual preferences aren't even in the same universe as someone's unsubstantiated personal beliefs about how the world works. Perhaps you would want to retract that piece? While I don't agree with you overall, I think your argument would be stronger if you excluded his example.

I think you're looking at the "crank / non-crank" evaluation as an attribute of a person. I'm suggesting it's more relevant to use a person/field-of-study grain to apply the label. (It probably also makes sense to break it down further, but at some point, the complexity outweighs the benefit.)


No. I think the Turing thing is very important, because, just like with Galileo, hating on gay sex back then was a culturally normative thing, just like hating on heliocentrism was. And I think a lot of the other prejudices against far-out ideas are going to be these bullshit culturally normative bigoted biases that turn out to be incorrect. Just like the blanket Western cultural bias against TCM. Almost just like the marijuana thing... you know, "marijuana turns you into the devil and makes you crazy"... "gay sex is morally wrong." All of this bullshit was eventually overturned in the tide of public opinion, but at the time people were so certain (just like with witchcraft), you know, that there was a reality to their demonization.

So no, I won't rewrite or retract. It's important to keep it in, to reinforce how relative a lot of this is.

I think you probably got upset by assuming I was associating gay sex with some sort of absolute measurable moral position. Hopefully what I said has reassed you that's not the case and let you feel better about it.

Btw good point about crank aspects not tainting the whole. Maybe sometimes what we think of as crank is simply undiscovered science. Maybe Newton would have had a better theory if he knew something more about nuclear transmutation.

I think there's a spectrum of these crank things, though. Some stuff, like flat Earth... come on, that has to be false.

But I think we should be giving more due, and less crank, to people like Hancock. I know it's a personal thing tho, how you feel about a particular person. I just don't like to see a groupthink pile-on, from any group.


Please let me reassure you that I was not upset by your statement - merely providing feedback for how to strengthen your point and avoid a distracting and loaded sidebar.

I'm still not clear what the broader point is. That sometimes norms change, and have non-linear impact?

Most of the time, a crank is just a crank. That's why we know about the outliers.


How can you provide feedback on how to make the point clearer if you don't understand what the point is? So I think you probably do understand it. There's no need to pretend you don't just because you disagree and you're not sure how to state your disagreements.

Sorry, (very) Freudian slip there (I guess); I meant "reassured", not "re-assed", haha.

Is that really true, though, that a crank is just a crank most of the time? I don't think that's true. I think the skill is looking for the sincerity and the truth in what they're saying, aside from any noise that might be there as well. Just like you're trying to make the point of reducing distractions, and just like in data analysis: you want to increase the signal and reduce the noise, and that's something you as a reader can do. So I don't think it's true that a crank is just a crank; it's too easy a dismissal. It's important to have these alternative hypothesis generators, and to listen without getting distracted by the other stuff. If everyone was obsessed that Newton or Turing or Galileo had culturally normative crank ideas, they would have missed the good stuff. And maybe there is good stuff in some of those culturally normative crank ideas. And maybe we did, as a society, miss out on some of the good stuff because we wanted to say, oh, cranks are just cranks. So I don't think we should do that, and I think you should probably stop doing that if you want to support this idea of scientific inquiry and the expansion of knowledge. Just a pointer ;) :p

So, I see you making your points there and I've already made my points so I don't see anything more to add. I'm comfortable that we have different views on it.


Too many people who have genuine major contributions to fields make the mistake of thinking that because they're an expert in one, they're automatically qualified in others, or that because they nailed one thing, their beliefs about something else must also be true (e.g. Pauling and vitamin C).

I see this all the time in medicine. One of the best neurosurgeons in my country and someone I consider a friend still tells me that I need to keep the microwave door shut for 3 seconds after it’s done or else the waves will escape and give me cancer. That defies the laws of electromagnetism on several levels.

We absolutely need alternative hypothesis generators, but the widespread acceptance of every crank idea that comes along, even in the face of overwhelming evidence to the contrary, is not only infuriating but actually serves to damage the scientific method, when the lady down at the mothers' group claims that the vaccine gave her son autism and that there is a global conspiracy led by Bill Gates to implant microchips in everyone.

At the end of the day, in my belief, it boils down to a sad lack of trust in experts - some of it warranted from the abuses and oversteps of the past, and some of it actively facilitated by people who have ideological reasons to oppose what the scientific truths are presenting (e.g. climate science denial).


Hard to disagree with that. Where does that term crank even come from?

I think science theories wrt public belief have both high false positive and false negative rates. A lot of people believe stuff to be true that isn't (that climate changes are only due to us, are terrible, and that there's something we can do about it that will help), and don't believe stuff that is true (flat earthers... I'm pretty open minded but also quite sure we live on a fucking ball).

The obvious caveat is nobody fucking knows anything; we're all just fumbling around in the dark, but consensus and a sense of certainty certainly do help. It's surprising how shit a lot of the data we have (even in this scientific age) is. And how easily manipulated the narratives can be.

So I think we should be giving more due, and less crank, to people like Hancock. I know it's a personal thing tho, how you feel about a particular person. I just don't like to see a groupthink pile-on, from any group.


Wanted to add another comment after learning what TCM (Traditional Chinese Medicine) meant. This is a really interesting grey area. There appears to be validity to some of it, some of it is supernatural fluff. Out of the evidence-led examples, it's possible some of the effect is placebo.

There are many layers, some full of cranks and some not. Maybe this is a field where generalization just isn't useful? I dunno, but wanted to think out loud and thank you for making me think a bit more than planned today :).


Your attacks on him show how biased you are. For instance, you used the word "crazy" (or "crazies") five times without mentioning a single concrete instance of what makes his assertions crazy.

There are various videos of him online engaging in straightforward, evidence-based debates with others, where some of the things he's said have been subsequently proven by more recent discoveries.

Your opinion of him seems entirely prejudiced. You have no substance in your description of him at all.


https://astronomy.com/news/2020/09/gobekli-tepe-the-worlds-f...

> Hancock's ideas have helped fuel the surge of interest in Gobekli Tepe as an ancient observatory. But he has an even more fantastical claim about the vulture and other carvings on Pillar 43. He believes, again without evidence, that it's an ancient constellation diagram that shows the winter solstice against a backdrop of today’s modern sky.


A team from the University of Edinburgh School of Engineering doesn't think it's too crazy.

https://www.researchgate.net/publication/322150872_Decoding_...


You might want to read the article from astronomy.com, part of it is based on this paper. ("In 2017, a pair of chemical engineers made global headlines [...]")


There are two episodes dedicated to him on the Our Fake History podcast. The host thoroughly demolishes pretty much everything about his theories.


He’s a great populariser of bullshit. Interesting theories, but nothing more than 30 years of building unsubstantiated theories out of thin threads of evidence that aren’t connected.


> He’s a great populariser of bullshit. Interesting theories, but nothing more than 30 years of building unsubstantiated theories out of thin threads of evidence that aren’t connected.

What about his work that led to the acceptance of the Younger Dryas period being the result of an asteroid impact, which was refuted in academia for so long but has now been proven with time?

That revelation alone is worth more PhDs than most entire departments create in a decade!


What work? Doesn't look like he did much https://en.wikipedia.org/wiki/Younger_Dryas_impact_hypothesi... and it doesn't look like there is consensus about it https://en.wikipedia.org/wiki/Younger_Dryas#Causes


That page is "curated" by a retired historian with a grudge.

Take a look at the revision history of the page, where you see him routinely reversing revisions that would improve the page -- particularly, reversing deletions of irrelevant but prejudicial content.

There is no reason to mention Graham Hancock on the page at all, or any century-old stories. They are there just to muddy the water, and to give the ex-professor a soapbox for name-calling.

The evidence is presented in a deliberately confused order, and very selectively. New evidence is not permitted. Wikipedia is ruled by people like that now. We have to wait for them to die before we can have pages that aren't somebody's hobby-horse.


Hancock didn't originate the theory an impact contributed to the Younger Dryas event, nor has he originated any evidence or research towards substantiating it. So other than mentioning it in some books and talk shows, what was his contribution that merits a PhD?


> Hancock didn't originate the theory an impact contributed to the Younger Dryas event, nor has he originated any evidence or research towards substantiating it. So other than mentioning it in some books and talk shows, what was his contribution that merits a PhD?

I didn't say he was the only one to do so, but he put in the work to get observational data and brought it to the mainstream. Archaeologists and historians wrote countless works and based their dissertations on their 'irrefutable fact' of how the Ice Age ended, which never included an asteroid impact; he staunchly challenged academia as a JOURNALIST, did the legwork with colleagues who were actually trained archaeologists, and documented his observations.

And just so it's clear: I'm not parroting the indoctrination academia forced me to bend to. But Wikipedia is not a valid source for citation for a reason; if I had submitted a paper referencing it, it would have been dismissed entirely.

But here is an essay from the Man himself if you are so inclined to read it [0].

I think his work is notable and his contributions worthy and of merit, which is honestly much more than what many people based their PhDs on when they make arguments about things like the extinction of woolly mammoths and other megafauna as a result of human overhunting, when NOTHING could explain these massive dead zones, yet they were nonetheless awarded degrees for their findings.

0: https://grahamhancock.com/hancockg17/


No reputable scientist deals in 'irrefutable facts' like that, only evidence. If you're going to put quotes on something like that, I think it's reasonable to ask for a reference.

I'm just wondering, can you give examples of the research papers for these PhDs or degrees where you evaluated them and found them insufficiently rigorous? It sounds like this has happened a lot, so are there any resources on this I can read?

Hancock's pop sci summary of other people's work you linked to is entertaining, but are we really surprised that geologists 100 years ago had a limited understanding of the history of the earth? That controversy was 40 years before we even came up with plate tectonics. It's hardly cutting edge stuff.


I’m not hugely familiar with his contributions to this, but if you throw 10,000 darts at a board, you get one bullseye. It doesn’t make you a genius.

As another poster said, his theories are interesting (I spent several days as a teenager running through all sorts of measurements, trying to work out the megalithic yard and see if door/side measurements in other monuments could be related, and found it fascinating when, in The Sign and the Seal / Hiram's Key, the origin of the megalithic yard was discovered), but the banal/fascinating realities of the unit of measurement are a long way from the phantasmagorical claims made in Heaven's Mirror and Fingerprints of the Gods.


> If you throw 10,000 darts at a board, you get one bullseye. It doesn’t make you a genius

That's not a very good dismissal of the man's work. Like I said, that revelation alone is as solid as an entire university department of postdocs and grad students combined. It's overturned so many people's dissertations and the narrative of how the Ice Age ended, specifically regarding the migration of people into the Americas from Eurasia, which has so many implications in other departments, like biology.

Also, his observations of water erosion on the Egyptian pyramids are very compelling and, if true, place them way before the modern Egyptian narrative, as nothing provided by modern Egyptologists really explains how deep it is. Wind/sand erosion cannot carve that deeply, especially since so much of the pyramids was under sand until maybe only modern times, when tourism became a large part of the Egyptian economy, thus preserving them from further erosion.


I had a reply here but on further thought I decided I would do some reading on Hancock’s contribution to the Younger Dryas hypothesis. Wiki not only suggests that he wasn’t the first but that his contributions to theories largely amount to other claims which are just as hard to substantiate (e.g. the cyclical nature of impacts) [0].

So really it seems that he hasn’t contributed a huge amount to this field at all apart from being another voice amongst many, when the claim has still not been fully substantiated by the geological record. An area worthy of more study, but it’s hardly ‘his’ contribution

To say that this guy has thrown 10,000 darts and come up with hardly anything is absolutely an accurate way to characterise Hancock’s work.

I also think it’s quite disingenuous to claim that modern Egyptologists are unable to explain ‘how’ ‘water’ erosion exists there (or even whether it is water erosion). There’s not only a decent chunk of carbon dating from, e.g., the camps next to the pyramids used by the builders, but from an Occam’s razor perspective the consensus is the one most substantiated by existing evidence.

And don’t go into an argument about how groupthink dictates too much of consensus; as a doctor and scientist I understand the need for alternative hypotheses, and I’ve found Hancock’s theories to be very interesting, but he makes a literal living out of pareidolia - he sees faces in every group of shadows, and vociferously denies they were just shadows when the lights are turned on.

[0] https://en.m.wikipedia.org/wiki/Younger_Dryas_impact_hypothe...


Thanks! I've borrowed this one, with very basic info.


Looks like the fundamentals of ML would be covered by the Math for CS section.


Found this recently, and wanted to share. I'm sure someone will find this interesting.

Partly written by Urs Hölzle, of Google.


Fantastic work. I love how the URL is "hello-world".


Author here, thank you! It's an homage to the K&R experience of showing something new, even if the subject matter is very different.



"used it to create my own resume site with analytics that I send out to companies. I can see who viewed my CV, when, and whether or not they actually read through it or bounced immediately"

I'd be super interested to learn more about how you did that.


Google Analytics
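
For the curious, a rough sketch of a DIY server-side version is below. This is purely illustrative, not the setup described above (which used Google Analytics); the route, file names, and log format are made up, and detecting "read vs. bounced" would additionally need a client-side beacon to measure dwell time.

    # Hypothetical sketch: serve a resume and log each view server-side.
    # Requires Flask; "resume.pdf" and "views.log" are invented names.
    from datetime import datetime, timezone
    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route("/cv")
    def cv():
        # Record timestamp, client IP, and user agent for every view
        with open("views.log", "a") as log:
            log.write(f"{datetime.now(timezone.utc).isoformat()} "
                      f"{request.remote_addr} "
                      f"{request.headers.get('User-Agent', '?')}\n")
        return send_file("resume.pdf")

    if __name__ == "__main__":
        app.run()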


Awesome, thanks! Listening now, and shared with brother.


As a software engineer looking for a highly marketable and differentiated skill set, given your projections for quantum computing roadmap, when should I start exploring this area? (or: when should I start writing code and doing side projects)


Now. "Exponential" is faster than most people, including myself, can believe. When one unit of resource doubles your computational capacity, it doesn't take many units.

I like to use an analogy: adding 1 GB of RAM these days isn't that big of a deal. You can maybe open two more tabs in Chrome. :) Adding 1 giga-qubit to your computer would make it 4.6 x 10^301029995 times better. That's unimaginably more powerful than anything any human can think of.

We don't have quantum software engineering figured out. And it's not going to be figured out by a few academics, although they may lay some good foundations. It's going to be figured out by the same folks who figured out traditional computing: people who try stuff, break stuff, and experiment.


I disagree. Through Scott Aaronson's writings, I gather we still don't have conclusive evidence [1] that anyone's built an actual quantum computer capable of more sophisticated computations than a human 10-year-old can do with pen and paper, and furthermore that it's not clear we'll be able to build such a machine in our lifetimes. I'm not saying it's impossible, but we know it's going to be really hard.

What you're ignoring with your exponential growth argument is that it is also exponentially harder to add one qubit while maintaining usefulness (i.e. long decoherence times).

I'd say, anyone who doesn't want to work either in academia or on "vanity projects" like D-Wave's much-hyped collabs with Lockheed, Google etc. should wait half a decade and see.

[1] Arguably the D-Wave machines are faster than a human, but we don't have evidence (yet) that it's not just a fancy annealing ASIC.


I do not dispute the claim about the existence of a quantum computer which surpasses its classical brethren. Scott is correct.

I do disagree that it is exponentially more difficult to add a qubit. Coherence times are something to optimize, and densely packing qubits while maintaining coherence is also difficult, but the notion of adding a qubit to a system doesn't come with an inherent exponential difficulty.

Regarding whether it is useful or not to learn quantum computing for your profession, if it's true that systems can be built that grow with exponential power, then they'll be relevant faster than one might think.


> I do not dispute the claim about the existence of a quantum computer which surpasses its classical brethren.

It's less "please beat a $20000 server stuffed with GPU accelerators" and more "please beat a 6502, or to start with at least an abacus".

The problem is that everything is so toy-level so far that it's not even in the category of "computation tool".


Again, I do not dispute this claim and also do not see it as a problem. Folks working on it, including myself, would like to see it as a viable replacement for any computational device. And it's not, right now.

We work on it, though, for two reasons: (1) our current, and so far accurate, understanding of physics says with certainty that a quantum computing device is superior to its classical counterpart, and (2) while the problem is not easy, there seem to be just the right number of engineering problems (signal integrity, signal routing, superconducting non-magnetic fab, etc.) in our way that we feel we can tackle them in a timely manner. Rigetti in particular is a company that believes that having a full-stack team will allow these interdisciplinary problems to be solved faster.

No one, on our team at least, is under any illusions about where we are. As the article says, 8-qubit chips are in the final phases of validation. As I say, 8-qubit chips can be simulated faster on your shiny Intel chip. Does that mean the entire enterprise is useless? No. It is a stepping stone for a company that has raised less money than many CoolNewLikeUberButForX apps you see pop up here. I find that unimaginably remarkable.

When I answer questions about quantum computing, however, I want to share my and others' visions about it based off of what we know, in a relatively accurate fashion, that is understandable by a general audience.


Isn't it true though that there would be an O(n^2)-type difficulty in adding extra qubits, since they all need to interact?

Or is that an oversimplified view?


They do not all need to interact directly with one another. You can create full entanglement even if they interact only linearly, each qubit with its neighbors. It just means you pay a penalty in the compilation of your program.

Architectures with higher two-qubit connectivity are merely an optimization.
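
To make the compilation penalty concrete, here is a minimal numpy check (illustrative only; basis order |ab> with index 2a + b): a SWAP, which is what lets non-adjacent qubits be routed past each other on a linear layout, itself decomposes into three alternating CNOTs.

    import numpy as np

    # CNOT with control a, target b: |a,b> -> |a, b XOR a>
    CNOT_ab = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, 0, 0, 1],
                        [0, 0, 1, 0]])
    # CNOT with control b, target a: |a,b> -> |a XOR b, b>
    CNOT_ba = np.array([[1, 0, 0, 0],
                        [0, 0, 0, 1],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0]])
    # SWAP: |a,b> -> |b,a>
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]])

    print(np.array_equal(CNOT_ab @ CNOT_ba @ CNOT_ab, SWAP))  # True

So every "long-distance" two-qubit gate costs extra CNOTs on a linear architecture, which is exactly the penalty paid at compile time.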


> Now. "Exponential" is faster than most people, including myself, can believe. When one unit of resource doubles your computational capacity, it doesn't take many units.

Please don't make bullshit claims about exponential speedups. I don't know exactly what technology you are claiming to have, but statements like this cause me to believe less in your technology, not more.

We've been through the cycle of unfounded hype many times (with D-WAVE and others). Scott Aaronson has an entire category on his blog filled with depressingly many posts debunking the same bullshit over and over [0].

[0] http://www.scottaaronson.com/blog/?cat=17


The simplest quantum algorithm shows an exponential speedup over the best classical solution; this, of course, is the toy example often used in QC texts of determining whether a function is constant or balanced (Deutsch).
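
A minimal numpy sketch of that Deutsch problem (illustrative; the 2-qubit circuit simulated as plain linear algebra, basis |x,y> with index 2x + y). One oracle query suffices, where a classical deterministic algorithm needs two:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    def oracle(f):
        # U_f |x,y> = |x, y XOR f(x)>, as a 4x4 permutation matrix
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        s = np.zeros(4); s[1] = 1.0      # start in |0>|1>
        s = np.kron(H, H) @ s            # superpose both registers
        s = oracle(f) @ s                # a single oracle query
        s = np.kron(H, I) @ s            # interfere on the x register
        p_x1 = s[2] ** 2 + s[3] ** 2     # probability of measuring x = 1
        return "balanced" if p_x1 > 0.5 else "constant"

    print(deutsch(lambda x: 0))  # constant
    print(deutsch(lambda x: x))  # balanced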

But, as John Preskill points out, this is not even the really interesting thing here. Quantum simulation actually lies outside the class of NP, because there is no efficient way to verify the solution of such a simulation.

This area is where quantum computers, in my opinion, are the most interesting: we will be able to do things we simply cannot on a classical computer. And for the record, most people commenting should know that D-Wave operates using the adiabatic model and is not a universal quantum computer.


The size of the state space in which the qubits live is exponential in the number of qubits. This is because the qubits live in an n-fold tensor product of two-dimensional Hilbert spaces. Performing an operation on a single qubit is the same as performing a 2^n-dimensional unitary transformation on the state of the system.

Experts in the field of quantum computing, including Scott, do not dispute this.
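
A small numpy illustration of the point (a sketch, not anyone's production code): a Hadamard on one qubit of a 3-qubit register, lifted to the full 2^3-dimensional space.

    import numpy as np

    n = 3
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # Lift H on the first qubit to the whole register: H (x) I (x) I
    U = H
    for _ in range(n - 1):
        U = np.kron(U, np.eye(2))

    state = np.zeros(2 ** n); state[0] = 1.0  # |000>
    state = U @ state
    print(U.shape)   # (8, 8): the operator acts on the full 2^n space
    print(state)     # amplitude 1/sqrt(2) on |000> and |100>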


I know what a Hilbert space is, and I also know that this 2^n-dimensional space cannot be accessed except through a destructive measurement operation. An exponential state space does not imply that there is exponential computing power to be harnessed there.

As an analogy, when you execute a randomized classical algorithm, the size of the state space in which the bits live is also exponential (and at the end you observe the result, and your uncertainty collapses from a probability distribution to one of its possible outcomes). Yet you would look at me like I'm crazy (or a fraud) if I claimed that randomized algorithms have exponentially more computing power than deterministic ones.

The only way in which the quantum case differs from the classical picture above is that amplitudes have a phase and can thus interfere (constructively or destructively). The art of creating quantum algorithms lies entirely in orchestrating favorable interference patterns.
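
A sketch of the difference (my illustration): apply a Hadamard twice and the |1> paths cancel exactly, something no stochastic matrix acting on classical probabilities can do, since probabilities are non-negative and can only mix.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # Quantum "coin": the second H makes the |1> amplitudes interfere
    # destructively, returning |0> with certainty.
    state = H @ (H @ np.array([1.0, 0.0]))
    print(np.abs(state) ** 2)               # [1. 0.]

    # Classical fair coin: flipping twice just stays uniformly random.
    F = np.array([[0.5, 0.5], [0.5, 0.5]])
    print(F @ (F @ np.array([1.0, 0.0])))   # [0.5 0.5]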


The way I like to explain it simply to people is that right now, you can use frameworks to manipulate probability distributions. Quantum logic gates are basically a restriction on the operations you can use to combine pdf functions. Ultimately, unless you can find a clever way to convert an algorithm into one that uses pdfs and then achieves a pdf where one single value has 99% of the EV, QM aint gonna help ya.


It seems like we should be more careful when saying "exponential" increase in computational performance. For many quantum algorithms, the speedup is actually superpolynomial [1] [2]. In some sense, this is due to the fact that the state space grows exponentially but, as you correctly pointed out, it can only be accessed in a destructive manner. For many algorithms (e.g. Shor's), the net result is a superpolynomial improvement in the resources required for solving a practically important problem (factoring).

Unfortunately, the nuance of superpolynomial vs exponential is lost in many high-level discussions about quantum computing. Maybe we should just say "much, much faster" ;) To make matters worse, quantum computing textbooks often present Simon's Problem [3] as a showcase for truly exponential speedup. It turns out this is misleading, as I've never heard of a practically relevant algorithm with truly exponential speedup.

[1]: http://math.nist.gov/quantum/zoo/

[2]: https://en.wikipedia.org/wiki/Time_complexity#Superpolynomia...

[3]: https://en.wikipedia.org/wiki/Simon%27s_problem


> In some sense, this is due to the fact that the state space grows exponentially

What I take issue with is precisely the conflation of the size of the state space with the quantum speedup. Shor's algorithm is fast because QFT (quantum fourier transform) creates an interference pattern that can reveal the period of certain functions, and QFT can be implemented efficiently because of its specific structure. As I said before, the size of a classical state space of a probability distribution is also exponentially large, so no, the root cause is emphatically not the size of the state space, but the way in which that space can be manipulated and the fact that amplitudes add up in a way that's not linear (when looking at the resulting probabilities).
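
A quick numpy sketch of that interference pattern (illustrative; np.fft.fft is used as the DFT, whose sign convention differs from the usual QFT but puts the peaks in the same places): a state supported on multiples of r ends up with all its probability on multiples of N/r, which is how the period gets read out.

    import numpy as np

    N, r = 64, 4
    state = np.zeros(N, dtype=complex)
    state[::r] = 1.0                     # |0> + |r> + |2r> + ...
    state /= np.linalg.norm(state)

    probs = np.abs(np.fft.fft(state) / np.sqrt(N)) ** 2
    print(np.nonzero(probs > 1e-9)[0])   # [ 0 16 32 48] = multiples of N/r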

Note that Grover's algorithm achieves only a quadratic speedup with the same size of state space as Shor's. Your explanation doesn't add up; it just adds to the confusion.

I just think that it's very important to stay far away from the (wrong, but pervasive in pop science) idea that quantum computers are fast because they "try exponentially many solutions in parallel". Excessively highlighting the size of the state space is already a step too far in that direction for my taste.

My words are a bit harsh, but I do appreciate the fact that you are engaging honestly, and please don't take my skepticism personally. I would like to hear what your technology brings to the table, how it differs from competing approaches, etc.


I am in agreement with your sentiment here. Adding a qubit does not mean that every single thing you do on a quantum computer doubles in speed, which is a possible way to interpret some of my statements.

From a purely personal perspective, I do think that it is very interesting that we can affect the entirety of a state with an otherwise linear number of physical operations. Whether that is useful in providing lots of exponential or even polynomial speedups in the arena of practical algorithms is yet to be determined. I suspect that with a robust enough computer, the answer will be a resounding "yes".


> As I said before, the size of a classical state space of a probability distribution is also exponentially large...

This is true, but a single state in a classical probability distribution is not exponentially large. Because of superposition, a single quantum state can be associated with an exponentially large number of amplitudes. As you mentioned, quantum algorithms rely on the interference of these amplitudes. However, if you could somehow assign a complex amplitude to each state in a classical probability distribution, you would still be limited to manipulating only one amplitude at a time. It is in this sense that the exponential scaling is important.

> Note that Grover's algorithm achieves only a quadratic speedup with the same size of state space as Shor's. Your explanation doesn't add up, it just adds to the confusion.

I didn't mean to imply that all quantum algorithms have superpolynomial speedups. But (especially) for the ones that do, I about the exponentially large set of amplitudes being manipulated in parallel.

> I just think that it's very important to stay far away from the (wrong, but pervasive in pop science) idea that quantum computers are fast because they "try exponentially many solutions in parallel".

100% agreed.


Ah, looks like I botched parts of this:

> However, if you could somehow assign a complex amplitude to each state in a classical probability distribution, you would still be limited to manipulating only one amplitude at a time.

This is probably just more confusing. What I should say is that classical probabilities have no physical manifestation that you can directly manipulate - they just denote our lack of information about a system. Amplitudes in quantum systems can be related to probabilities, but they don't represent lack of information. The probabilistic nature of quantum systems is deeper than that: measurements project superposition states onto classical states in a probabilistic way. But before this projection, we're forced to say that the physical state of the system is in superposition. Even more, the amplitudes associated with each part of the superposition state are part of the physical definition of the state. In this sense, they are more "real" than classical probabilities.

For exponentially large superposition states, there are an exponential number of amplitudes. When we act on the state in certain ways, we update all of the amplitudes in parallel. There is no counterpart to this when acting on classical states, including when you have incomplete information about the state (and thus an exponentially large probability distribution).

> But (especially) for the ones that do, I about the exponentially large set of amplitudes being manipulated in parallel.

Let me try again. The built-in exponential in the physical state (as I described above) helps me see how quantum speedups (especially super-polynomial ones) could even be possible. You're right that there's more to the story than just having an exponentially large number of amplitudes, but it's an important part of the story!


But you said "When one unit of resource doubles your computational capacity" which I believe is what your parent comment rightly called bullshit.


> Adding 1 giga-qubit to your computer would make it 4.6 x 10^301029995 times better. That's unimaginably more powerful than anything any human can think of

That's not true, and you should know better. For example, there are very few problems for which quantum computers are known to perform better than standard computers.


It is true that I am not being mathematically precise in my statements. The precise way to say what I said is: In order to represent completely an arbitrary state in the space of one billion qubits, you will need a number of bytes exponential in that number of qubits. If we have, as mathematical entities, one billion additional qubits, this will be equivalent to increasing the dimension of our existing system by 2^(1 billion) times.

Of course, I am saying "mathematical entities", and almost all practitioners of quantum computing are aware of the challenge to actually build them.
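
For a sense of scale, here's the arithmetic (my sketch, assuming one complex128 amplitude, i.e. 16 bytes, per basis state):

    # Memory needed to store a full n-qubit state vector classically
    for n in (10, 20, 30, 40):
        print(f"{n} qubits: {16 * 2 ** n:.2e} bytes")
    # 10 qubits: ~16 KB; 20: ~17 MB; 30: ~17 GB; 40: ~18 TB.
    # A billion qubits is hopeless to represent classically.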


> there are very few problem for which quantum computers are known to perform better than standard computers.

http://math.nist.gov/quantum/zoo/


That is a great list! (But I don't think it contradicts what I wrote)


What would be signs that this is taking off commercially? Will explore the GitHub repo!


There are lots of "checkpoints" one can imagine with the commercialization of a technology. Right now, large industry players, whose survival depends on their tech strategy, are investing in quantum computer R&D. I don't mean that these companies are themselves trying to build quantum computers, but they are interested in applying them to their hardest technical problems.

Quantum computation is such a new and different computing paradigm, that whoever is prepared will be able to reap the benefits much earlier. And, if the promises of scaling are true (they are from a fundamental physics standpoint), such companies will propel themselves far ahead of the competition.

I would say that, in the current stage of development of quantum software and hardware, even a seasoned software professional will not, in short order, be able to apply the tools directly to their problems. As a programming language enthusiast, it's like taking a long-time K&R C programmer and asking them to be productive in Agda. It's not that they can't, but they probably won't be able to do it by tomorrow. It'll take time, energy, and investment to think in new ways.

I personally believe that commercialization will become more and more apparent when services are accelerated by quantum computation. But how many people are going to share that secret sauce?


I'd hope one milestone is "Someone with no knowledge of physics, nor a desire to gain any, is capable of programming with this hardware/software."

Is that feasible? Is it desirable?


That's my goal! It is desirable, and I think it is feasible.

I said in another comment that I think the best thing we can do is get quantum devices in the hands of people and let them play. Unfortunately, for a long time, quantum computers and their programming have been so utterly out-of-reach and opaque that that has been difficult. Now I think we are taking good steps to opening the possibility of experimentation up.


Just to give you an idea, I've spent about 30-45 minutes reading over various materials (the GitHub links). I think my level of knowledge would be equivalent to understanding how dup, drop, and rot work in FORTH (or car and cdr in Lisp)... basic element manipulation (bit/qubit, stack, and list).

The difference though, is that I only needed to understand there was a container of multiple items in FORTH and Lisp. For basic element manipulation, I needed to understand matrices.

At this rate, it would take hours before I understand how to write a basic program. And my trailblazer sense is already tingling (that I should let others be pioneers).

Normally I'd just resume lurker mode at this point, but my interest in combinatorics is driving my curiosity towards understanding what might be possible.


I admit it is an unusually large leap to get to anything useful. We have been blessed to have such a fantastic and intuitive understanding of classical computing. We can pick up most new programming languages gradually and efficiently. When the fundamental object of manipulation is this wacky thing called a "state vector in 2^n-dimensional Hilbert space" as opposed to "a bag of bits", and operations must be reversible, and ... and ... and ..., things are just harder.

I hope we (both Rigetti and the quantum computing community at large) can continue to refine and simplify the concepts at hand.


Unlike adding RAM, adding each extra qubit is also exponentially harder, since maintaining coherence of all the qubits becomes more and more difficult. That's why scaling from the tiny quantum computers we have today (which are not useful) to a useful quantum computer remains a decades-long research agenda.


You might take a look at this other reply:

https://news.ycombinator.com/item?id=14598516

