>I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum.
Good: a man willing to put his money where his mouth is! However many dollars you put up, I will put up ten times as many. (I.e., I will give you 10:1 odds.) How much do you bet? Who do you suggest as arbiter in case one is needed?
Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's mugging", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)
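For readers who want to see the paradox concretely, here is a minimal sketch in Python (my own illustration with made-up numbers, not anything from Yudkowsky's post) of that expected-utility equation, showing how a vanishingly small probability attached to an astronomically bad outcome can dominate the entire sum:

    # Expected utility: E[U] = sum over every possible outcome O
    # of U(O) * P(O). All numbers below are invented for illustration.

    def expected_utility(outcomes):
        """outcomes: iterable of (utility, probability) pairs."""
        return sum(u * p for u, p in outcomes)

    # An ordinary gamble: modest utilities, ordinary probabilities.
    mundane = [(100, 0.999), (-100, 0.001)]

    # A Pascal's-mugging-style gamble: a vanishingly small probability
    # of an astronomically bad outcome dominates the whole sum.
    mugging = [(0, 1 - 1e-12), (-1e30, 1e-12)]

    print(expected_utility(mundane))  # about 99.8
    print(expected_utility(mugging))  # -1e+18

The paradox is that a naive expected-utility maximizer must treat the second gamble as far more important than the first no matter how implausible the mugger's claim, which is the decision-theory problem the blog post explores.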
No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.
[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."
You doubt that Yudkowsky "was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks." Let's let the reader decide.
In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Trying to argue that certain strings don't occur in the page is the kind of argument that gets brought out when someone is desperate for any technicality to avoid having to concede a point.
This level of weaponized pedantry is what makes trying to debate anything with LessWrong-style rationalists so impossible: there's always another volley of Gish gallop to be fired at you when you get too close to anything that goes against their accepted narratives.
You were trying to get people to view what EY wrote in the time.com article as an encouragement to engage in criminal violence such as the firebombing of Sam's home (as opposed to state-sponsored violence a la an airstrike on a data center), when in actuality, both before and after the publication of the time.com article, EY has explicitly argued against committing any crimes, particularly violent crimes, against the AI enterprise.
Knowing that most readers do not have time to read the entire article, I brought up how many times various strings occur in the article to make it less likely, in the reader's eyes, that some passage other than the one I quoted could be interpreted as advocating criminal violence. I.e., I brought it up to explain why I quoted the 3 (contiguous) paragraphs I quoted, but not any of the other paragraphs.
In finding and selecting those 3 paragraphs, I was doing your work for you: in a perfectly efficient and fair debate, the burden of providing quotes to support your assertion that EY somehow condones the firebombing of Sam's home would fall on you.
Those 2 links certainly satisfy my request. Thank you.
My summary of Eliezer's deleted tweet is that even if everyone died except for the handful of people it would take to repopulate the Earth, even that (pretty terrible) outcome would be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad, and about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.
Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest of us have to go through many decades of suffering just for the species to survive, those people would consider that outcome to be just as bad as everyone dying (or even slightly worse, since if 100% of us were to die one day without anyone's knowing what hit them, suffering is avoided). I can see how those people might find Eliezer's deleted tweet to be alarming or bizarre.
In contrast, Eliezer cares about the human species independently of individual people (although he cares about them, too).
Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do anything to bring about outcome B. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).
> How many people are allowed to die to prevent AGI?
He didn’t say “not everyone dying is preferable to everyone dying”. The question was about the acceptable consequences of preemptively stopping AGI, under his assumption that AGI will lead to the extinction of all humans.
Those are only the same thing under the assumptions that 1) AGI is inevitable without intervention and 2) AGI will lead to the extinction of humanity.
If he believes he is being misunderstood, his “apology” doesn’t actually deny either of the assumptions I identified, and he is widely known to believe them.
In fact, his stated reason for correcting his earlier tweet, that using nuclear weapons is taboo, is an extremely weak excuse. If that is what you believe, then given the opportunity to save humanity from AGI, it would be comical to draw the line at first use of nukes.
No, I think Eliezer is trying to come to grips with the logical conclusion of his strident rhetoric.
You have a population of relatively wealthy, scientifically educated people who believe that AI risk is real and existential: that if they/we don't act, humanity itself might become extinct, and that this is an unacceptable outcome. Then you have Yudkowsky mooting the possibility that this is basically inevitable (in the absence of global coordination that seems highly unlikely), and suggesting that hyper-violent outcomes might be literally the only way our species survives.
What I am not saying: Yudkowsky intends to exterminate most of humanity.
What I am saying: this is a dangerous environment, and these kinds of statements will be seen as a call to action by a certain kind of person. TFA is literal proof of the truth of that statement. Moreover: within the community there exist trained experts who might be able to, at the cost of millions of lives, plan an attack that could (plausibly) delay AI by many years.
The danger of this argument is that someone who reveres Yudkowsky might take his arguments to the logical conclusion, and actually do something to act on them. (Although I can't prove it, I also think Yudkowsky knows this, and his decision to speak publicly should be viewed as an indicator of his preferences.) That's why these conversations are so dangerous, and why I'm not going to give Yudkowsky and his folks a lot of credit for "just having an intellectual argument." I think this is like having an intellectual discussion about a theater being on fire, while sitting in a crowded theater.
I said something to the same effect in a sibling comment to yours.
> someone who reveres Yudkowsky might take his arguments to the logical conclusion
What about Eliezer himself? Does he not believe his own rhetoric? Certainly, if he believes the future of the human race is at stake, it demands more action than writing a book about it and going on a few podcasts.
I think the whole thing is a bit like the dog who finally caught the car. It’s easy to use this strident rhetoric on an Internet forum, but LessWrong isn’t real life.
If I ran the FBI, I would be very gently keeping tabs on the most fervent (and technically capable) anti-AI groups. Unfortunately, I don't think anyone is currently running the FBI. If I were tightly connected to folks in these communities, I would be keeping tabs on my friends and making sure they're not getting talked into anything crazy.
The Zizians had only a tangential relationship to the people who believe that AI "progress" should be prohibited. They were banned from events run by the Berkeley rationalists well before they started killing people, and the ideological reasons they gave each other to justify the killings were trans rights and farm-animal welfare, not slowing down AI "progress".
How many people believe continued AI "progress" would be so dangerous that it should be prohibited? 136,513 people signed a statement to that effect:
The name of the man who threw the Molotov cocktail is Daniel Alejandro Moreno-Gama, and "Daniel Moreno" is one of the signatures on the statement. I concede that his motivation almost certainly was to try to slow down AI "progress".
I know, right? He paid himself more per year than 99.9% of Americans will make in their entire lifetimes while denying coverage to people who died as a result.