This post replies to pdxthehunted from Reddit (everything he said there is included in quotes below). There is also previous discussion before this exchange; see here. This post will somewhat stand on its own without reading the context, but not 100%. Topics include whether animals can suffer, the nature of intelligence, and the flaws of academia.
[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]
Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I don’t understand where you’re coming from, entirely (or possibly at all).) I’m willing to concede that—if you’re right about everything—you probably do need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I., then I’m out of my depth.
Yes, what that article is studying is different and I don't think it should be called "general intelligence". General means general purpose, but the kind of "intelligence" in the article can't build a spaceship or write a philosophy treatise, so it's limited to only some cases. They are vague about this matter. They suggest they are studying general intelligence because their five learning tasks are "diverse". Being able to do 5 different learning tasks is a great sign if they are diverse enough, but I don't think they're diverse with respect to the set of all possible learning tasks; I think they're actually all pretty similar.
This is all more complicated because they think intelligence comes in degrees, so they maybe believe a mouse has the right type of intelligence to build a spaceship, just not enough of it. But their research is not about whether that premise (intelligence comes in degrees) is true, nor do they write philosophical arguments about it.
That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.
If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.
For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).
My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).
For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.
I asked for a solution but I'm happy with that response. I find it a very hard problem.
Sadly, Deutsch has given up on the problem to the point that he's focusing on physics (Constructor Theory), not philosophy, now. Physics is one of the best academic fields to interact with, and one of the most productive and rational, while philosophy is one of the worst. Deutsch used to e.g. write about the implications of Critical Rationalism for parenting and education. The applications are pretty direct from philosophy of knowledge to how people learn, but the conclusions are extremely offensive to ~everyone because, basically, ~all parents and teachers are doing a bad job and destroying children's minds (which is one of the main underlying reasons why academia and many other intellectual things are broken). Very important issues, but people shoot messengers... The messenger shooting is bad enough that Deutsch refused me permission to post archived copies of hundreds of things he wrote publicly online but which are no longer available at their original locations. A few years earlier he had said he would like the archives posted. He changed his mind because he became more pessimistic about people's reactions to ideas.
I, by contrast, am pursuing a different strategy of speaking truth to power without regard for offending people. I don't want to hold back, but I also don't have a very large fanbase because even if someone agrees with me about many issues, I have like two dozen different ideas that would alienate many people, so pretty much everyone can find something to hate.
I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, a view Deutsch taught me.
Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.
I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).
I think most people would deny most of it. I wasn’t expecting a lot of agreement. But OK, great.
For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).
Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.
The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.
I don’t think there are levels of general intelligence, I think it’s present or not present. This is analogous to there not being levels of computers: it’s either a universal classical computer or it’s not a computer and can compute ~nothing. The jump from ~nothing to universality is discussed in BoI.
Otherwise, close enough.
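The computer analogy can be pictured with a toy sketch (my own illustration, not from the discussion; the function names are made up). A fixed-function device computes only the task it was built for, while a programmable machine can run any program it's given – there's no interesting in-between:

```python
# Toy contrast between a fixed-function device and a programmable one.
# (Illustrative only; universality proper is about being able to
# simulate any classical computer, not just accept callables.)

def adding_machine(a, b):
    """Fixed-function: adds two numbers and can do nothing else."""
    return a + b

def programmable_machine(program, data):
    """Programmable: runs whatever program it's handed."""
    return program(data)

print(adding_machine(2, 3))                       # 5 -- its only trick
print(programmable_machine(lambda x: x * x, 9))   # 81
print(programmable_machine(sorted, [3, 1, 2]))    # [1, 2, 3]
```

The point being illustrated: a device is either programmable or it isn't; there's no gradual spectrum of machines that can run "some fraction of all programs" in any meaningful sense.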
(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never articulate to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)
I think suffering comes in degrees if it’s present at all. Some injuries hurt more than others. Some bad news is more upsetting than other bad news.
Similarly, how smart people are comes in degrees when intelligence is present. They have the same basic capacity but vary in thinking quality due to having e.g. different ideas and different thinking methods (e.g. critical rationalist thinking is more effective than superstition).
Roughly there are three levels like this:
- Computer (brain)
- Intelligent Mind (roughly: an operating system (OS) for the computer with the feature that it allows creating and thinking about ideas)
- Ideas within the mind.
Each level requires the previous level.
Sand fails to match humans at level 1. No brain.
Apes fail to match humans at level 2. They run a different operating system with features more similar to Windows or Mac than to intelligence. It doesn’t have support for ideas.
Self-driving cars have brains (CPUs) which are adequately comparable to an ape or human, but like apes they differ from humans at level 2.
When Sue is cleverer than Joe, that’s a level 3 difference. She doesn’t have a better brain (level 1), nor a better operating system (level 2), she has better ideas. She has some knowledge he doesn’t. That includes not just knowledge of facts but also knowledge about rationality, about how to think effectively. E.g. she knows some stuff about how to avoid bias, how to find and correct errors effectively, how to learn from criticism instead of getting angry, or how to interpret disagreements as disagreements instead of as other things like heresy, bad faith, or “not listening”.
Small hardware differences between people are possible. Sue’s brain might be a 5% faster computer than Joe’s. But this difference is unimportant relative to the impact of culture, ideas, rationality, bias, education, etc. Similarly, small OS differences are possible but they wouldn’t matter much either.
There are some complications. E.g. imagine a society which extensively tested children on speed of doing addition problems in their head. They care a ton about this. The best performers get educated to be scientists and lower performers do unskilled labor. Someone with a slightly faster brain or slightly different OS might do better on those tests. Those tests limit the role of ideas. So, in this culture, a small hardware speed advantage could make a huge difference in life outcome, including how clever the person is as an adult (due to huge educational differences which were caused by differences in arithmetic speed). But the same hardware difference could have totally different results in a different culture, and in a rational culture it wouldn’t matter much. What differentiates knowledge workers IRL, including scientists and philosophers, is absolutely nothing like the 99th percentile successful guys being able to get equal quality work done 5% faster than the 20th percentile guys.
Our actual culture has some stuff kinda like this hypothetical culture, but much more accidental and with less control over your life (there are many different paths to success, so even if a few get blocked, you don’t have to do unskilled labor). It also has similar kinda things based on non-mental attributes like skin color, height, hair color, etc, though again with considerably smaller consequences than the hypothetical where your whole fate is determined just by addition tests.
Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).
It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration abstractly, animals might not suffer in any meaningful sense.
This is a reasonable approximation except that I think preferences are ideas and I don’t think animals have them at all (not even preprogrammed).
So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.
This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:
curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.
curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.
curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.
I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.
The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.
My view: if you want to figure out what’s true, a lot of ideas are relevant. Gotta learn it yourself and/or find a way to outsource some of the work. So e.g. Singer needs to read Popper and Deutsch or contact some people competent to discuss whether CR is correct and its implications. And Singer also needs to contact some computer people and ask them and try to meet them in the middle by explaining some of what he does to them so they understand the problems he’s working on, and then they explain some CS principles to him and how they apply to his problems. Something like that.
That is not happening.
It ought to actually be easier than that. Instead of contacting people, Singer or anyone else could look at the literature. What criticisms of CR have been written? What counter-arguments to those criticisms have CR advocates written? How did those discussions end? You can look at the literature and get a picture of the state of the debate and draw some conclusions from that.
I find people don’t do this much or well. It often falls apart in a specific way. Instead of evaluating the pro-CR and anti-CR arguments – seeing what answers what, what’s unanswered, etc. – they give up on understanding the issues and just decide to assume the correctness of whichever side has a significant lead in popularity and prestige.
The result is, whenever some bad ideas and irrational thinkers become prestigious in a field, it’s quite hard to fix because people outside the field largely refuse to examine the field and see if a minority view’s arguments are actually superior.
Also, often people just use common sense about what they assume would be true of other fields instead of consulting literature. So e.g. rather than reading actual inductivist literature (induction is mainstream and is one of the main things CR rejects), most animal researchers and others rely on what they’ve picked up about induction, here and there, just from being part of an intellectual subculture. Hence there exist e.g. academic papers studying animal intelligence that don’t cite even mainstream epistemology books or papers.
The current state of the CR vs. induction debate, in my considered and researched opinion, is that there don’t actually exist criticisms of CR from anyone who has understood it, and there’s very little willingness to engage in debate by any inductivists. Inductivists are broadly uninterested in learning about a rival idea which they have not understood or refuted. I think ignoring ideas that no one has criticized is something of a maximum for a type of irrationality. And people outside the field (and in the field too) mostly assume that some inductivists somewhere did learn and criticize CR, though people usually don’t have links to specific criticisms, which is a problem. I think it’s important to have sources in other fields that aren’t your own so that if your sources are incorrect they can be criticized and corrected and you can change your mind, whereas if you just say “people in the field generally conclude X” without citing any particular arguments then it’s very hard to continue the discussion and correct you about X from there.
From my POV, (a) the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals do suffer in a way that is very different from human suffering but still ethically and categorically relevant.
That’s a reasonable place to start. What I can say is that if you investigate the details, I think they come out a particular way rather conclusively. (Actually the nature of arguments, and what is conclusive vs. unsettled – how to evaluate and think about that – is a part of epistemology; it’s one of the issues I think mainstream epistemology is wrong about. That’s actually the issue where I made my largest personal contribution to CR.)
If you don’t want to investigate the details, has anyone else done so as your proxy or representative? Has Singer or any other person or group done that work for you? Who has investigated, reached a conclusion, and written it up, such that you’re happy with what they did? If no one has done that, that suggests something is broken with all the intellectuals on your side – there may be a lot of them, but between all of them they aren’t doing much relevant thinking.
In some ways, the more people believe something and still no one writes detailed arguments and addresses rival ideas well, the more damning it is. In other words, CR has the excuse of not having essays to cover every little detail of every mainstream view because there aren’t many of us to write all that and we have ~no funding. The other side has no such excuse, yet they’re the side that, between all those people, has no representatives who will debate! They have plenty of people to have some specialists in refuting CR, but they don’t have any.
Sadly, the same pattern repeats in other areas, e.g. The Failure of the ‘New Economics’ by Henry Hazlitt is a point-by-point, book-length refutation of Keynes’ main book. It uses tons of quotes from Keynes, similar to how I’m replying to this comment using quotes from pdxthehunted. As far as I know, Hazlitt’s criticisms went unanswered. Note: I think Hazlitt’s level of fame/prestige was loosely comparable to Popper’s and greater than Deutsch’s; it’s not like he was ignored for being a nobody (which I’d object to too, but that isn’t what happened).
Large groups of people ignore critical arguments. What does it mean for intellectuals to rationally engage with critics and how can we get people to actually do that? I think it’s one of the world’s larger problems.
new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.
But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.
The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.
But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.
Does the following sound fair?
Yeah, I have arguments here covering other cases (the cases of the main issue being suffering or consciousness rather than intelligence) and linking the other cases to the intelligence issue. I think it’s linked.
If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications for intelligence and suffering, their opinion on (a) would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.
In short, yes. Might have to add a few more pieces of background knowledge.
Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.
Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.
Yeah, comprehensive understanding of DD’s two books covers most of the main issues. That’s hard though. I run the forums where people reading those books (or Popper) can ask questions (it’s this website and an email group with a 25 year history, where DD used to write thousands of posts, but he doesn’t post anymore).
Finally – if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?
To a significant extent, I leave claims about consciousness out of my arguments. I think consciousness is relevant, but it isn’t necessary to say much about it to reach a conclusion. I do have to make some claims about consciousness, which some people find pretty easy to accept, but others do deny. These claims include:
- Dualism is false.
- People don’t have souls and there’s no magic involved with minds.
- Consciousness is an emergent property of some computations.
- Computation is a purely physical process that is part of physics and obeys the laws of physics. Computers are regular matter like rocks.
- Computation takes information as input and outputs information. Information is a physical quantity. It’s part of the physical world.
- Some additional details about computation, along similar lines, to further rule out views of consciousness that are incompatible with my position. Like I don’t think consciousness can be a property of particular hardware (like organic molecules – molecules with carbon instead of silicon) because of the hardware independence of computation.
- I believe that consciousness is an emergent property of (general) intelligence. That claim makes things more convenient, but I don’t think it’s necessary. It’s a stronger claim than necessary. But it’s hard to explain or discuss a weaker and adequate claim. There aren’t currently any known alternative claims which make sense given my other premises, including CR.
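The hardware-independence point in the list above can be sketched with a trivial example (my illustration; nothing here is from the original posts): the same abstract computation gives the same results no matter how, or on what, it's implemented.

```python
# Two different implementations of the same computation (factorial).
# On the hardware-independence view, any property of the computation
# itself can't depend on which implementation or substrate runs it.

def factorial_recursive(n):
    """Factorial via recursion."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """The same computation via a loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Same inputs, same outputs: the computation doesn't care which
# implementation (or hardware) performs it.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```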
One more thing. The “general intelligence” terminology comes from the AI field, which calls a Roomba’s algorithms AI and then differentiates human-type intelligence from that by calling it AGI. The concept is that a Roomba is intelligent regarding a few specific tasks while a human is able to think intelligently about anything. I’d prefer to say humans are intelligent and a Roomba or mouse is not intelligent. This corresponds to how I don’t call my text editor intelligent even though, e.g., it “intelligently” renumbered the items in the above list when I moved dualism to the top. In my view, there’s quite a stark contrast between humans – which can learn, can have ideas, can think about ideas, etc. – and everything else, which can’t do that at all and has nothing worthy of the name “intelligence”. The starkness of this contrast helps explain why I reach a conclusion rather than wanting to err on the side of caution re animal welfare. A different and more CR-oriented explanation of the difference is that all knowledge creation functions via evolution (not induction) and only humans have the (software) capacity to do evolution of ideas within their brains. (Evolution = replication with variation and selection.)
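The definition of evolution as replication with variation and selection can be shown as a minimal program (my own sketch; the target string and parameters are arbitrary, not from the post). Candidate strings replicate with random copying errors, and the closest match to a selection criterion survives each generation:

```python
import random

# Minimal evolutionary loop: replication with variation and selection.
TARGET = "IDEAS EVOLVE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    """Replication with variation: copy the string, occasionally changing a letter."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def fitness(s):
    """Selection criterion: number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
while fitness(parent) < len(TARGET):
    offspring = [mutate(parent) for _ in range(100)]       # replication + variation
    parent = max(offspring + [parent], key=fitness)        # selection
print(parent)  # IDEAS EVOLVE
```

Keeping the parent in the selection pool means fitness never decreases, so the loop reliably converges. Nothing here knows the answer in advance in any individual step; the knowledge in the final string is created by the variation-and-selection process.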
That’s just the current situation. I do think we can program an AGI which will be just like us, a full person. And yes I do care about AGI welfare and think AGIs should have full rights, freedoms, citizenship, etc. (I’m also, similarly, a big advocate of children’s rights/welfare, and I think there’s something wrong with many animal rights/welfare advocates in general that they are more concerned about animal suffering than the suffering of human children. This is something I learned from DD.) I think it’s appalling that in the name of safety (maybe AGIs will want to turn us into paperclips for some reason, and will be able to kill us all due to being super-intelligent) many AGI researchers advocate working on “friendly AI”, which is an attempt to design an AGI with built-in mind control so that, essentially, it’s our slave and is incapable of disagreeing with us. I also think these efforts are bound to fail on technical grounds – AGI researchers don’t understand BoI either, neither its implications for mind control (which is an attempt to take a universal system and limit it with no workarounds, which is basically a lost cause unless you’re willing to lose virtually all functionality) nor its implications for super intelligent AGIs (they’ll just be universal knowledge creators like us, and if you give one a CPU that is 1000x as powerful as a human brain, then that’ll be very roughly as good as having 1000 people work on something, which is the same compute power). This, btw, speaks to the importance of some interdisciplinary knowledge. If they understood classical liberalism better, that would help them recognize slavery and refrain from advocating it.