bildramer

bildramer t1_jd0p4sl wrote

The way they restrict "false belief" makes the phrase almost an oxymoron. If you "merely accept" that the Earth is a sphere instead of "genuinely believing" it, how is that different? Either way you answer yes to the question "is the Earth a sphere?", do your calculations as if the Earth is a sphere, and make mistakes that reveal you didn't know the Earth is a bit squished. All models are false, so either condition (i) can't be satisfied and must be relaxed, or "technically false" true beliefs are natural and commonplace.
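To put a rough number on the "bit squished" part (the two radii are the standard WGS84 figures; the rest is back-of-the-envelope):

```python
import math

equatorial_radius_km = 6378.137  # WGS84
polar_radius_km = 6356.752       # WGS84

# How squished? About a third of a percent.
flattening = (equatorial_radius_km - polar_radius_km) / equatorial_radius_km
print(f"flattening: {flattening:.3%}")  # ~0.335%

# A quarter meridian (equator to pole) computed on a perfect sphere of
# equatorial radius comes out ~17 km longer than the real ~10,002 km.
quarter_meridian_sphere_km = math.pi * equatorial_radius_km / 2
print(f"quarter meridian on the sphere model: ~{quarter_meridian_sphere_km:.0f} km")
```

For walking to the shops the sphere model is a perfectly good "true" belief; for geodesy it's exactly the kind of technically-false model whose errors eventually show up.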

Also, here's my example of an epistemically useful false belief: The idea that there is substance to music theory (more than what you get from a high school education, that is). You will learn a lot of useful things before falsifying it.

1

bildramer t1_jcf692h wrote

The objection is simple and banal: Utility contains terms for things like "it's bad to give in to blackmail, as this leads to more expected blackmail in the future*" - consequentialism doesn't have to be short-horizon, blind and dumb. You assess all consequences of an act.

My personal objection (why I'm a consequentialist but not a utilitarian as usually defined): Caring about others' utilities is not something I have to do because of some Rawlsian argument; it's just something that's already in my utility function, because that's how my brain evolved to be. You can do approximations that are equivalent to "weighting people's utilities" based on your thoughts, feelings, whims, their likeability, the uncertainty you have about them, etc. And those weights can be negative, because why not? Spite is also natural. If someone threatens to harm his own bodily integrity unless I comply, see if I care.
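Spelled out, the rough picture (in loose notation, just illustrating the paragraph above):

```latex
% My utility already contains terms for other people's utilities,
% each weighted by feelings, likeability, uncertainty about them, etc.
U_{\text{me}}(x) \;=\; u_{\text{me}}(x) \;+\; \sum_{i \neq \text{me}} w_i\, u_i(x),
\qquad w_i \in \mathbb{R}
% w_i > 0: sympathy,  w_i = 0: indifference,  w_i < 0: spite.
```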

^(*: even accounting for all the not-cut fingers, and for everyone's utilities and not just yours, the "giving in to lots of blackmail" future is worse than one where you don't, which does need to be argued for but isn't hard to argue. As opposed to e.g. "giving in" to win/win trades.)

13

bildramer t1_jbnyhv8 wrote

The real compatibilist objection wouldn't be "you could have done otherwise if you had reasons/wanted to/something", it'd be "you could have done otherwise, period, the natural way we define the word "could", incorporating our uncertainty about our own and each other's thoughts and actions". I think you go into this, but your arguments are way, way too long and complicated when a few words would do the trick.

−1

bildramer t1_jb02v3w wrote

Yeah, I agree. "Context omission is always subjective" seems like a wrong way to put it. Let's split a fact into two parts: an assertion that some model/approximation of the world is accurate (usually implicit), and an assertion about what the truth is within that model (explicit). (That split isn't always a clear bright line, btw.)

The first part is implicit, and thus (1) less visible and (2) more fluid, in that in an argument you can often pretend later that you had a different context in mind, or your interlocutor can have a very incompatible one in mind. Hence the need for trust and compromise. But it's not really any more subjective than the second part - the choice of model is very much like the choice of fact/assertion/observation within the model: it strongly depends on the world, we can tell it is intersubjective, people tend to agree on it independently, we can call it "correct" or "wrong", etc. That's all closer to what we usually call objective, like "the sky is blue", unlike "I like anchovies", even though you always need context even for objective claims ("not at night, obviously").

8

bildramer t1_jaezqt5 wrote

I have no issue with scientific inquiry, and if you carefully read my comment you'll notice that most of the problem is with the word "eventual". Sometimes you can outperform "science" by following scientific principles instead of looking at what groups of scientists say; nullius in verba, after all. Also, see Lysenkoism if you want an example of the opposite. Sure, that wasn't science but state power - but where in the world does science operate without state power influencing it?

1

bildramer t1_jaevoep wrote

Where there's smoke there's fire - that is, the fact that many people say the same thing needs to be explained. Arguments like "surely that many people can't be wrong" or "they came to their conclusions mostly independently" are often implied but not stated. To refute them, learning about the phenomenon of information cascades is very helpful; it explains how large fractions of the population can end up believing something based on very little evidence.

The tl;dr: if, for a particular decision, only the decision is visible and not the detailed reasoning/evidence/information behind it, and a large majority values "social proof" or conformity more than their private information, then that private information never gets incorporated into the public pool - only the very early deciders get to define it. It's a very "sticky" process. For example, consider people in a crowd deciding whether to panic. One or two people possibly saw something concerning and screamed or ran; the people who decide later react less to the thing itself (or its absence) and more to how many others are reacting. The earliest or closest people get to set the tone: if nobody else reacts, you infer it wasn't worth panicking after all and don't join, strengthening that impression; if they do react, you infer it was, and join in. That can easily cause panics out of nowhere, or fail to cause panics when you'd expect them.
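A minimal simulation makes the stickiness concrete - a crude vote-counting toy, not the exact Bayesian model from the literature, with all the numbers picked for illustration:

```python
import random

def run_cascade(n_agents=200, p_correct=0.6, true_state=1, seed=0):
    """Each agent's private signal matches the true state with probability
    p_correct. The agent counts every earlier public choice as one vote,
    adds their own signal, and picks the majority option (ties: follow the
    private signal). Only the choice becomes public, never the signal."""
    rng = random.Random(seed)
    public_choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p_correct else 1 - true_state
        votes_for_1 = sum(public_choices) + signal
        votes_for_0 = len(public_choices) + 1 - votes_for_1
        if votes_for_1 != votes_for_0:
            choice = 1 if votes_for_1 > votes_for_0 else 0
        else:
            choice = signal
        public_choices.append(choice)
    return public_choices

# How often does the crowd lock onto the wrong answer (true state is 1)?
wrong = sum(sum(run_cascade(seed=s)[-50:]) < 25 for s in range(1000))
print(f"wrong cascades: {wrong}/1000 runs")
```

With a 60%-accurate signal, once the first couple of public choices happen to go the wrong way, every later agent's single private signal is outvoted and the error never corrects - exactly the "only the very early people define the public set" behaviour.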

Once you know this, it's easy to see how millions of people can be wrong, and how the "wisdom of crowds" fails to work in such cases. Then, however, you also need to make sure the scientists themselves don't suffer from an information cascade, and they usually do - they didn't all arrive at their opinions independently from scratch. New information being available and undoing a wrong cascade partially explains Kuhn's paradigm shifts. Even in the hard sciences, the social environment can become so bad that scientists conform to fashions when they shouldn't, for no good reason - it's why plate tectonics took so long to dislodge earlier shittier theories despite the strong evidence, for example.

So there's no reliable way to tell which ideas (homeopathy, say) do or don't work based solely on how their popularity differs between crowds - at least not without risking being fooled in exactly this way; you have to reason about them, at least a little bit. Or trust that experts will usually get it right regardless, which is reasonable, but not foolproof.

20

bildramer t1_j9xtaiw wrote

Maybe we just need to spoonfeed them. Give concrete examples.

"How many scientists do you think work on biology R&D? How much better do you think medicine has gotten in the past 40 years? What things do you think are possible with unlimited money and effort?" First establish that the answers are "just a few million", "massively" and "anything natural biology already does". Clarify that they understand these ideas - ask them if catgirls are possible, ensure they understand the answer is "yes". No need to go into Drexler's nanosystems - for normies, if it doesn't exist yet, you'll have an incredibly difficult time arguing it's possible. You don't want to argue two distinct things, argue one thing (AGI).

Then ask what happens when you create a few million minds that can work on biology better than any human: using all the accumulated biology knowledge instead of a subset, learning it faster, working faster, making fewer mistakes, having better memory, tirelessly. The idea that you can make them even faster by giving them faster hardware, or the idea of a "bottleneck" from waiting on real-life experimental results, is perhaps too complicated, but try to include those too. Perhaps also ask what fraction of IRL biologists' time is spent on intellectual tasks like reading/writing/learning/memorizing/thinking/arguing, or sleeping, instead of actively manipulating labware. Looking at a screen and following instructions is a job you can give to an intern.

That's one field; there are many fields. There's a lot of hardware like CPUs and GPUs that already exists, and we're constantly making more. Make them realize that talking about UBI or unemployment is kind of irrelevant - like debating steel quality, or how blacksmiths might make car parts instead of horseshoes, or saying "for birds, the incentive to move faster to compete could affect feather length in unpredictable ways" when you have a jet fighter.

3

bildramer t1_j99l885 wrote

I don't understand all the hostility towards compatibilism in the comments. To me, asking whether we have free will or everything is predetermined is a false dichotomy, like asking if our muscles are made of fibers or made of atoms. These are just two models of reality, and they are compatible, hence the name.

Compatibilism is simple: Determinism seems true. When I say I "can" decide to stand up and go eat a bar of chocolate, all that means is that it's a future that appears accessible to me - that perhaps I have an action plan that I think would reach it if taken, a plan I also "can" deliberate upon, accept or reject. What else could it possibly mean? There might be a single future, or a randomly chosen future not under our control - either way we don't have access to knowledge of it. I don't know in advance what I will do, and I constantly interact with people who don't know either. Clearly we're used to reasoning under uncertainty and mentally working with counterfactuals. "Can" is a word that works in that context: we regularly use it to reason correctly and make correct predictions about ourselves and others. It must refer to how our decision processes interact with the world/the future, not to some kind of incoherent libertarian free will.

7

bildramer t1_j8r4kdb wrote

"Illusion" is such an annoying word. It carries so many connotations, many of them wrong in most contexts it's used in. Its meaning is anywhere between "false" and "real, but looks slightly different than it is".

To this day I still don't understand how proving "your thoughts have lag" is supposed to show anything about (not compatibilist, nor libertarian, but layman) free will, for or against.

19

bildramer t1_j8hvo8e wrote

Every single time someone criticises Yudkowsky's work, it's not anything substantive. I'm not exaggerating. It's either meta bulverism like this, or arguments that apply equally well to large machines instead of intelligent ones, or deeply unimaginative people who couldn't foresee things like ChatGPT jailbreaks, or people with rosy ideas about AI "naturally" being safe that contradict already seen behaviors. You have to handhold them through arguments that Yudkowsky, Bostrom and others were already refuting back in the 2010s. I haven't actually seen any criticism anywhere I would call even passable, let alone solid.

Even ignoring that, this doesn't land as a criticism. He didn't start from literary themes, he started from philosophical exploration. He's disappointed in academic philosophy, for good reasons, as are many other people. One prominent idea of his is "if you can fully explain something about human cognition, you should be able to write a program to do it", useful for getting rid of a lot of non-explanations in philosophy, psychology and elsewhere. He's trying to make predictions more testable, not less. He doesn't have an exact sequence of future events, and never claimed to. Finally, most people in his alleged "cult" disagree with him and think he's cringy.

3

bildramer t1_j8htxr5 wrote

I think the hardware overhang is already huge; there's no point in taking on risk just to make AI "ludicrously good/fast" instead of "ludicrously good/fast plus a little bit". Also, the algorithms that give you AGI are simple enough that evolution could find one.

2

bildramer t1_j8htdli wrote

It's not about naïvete. It's about the orthogonality thesis. You can combine any utility function with any level of intelligence. You can be really smart but care only about something humans would consider "dumb". There's no fundamental obstacle there.
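A toy way to see the point (purely illustrative code, names mine): a brute-force planner where the objective and the search depth are separate, freely swappable knobs.

```python
from itertools import product

def plan(utility, start=0, actions=(-1, +1), depth=3):
    """Try every action sequence of length `depth` and return the one whose
    final state scores highest under `utility`. `depth` is the "capability"
    knob, `utility` is the "what it cares about" knob; neither constrains
    the other."""
    best_seq, best_score = None, float("-inf")
    for seq in product(actions, repeat=depth):
        score = utility(start + sum(seq))
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score

# Same machinery, same "intelligence", completely different goals:
print(plan(lambda s: s, depth=6))            # maximize the number
print(plan(lambda s: -abs(s - 4), depth=6))  # land on exactly 4, a goal you might call "dumb"
```

Making the planner "smarter" (deeper search, better hardware) changes nothing about which utility it will happily pursue.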

1

bildramer t1_j7ncdyy wrote

It makes sense - that's the word's etymology from the original Greek. Prefix para- + "that-which-is-shown", basically. In modern Greek, παράδειγμα simply means "example".

1

bildramer t1_j7k9x55 wrote

To my understanding, Nietzsche basically says that slave-morality - us loving underdog stories, the poor and pitiful, sacrifice, humility, turning the other cheek, and so on - is, at its core, an inversion of master-morality, and Christianity is to blame for its popularity in the West. The slave-morality is mostly about one's attitude towards guilt, sin, vices, etc. - negative behaviors. Are some people good and some people bad (as in: high-quality and low-quality, powerful and weak, based and cringe, etc.), or some people good and some people evil? Do you treat harm done to strangers like a neutral action or a negative one?

3

bildramer t1_j7fdcwp wrote

Whose incentives? The capitalist solution to someone doing that (overcharging, not solving problems when it's cheap and cost-effective to solve them) is simple: undercut them. The only way to prevent the undercutting is if the FDA intervenes - which it does; in other countries insulin is like 50x cheaper.

2