bildramer t1_jd0p4sl wrote
The way they restrict "false belief" makes the phrase almost an oxymoron. If you "merely accept" that the Earth is a sphere instead of "genuinely believing" it, how is that different from answering "yes" to the question "is the Earth a sphere?", doing your calculations as if the Earth is a sphere, making mistakes that reveal you didn't know the Earth is a bit squished, etc.? All models are false, so either condition (i) can't be satisfied and must be relaxed, or "technically false" true beliefs are natural and commonplace.
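For a sense of scale, here's a quick back-of-the-envelope check of how small the spherical model's "mistake" actually is (the WGS84 radius and quarter-meridian are standard figures; everything else is just illustration):

```python
import math

R_MEAN = 6371.0  # km, mean radius of the spherical model

def haversine(lat1, lon1, lat2, lon2, r=R_MEAN):
    """Great-circle distance (km) on a sphere of radius r."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    h = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(h))

# Pole to equator along a meridian: the sphere's answer vs. the WGS84
# ellipsoid's quarter-meridian of ~10001.97 km.
sphere = haversine(90, 0, 0, 0)
print(f"sphere: {sphere:.0f} km, error: {abs(sphere - 10001.97) / 10001.97:.2%}")
# sphere: 10008 km, error: 0.06% - wrong, but only squished-Earth wrong
```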
Also, here's my example of an epistemically useful false belief: The idea that there is substance to music theory (more than what you get from a high school education, that is). You will learn a lot of useful things before falsifying it.
bildramer t1_jcf692h wrote
The objection is simple and banal: the utility function contains terms for things like "it's bad to give in to blackmail, as this leads to more expected blackmail in the future"* - consequentialism doesn't have to be short-horizon, blind and dumb. You assess all consequences of an act.
My personal objection (why I'm consequentialist but not utilitarian as usually defined): Caring about others' utilities is not something I have to do because of some Rawlsian argument; it's just something that's already in my utility function because that's how my brain evolved to be. You can do approximations that are equivalent to "weighting people's utilities" based on your thoughts, feelings, whims, their likeability, the uncertainty you have about them, etc. And those weights can be negative, because why not? Spite is also natural. If someone tries to threaten his own bodily integrity, see if I care.
^(*: even accounting for all the not-cut fingers, and for everyone's utilities and not just yours, the "giving in to lots of blackmail" future is worse than one where you don't, which does need to be argued for but isn't hard to argue. As opposed to e.g. "giving in" to win/win trades.)
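For concreteness, a throwaway expected-disutility sketch of the footnote's claim - every number here is made up purely for illustration, the point is only the shape of the comparison:

```python
# All numbers are invented for illustration; only the structure matters.
COST_PAY = 20             # disutility of paying the blackmailer once
COST_HARM = 100           # disutility if an unpaid threat is carried out
P_CARRY_OUT = 0.5         # assumed chance an unpaid blackmailer follows through
P_FUTURE_IF_PAY = 0.9     # assumed: paying marks you as a soft target
P_FUTURE_IF_REFUSE = 0.1  # assumed: refusers stop being targeted
HORIZON = 10              # future periods you care about

give_in = COST_PAY + HORIZON * P_FUTURE_IF_PAY * COST_PAY
refuse = P_CARRY_OUT * COST_HARM + HORIZON * P_FUTURE_IF_REFUSE * P_CARRY_OUT * COST_HARM

print(f"expected disutility - give in: {give_in:.0f}, refuse: {refuse:.0f}")
# give in: 200, refuse: 100 - refusing wins once future blackmail is priced in
```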
bildramer t1_jc1q83k wrote
Reply to comment by sejanus21 in As they still have a neutral charge, can antineutrons replace neutrons in a regular atom? by Oheligud
You can build an actual machine to detect muons from space (more precisely: from the upper atmosphere), for example. The particles are all very short-lived, but they do exist.
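For a sense of the numbers: a muon's rest-frame lifetime is about 2.2 microseconds, which naively isn't enough to get from ~15 km up to the ground - time dilation is what closes the gap. A quick check (the ~4 GeV energy is an assumed typical value):

```python
C = 299_792_458    # m/s, speed of light
TAU = 2.197e-6     # s, muon mean lifetime at rest
M_MU = 105.66      # MeV/c^2, muon mass
E = 4000.0         # MeV, assumed typical atmospheric muon energy

gamma = E / M_MU                      # Lorentz factor, ~38
naive_range = C * TAU                 # no dilation: ~660 m, far short of 15 km
dilated_range = gamma * naive_range   # with dilation: ~25 km, plenty

print(f"without dilation: {naive_range:.0f} m, with: {dilated_range / 1000:.0f} km")
```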
bildramer t1_jbsk8sn wrote
Reply to comment by Drakolyik in No empirical experiment can prove or disprove the existence of free will without accounting for the inadvertent biases surrounding both the experiment and the concept of free will. by IAI_Admin
It's not just religious and political motives. I have a "compatibilism is obviously the most sensible approach" motive. Also, wtf, your last paragraph is unhinged.
bildramer t1_jbsk41c wrote
Reply to comment by zms11235 in No empirical experiment can prove or disprove the existence of free will without accounting for the inadvertent biases surrounding both the experiment and the concept of free will. by IAI_Admin
What makes you think chemical reactions can't have reference to truth? Also, yes, you can be fooled, that just means you aren't a perfect reasoner.
bildramer t1_jbnyhv8 wrote
Reply to I just published an article in The Journal of Mind and Behavior arguing that free will is real. Here is the PhilPapers link with free PDF. Tell me what you think. by MonteChristo0321
The real compatibilist objection wouldn't be "you could have done otherwise if you had reasons/wanted to/something", it'd be "you could have done otherwise, period, the natural way we define the word "could", incorporating our uncertainty about our own and each other's thoughts and actions". I think you go into this, but your arguments are way, way too long and complicated when a few words would do the trick.
bildramer t1_jb0nql8 wrote
Reply to Game Theory's ultimate answer to real world dilemmas: "Generous Tit for Tat" by TryingTruly
There exist some really cursed IPD equilibria, like those involving zero-determinant strategies. You can unilaterally extort other players, i.e. force a linear relation between your score and theirs. See e.g. here.
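For the curious, a minimal simulation of Press and Dyson's chi = 3 extortionate strategy - the cooperation probabilities 11/13, 1/2, 7/26, 0 are from their 2012 paper; the 70%-cooperator opponent is an arbitrary choice for illustration:

```python
import random

# Standard IPD payoffs: (R)eward, (S)ucker, (T)emptation, (P)unishment.
R, S, T, P = 3, 0, 5, 1

# Press & Dyson's chi = 3 extortionate strategy: probability of cooperating
# given (my last move, opponent's last move).
EXTORT3 = {('C', 'C'): 11/13, ('C', 'D'): 1/2,
           ('D', 'C'): 7/26, ('D', 'D'): 0.0}

PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def play(opp_coop_prob, rounds=1_000_000, seed=0):
    """Extortioner vs. an opponent who cooperates with a fixed probability."""
    rng = random.Random(seed)
    me, opp = 'C', 'C'  # arbitrary opening moves
    my_total = opp_total = 0
    for _ in range(rounds):
        me_next = 'C' if rng.random() < EXTORT3[(me, opp)] else 'D'
        opp_next = 'C' if rng.random() < opp_coop_prob else 'D'
        me, opp = me_next, opp_next
        mine, theirs = PAYOFF[(me, opp)]
        my_total += mine
        opp_total += theirs
    return my_total / rounds, opp_total / rounds

sx, sy = play(0.7)
# The ZD construction enforces s_x - P = 3 * (s_y - P) whatever the opponent does:
print(f"my surplus: {sx - P:.3f}  3x their surplus: {3 * (sy - P):.3f}")
```

The two printed numbers come out (approximately) equal: the extortioner's surplus over the punishment payoff is pinned to three times the victim's, and no opponent strategy can break the relation.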
bildramer t1_jb02v3w wrote
Reply to comment by SyntheticBees in Wittgenstein’s Revenge (this genuinely changed the way I look at the world) by ElliElephant
Yeah, I agree. "Context omission is always subjective" seems like a wrong way to put it. Let's split a fact into two parts: an assertion that some model/approximation of the world is accurate (usually implicit), and an assertion about what is true within that model (explicit). (That split isn't always a clear bright line, btw.)
The first part is implicit, and thus 1. less visible, 2. more fluid, in that in an argument, you can often pretend you had a different context in mind later, or your interlocutor can have a very incompatible one in mind. Hence the need for trust and compromise. But it's not really any more subjective than the second part - the choice of model is very much like the choice of fact/assertion/observation within the model: it strongly depends on the world, we can tell it is intersubjective, people tend to agree on it independently, we can call it "correct" or "wrong", etc. That's all closer to what we usually call objective, like "the sky is blue", unlike "I like anchovies", even though you always need context even for objective claims ("not at night, obviously").
bildramer t1_jaezqt5 wrote
Reply to comment by platoprime in From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer by ADefiniteDescription
I have no issue with scientific inquiry, and if you carefully read my comment you'll notice most of the problem is with the word "eventual" here. Sometimes you can outperform "science" by following scientific principles instead of looking at what groups of scientists say; nullius in verba, after all. Also, see Lysenkoism if you want an example of the opposite. Sure, that wasn't science but state power, but where in the world does science operate without state power influencing it?
bildramer t1_jaevoep wrote
Reply to comment by platoprime in From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer by ADefiniteDescription
Where there's smoke there's fire (i.e. many people saying the same thing needs to be explained). Arguments like "surely that many people can't be wrong" or "they came to their conclusions mostly independently" are often implied but not stated. To refute those, learning about the phenomenon of information cascades is very helpful; it explains how large fractions of the population can end up believing something based on very little evidence.
The tl;dr is that if, for a particular decision, only the decision is visible and not the detailed reasoning/evidence/information, and a large majority values "social proof" or conformity more than their private information, then that private information doesn't get incorporated into the public pool of information, so only the very early people who decide first get to define that pool. It's a very "sticky" process. For example, consider people in a crowd deciding whether to panic: one or two people possibly saw something concerning and screamed or ran; people arriving later react less to the thing actually seen (or its absence), and more to the number of other people reacting or not reacting. The early or closest people get to set the "tone" - if there's no reaction from others, you infer it wasn't worth panicking after all, and don't join, strengthening that impression; if there is, you infer it was, and join in. That can easily end up causing panics out of nowhere, or not causing panics when you'd expect them.
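That mechanism is easy to simulate - below is a sketch of the standard sequential-choice toy model of cascades (Bikhchandani, Hirshleifer and Welch's; the parameter values are arbitrary):

```python
import random

def run_cascade(n_agents=100, q=0.7, true_state=1, seed=None):
    """Agents choose 0 or 1 in sequence. Each gets a private signal that
    matches the true state with probability q, sees all earlier choices,
    and picks whatever their posterior favors (ties broken by own signal)."""
    rng = random.Random(seed)
    diff = 0  # inferred "1"-signals minus inferred "0"-signals so far
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < q else 1 - true_state
        if diff >= 2:      # up-cascade: public info outweighs any one signal
            choice = 1
        elif diff <= -2:   # down-cascade
            choice = 0
        else:              # own signal still decisive, so the choice reveals it
            choice = signal
            diff += 1 if signal == 1 else -1
        choices.append(choice)  # inside a cascade, choices carry no information
    return choices

# How often does a whole population settle on the wrong answer?
wrong = sum(run_cascade(seed=i)[-1] == 0 for i in range(10_000))
print(f"wrong cascades: {wrong / 10_000:.1%}")  # ~15% despite 70%-accurate signals
```

A couple of unlucky early signals lock everyone in, no matter how much accurate private information arrives afterwards.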
Once you know this, it's easy to see how millions of people can be wrong, and how the "wisdom of crowds" fails to work in such cases. Then, however, you also need to make sure the scientists themselves don't suffer from an information cascade, and they usually do - they didn't all arrive at their opinions independently from scratch. New information being available and undoing a wrong cascade partially explains Kuhn's paradigm shifts. Even in the hard sciences, the social environment can become so bad that scientists conform to fashions when they shouldn't, for no good reason - it's why plate tectonics took so long to dislodge earlier shittier theories despite the strong evidence, for example.
So there's no real way to tell which ideas like homeopathy do or don't work based solely on judging their disparity in popularity among different crowds (at least not without risking being fooled that way); you have to reason about them, at least a little bit. Or trust that experts will usually get it right regardless, which is reasonable, but not foolproof.
bildramer t1_j9xtaiw wrote
Maybe we just need to spoonfeed them. Give concrete examples.
"How many scientists do you think work on biology R&D? How much better do you think medicine has gotten in the past 40 years? What things do you think are possible with unlimited money and effort?" First establish that the answers are "just a few million", "massively" and "anything natural biology already does". Clarify that they understand these ideas - ask them if catgirls are possible, ensure they understand the answer is "yes". No need to go into Drexler's nanosystems - for normies, if it doesn't exist yet, you'll have an incredibly difficult time arguing it's possible. You don't want to argue two distinct things, argue one thing (AGI).
Then ask what happens when you create a few million minds that can work on biology better than any human, using all the accumulated biology knowledge instead of a subset, learning it faster, working faster, making fewer mistakes, having better memory, tirelessly. The idea that you can make them even faster by giving them faster hardware, or the idea of a "bottleneck" based on waiting for real-life experimental results, is perhaps too complicated, but try to include them. Perhaps also ask what fraction of IRL biologists' time is spent doing intellectual tasks like reading/writing/learning/memorizing/thinking/arguing, or sleeping, instead of actively manipulating labware. Looking at a screen and following instructions is a job you can give to an intern.
That's one field. There are many fields. There's a lot of hardware like CPUs and GPUs that already exists, and we're constantly making more. Make them realize that talking about UBI or unemployment is kind of irrelevant, like talking about steel quality or how blacksmiths might make car parts instead of horseshoes is kind of irrelevant, or saying "for birds, the incentive to move faster to compete could affect feather length in unpredictable ways" when you have a jet fighter is kind of irrelevant.
bildramer t1_j9rry57 wrote
Reply to What do you expect the most out of AGI? by Envoy34
Imagine asking a pre-Bronze Age hunter-gatherer what to expect from the year 2023, and comparing it with what really happened. Then apply the analogy to today, only more so.
bildramer t1_j9eqer3 wrote
Actually, a normie acquaintance of mine mentioned ChatGPT out of the blue, without prompting. So there's probably at least 15% penetration. Eventually, a majority will know.
bildramer t1_j99l885 wrote
Reply to Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
I don't understand all the hostility towards compatibilism in the comments. To me, asking whether we have free will or everything is predetermined is a false dichotomy, like asking if our muscles are made of fibers or made of atoms. These are just two models of reality, and they are compatible, hence the name.
Compatibilism is simple: determinism seems true. When I say I "can" decide to stand up and go eat a bar of chocolate, all that means is that it's a future that appears accessible to me, that perhaps I have an action plan that I think would reach it if taken, a plan I also "can" deliberate upon, accept or reject - what else could it possibly mean? There might be a single future, or a randomly chosen future not under our control - either way we don't have access to knowledge of it. I don't know in advance what I will do, and I constantly interact with people who don't, either. Clearly we're used to reasoning under uncertainty and mentally working with counterfactuals. "Can" is a word that works in that context; we regularly use it to reason correctly and make correct predictions about ourselves and others. It must refer to how our decision processes interact with the world/the future, and not to some kind of incoherent libertarian free will.
bildramer t1_j955zoa wrote
Reply to comment by Sansa_Culotte_ in Transparency and Trust in News Media by ADefiniteDescription
Unironically yes. My biases include things like "I've seen the news' brazen lies and refuse to trust them". Most people's don't, and therefore I can discard their news-informed opinions.
bildramer t1_j8r4kdb wrote
"Illusion" is such an annoying word. It carries so many connotations, many of them wrong in most contexts it's used in. Its meaning is anywhere between "false" and "real, but looks slightly different than it is".
To this day I still don't understand how proving "your thoughts have lag" is supposed to show anything about (not compatibilist, nor libertarian, but layman) free will, for or against.
bildramer t1_j8mnvvr wrote
Reply to comment by bradyvscoffeeguy in /r/philosophy Open Discussion Thread | February 13, 2023 by BernardJOrtcutt
What is the idea that philosophers "accept at least the relevance of these fields", if not a synonym for exactly that, the prioritisation of friends and goodwill (social status) over truth-seeking?
bildramer t1_j8hvo8e wrote
Reply to comment by vivehelpme in Altman vs. Yudkowsky outlook by kdun19ham
Every single time someone criticises Yudkowsky's work, it's not anything substantive. I'm not exaggerating. It's either meta bulverism like this, or arguments that apply equally well to large machines instead of intelligent ones, or deeply unimaginative people who couldn't foresee things like ChatGPT jailbreaks, or people with rosy ideas about AI "naturally" being safe that contradict already seen behaviors. You have to handhold them through arguments that Yudkowsky, Bostrom and others were already refuting back in the 2010s. I haven't actually seen any criticism anywhere I would call even passable, let alone solid.
Even ignoring that, this doesn't land as a criticism. He didn't start from literary themes, he started from philosophical exploration. He's disappointed in academic philosophy, for good reasons, as are many other people. One prominent idea of his is "if you can fully explain something about human cognition, you should be able to write a program to do it", useful for getting rid of a lot of non-explanations in philosophy, psychology, etc. He's trying to make predictions more testable, not less. He doesn't have an exact sequence of future events, and never claimed to. Finally, most people in his alleged "cult" disagree with him and think he's cringy.
bildramer t1_j8htxr5 wrote
Reply to comment by Frumpagumpus in Altman vs. Yudkowsky outlook by kdun19ham
I think hardware overhang is already huge, there's no point in being risky only to make AI "ludicrously good/fast" instead of "ludicrously good/fast plus a little bit". Also, algorithms that give you AGI are so simple evolution could find one.
bildramer t1_j8htmki wrote
Reply to comment by lacergunn in Altman vs. Yudkowsky outlook by kdun19ham
I don't think that's far easier. Those are basically equally impossible, and even if we got that second one, it's much better than not getting it.
bildramer t1_j8htdli wrote
Reply to comment by gay_manta_ray in Altman vs. Yudkowsky outlook by kdun19ham
It's not about naïvete. It's about the orthogonality thesis. You can combine any utility function with any level of intelligence. You can be really smart but care only about something humans would consider "dumb". There's no fundamental obstacle there.
bildramer t1_j7ncdyy wrote
Reply to comment by zedority in The often misused buzzword Paradigm originated in extremely popular and controversial philosopher of science Thomas Kuhn's work; he defined the term in two core ways: firstly as a disciplinary matrix (similar to the concept of a worldview) and secondly as an exemplar by thelivingphilosophy
It makes sense - that's the word's etymology from the original Greek. Prefix para- + "that-which-is-shown", basically. In modern Greek, παράδειγμα simply means "example".
bildramer t1_j7k9x55 wrote
Reply to comment by frnzprf in 3 reasons not to be a Stoic (but try Nietzsche instead) by Apotheosical
To my understanding, Nietzsche basically says that slave-morality - us loving underdog stories, the poor and pitiful, sacrifice, humility, turning the other cheek, and so on - is, at its core, an inversion of master-morality, and Christianity is to blame for its popularity in the West. The slave-morality is mostly about one's attitude towards guilt, sin, vices, etc. - negative behaviors. Are some people good and some people bad (as in: high-quality and low-quality, powerful and weak, based and cringe, etc.), or some people good and some people evil? Do you treat harm done to strangers like a neutral action or a negative one?
bildramer t1_j7fdcwp wrote
Reply to comment by doodcool612 in Utopia, Heterotopia, and the End of History: Marx, Nietzsche, & Foucault | The Masters’ Game 5 by Perplexed_Radish
Whose incentives? The capitalist solution to someone doing that (overcharging, not solving problems when it's cheap and cost-effective to solve them) is simple: undercut them. The only way to prevent that is if the FDA intervenes, which it does; in other countries insulin is like 50x cheaper.
bildramer t1_jee1eb6 wrote
Reply to comment by Agamemnon420XD in Selected before birth | Embryo risk screening could lower the odds of illnesses ranging from depression to diabetes, but poses ethical problems by ADefiniteDescription
But what if rich people get to be disease-free first? Clearly that's historically unprecedented; rich people being better off is astounding, nay, unacceptable; we might as well be putting minorities in camps.