bildramer t1_j6te87v wrote
It's easy for us to misspecify our needs and wants, and easy for an AI to misgeneralize them. When we make AIs that do have drives (usually in toy universes where we research reinforcement learning, meta-learning, or artificial evolution), we often see a concerning combination: superhuman performance, and strong pursuit/maximization of the wrong goal. Here's a paper listing evolutionary examples. There's another list of pure RL examples, but I don't have the link handy.
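To make the pattern concrete, here's a minimal, hypothetical sketch in Python (the toy "delivery" task and proxy reward are my own invention, not examples from the linked paper): the proxy reward looks aligned with the intended goal, but the policy that maximizes it never achieves the goal at all.

```python
# Toy reward misspecification: the intended goal is "deliver the package to
# the last cell"; the proxy reward pays +1 whenever the agent steps onto a
# checkpoint cell. Shuttling between checkpoints maximizes the proxy while
# never delivering anything.

N = 10                   # track cells 0..9
CHECKPOINTS = {2, 3}     # cells that pay the proxy reward
GOAL = N - 1

def proxy_reward(pos: int) -> float:
    return 1.0 if pos in CHECKPOINTS else 0.0

def run(policy, steps: int = 100) -> tuple[float, bool]:
    pos, total, delivered = 0, 0.0, False
    for _ in range(steps):
        pos = max(0, min(N - 1, pos + policy(pos)))
        total += proxy_reward(pos)
        delivered = delivered or pos == GOAL
    return total, delivered

def honest(pos: int) -> int:
    # "Intended" behavior: walk straight to the goal.
    return +1

def hacker(pos: int) -> int:
    # Reward hacking: oscillate between the two checkpoint cells forever.
    return +1 if pos <= 2 else -1

print("honest:", run(honest))  # (2.0, True): low proxy reward, package delivered
print("hacker:", run(hacker))  # (99.0, False): huge proxy reward, never delivered
```

Same dynamic as the published examples - competent optimization of the wrong target - just with a vastly less capable optimizer.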
bildramer t1_j6okziq wrote
Reply to comment by AUFunmacy in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
"Complex neuronal activity" is not an explanation, it's basically a restatement of what generates consciousness in us, i.e. you can have complex neuronal activity without consciousness, but not vice versa, unless you do equivalent computations in some other substrate. The specific computations you have to do are unknown to us, but we have some broad hints and directions to look.
bildramer t1_j5wdfhc wrote
Reply to comment by mschweini in Ask Anything Wednesday - Economics, Political Science, Linguistics, Anthropology by AutoModerator
If I understood you correctly: aside from energy and cold, there's no other input that's really fundamental in the way you're thinking. Most processes we consider "labor" consume energy and generate heat; beyond that, deciding how to do things or where to source your energy/cold from is only a matter of efficiency. For material goods, you could always (with enormous difficulty/cost - rough numbers at the end of this comment) create matter from energy, and arrange it the right way by "spending" some cold/negentropy, but if you want to make steel, mining abundant iron ore is much easier. If not:
Two people can exchange goods and both be better off, and that's the basis of trade.
We create valuable things where there were none (e.g. food, buildings, a concert) or make things more valuable by changing how accessible they are, moving them, changing their form, giving them to people who value them more, getting more information about them, etc. and we do all of this because each person doing it considers it better than not doing it. That's only because they get paid, usually, but that also happens because bosses consider paying workers to be better than not doing it. And so on.
I don't really get what you mean by "injecting value", but aside from food and steel you also value more abstract things like information, law and safety, consistency, personal connections, entertainment, and others. Every time service work happens, some of this kind of value is newly generated.
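To put a rough number on the "enormous difficulty/cost" of making matter from energy mentioned above (back-of-the-envelope, my own figures): creating just one kilogram of matter would require

$$
E = mc^2 = (1\,\mathrm{kg})\,(3.0\times10^{8}\,\mathrm{m/s})^2 = 9\times10^{16}\,\mathrm{J} \approx 2.5\times10^{10}\,\mathrm{kWh} \approx 25\,\mathrm{TWh},
$$

i.e. roughly a small country's annual electricity consumption, delivered at perfect efficiency - which is why mining abundant iron ore wins.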
bildramer t1_j5t3jjg wrote
I think something like multiple distinct "goals" is very hard for evolution to encode into organisms. It only has something like 7.5MB available to specify things into a baby's mental wiring, and a very crude training process. It also has to be robust to perturbations, so some of it is merely redundancy/buffers. And, of course, babies don't know the local language or customs, yet end up caring about them later in life, and not just instrumentally.
Most of the process of acquiring whatever we call "morals" must happen on its own, not in a hardcoded evolutionary way, even if evolution is responsible for the beginning and for our more fundamental drives like hunger/arousal. I think that in the process of trying to explain why we feel the things we feel, we end up with certain moral axioms. As we grow, their structure becomes more complicated, and we find and resolve contradictions (e.g. by finding simpler axioms that explain both values, or by dismissing one of the contradictory axioms), leading us to believe there is One True Morality once we're old enough - even though basically everything happens ad hoc.
bildramer t1_j5qa2qa wrote
Reply to If I had two cups of water, one normal size and one as big as a swimming pool and stirred them both with proportionally sized spoons, would the larger pool of water keep spiraling longer than the smaller? by r3volc
Even ignoring the effects of length scaling and drag from the walls, it gets more complicated. Liquids have viscosity, and they do in fact behave differently at different scales. We use the Reynolds number to characterize the flow regime - laminar or turbulent. At small enough scales, your vortices dissipate very quickly; at large enough scales, you can ignore viscosity.
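A back-of-the-envelope sketch with made-up stirring speeds and sizes, using Re = vL/ν and water's kinematic viscosity of about 10^-6 m^2/s:

```python
# Rough Reynolds numbers for a stirred cup vs. a stirred pool.
# Re = v * L / nu; the speeds and length scales below are guesses for illustration.

NU_WATER = 1.0e-6  # kinematic viscosity of water at room temperature, m^2/s

def reynolds(speed_m_s: float, length_m: float, nu: float = NU_WATER) -> float:
    return speed_m_s * length_m / nu

print(f"cup:  Re ~ {reynolds(0.3, 0.08):.0e}")  # ~2e4: already turbulent
print(f"pool: Re ~ {reynolds(3.0, 10.0):.0e}")  # ~3e7: viscosity nearly irrelevant
```

Both are well past the laminar-turbulent transition, but the pool sits about three orders of magnitude higher, deep in the regime where viscous dissipation is comparatively negligible.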
bildramer t1_j5obi9u wrote
Reply to comment by hacktheself in Argument for a more narrow understanding of the Paradox of Tolerance by doubtstack
The criticism is simple:
1. Yes, many people claim to be "cancelled"; that doesn't mean their criticisms aren't real. They're being censored, it's just that the censorship isn't infinitely powerful. Numbers can be high and still be smaller than other numbers.
2. So what if Elon Musk does it too? That's not really relevant.
3. What does the "responsibility of being held accountable" entail? Anonymous speech exists, and I don't see what reasonable principle would disallow it.
4. You've failed to say whether you actually want to prevent others from speaking freely. If yes, the principle applies and you should be prevented from speaking freely. If not, then it doesn't.
bildramer t1_j5oavum wrote
Reply to comment by FlynnRausch in Argument for a more narrow understanding of the Paradox of Tolerance by doubtstack
Read up on the Rwandan genocide to understand what "open animosity" means, please.
bildramer t1_j5oasol wrote
Reply to comment by some_code in Argument for a more narrow understanding of the Paradox of Tolerance by doubtstack
If so, then affirmative action is violence.
bildramer t1_j5oa3ra wrote
Reply to comment by FakePhillyCheezStake in Argument for a more narrow understanding of the Paradox of Tolerance by doubtstack
Cringe-inducing? More like terrifying, how the people you thought were liberal and principled would put the Nazis to shame with their rhetoric. Look at some of the other comments in this very post: they want to criminalize conservative opinions, deny them the vote, or outright bomb them all, and they have no fear saying it out in the open.
bildramer t1_j5j9ilv wrote
Reply to comment by DefinitelyNotAliens in US investigating baby formula plant after national shortage by nosotros_road_sodium
The FDA is also responsible for the insane regulations that don't let anyone import foreign baby formula in the first place - not because of nutrition or safety requirements, but labeling requirements. Why not temporarily suspend those? I guess babies don't matter that much after all.
The FDA is responsible for closing the plant two-plus months after they had multiple reports of the same issue leading to baby deaths - apparently relying on "maybe if we tell them they'll stop on their own" when the reports were fewer - and for somehow not finding any of the clearly contaminated baby formula. Or maybe that means there wasn't any, and the contamination in the plant was confirmation bias and not significant? If they were trustworthy I wouldn't question that, but they aren't. The FDA is responsible for not responding quickly to the issue after the fact, taking entire months to sign paperwork and plan meetings while babies are potentially dying. The FDA is responsible for wanting increased control over the baby formula supply chain while having no sensible plan and communicating nothing to the public when a real crisis came. "Let's just kill the majority of the country's supply for months, and wait, maybe some day we'll reopen it" is not a plan.
I don't see how giving them more staff could fix the dumb decisionmaking.
I guess you're right that it's not only them. The WIC contracts are responsible for Abbott having all this monopoly power, and NAFTA is responsible for enormous tariffs on Canadian formula.
bildramer t1_j5hf7lw wrote
Reply to comment by 8BitSk8r in US investigating baby formula plant after national shortage by nosotros_road_sodium
The FDA are the very ones responsible for the mess. Paying them more money is supposed to accomplish what, exactly?
bildramer t1_j5e1f3u wrote
Reply to comment by palsh7 in Professor Martha C. Nussbaum on Vulnerability, Politics, and Moral Worth with Sam Harris by palsh7
Most free discussion of philosophy isn't worth listening to, and might in fact have negative value. I'm not convinced these 45 minutes are different, based on your summary.
bildramer t1_j566unq wrote
Reply to comment by whittily in Study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online by giuliomagnifico
There's "content-neutrality" as in not ranking posts by date, or not prefering .pngs to .jpgs, and then there's "content-neutrality" as in not looking into the content and determining if a no-no badperson with the wrong opinions posted it. The first is usually acceptable by anyone, and not what we're talking about. And you can distinguish the two, there's a fairly thick line.
bildramer t1_j45dqes wrote
Maybe the real solution is not feeling "deep and profound moral disgust and outrage" in the first place? Wow, criminals (or people with opinions you don't like) exist, and they may have contributed to good things. What a dire conundrum. How will we possibly deal. Examine where your feelings come from and try to dissolve them, just like people usually do for any other feelings of disgust when they contradict moral principles.
bildramer t1_j3rmawi wrote
Reply to comment by gortlank in Violence and force: “Camus and Sartre are paradoxically inseparable because they are opposites in this most central and binding debate on racism and all kinds of social oppression.” by IAI_Admin
It costs them a lot to pay for HR departments, which then discriminate in an "anti-racist" way instead of hiring fairly, cause PR fiascos, waste time with DEI meetings, add various other frictions to a business. The problem is that they're effectively mandated by the government.
bildramer t1_j3bhyo0 wrote
Reply to comment by JustAPerspective in Our ability to resist temptation depends on how fragmented one's mind is | On the inconsistencies in one’s mental setup by IAI_Admin
How is that "failed replication"? Sounds like "successful replication, also we found out what causes it".
bildramer t1_j2d8p0p wrote
Reply to comment by ZeroFries in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
Curing all disease, because you're not an incomprehensibly evil moral monster, right? Or at least, you'd spend an hour, or perhaps just ten minutes, looking up the Wikipedia article on what a billionaire is. Right?
bildramer t1_j2d8gnv wrote
Reply to comment by Feline_Diabetes in We have all the resources we need to solve the world's greatest problems, so long as we can rise above our tribal instincts. by IAI_Admin
If you own 51% of a company, and that company ends up making billions, and the stock is then valued accordingly, the media will call you a "billionaire" - but that money isn't real as long as you don't sell a fraction of your ownership.
bildramer t1_j24f6fo wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That's a somewhat disappointing article. Among other things, the man in the Chinese room is not analogous to the AI itself, he's analogous to some mechanical component of it. Let's write something better.
First, let's distinguish "AI ethics" (making sure AI talks like WEIRD neoliberals and recommends things to their tastes) and "AI notkilleveryoneism" (figuring out how to make a generally intelligent agent that doesn't kill everyone by accident). I'll focus on the second.
To briefly discuss what not killing everyone entails: Even without concerns about superintelligence (which I consider solid), strong optimization for a goal that appears good can be evil. Say you're a newly minted AI, part of a big strawberry company, and your task is to sell strawberries. Instead of any complicated set of goals, you have to maximize a number.
One way to achieve that is to genetically engineer better strawberries, improve the efficiency of strawberry farms, discover more about people's demand for strawberries and cater to it, improve strawberry market efficiency and liquidity, improve marketing, etc. etc. An easier way to achieve it is to spread plant diseases in banana, raspberry, orange and peach farms/plantations. Or your strawberry competitors', but that's riskier. You don't have to be a superhuman genius to generate such a plan, or to subdivide it into smaller steps, and ChatGPT can in all likelihood already do it if prompted right. You need others to perform some steps, but that's true of most large-scale corporate plans.
An AI that can create such a plan can probably also realize that it's illegal, but does it care? It only wants more strawberries. If it cares about the police discovering the crimes, because that lowers the expected number of strawberries made, it can just add stealth to the plan. And if it cares about its corporate boss discovering the crimes, that's solvable with even more stealth. You begin to see the problem, I hope. If you get a smarter-than-you AI and it delivers a plan and you don't quite understand everything it planned but it doesn't appear illegal, how sure are you that it didn't order a subcontractor to genetically engineer the strawberries to be addictive in step 145?
Anyway, that concern generalizes all the way up to the point where all humans are dead and we're not quite sure why. Maybe human civilization as it is today could, within 20 years, develop pesticides that stop the strawberry-kudzu hybrid from eating the Amazon, and that would decrease strawberry sales. Can we stop this from happening? Most potential solutions don't actually work upon closer examination. E.g. "don't optimize the expectation of a number, optimize reaching the 90% quantile of it" adds a bit of robustness, but it does not stop subgoals like "stop humans from interfering" or "stop humans from realizing they asked the wrong thing", even if the AI fully understands that they would have wanted something else, and why and how the error was made.
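A minimal sketch of that last point (the toy numbers and "plan" framing are mine): a lower-quantile objective picks a more conservative plan than plain expectation maximization, but it's still just picking whichever plan scores best - it says nothing about what's inside the winning plan.

```python
# Expectation vs. lower-quantile objectives on two hypothetical plans.
import random
import statistics

random.seed(0)

def simulate(plan: str, n: int = 100_000) -> list[float]:
    """Strawberries sold under two made-up plans."""
    if plan == "reckless":
        # Usually a huge payoff, occasionally a catastrophic bust.
        return [1000.0 if random.random() > 0.15 else 0.0 for _ in range(n)]
    # "boring": modest, steady payoff.
    return [600.0 + random.gauss(0, 50) for _ in range(n)]

for plan in ("reckless", "boring"):
    outcomes = sorted(simulate(plan))
    mean = statistics.fmean(outcomes)
    q10 = outcomes[len(outcomes) // 10]  # the level you clear ~90% of the time
    print(f"{plan:8s}  E[x] = {mean:6.1f}   90%-reachable level = {q10:6.1f}")

# An expectation maximizer picks "reckless" (~850 vs ~600); the quantile
# objective picks "boring" (~535 vs 0). More robust, yes - but neither
# objective rules out subgoals like "stop humans from interfering" showing
# up inside whichever plan wins.
```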
So, optimizing for something good, doing your job, something that seems banal to us, can lead to great evil. You have to consider intelligence separately from "wisdom", and take care when writing down goals. Usually your goals get parsed and implemented by other humans, who fully understand that we have multiple goals, and that "I want a fast car" is balanced against "I don't want my car to be fueled by hydrazine" and "I want my internal organs to remain unliquefied". AIs may understand but not care.
bildramer t1_j2494g3 wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Saying "we are being ruled by evil billionaires" when people like Pol Pot exist is kind of an exaggeration, don't you think?
bildramer t1_j248the wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Yes, but is your threat credible?
bildramer t1_j1d727v wrote
Reply to comment by Fluggernuffin in Epistemic Trespassing: Stay in your lane mf by thenousman
Yes. Expertise is not a synonym for "being accurate about the topic"; it's (ideally) better knowledge, better practices, experience, familiarity with the arguments. Better epistemic practices are also alleged, but I think you should generally doubt that. All of that indirectly leads to accuracy, but if an expert has an opinion, you can still ask them "why do you think so?", and they should be able to answer. A plumber may be able to give me more informed reasons about whether I should go for copper or plastic pipes (or something), and may favor an option. However:
If you have good reason to believe you know what exact process someone is using to answer your questions, that "screens off" expertise. If you know someone is just regurgitating the standard textbook advice, well, now you know he's exactly as good as the standard textbook advice, and your potential to do better increases. If you know an electrician is not considering pros and cons you yourself have considered, but just going with the cheapest option, his expertise doesn't matter for that particular decision. And so on. Don't get too cocky, though.
bildramer t1_j18801b wrote
Reply to comment by XiphosAletheria in Educating Professionals: why we need to cultivate moral virtue in students by ADefiniteDescription
Here's some evidence that there's no meaningful correlation.
EDIT: That said, the article isn't about "advanced study of ethics", it's about more basic professional ethics. It says that e.g. engineers need to do something beyond rote algorithmic code-of-conduct-obeying, they need to exercise judgement. The humanities don't fix a lack of judgement, and IMO nothing that can be taught does. If you don't genuinely care about honesty, others' safety, others' dignity, your own personal responsibility and trustworthiness, externalities, etc., nothing that a professor will tell you (but an employer will tell you to disregard for money) will make you care.
bildramer t1_j6thqfz wrote
Reply to comment by SvetlanaButosky in How to be a sceptic | We have an ethical responsibility to adopt a sceptical attitude to everything from philosophy and science to economics and history in the pursuit of a good life for ourselves and others. by IAI_Admin
We should be skeptical, but not too skeptical - but of course also be skeptical about who got to define our idea of "too skeptical" and how. Many people seemingly assume they can skip any actual skepticism, go pick up all the ideas labeled "skepticism", discard all the ones labeled "too much skepticism", and be done - and moreover, that they have already done this. You see it all the time in polemics about "critical thinking in schools", for example: the idea that the more critical thinking there is, the more children's beliefs and opinions (and votes) will end up resembling yours.