gahblahblah t1_iybymtt wrote
Reply to Will beautiful people lose most of their sexual market value in the coming decades? by giveuporfindaway
Advertising will become customised to your personality. The commercials you see will have AI-generated images that appeal to you specifically.
You will be able to have an AI partner, including a sex robot. You will be able to decide their expressed personality. And so, as they treat you in the ways that you always wanted, you will love them.
gahblahblah t1_iy6fbwu wrote
I would encourage you to check out some of the philosophers at Character ai.
gahblahblah t1_ivcejoe wrote
Reply to comment by ChoosenUserName4 in Becoming increasingly pessimistic about LEV + curing aging by Phoenix5869
I have provided direct evidence for my views. You could try r/longevity. It is a science-based sub - although you may find that they disagree with you there too. Bye bye.
gahblahblah t1_iv9mcnq wrote
Reply to comment by ChoosenUserName4 in Becoming increasingly pessimistic about LEV + curing aging by Phoenix5869
A quote from my link "An AlphaFold prediction helped to determine the structure of a bacterial protein that Lupas’s lab has been trying to crack for years."
The tool is already being helpful - but you don't seem to think this counts as meaningful progress because 'more work is needed'.
Your definition of progress would make you blind to the majority of progress. Improved tools are progress, and this tool is helpful right now, and it is not a small benefit. You may be ignorant of the benefits, but that doesn't mean they don't exist.
gahblahblah t1_iv9eswt wrote
Reply to comment by ChoosenUserName4 in Becoming increasingly pessimistic about LEV + curing aging by Phoenix5869
I have already posted a link above that is an example of progress. If bettering tools to understand the body isn't progress to you, perhaps your definition of progress is flawed.
gahblahblah t1_iv8k331 wrote
A large increase in access to information about the structures of our most basic cell-biology building blocks is the opposite of '0 progress' - Google's deep-learning program for determining the 3D shapes of proteins stands to transform biology, say scientists.
You are treating your ignorance of progress as meaning there has been no progress.
gahblahblah t1_ityf76x wrote
Reply to comment by AsheyDS in AGI staying incognito before it reveals itself? by Ivanthedog2013
I completely agree. It is sensible, healthy and sane to not attempt extremist things, and it is entirely possible that computers will be better at rationality than we are.
But the question wasn't about the nature of AGI, but rather whether people had considered what AGI might do.
gahblahblah t1_itx68ix wrote
So, you're asking: 'Have AGI developers considered that the AGI may be deceptive and attempt subterfuge?'
Yes. The nature of general intelligence is that it may try anything.
Also, the AGI of the future will likely read all of reddit, including any discussion of strategy like this.
gahblahblah t1_itoqqsi wrote
Reply to comment by Smack-works in AI Alignment through properties of systems and tasks by Smack-works
Lots of communication involves making reasonable assumptions, so that a person doesn't need to spell out every detail. My presumptions are only a problem if they are wrong.
'People learn values without solving ethics'.
Your non-answer answer to my question leads me to conclude that I am wasting time trying to ask you further questions, so we can let it all go.
gahblahblah t1_itkpsnv wrote
Reply to comment by Smack-works in AI Alignment through properties of systems and tasks by Smack-works
>You may not even need to define what is a "value statement".
You define your X statements based on value statements, but then also don't think value statements need defining. This is part of the confusion: when I try to examine what you are talking about, the expressions you previously used in explanations and definitions are later represented as unknowable, which makes our conversation circular.
'Why is this a problem and why do you think this problem matters?'
When you represent that you can provide knowledge from a set of statements, but the dataset they are meant to represent is an infinite one, the first thing you are establishing is that the finite data that you have won't really be representative - so you won't be able to make behaviour guarantees.
I don't think creating a robot that does not turn us into paperclips requires infinite data; rather, there is a smaller set of information that would allow us to make behaviour guarantees.
In order for this set of information not to be infinite, the set requires properties that are true for all the statements within it, i.e. it must be possible to measure and validate whether a statement belongs inside or outside the set. Having a validity check means that the second value statement you try to add to the set cannot be arbitrary, because an arbitrary statement may well contradict the first statement.
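To be concrete about what I mean by a validity check, here is a rough sketch - purely my own toy illustration, where `is_valid` and `contradicts` are hypothetical stand-ins rather than anything from your proposal:

```python
# Toy sketch: a statement set that only admits a statement if it satisfies a
# shared membership property and does not conflict with anything already accepted.
# is_valid and contradicts are hypothetical placeholder checks.

class StatementSet:
    def __init__(self, is_valid, contradicts):
        # is_valid(stmt) -> bool: the shared property every member must satisfy
        # contradicts(a, b) -> bool: whether two statements conflict
        self.is_valid = is_valid
        self.contradicts = contradicts
        self.statements = []

    def try_add(self, stmt):
        if not self.is_valid(stmt):
            return False  # fails the shared property, so it belongs outside the set
        if any(self.contradicts(stmt, accepted) for accepted in self.statements):
            return False  # an arbitrary addition may contradict an earlier statement
        self.statements.append(stmt)
        return True
```

Whether `try_add` succeeds is exactly the kind of measurable membership test I'm saying the set needs; without it, additions are arbitrary.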
'You don't need to solve ethics in order to learn values.' How do you learn values then? If you don't know, then you are also saying you don't know how to learn X-statements.
gahblahblah t1_itkjbe2 wrote
Reply to comment by Smack-works in AI Alignment through properties of systems and tasks by Smack-works
On the one hand you claim: 'X statements can be thought of as simply a more convenient reframing of value statements'
You represent that human value statements are difficult to know: 'You don't know this. The same can be said about human value statements.'
Then you represent the types of statements I already know as being human value statements: ' Just compare X statements to the types of statements you already know (e.g. value statements).'
Then you represent that values are learned empirically not systemically.
But also earlier you claimed 'Value statements have multiple interpretations and contradictions too.'
And also claim that there is no footing at all for validating correctness: 'Any type of statements can be "made up".'
It appears to me that the properties of X statements are arbitrary, because the nature of what you call value statements is also arbitrary.
If you think that what you are describing as value statements is non-arbitrary, please characterise their properties, so that I can work out the difference between a false value statement and a true one.
gahblahblah t1_itg0anq wrote
Reply to comment by Smack-works in AI Alignment through properties of systems and tasks by Smack-works
If X statements can simply be made up, then the property that you claim they have - that they can be applied recursively without contradiction - will not hold true.
Different X statements will end up contradicting each other, and there won't be a systemic way of resolving this contradiction, as the statements don't have a systemic foundation.
gahblahblah t1_itfyh1f wrote
The first trouble with your X statements is that they seem like an infinite set. The examples you give for your X statements in point 3 don't seem to come from a finite list of statements that you could just hand to a system. Rather, they appear to be rationales that you'd explain after encountering a specific situation.
To make your case more concrete, I would apply it in full to a very simple scenario (these are *all* the X statements you need to handle this situation), and then expand to a slightly more complex scenario. I contrast this with your paperclips example, where it is unclear to me how much information the system needs to have learned in order to answer correctly in the ways you describe.
You characterise truth as being that which helps us humans, but then also claim this system is 'universal' for intelligence (including above-human intelligence) - but that doesn't seem universal to me if we humans are a special case in the system of X statements, and I suspect this would end up creating contradictions within the statements.
What are the properties of X statements themselves? How can a statement be validated or created? Can they just be made up, in a manner of speaking, if they conveniently help humans (and so are infinite in number)? Or instead, do they need to be fair/equitable/reasonable?
Take for example one of your X statements: "inanimate objects can't be worth more than lives in many trade systems" - how can we tell this is a correct X statement? I could interpret this to mean that an automatic tractor cannot cut down wheat, because wheat is alive... If other X statements contradict this statement, do we discard those statements?
I suppose I tend to think a more universal system is one that is ideally applicable without needing special cases. And that ultimately this leads to new types of citizens that join our cooperative civilisation in time.
gahblahblah t1_iyjifz7 wrote
Reply to Is my career soon to be nonexistent? by apyrexvision
The need for a person to create software will not end, but the important tools for the process will change.