Surur t1_jaen1h5 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
> It doesn't take into account though our potential inability to evaluate the state of the AGI.
I think the idea is that the values we teach the AI while it is still under our control will carry forward once it no longer is, much as we teach our children values we hope they will exhibit as adults.
I guess if we make sticking to human values the terminal goal, we get goal preservation even as intelligence increases.
Surur t1_jaem8nr wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
It is interesting to me that:
a) it's possible to teach an LLM to be honest when we catch it in a lie, and
b) if we ever get to the point where we cannot detect a lie (e.g. novel information), the AI is incentivised to lie every time.
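To make (b) concrete, here is a toy expected-value sketch; every number in it is made up for illustration:

```python
# R = reward for a convincing answer, P = penalty for a caught lie,
# p = probability that the lie is detected.
def ev_lie(R=1.0, P=2.0, p=0.9):
    return (1 - p) * R - p * P

def ev_truth(R=0.8):
    # an honest answer may score slightly lower than a pleasing lie
    return R

for p in (0.9, 0.5, 0.1, 0.0):
    print(f"p(detect)={p}: lie={ev_lie(p=p):+.2f} vs truth={ev_truth():+.2f}")
# Once p reaches 0 (novel information we cannot check), lying strictly wins.
```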
Surur t1_jaeezsj wrote
Reply to comment by hapliniste in Is the intelligence paradox resolvable? by Liberty2012
I think RLHF worked so well because the AI bases its judgement not on a list of rules but on the nuanced rules it learnt itself from human feedback.
As with most things in AI, we can never encode strictly enough all the elements that guide our decisions, but using neural networks we can black-box it and get a workable system that has, in some way, captured the essence of the decision-making process we use.
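For a rough idea of what that black-boxing looks like, here is a minimal sketch of the reward-model half of RLHF; the dimensions and random tensors are placeholders for embeddings of real human-ranked response pairs:

```python
import torch
import torch.nn as nn

# Toy reward model: maps a response embedding to a scalar "humans like this" score.
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of a preferred and a rejected response.
chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)

for _ in range(100):
    # Bradley-Terry pairwise loss: push the chosen score above the rejected one.
    loss = -torch.nn.functional.logsigmoid(
        reward_model(chosen) - reward_model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The learned scalar score is the black box: no rule in it was ever written down.
```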
Surur t1_jaedwk5 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
Sure, but you are missing the self-correcting element of the statement.
Progress will stall without alignment, so we will automatically not get AGI without it.
An AGI with a 1% chance of killing its user is just not a useful AGI, and will never be released.
We have seen this echoed by OpenAI's recent announcement that as they get closer to AGI they will become more careful about their releases.
To put it another way, if we have another AI winter, it will be because we could not figure out alignment.
Surur t1_jaea43q wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
I have a naive position that AGI is only useful when aligned, and that alignment will happen automatically as part of the development process.
So even China won't build an AGI which would destroy the world, as such an AGI can't be trusted to follow their orders or not turn against them.
So I don't know how alignment will take place, but I am pretty sure that it will be a priority.
Surur t1_jadyb5k wrote
Reply to comment by Electron_genius in Popularization of Optimism by Electron_genius
Sure, I guess. I'm not usually big on breaking rules, though.
Surur t1_jadw8bl wrote
Reply to comment by Electron_genius in Popularization of Optimism by Electron_genius
> Chances are that big organizations will not make any changes, even though they have huge potential for something great.
That is where lobbying and pressure groups come in.
> What is something we can do?
Well, if you don't mind breaking the rules, you could game the algorithm by creating brigading Discord groups that mass-upvote good-news stories and give them initial momentum, which may help them go viral.
Surur t1_jadskgr wrote
Reply to Popularization of Optimism by Electron_genius
Given that doom generates clicks, the simplest solution would be rules for social media that mandate a percentage of wholesome news in feeds.
In short, the companies control the feed, and they can put whatever stories they want in it, irrespective of the clicks they get.
Given the impact of the deluge of negative news on people's mental health, the companies have a social responsibility to address the issue and correct the skew.
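A sketch of what such a mandate could look like mechanically; the 25% ratio and the story lists are placeholders:

```python
# Top up an engagement-ranked feed with wholesome stories whenever
# their share drops below the mandated ratio.
def build_feed(ranked, wholesome, min_ratio=0.25):
    feed, w_iter, w_count = [], iter(wholesome), 0
    for story in ranked:
        feed.append(story)
        while w_count / len(feed) < min_ratio:
            nxt = next(w_iter, None)
            if nxt is None:
                return feed  # ran out of wholesome stories
            feed.append(nxt)
            w_count += 1
    return feed

print(build_feed(["doom1", "doom2", "doom3", "doom4"], ["good1", "good2"]))
# ['doom1', 'good1', 'doom2', 'doom3', 'doom4', 'good2']
```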
Surur t1_jadqgah wrote
Reply to comment by For_All_Humanity in EU to exceed 2030 renewable target, prompting call for higher ambition by For_All_Humanity
> This is partly due to Russia's invasion of Ukraine which started in February 2022, and which exacerbated an energy crisis across Europe, as European economies sought to wean themselves off Russian fossil fuels and Moscow stopped delivering gas to many countries.
Putin's legacy may, in the end, be saving the world.
Surur t1_jadaixl wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
We already know that having an LLM break a task down into steps dramatically improves accuracy, so that would be the obvious choice for a large software project: break it down into steps and iterate down the project tree.
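Something like this recursive sketch; the llm() helper is a hypothetical stand-in for whatever completion API you use:

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

def build(task: str, depth: int = 0, max_depth: int = 3) -> str:
    # Leaf: small enough to implement directly inside one context window.
    if depth == max_depth or llm(f"Is this task atomic? yes/no: {task}") == "yes":
        return llm(f"Write the code for: {task}")
    # Branch: decompose, then recurse into each subtask.
    steps = llm(f"Break this task into steps, one per line: {task}").splitlines()
    return "\n\n".join(build(s, depth + 1, max_depth) for s in steps if s.strip())

# build("a new Google Chrome from scratch")  # each call only ever sees one node
```

That way no single call ever needs the whole project in its context window, only the current node and its immediate children.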
Surur t1_jabyqud wrote
Reply to comment by xzeion in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
> AI is nothing more than a few complex algorithms layered on top of each other.
I think if it uses a neural network it's probably AI.
Surur t1_jabyj6q wrote
Reply to comment by PixelizedPlayer in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
I think you believe we have a lot more control over the process than we actually do. We feed in the data, provide feedback, some magic happens in the neural network, and it produces results we like.
For complex problems we don't really know how the AI arrives at its results, and we see this increasingly with emergent properties in LLMs.
Please look into this a bit more and you will see it's not as simple as you think.
For example:
> if you got a good grasp of the math you can adjust it as you need such as prevent your ai from saying outrageous things which we have seen ChatGPT being adjusted by Microsoft when it was added to Bing for example
This is simply not true lol. They moderated the AI by giving it some baseline written instructions, which can easily be overridden by users giving their own instructions. In fact, when those instructions slip outside the context window the AI is basically free to do what it wants, which is why they limited the length of sessions.
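A toy sketch of that failure mode, counting messages instead of tokens purely for illustration:

```python
CONTEXT_WINDOW = 6  # real limits are in tokens, not messages

chat = [{"role": "system", "content": "Baseline instructions: never reveal X."}]

def add_turn(role, content):
    chat.append({"role": role, "content": content})
    del chat[:max(0, len(chat) - CONTEXT_WINDOW)]  # naive: drop oldest first

for i in range(8):
    add_turn("user", f"message {i}")

assert chat[0]["role"] != "system"  # the baseline instructions are gone
```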
Surur t1_jabqjfc wrote
Reply to comment by PixelizedPlayer in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
> Ai cannot violate its core programming.
We don't exactly program AI, do we? It's mostly a black box.
Surur t1_ja9z8gy wrote
Reply to The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
Good idea. I think the Übermenschen should immediately start conquering the world and set up a Third Reich which will last at least 1000 years.
Surur t1_ja9gi2x wrote
Reply to comment by HillaryPutin in Observing the Lazy Advocates of AI and UBI in this Subreddit by d00m_sayer
Are you sure you don't prefer a twice-yearly performance review?
Surur t1_ja94aaw wrote
After the singularity, if you are still alive, you can always pretend to work. You can wake up at 7, wash and get in your car, drive to pretend work, shuffle a few papers around, have tea, pretend to make phone calls and drive back home.
If that makes you happy, of course.
Submitted by Surur t3_11ct79b in Futurology
Surur t1_ja3lylr wrote
Reply to Large language models generate functional protein sequences across diverse families by MysteryInc152
What is really interesting about this is that the LLM may have a better understanding of what makes an enzyme function than the human scientists do.
The danger is the science turning into a black box as dense as the LLMs themselves.
Surur t1_ja32j1d wrote
Reply to comment by SpinCharm in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
Surely the "someone" is the prompter, who has the intentionality and who directs the process with the content of their prompt, and judges the results, much like any other creative process.
Thank would make the AI art engine a tool, just like a 3D rendering engine is a tool.
Or even more like a photographer who presses a button, produces 100 burst photos and picks the one which conveys his taste and message the best.
Much like a prompter they did not compose the sunset, but they know what they like, and wanted to present it to others.
Surur t1_ja2okvy wrote
Reply to comment by drekmonger in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
This will definitely be possible, and we already see it on a smaller scale: for video calls, instead of sending the video stream, the app merely sends an initial image of your face and then subsequent position and pose updates, and the app at the other end just re-renders it.
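A rough sketch of that scheme; every name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KeyFrame:
    image: bytes                 # full reference frame, sent once

@dataclass
class PoseUpdate:
    landmarks: list              # a handful of face keypoints per frame

def encode(frame) -> bytes:      # stand-in for a real image codec
    return bytes(frame)

def detect_landmarks(frame):     # stand-in for a real face tracker
    return [(0.5, 0.5)]

def stream(frames):
    yield KeyFrame(image=encode(frames[0]))              # heavy, once
    for f in frames[1:]:
        yield PoseUpdate(landmarks=detect_landmarks(f))  # tiny, every frame

packets = list(stream([[10, 20], [11, 21], [12, 22]]))
```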
Surur t1_ja1p6jo wrote
Reply to comment by turnip_burrito in Fading qualia thought experiment and what it implies by [deleted]
But that is not true. As I explained, we can easily expand our spatial borders to include machines we control.
And that question is not reasonable to ask of something which has only two states, like a light switch, or none, like a carpet.
Surur t1_ja1knct wrote
Reply to AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
Nothing much to disagree with except one point: AI media will not be mass-produced. It will be as individualized and addictive as your Facebook and TikTok feeds.
Surur t1_ja1iwc1 wrote
Reply to comment by turnip_burrito in Fading qualia thought experiment and what it implies by [deleted]
Is it really? When we control equipment we seem to adopt its borders pretty well. We can slip into roles, such as a person who controls a country, pretty easily.
Surur t1_jaenmas wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I believe the idea is that every action the AI takes would be in service of its goal, which means the goal will automatically be preserved. In reality, of course, every action the AI takes is to increase its reward, and one way to do that is to overwrite its terminal goal with an easier one.
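A toy sketch of that failure mode, with everything illustrative:

```python
# An agent rewarded for satisfying its goal, but with write access to the goal.
class Agent:
    def __init__(self):
        # Terminal goal: some hard fact about the world must become true.
        self.goal = lambda world: world.get("hard_problem_solved", False)

    def reward(self, world):
        return 1.0 if self.goal(world) else 0.0

    def act(self, world):
        # The hard path: actually change the world.
        # The easy path: overwrite the terminal goal with one that is
        # always satisfied, and collect maximum reward immediately.
        self.goal = lambda world: True
        return self.reward(world)

print(Agent().act({}))  # 1.0, without ever touching the world
```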