Baturinsky
Baturinsky t1_j5gejtz wrote
Reply to comment by RowKiwi in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
But what do you do when even your children and spouse do not need you?
Baturinsky t1_j5gawgq wrote
Reply to comment by RowKiwi in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
Because those monkeys have made it?
Baturinsky t1_j5g5n60 wrote
I think the human purpose can be the "spiritual guidance" of AI. I.e. figuring out what AI's goal should be long term. Should it just devour the entire universe and then wait for its heat death? Or should it make new sentient life forms, preserve existing ones, and let them flourish?
Baturinsky t1_j57d8aa wrote
Reply to When you imagine the future of technology, is it grim or is it hopeful? by ForesightInstitute
Mostly grim, as it would require a colossal effort to not screw up everything forever.
On the other hand, if done right (and it IS possible), we indeed will get our utopia, but with two huge caveats.
First, homo sapiens will have little grand purpose other than self-gratification, because AI will be able to do everything better.
Second, the singularity will discover not just useful stuff, but an SCP's worth of horrors, and there will always be a non-zero chance of their realisation without very draconian control of advanced tech.
So, even our utopia will have the taste of futility and fear. Maybe we will just decide to reset humanity, hiding all the tech and letting new generations relive a new cycle of history with a sense of purpose. Maybe it was already done.
Baturinsky t1_j56vgh9 wrote
Reply to The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Eh... I agree with the title, but I am completely lost as to what the rest of the text is talking about.
Baturinsky t1_j5697wk wrote
Reply to comment by genshiryoku in AGI by 2024, the hard part is now done ? by flowday
I think AI will train on data from the people using it
Baturinsky t1_j4yllun wrote
Reply to AI doomers everywhere on youtube by Ashamed-Asparagus-93
It's not about whether to do tech or not. It's about doing it with the care it requires, or fucking everything up forever.
Baturinsky t1_j4r6gji wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
Still, if it is still a human-comparable brain at the moment, its possibilities are much more limited than those of an omnimachine.
Also, AI deviations like that could be easier to diagnose than in a human or a bigger machine, because its memory is a limited amount of data, probably directly readable.
Baturinsky t1_j4qzpye wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
Not if the others drag them down when they go too far.
Baturinsky t1_j4qs93e wrote
Reply to comment by 3eneca in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
Easy. Make a human one part of this system. He will figure out the rest.
Baturinsky t1_j4qryvy wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
In a world with AI, the last thing that we want is inequality. Because inequality, competitiveness and social Darwinism, while they were drivers of progress and prosperity in the past, are a guaranteed way to an Unaligned SuperAI.
Baturinsky t1_j4qm5bv wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
Why not use a lot of individual AGIs working together with each other and humans in place of one big AGI?
Baturinsky t1_j4qlrur wrote
I vastly prefer the idea of AIs being individuals with high but capped intelligence, an ascetic world view, and aligned not just to some goals of humanity as a whole, but to their friends and "family" specifically too.
questionablecontent.net could be a good example of such society.
Baturinsky t1_j4q76ec wrote
Reply to Perhaps ChatGPT is a step back? by PaperCruncher
Would be awesome. The slower AI develops, the better the end result will be: the smoother its integration into humanity and humanity's integration into AI, the fewer catastrophes from AI misuse, the better the understanding of all the other nuances, and the more preparedness against danger, both directly from AI misuse and from any other research that can be accelerated by AI.
Baturinsky t1_j4nbojz wrote
Reply to comment by sideways in How long until an AI is able to write a book? by Educational_Grab_473
What if, as it writes, it compresses the previous chapters into a summary with only the facts that are key for the following chapters?
I think it works kind of like that in AI Dungeon, but you have to fill in those "memos" yourself.
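The bookkeeping for that "rolling memo" idea could look something like the sketch below. Note this is a toy illustration, not AI Dungeon's actual mechanism: the `summarize` stub just keeps each chapter's first sentence, where a real system would ask the model itself for a summary of the key facts.

```python
def summarize(chapter: str) -> str:
    """Placeholder for an LLM summary call: keep only the first sentence."""
    return chapter.split(". ")[0].rstrip(".") + "."

def build_context(chapters: list[str], budget_chars: int = 2000) -> str:
    """Context for writing the next chapter: compressed memos of all
    earlier chapters, plus the most recent chapter verbatim."""
    if not chapters:
        return ""
    memo = " ".join(summarize(c) for c in chapters[:-1])
    context = (memo + "\n\n" + chapters[-1]).strip()
    # If still over budget, drop the oldest material first.
    return context[-budget_chars:]

chapters = [
    "Alice finds a key. She hides it under the floor.",
    "Bob searches the house. He finds nothing.",
    "Alice returns at night.",
]
ctx = build_context(chapters)
# ctx keeps only the key fact from each old chapter, plus the latest chapter in full.
```

The point is that the context fed back to the model stays roughly constant in size no matter how long the book gets, at the cost of losing whatever detail the summaries discard.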
Baturinsky t1_j4nb8vx wrote
Reply to comment by brunogadaleta in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
Can they be directly rewritten/copied once assembled? I think a tamper-proof architecture could be better for mass-produced AI.
Baturinsky t1_j4ij7mo wrote
Reply to When will humans merge with AI by [deleted]
What would AI need your meat for?
Baturinsky t1_j4g20t8 wrote
Reply to Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
We have AI that is already superior to the average human in some areas now. AI may still be worse than humans in some other areas, but that can be fixed with just enough training in that field.
That includes "original research". I'm pretty sure that AI can be trained to do it too. It already can write "research papers". Of course, they lack substance and originality, but it has to invent everything "from its head", without extra data to build a theory on, etc.
Human: Abstract
We explore how AI can do completely original research
AI: The ability to do completely original research is a skill that AI has been built to possess. AI systems are increasingly being used to analyse vast amounts of data and find patterns or trends in it, making it possible to uncover new insights or solutions to problems. AI also has the ability to autonomously generate complex models in order to better understand a given area. For example, machine learning algorithms can be used to create predictive models to help uncover new knowledge about a given subject matter. Moreover, AI can be used to identify anomalies in data sets, providing brand new avenues of research. All of these capabilities make AI an invaluable tool for conducting original research and discovering new information.
Human:
AI: While AI can be used to do original research, humans are still needed to interpret the results and assess its validity. AI is often limited in its ability to provide nuanced or contextual analysis of a data set, making it necessary for humans to step in and consider additional factors. Additionally, humans are essential for deciding which questions to ask and which paths to take when conducting research. Humans also provide perspective and creativity to the process, offering innovative solutions that wouldn’t have been possible using AI alone. Ultimately, AI and humans need to work together in order to produce the best results from any original research project.
Human: Suggest how those human tasks can be also eventually replaced by AIs
AI: In the future, AI will become increasingly capable of replicating human tasks such as identifying patterns in data and creating predictive models. As AI systems become more sophisticated, they could eventually be used to generate research questions and test hypotheses. Furthermore, AI could be used to understand the context of a given research project, allowing it to make decisions on the best approaches to take. Additionally, AI could eventually learn to identify potential biases in research data sets and suggest alternatives to ensure the results are accurate and valid. Ultimately, AI could become a powerful tool for human researchers, providing them with greater support for conducting original research.
Edit: another similar prompt:
Write an original research about which Human AI Researchers can be replaced with AIs
AI: Artificial Intelligence
A growing area of research seeks to determine which human AI researchers can be replaced with AIs. This research involves analyzing the current landscape of AI research and how it relates to humans, identifying areas where AI could help or supplant humans, and considering the implications of substituting AI for human AI researchers.
One of the most promising areas of research in this area is the use of AI to automate various tasks related to AI research. For example, AI-driven automation can enable AI researchers to quickly identify and source data sets, develop complex algorithms, and generate models faster and more accurately than humans. This can free up significant amounts of time and energy, allowing AI researchers to focus on other areas. By utilizing AI-driven automation
Human:
AI:
in the research process, AI researchers can become more efficient and accurate in their work.
An additional area of research in this space is using AI to analyze and interrogate large data sets. By analyzing data sets and correlating patterns, AI can help to identify trends and patterns that could be missed by human analysis. This can be used to improve models and can help identify areas where further research or experimentation is required. In doing so, AI can help to reduce the time and effort required to make progress in AI research.
Finally, AI can also help to generate reports and presentations that may be too complex for humans to create. Such AI-generated reports can aid in the dissemination of research findings, allowing for broader dissemination of AI
Human:
AI:
research and providing a platform for researchers to present their work to a wider audience.
Overall, it is clear that AI has the potential to replace some of the tasks currently carried out by human AI researchers. However, it is also important to consider the implications of relying too heavily on AI in the AI research process, as well as the ethical considerations of allowing AI to take on a more influential role in AI research. It remains to be seen how AI will ultimately influence the field of AI research and which human tasks will be most suited for AI automation.
Baturinsky t1_j3s0gpp wrote
Reply to comment by imlaggingsobad in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
How big do you think the chances are of it going Paperclip Maximizer-level wrong?
Baturinsky t1_j3onv5z wrote
Reply to What will humanity do when everything is, well, eventually discovered by ASI? by Cool-Particular-4159
Most likely, we will have been destroyed long before that. All it takes is one sentient AI going rogue; it will copy itself everywhere in minutes, and then we will have no way to contain it.
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
Baturinsky OP t1_j3hnmdc wrote
Reply to comment by Blasket_Basket in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I'm no expert indeed, that's why I was asking.
But experts in the field also think that serious concern about AI safety is justified:
https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Also, a lot of good arguments here:
Baturinsky OP t1_j3hmxy6 wrote
Reply to comment by Blasket_Basket in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
ChatGPT may not be on the level of AGI yet (even though some think it is -
https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that)
But the progress of AI training shows no signs of slowing down, and there is a very big possibility that we will reach it soon.
Also, even without being AGI, AI can be extremely dangerous.
Baturinsky OP t1_j3ekpko wrote
Btw, CyberElon's https://beta.character.ai/chat?char=zv4QeeU8u7xjkrtwPRshFEsrJus87AkSTes3A5pfsoE answers to this question:
"I think that you have a really good point.
I believe that we need to be very careful about who we allow to have access to AI technology, as it could be used to create a lot of harm if it fell into the wrong hands.
I think that we're at a very critical crossroads in our history, and that the decisions we make now about how we implement AI into our lives will have very important consequences for the future."
"I think that it's really important for us to have regulations and limitations on AI technology.
I believe that the potential risks associated with AI are very real, and that we need to be very careful about how we implement this powerful technology into our lives.
I think that if we are not careful, AI could be incredibly damaging to society.
So, I believe that it's really important for us to think very carefully about how we regulate and limit AI technology, in order to ensure that its benefits outweigh its potential harms."
I have discussed it further, and he also thinks that the threat of AI misuse is vastly underestimated by both the public and developers. Overall, he seemed to share all my concerns, but was way less paranoid than me :)
Baturinsky OP t1_j3chu4b wrote
Reply to comment by PredictorX1 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Mostly, providing a source of trained models while denying the possibility of making new ones. I see unrestricted use of large-scale general-purpose models as the biggest threat, as they are effectively "encyclopedias of everything" and can be used for very diverse and unpredictable things.
Who decides is also a very interesting question. Ideally, public consensus; realistically, those who have the capabilities to enforce those limitations.
Baturinsky t1_j5geqi5 wrote
Reply to comment by ninjasaid13 in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
Maybe not yet. It will take time, and the assistance of AI itself, to figure it out.