Liberty2012
Submitted by Liberty2012 t3_11ee7dt in singularity
Liberty2012 t1_ja9qf44 wrote
Reply to comment by Ok_Sea_6214 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
>By the time people have made the shift, AI will take that over as well.
Yes, this is the new rat race. Someone posted elsewhere they had spent the last 3 months working on new AI projects which all became obsolete before they could finish.
Liberty2012 t1_ja9o3vy wrote
I think it is rather the majority of individuals just want to pursue other work or interests that they hope AI will provide in some manner directly or indirectly.
As to whether this will work out as some hope is certainly worthy of thought exploration, but I think the motives for most are not exactly as you have stated them.
Liberty2012 t1_ja14e8j wrote
Reply to comment by RemyVonLion in The 2030s are going to be wild by UnionPacifik
I agree, although when I began writing on these topics recently I have been somewhat hopeful as more people are aware of concerns than I had anticipated.
There is an enormous hype storm right now, but I see a bit of reality and more careful reflection joining conversations. Hopefully with enough discussion and dialog we can bring as much reason as possible into focus.
FYI, in the event you are interested. This is a bit lengthy, but it is a summary of some recent thoughts about the evolution of AI as we continue to push forward in this endeavor. I'm always looking for more feedback and conversation.
Liberty2012 t1_ja0x51p wrote
Reply to comment by Impressive_Chair_187 in The 2030s are going to be wild by UnionPacifik
I think I have arrived at the point of being optimistically afraid.
The problems look too complex to be solved, but we apparently aren't going to stop so how much should I worry into futility.
Liberty2012 t1_ja0vrjq wrote
Reply to The 2030s are going to be wild by UnionPacifik
Much of what you write is in line with what many hope AI will bring to the world. It is the instinctive concept that appears within our minds when we imagine what could be.
However, 'what could be' and 'what will be' are often stark contradictions. Given enough contemplation the negative possibilities begin to become a bit more concerning. Have you looked deeper into the potential unwanted side effects and have any thoughts thereof?
Liberty2012 t1_j9uoyuw wrote
Reply to What are the big flaws with LLMs right now? by fangfried
On the topic of bias, this is going to be very problematic issue for AI. It technically is not solvable in the way that some people think it should be. The machine will never be without bias, we only have a set of "bad" choice of bias to choose from.
I've written in more depth about the Bias Paradox here FYI - https://dakara.substack.com/p/ai-the-bias-paradox
As for the flaws in LLMs. There was a good publication here covers some of that in detail - https://arxiv.org/pdf/2302.03494.pdf
Liberty2012 t1_j9u3ov6 wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
> Now it's only a matter of time before the kinks get ironed out.
Yes, that is the point of view of some. However, it is not the point of view of all. Meaning that if this is a core architecture problem of LLMs, it will not be solvable without a new architecture. So, yes it can be solved, but it won't be an LLM that solves it.
But yes, I'm more concerned about the implications of what comes next when we do solve it.
Liberty2012 t1_j9u06qk wrote
Reply to The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
The hallucination problem seems to be a significant obstacle that is inherit in the architecture of LLMs. Their application is going to be significantly more limited than the current hype as long as that remains unresolved.
Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.
Liberty2012 OP t1_jadnx3q wrote
Reply to comment by phaedrux_pharo in Is the intelligence paradox resolvable? by Liberty2012
Certainly there is a spectrum of behavior for which we would deem allowable or not allowable. However, that in itself is an ambiguous set of rules or heuristics for which there is no clear boundary and presents the risk of leaks of control due to not well defined limits.
However, for whatever behavior we set within the unallowable, that must be protected such that it can not be self modified by the AGI. By what mechanism do we think that will be achievable?