No_Ninja3309_NoNoYes
No_Ninja3309_NoNoYes t1_j9xfrml wrote
OpenAI is just trying to generate hype now. This could mean that they need to find more investors. When companies start doing that, it's usually a bluff. They probably realised that getting good clean data is going to get exponentially harder, so they have to pay humans to help them acquire the data somehow.
No_Ninja3309_NoNoYes t1_j9xf0hv wrote
It's not that people lack imagination. They just imagine different things than you. Most of them don't imagine a world with AI.
No_Ninja3309_NoNoYes t1_j9vlwyv wrote
Reply to What are the big flaws with LLMs right now? by fangfried
Some LLMs are not trained with the right number of parameters or the right learning rate. But the static nature of LLMs is the biggest problem. You need neuromorphic hardware and spiking neural networks to address that. In the meantime I think quick fixes will be attempted, such as forward 2x passes. My friend Fred says that just adding small random Gaussian noise to the parameters can also help. Obviously human brains are very noisy but somehow very efficient too.
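The noise idea Fred mentions can be sketched in a few lines of plain Python. This is only an illustration of perturbing a parameter list with small Gaussian noise, not anything any lab actually does:

```python
import random

def perturb(params, sigma=0.01, seed=0):
    """Return a copy of the parameter list with small Gaussian noise added."""
    rng = random.Random(seed)
    return [w + rng.gauss(0.0, sigma) for w in params]

weights = [0.5, -1.2, 0.03]
noisy = perturb(weights)
# Each weight shifts by roughly +/- sigma; the overall shape is preserved.
```

With a fixed seed the perturbation is reproducible, which is how you would compare noisy and clean runs in an experiment.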
No_Ninja3309_NoNoYes t1_j9t153x wrote
Reply to What do you expect the most out of AGI? by Envoy34
- FDVR
- Real-time automatic translation, like in Star Trek
- Some sort of recommendation engine/wizard for all situations, mostly social interactions I think
- A tool that does planning
- A tool that gathers news
- A tool that takes care of my financial situation, if money is still relevant
- A tool that takes care of my health
No_Ninja3309_NoNoYes t1_j9sn350 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Need neuromorphic hardware, spiking neural networks, and quantum computers. Even if qubits double every two years, it will take a while. GPT is just static parameters; you need some way to constantly update them. Anyway, an LLM is just one of thousands of required systems. We don't have thousands of labs doing all the required projects; they are all doing more or less the same thing. We are nowhere near that point.
No_Ninja3309_NoNoYes t1_j9psywn wrote
Reply to How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy
It depends. If you have a fact with a certain probability p of being true, it could have been confirmed n times. The standard error, if I remember correctly, is the square root of p(1-p)/n. So say p = 0.9: the margin of error is sqrt(0.9 * 0.1 / n). You want n to be high to get a low standard error.
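That formula is easy to check numerically; here is a minimal sketch of it in Python:

```python
import math

def standard_error(p, n):
    """Standard error of a proportion p that has been confirmed n times."""
    return math.sqrt(p * (1 - p) / n)

# With p = 0.9 the error shrinks as n grows:
#   n = 9   -> 0.1
#   n = 900 -> 0.01
```

So every 100x increase in independent confirmations buys you one extra decimal digit of confidence.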
No_Ninja3309_NoNoYes t1_j9njjqj wrote
Reply to Can someone fill me in? by [deleted]
AGI would be like us but with extra ability. ASI will be able to do much more. AGI could mean lost jobs and high suicide rates. ASI could mean mass extinctions.
No_Ninja3309_NoNoYes t1_j9niq88 wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
We can have AGI in three years, but if we get there that fast it won't be a good AGI. If we take our time, it will be a proper AGI. The road to ASI will then be full of ANI and AGI. The first thing we call AGI will probably be nothing like real AGI. However, AGI is nothing like ASI or ANI.
My friend Fred says that LLMs will be nothing like XLLM, and XLLM will be nothing like SLLM. For one, XLLM will likely use forward 2x passes instead of backprop, and SLLM will have spiking neural networks.
IMO SLLM will be part of AGI. ASI would be too weird to even imagine. AGI would require quantum computers and ANI to operate the quantum computers, with Winograd FFT. ASI could use something wilder than quantum computing.
No_Ninja3309_NoNoYes t1_j9ngnli wrote
The Asian science institute exists.
No_Ninja3309_NoNoYes t1_j9li35x wrote
Reply to Ramifications if Bing is shown to be actively and creatively skirting its own rules? by [deleted]
So basically GPT3+ is like a microbrain that is as good as dead. Human brains have way more 'parameters' and they change in value all the time. Otherwise it would be hard to have an original thought. So the ramifications are pretty trivial.
No_Ninja3309_NoNoYes t1_j9jtiy0 wrote
Reply to A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
Static parameters are meaningless. Human brains are not static until after death. Besides modeling reality requires more than a bit of algebra.
No_Ninja3309_NoNoYes t1_j9ix5p4 wrote
32K context is the new 640K RAM. The bigger the model, the more resources you need to support it and the more expensive it gets, without any guarantee about the quality. For example, ChatGPT would produce code like:
int result = num1 + num2; return result;
That's not technically wrong in itself, but it is unnecessarily long. Any static analysis tool would have nagged about this. Also, unit tests or compilers would have caught any actual errors. The OpenAI culture is one of PhDs with a certain background. They work in Jupyter notebooks and don't know about standard dev tools.
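To make the nitpick concrete, here is the same pattern rendered in Python (function names made up for illustration). A linter would flag the needless temporary and suggest the one-line form:

```python
def add_verbose(num1, num2):
    # The intermediate variable adds nothing; linters flag this pattern.
    result = num1 + num2
    return result

def add(num1, num2):
    # The concise form a static analysis tool would suggest instead.
    return num1 + num2
```

Both behave identically; the complaint is purely about verbosity, which is exactly what automated tooling is good at catching.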
My friend Fred says that he can add value with his code generation startup because of that. I also think that LLMs and more traditional technology combined are the way to go.
No_Ninja3309_NoNoYes t1_j9erd4f wrote
Reply to Does anyone else have unrelenting hope for the technological singularity because they’ve lost faith in everything else? by bablebooee
Hey now! If you really feel this way, you should talk to someone in person. Most people who have problems don't talk about them and seem happy on the outside. I think that is a big mistake!
No_Ninja3309_NoNoYes t1_j9e9g1u wrote
People are interested in sports, gossip, politics and maybe seven other things. AI is never one of them. My friend Fred says that the early internet was like that too. IMO ChatGPT is like the Java applets of those days. Now obsolete because of better technology. However, the number of internet users doubled each year back then. According to what I have read online ChatGPT has 100 million daily users. This is much more than the number of web users in the beginning of the internet.
So Fred says that we will get a 10x boost in performance by EOY. It could come from model compression or something else. If Bard, Claude, Ernie, and the other bots are any good, we could expect a billion daily AI users. Fred wants to do a code generation startup in one programming language. It's a niche product, but he says that we can have a million users. Obviously there are companies that can do the same, but on the other hand Twitter started out as a basic app and look where it is now.
No_Ninja3309_NoNoYes t1_j971wsr wrote
Goebbels was very intelligent. Intelligence and ethics are not the same thing. Whether you can create Frankenstein's monster with HIA, IDK. But I think HIA is less developed than AI. However, with AI able to reason about proteins, one day pills could be made that can give us near genius abilities. Imagine giving the pills to an army. Suddenly every conscript could become a Napoleon. I think that this scenario is more likely than what you describe.
No_Ninja3309_NoNoYes t1_j96v1q7 wrote
Reply to Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
This is great, almost like telekinesis. Next century, people might receive implants directly after birth. I think being a cyborg has many advantages. You can participate in a hive mind, learn faster, and communicate faster.
No_Ninja3309_NoNoYes t1_j95k1m5 wrote
Reply to Update on Deepmind’s Gato? by Sharp_Soup_2353
That's reinforcement learning, right? My friend Fred says that RL is more fragile than supervised learning; it has to do with the flexible nature of RL. It's good enough for some games, though.
No_Ninja3309_NoNoYes t1_j94y51w wrote
Reply to Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
They could have one if they want. You only need 40M dollars to buy a thousand A100s. They might already have them. Or they could be paying OpenAI to help.
Palantir can predict protests based on social media. I'm sure it works a bit like Bing. You say 'Hi, what is up?' It says 'There could be a riot in X soon.' Replace social media with reports from commanders in the field and you can do something similar. The system can say 'I think there's a major enemy offensive in Y'.
My friend Fred says that the rules don't apply to the military. They can do whatever they want whereas civilians have to worry about regulations. But that has never stopped anyone for long.
No_Ninja3309_NoNoYes t1_j8tmvw3 wrote
Reply to What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
IDK... I tend to be rude to my computer. Could get banned for this. I shared an office with someone who kicked his computer when he was frustrated. And then there's ambient noise...
No_Ninja3309_NoNoYes t1_j8ozkmh wrote
I am not a firm believer in the literal technological singularity. Moore's law and the knowledge of human brains currently don't really support it. Quantum computers might change that.
But if I look at my friend Fred, he is excited. I'm also excited, but not as much. There was a German company that we thought would bring a ChatGPT-like bot in 2020, but that didn't happen. So it looks like you can't give it too much freedom. You have to guide it. This makes generating code currently the best option because it follows rigid rules. Will this code lead to a self-reinforcing feedback loop? There's no way to tell.
No_Ninja3309_NoNoYes t1_j8n6me8 wrote
My friend Fred wants to focus on code, preferably one programming language. The plan for his possible startup is to do unit tests, correctness proof, and linters to assure quality.
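Fred's quality plan could be sketched as a minimal pipeline. This is a hypothetical toy, not his actual design: syntax-check the generated code (the cheapest linter-like stage), load it, then run a unit test against it. Correctness proofs would be a further stage.

```python
import ast

def passes_quality_gate(source, test):
    """Toy quality gate for generated code: parse, load, unit-test.

    `test` receives the executed module namespace and raises
    AssertionError on failure.
    """
    try:
        ast.parse(source)  # cheap first stage: is it even valid Python?
    except SyntaxError:
        return False
    namespace = {}
    exec(source, namespace)  # load the generated definitions
    try:
        test(namespace)  # unit-test stage
    except AssertionError:
        return False
    return True

generated = "def double(x):\n    return x * 2\n"

def unit_test(ns):
    assert ns["double"](3) == 6

ok = passes_quality_gate(generated, unit_test)  # True for this snippet
```

The point of layering the stages is that each one is cheap to run and rejects a different failure mode: garbage syntax, crashes on load, and wrong answers.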
Related to LLMs, I have been thinking about news, tweets, and blogs. Pictures would work too I think. But killer apps tend to be video related these days like YouTube and TikTok. So you need an intermediate step to get some text.
No_Ninja3309_NoNoYes t1_j8f0s4i wrote
Reply to Is society in shock right now? by Practical-Mix-4332
The world is burning. By the end of the year, ChatGPT will be ten times faster. Bing, Bard, Claude, Ernie, Galactica, Baba will take over. Soon they will be 100x more powerful, meaning less than a second per request. Speech recognition will be so good that you only need to mouth the words and the AIs will create games and books and software and art for you. My friend Fred says that he will quit his job when the new Nvidia GPUs become available. They are basically the replicators of Star Trek. You don't need anything else.
No_Ninja3309_NoNoYes t1_j8ebgx0 wrote
If you follow the bread crumbs back, you will find artificial neural networks decades ago, but computers were slow and had megabytes of memory. Data points in the past offer no guarantee for the future. Even if you can stack neural layers as though they were dirty dishes, you are just doing statistics. Which is fine but there are many different methods to reason that would work better.
No_Ninja3309_NoNoYes t1_j9xqanz wrote
Reply to I am truly both entertained and terrified... let me explain by Otherwise-Ad5053
You can't predict a pandemic. And you can't predict the singularity. Even if you could, there is no way to prepare. The AI models are growing rn. But there is no guarantee that they will continue growing. Quite the opposite actually. Everything has a limit, but it will take time to reach it.