No_Ninja3309_NoNoYes t1_j89s90r wrote
Reply to Preparing everyone for the singularity by cocopuffs239
You can't prepare for the singularity. If you believe in a collapse, you can prepare for that. I personally don't. Even if it were certain, I can't be bothered. You can save, invest, and be frugal, but if UBI arrives, it will all have been for nothing. My friend Fred says that the singularity is like nuclear fusion, always decades away. I don't really know. An infinite rate of increase seems impossible because of physical constraints.
No_Ninja3309_NoNoYes t1_j7yxtlj wrote
Reply to The copium goes both ways by IndependenceRound453
Well, IDK. I can't speak for other people. My friend Fred says that as long as he meets new people and learns new things, he's happy. Others say that they like to travel. I mean, if you have time and money, you can do all that. I think I was happiest when I was much younger, during summer holidays, playing outside with my friends. But if I did that now, I would look silly. Adults are supposed to work and add value. The moneybags won't even talk to you if you are not helping them in some way. And why would they give up their way of life and privilege? Anyway, IMO ChatGPT is to the eventual language component of AGI as early Java applets were to AJAX. Or something like that. So I think we're having a premature discussion.
No_Ninja3309_NoNoYes t1_j7kblef wrote
10K LoC? Sure, if someone writes hundreds of supporting toolkits for that first. My friend Fred says that the pseudocode for better LLMs is just a few lines:
- Use AI to generate candidate rules like P(People eat sandwiches) >> P(Sandwiches eat people)
- Hire lots of humans. Get them to review the candidates from step 1 and produce corrected rules like P(Sandwiches eat people) = 0.
- Feed the corrected rules back to the AI from step 1
So let's say you need one cent for each rule, for a total of a billion rules. With a thousand workers each producing 100K rules a year... it's doable for a billionaire. And you need seven similar schemes for the other types of data. However, I think AGI is not feasible within a decade. The hardware, software, data, and algorithms are not ready yet.
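To put numbers on Fred's scheme, here is a rough back-of-envelope in Python. The per-rule price, worker count, and throughput are the assumptions from this comment, not real figures:

```python
# Back-of-envelope cost and time for the human-curated rule scheme.
# All numbers are the assumptions from the comment above, not real figures.

RULES_NEEDED = 1_000_000_000          # a billion rules to verify
COST_PER_RULE = 0.01                  # one cent paid per verified rule
WORKERS = 1_000                       # hired annotators
RULES_PER_WORKER_PER_YEAR = 100_000

total_cost = RULES_NEEDED * COST_PER_RULE
rules_per_year = WORKERS * RULES_PER_WORKER_PER_YEAR
years_needed = RULES_NEEDED / rules_per_year

print(f"Labeling cost: ${total_cost:,.0f}")            # $10,000,000
print(f"Throughput: {rules_per_year:,} rules/year")    # 100,000,000
print(f"Time needed: {years_needed:.0f} years")        # 10 years
```

So roughly $10 million and ten years of labeling per data type, or about $70 million across the seven schemes. Affordable for a billionaire, but slow.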
No_Ninja3309_NoNoYes t1_j7jy3lr wrote
Reply to Who do you think will have a better/more popular AI search assistant, Google or Microsoft? by HumanSeeing
It's going to be expensive. I grabbed a huge envelope. Even assuming one cent per request, Google would be paying millions of dollars per hour, tens of millions per day. How will they earn it back? I think they won't for several years, unless they keep the service minimal and do something clever. My friend Fred says that Facebook will surprise us with something in a few months. And they might even offer images alongside the text, albeit not at high resolution.
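Writing the envelope out, with an assumed query volume (a commonly cited ballpark of around 100K searches per second) and the one cent per request from above:

```python
# Rough estimate of what LLM-augmented search could cost per hour.
# The query rate is an assumed ballpark (~100K searches/second); the one cent
# per request comes from the comment above.

QUERIES_PER_SECOND = 100_000
COST_PER_REQUEST = 0.01  # dollars

queries_per_hour = QUERIES_PER_SECOND * 3600
cost_per_hour = queries_per_hour * COST_PER_REQUEST
cost_per_day = cost_per_hour * 24

print(f"{queries_per_hour:,} queries/hour")   # 360,000,000
print(f"${cost_per_hour:,.0f} per hour")      # $3,600,000
print(f"${cost_per_day:,.0f} per day")        # $86,400,000
```

A few million dollars per hour, tens of millions per day, before anyone clicks an ad.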
No_Ninja3309_NoNoYes t1_j75ugmk wrote
Reply to Possible first look at GPT-4 by tk854
It doesn't make any sense to me. Why go for technology that is not proven yet? The ChatGPT servers are overwhelmed with a model that is presumably simpler than GPT-4. And they are going to roll out a GA product just like that? I think they are doing whatever perplexity.ai and lexii are doing. So no code generation in Bing. Maybe in Copilot. But I think the use case will be: I want to do test-driven development. I wrote the tests, now give me the actual code. Or the opposite.
I don't believe that you can give GPT-4 a user story and have it churn out code. How would it know the business-specific terms? Product owners are really good at making them up, you know? My friend Fred says that it will come down to creating lists of sentences and feeding them to sentence transformers. That's doable. You need some GPUs and time. Hopefully this will help translate from domain-specific vocabulary to general language for GPT-4.
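A minimal sketch of the sentence-transformer idea, assuming the open-source sentence-transformers package; the model choice, the jargon, and the candidate sentences are placeholders for illustration, not anything Bing or GPT-4 actually uses:

```python
# Minimal sketch: map domain-specific jargon to general phrasing by embedding
# similarity. The glossary and model choice are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical product-owner jargon versus plain-language candidates.
general_sentences = [
    "Create a new customer account",
    "Cancel a subscription before the renewal date",
    "Export a report as a spreadsheet",
]
domain_query = "Offboard a churned subscriber prior to the billing cycle"

query_emb = model.encode(domain_query, convert_to_tensor=True)
candidate_embs = model.encode(general_sentences, convert_to_tensor=True)

scores = util.cos_sim(query_emb, candidate_embs)[0]
best_idx = int(scores.argmax())
print(f"Closest general phrasing: {general_sentences[best_idx]}")
print(f"Similarity: {float(scores[best_idx]):.2f}")
```

Building the lists is the tedious part; the embedding and lookup are cheap.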
No_Ninja3309_NoNoYes t1_j7481os wrote
I have no PhD in economics, but IMO our brains as a means of production still beat AI. Our neurons are fully optimised and don't need activation function tweaking. ChatGPT can talk the talk, but you can't show it a screenshot. It can't walk the walk.
Unfortunately, there is always someone willing to do whatever any of us can do, but cheaper. With globalisation, maximizing for cheap labor, low taxes, and high productivity has become easy. Politicians will play into these fears and go for protectionism. But in the long term that doesn't work.
Trying to do it outside corporations and governments is not feasible right now and is liable to be exploited. I mean, look at what OpenAI did to the open source community. But there's still hope, for instance more affordable small-scale models like sentence transformers.
No_Ninja3309_NoNoYes t1_j6ykf7b wrote
Reply to The next Moravec's paradox by CharlisonX
Altman is not a superhero. He can't take on the whole world. GPT models are currently too inefficient to be the road to AGI. Maybe neuromorphic hardware and spiking neural networks can do better. AI can't really deal with all use cases right now because it needs a lot of data and the world is moving too fast. Look at ChatGPT. It lags behind the world, and it is not as efficient as a search engine.
No_Ninja3309_NoNoYes t1_j6w3aa9 wrote
Reply to Let's create a super list! Drop all your favorite AI websites/tools below by intergalacticskyline
Perplexity, Lexii, Dreamily, Character, all ending in .ai. I used Wordtune in Google Docs, but it is not that good anymore. Quillbot is much better. Also Google Translate and deepl.com. I stopped using ChatGPT because it is too slow and it doesn't list references like Perplexity and Lexii do.
No_Ninja3309_NoNoYes t1_j6mxb0s wrote
It's simple, really. Look at self-driving cars. It took a long time to develop them, and they are not exactly replacing people. ChatGPT requires millions to train and run. People are now focusing on Generative AI. It will take them years to get out of this phase because of the sheer complexity and cost.
I know people who were trained as mechanical engineers, but they work in unrelated sectors. My friend Fred said that doing bespoke AI engineering inside specific companies is not going to work, due to the lack of experts and machines. But that is what it would take to make an impact. We don't have AGI yet, so you need specialized data and custom models for each company and job to do proper downsizing. Driving is one of the few exceptions, and as I said, it has not been successful yet. I don't have a PhD in economics, but I would not worry too much if I were you.
No_Ninja3309_NoNoYes t1_j6egav1 wrote
Reply to Acceleration is the only way by practical_ussy
It's simple, really. Acceleration costs energy. Energy is not free. Furthermore, technology leaves a footprint. Nothing is perfect. Technology certainly isn't.
No_Ninja3309_NoNoYes t1_j6cw969 wrote
Reply to New York Times [July, 1997] 'Computer needs another century or two to defeat Go champion' LMAOOO this is so hilarious to read looking back by Phoenix5869
There were so many AI experts trying to beat Go that they ran into many, many problems. So the lesson is that computers can get really good at one thing, provided that there are clear rules.
I think that Generative AI will crash and burn soon. I mean, look at ChatGPT. You need top GPUs running for a long time for a huge network that is not even trained on all the text on the Web. You could maybe increase the size of the network a thousand times, but you would need more than a thousand times as many GPUs. Much more. And at inference time you still need to hold all the parameters. I am afraid it will not be enough to accommodate multimodal abilities and larger context windows.
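To put the inference-time point in numbers, a back-of-envelope sketch; the GPT-3-sized baseline, fp16 weights, and 80 GB per GPU are illustrative assumptions:

```python
# Back-of-envelope: GPU memory needed just to hold the weights at inference.
# The baseline size, scale factor, and 80 GB per GPU are assumptions.

BASELINE_PARAMS = 175e9     # a GPT-3-sized model, for the sake of argument
SCALE_FACTOR = 1_000        # "increase the size a thousand times"
BYTES_PER_PARAM = 2         # fp16 weights
GPU_MEMORY_BYTES = 80e9     # one 80 GB accelerator

weights_bytes = BASELINE_PARAMS * SCALE_FACTOR * BYTES_PER_PARAM
gpus_for_weights = weights_bytes / GPU_MEMORY_BYTES

print(f"Weights alone: {weights_bytes / 1e12:.0f} TB")       # 350 TB
print(f"GPUs just to hold them: {gpus_for_weights:,.0f}")    # 4,375
```

And that is before activations, the KV cache for longer context windows, or any multimodal encoders.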
No_Ninja3309_NoNoYes t1_j6c383s wrote
Reply to I’m ready by CassidyHouse
In a few decades, organ printing and the first cyberpunk implants. In a few centuries, healing nanobots. In a thousand years, a hive mind in a growing Dyson swarm. In a million years, no more need for bodies. Nervous tissue, hardware, and software will become one.
No_Ninja3309_NoNoYes t1_j6a7t2u wrote
AI will be bigger in 2033, but I am afraid that it will run out of steam. The neural networks that are built today are like ladders to the moon. We need rockets and some sort of fuel. But I bet that if someone figures it out, it will seem pretty obvious in hindsight.
The rest is politics and tradition. Almost no one can compete with Silicon Valley. Some governments try, but it is not a priority for them.
No_Ninja3309_NoNoYes t1_j69esum wrote
Reply to MULTI·ON: an AI Web Co-Pilot powered by ChatGPT that browses the web and automates the tasks by Schneller-als-Licht
I don't understand this use case. Instead I would like a better Google Alert/News. You tell the bot you want to be informed about X, and it collects the relevant web pages and presents them as readable summaries with links. Or compares prices of Y across webshops. Or a shopping assistant that fills a shopping basket with the essential groceries on a regular basis. But it shouldn't actually buy anything. I don't trust the bots enough yet.
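A sketch of what such an alert bot could look like. Everything here is hypothetical: the feed URL, the keyword match, and the truncating summarizer are stand-ins for a real pipeline:

```python
# Hypothetical sketch of a "better Google Alert": watch a feed for a topic
# and print short summaries with links. The feed URL, keyword match, and
# naive summarizer are illustrative stand-ins, not a real product.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = ["https://example.com/tech.rss"]   # placeholder feed URLs
TOPIC = "nanobots"                         # "I want to be informed about X"

def summarize(text, max_words=30):
    """Very naive summary: just truncate. A real bot would use an LLM here."""
    words = text.split()
    suffix = "..." if len(words) > max_words else ""
    return " ".join(words[:max_words]) + suffix

def check_feeds():
    for feed_url in FEEDS:
        with urllib.request.urlopen(feed_url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):
            title = item.findtext("title", default="")
            link = item.findtext("link", default="")
            desc = item.findtext("description", default="")
            if TOPIC.lower() in (title + desc).lower():
                print(f"- {title}\n  {summarize(desc)}\n  {link}")

if __name__ == "__main__":
    check_feeds()  # it only notifies; it never buys anything on your behalf
```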
No_Ninja3309_NoNoYes t1_j6973c0 wrote
Reply to Myth debunked: Myths about nanorobots by kalavala93
I am all for medical nanobots if they could help me avoid undergoing a colonoscopy. But there must be downsides to nanobots. I never hear anyone talk about them, except for gray goo scenarios.
No_Ninja3309_NoNoYes t1_j67m1kf wrote
Reply to Don't despair; there is decent likelihood that an extremely large amount of resources will flow from AGI to the common man (even without UBI) by TheKing01
Everything is possible. It is also possible that the people you mentioned never get to see an AGI. Maybe a zillion dollars has only a 0.0000000134% chance of leading to AGI. The problem with nonexistent numbers is that the maths involved is also nonexistent.
No_Ninja3309_NoNoYes t1_j64kqd5 wrote
It is not so much about the algorithms as the combination of algorithms, hardware, and software. IDK how likely it is that Skynet hacks the nukes. But during the Cold War we came very close to mutually assured destruction. So if we don't go into de-escalation mode, how can we prevent this?
No_Ninja3309_NoNoYes t1_j63qx9b wrote
Why do you care about OpenAI? Why do you care about Microsoft? Do you think the open source community will profit? Do you think that the average citizen of the world will profit?
No_Ninja3309_NoNoYes t1_j63gdlm wrote
Reply to Asking here and not on an artist subreddit because you guys are non-artists who love AI and I don't want to get coddled. Genuinely, is there any point in continuing to make art when everything artists could ever do will be fundamentally replaceable in a few years? by [deleted]
Art is about personal discovery and connecting to previous generations. But if you don't agree, I can't convince you.
No_Ninja3309_NoNoYes t1_j631aib wrote
Reply to If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
Sadly, a hive mind is the only way to align human values with an artificial superintelligence. I will do my patriotic duty if it is required. I imagine that, together with the likes of Mr. Twitter and Sama, we Hivers will be parabiologically linked to our clones. We will receive replacement organs now and then; of course, some of them will be fully synthetic implants. Once Mars is terraformed, we will say goodbye to Earth. Of course Mars will be full of supercomputers. Next, a Dyson swarm.
No_Ninja3309_NoNoYes t1_j60yndz wrote
Reply to How life with UBI could look like by Financial_Donut_64
Endless summer or endless competition without meaning depending on personality and circumstances.
No_Ninja3309_NoNoYes t1_j5sn2ii wrote
Reply to Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
We started with people saying that ChatGPT is ASI. Then a week later it was AGI. Then ASI would arrive in 2023. Now we're at AGI arriving in 2024. It's all clickbait for Medium.
Good governments try to educate people; most tech companies try to put whatever sensational fluff is popular in front of them. But it turns out education is boring whereas speculation about AI is fun. This is why the oligarchs will win. You can't fight human nature.
No_Ninja3309_NoNoYes t1_j5o1169 wrote
Reply to how will agi play out? by ken81987
I find it hard to look forward. Grimdark and utopia scenarios seem equally likely. If we look back at the Internet, what do we learn? The internet started out as a project for scientists, and to some extent the military, to exchange information. Now it is a place where you can post funny pictures or clickbait articles. And, more importantly, big tech companies dominate the landscape.
Your scenario, if I understand it correctly, speaks of a single entity in control of AGI and therefore the world. I don't think it matters whether it is one entity or seven or some other small number. The problem is that you can have unintended consequences.
The internet has been used for propaganda and to recruit terrorists. If you make a system that can connect people and let them search for information, bad actors can take advantage. Make no mistake, AGI is as much a weapon as a tool. So, just to be safe, we want AGI to be in the right hands. That doesn't mean giving it to everyone, but it also doesn't mean leaving it to a happy few.
No_Ninja3309_NoNoYes t1_j8d9lii wrote
Reply to This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
This is the principle of separation of concerns at work. Many focused capsules working together are stronger than one huge LLM. And it is easier to fine-tune and prune individual capsules than one giant black box. It makes sense at inference time too. Eventually you could have a minimal model running locally whose sole purpose is figuring out which web services to contact for a given request.
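A toy sketch of that routing idea: a small local dispatcher (keyword rules here, standing in for a small model) decides which specialized capsule should handle a request. The categories and endpoints are made up for illustration:

```python
# Toy sketch of the "small local router": decide which specialized capsule
# or web service should handle a request. Categories, keywords, and endpoints
# are made-up placeholders; a real router would be a small fine-tuned model.

ROUTES = {
    "math":   {"keywords": ("integral", "solve", "equation"),
               "endpoint": "https://example.com/math"},
    "vision": {"keywords": ("image", "photo", "picture"),
               "endpoint": "https://example.com/vision"},
    "search": {"keywords": ("latest", "news", "today"),
               "endpoint": "https://example.com/search"},
}
DEFAULT_ENDPOINT = "https://example.com/general-llm"

def route(request: str) -> str:
    """Return the endpoint of the capsule best suited to this request."""
    text = request.lower()
    for cfg in ROUTES.values():
        if any(kw in text for kw in cfg["keywords"]):
            return cfg["endpoint"]
    return DEFAULT_ENDPOINT

print(route("Solve this equation for x"))        # .../math
print(route("What is in this photo?"))           # .../vision
print(route("Tell me a story about a dragon"))   # .../general-llm
```

The shape of the system is the same either way: cheap local dispatch, heavy lifting in remote capsules.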