Anjz OP t1_jdxlw8f wrote
Reply to comment by danellender in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
No, ChatGPT is closed source and we don't have the weights for it. Plus, it's probably too big to run inference on with consumer GPUs.
Stanford came up with Alpaca, a lighter-weight model fine-tuned from Facebook's LLaMA that still works about as well as earlier iterations of GPT. That one you can run locally, given some know-how.
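If you want to poke at it yourself, here's a minimal sketch of what local inference looks like with llama-cpp-python; the model filename and the Alpaca-style prompt format are assumptions on my part, so treat it as a starting point rather than a recipe:

```python
# Minimal sketch: CPU-friendly local inference with llama-cpp-python.
# The model file below is hypothetical; point it at whatever quantized
# Alpaca/LLaMA checkpoint you actually have on disk.
from llama_cpp import Llama

llm = Llama(model_path="./alpaca-7b-q4.gguf", n_ctx=2048)

prompt = (
    "### Instruction:\n"
    "Explain step by step how to disinfect a wound with no medication.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```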
Anjz OP t1_jdwb284 wrote
Reply to comment by Kinexity in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
While it is true that you can download the entire English Wikipedia in a relatively small size, it does not diminish the potential of AI and LLMs. Wikipedia is a static collection of human-generated knowledge, while AI, such as LLMs, can actively synthesize, analyze, and generate new insights based on available information. AI has the potential to connect disparate pieces of knowledge, create context, and provide personalized assistance. Thus, the comparison between Wikipedia and AI should not be based on size alone, but also on the dynamic capabilities and potential applications that AI offers.
For example, can you ask Wikipedia for a step-by-step process to distill water given only certain tools, or for how to disinfect a wound when no medication is available? Sure, you can find information on it, but a lot of people won't know what to do with that information.
That is the difference between Knowledge and Wisdom.
Anjz OP t1_jdw5teo wrote
Reply to comment by BangEnergyFTW in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
While I somewhat agree with your statement, and it is true that the internet contains noise, it also offers unprecedented access to information and diverse perspectives.
The key is to develop critical thinking and discernment, which can transform data into meaningful understanding. Technology, such as AI, can help us navigate, process, and synthesize vast amounts of information. We should not view AI as replacing human wisdom, but as a tool that can complement and enhance our collective knowledge, while still valuing experience and human insight.
Granted, that assumes it's put in the hands of the right individuals. Some people will take a stick and see only a stick for what it is: a collection of biological matter prone to rotting. Others will see it as a transformative tool that could amount to much more than face value, a fishing rod or a handle for a hammer.
Given this context, at what point can you infer true wisdom? Does a child at 3 years old reflect true wisdom? Is there a certain point where you could legitimately exclaim that an AI is now fully understanding context and inferring true wisdom? Or is this subjective?
Just my two cents, but embracing technology does not necessitate abandoning true wisdom; it can assist in our quest for it.
Anjz OP t1_jdw2cgm wrote
Reply to comment by skob17 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Would it be terrifying if that's all you knew? Perhaps from our viewpoint, but growing up from birth knowing that this is your purpose, guided by a near-omniscient AI, could be a different story.
Anjz OP t1_jdvkutn wrote
Reply to comment by timtulloch11 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
It's not as good as ChatGPT, but it's much lighter. Granted, it's just a small model fine-tuned on outputs from the GPT-3 API; fine-tuned on GPT-4 outputs with more parameters, it would probably be a whole different beast. It was trained on something like 10x less data, if not more. We're fully capable of creating something much better; it's just a matter of open source figuring out how and catching up to the companies guarding the Krabby Patty secret formula. Turns out for-profit companies don't like divulging world-changing information, who woulda thought?
If you take a look at YouTube, there are a couple of demos of people running it on a Raspberry Pi. Granted, at the moment it runs at a snail's pace, but this could be a different story a year or so from now. It works decently well on a laptop.
Anjz OP t1_jdvj7ry wrote
Reply to comment by bustedbuddha in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
With it running on phones, laptops, and Raspberry Pis, a solar panel would be sufficient to power small devices.
If you've tried GPT-4, its propensity to hallucinate is so much lower than previous iterations' that errors would be negligible. We have Alpaca now, but given the pace of improvement, we could very well have something like GPT-4 running locally in the near future.
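Rough back-of-envelope on the solar claim; all the wattage figures here are my assumptions, not measurements:

```python
# Can a small solar panel keep a Raspberry Pi-class device alive?
pi_draw_w = 5           # assumed Pi 4 draw under load, roughly 3-7 W
panel_w = 30            # assumed modest portable panel
sun_hours = 4           # assumed usable full-sun hours per day

daily_need_wh = pi_draw_w * 24          # 120 Wh to run around the clock
daily_harvest_wh = panel_w * sun_hours  # ~120 Wh, before charging losses

print(f"need {daily_need_wh} Wh/day, harvest ~{daily_harvest_wh} Wh/day")
# Roughly break-even, so in practice you'd add a battery and only
# wake the model up when you actually have a question.
```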
Anjz OP t1_jdu1bhe wrote
Reply to comment by No_Nefariousness1441 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Look up the Alpaca-LoRA model; you can run it with a front end like text-generation-webui.
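If you'd rather script it than use a UI, here's a rough sketch of loading LLaMA with the Alpaca LoRA adapter via Hugging Face transformers and peft; the hub IDs are the ones that were floating around and may have moved, so substitute whatever checkpoints you actually have:

```python
# Sketch: load a LLaMA base model and apply Alpaca-LoRA adapter weights.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "decapoda-research/llama-7b-hf"  # assumed hub ID; any LLaMA-7B works
tokenizer = LlamaTokenizer.from_pretrained(base_id)
base = LlamaForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")  # LoRA adapter

inputs = tokenizer(
    "### Instruction:\nHow do I distill water?\n\n### Response:\n",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```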
Anjz OP t1_jdtycqs wrote
Reply to comment by Kujo17 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
I think very soon there will be ASIC (application-specific integrated circuit) low-powered devices that can run powerful language models locally.
It's within our grasp. They might be integrated into our smartphones sooner rather than later, actually.
Anjz OP t1_jdtx7bl wrote
Reply to comment by MagnateDogma in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Oh shit, guess I'm not that original after all hahaha.
At least I have a new show to watch.
Anjz OP t1_jdtwn7m wrote
Reply to comment by Embarrassed_Bat6101 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
I'd pay for a Stephen Fry voiceover to narrate my interactions with ChatGPT.
Anjz OP t1_jdtvzt8 wrote
Reply to comment by Veei in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Fully local! Not as good at inference as GPT-4, or as fast... yet. But it's very functional and does not require the internet.
Anjz OP t1_jdtva5c wrote
Reply to comment by keeplosingmypws in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
That's pretty crazy now that you got me thinking deeper.
Future civilizations could send cryostatic human embryo pods to suitable 'host' planets billions of light-years away, along with an AI carrying the collective knowledge of humanity as we know it, which would teach them from birth and restart civilization.
Or maybe we don't even need biological bodies at that point.
Fuck that would be a killer movie plot.
I'm thinking way too ahead, but I love sci-fi concepts like this.
Anjz OP t1_jdtth3u wrote
Reply to comment by Embarrassed_Bat6101 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
In another line of thought similar to what you've just said: we've always had robotic-sounding responses from text-to-speech, but imagine applying current machine learning foundations and training on huge amounts of audio data of how people actually talk...
That would be a bit freaky, I think. I'd be perplexed and amazed.
Anjz OP t1_jdts7xi wrote
Reply to comment by Embarrassed_Bat6101 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
I'd say we already have something very similar with Alpaca running on Raspberry Pis! Just not as cool and witty... yet.
In that sense, I'm ready to reread The Hitchhiker's Guide to the Galaxy now.
Anjz OP t1_jdtqjm4 wrote
Reply to comment by ArcticWinterZzZ in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
I think past a certain point, hallucinations will be so rare that they won't matter.
Obviously in the current generation they're still quite noticeable, especially with GPT-3, but think 5 or 10 years down the line: the margin of error would be negligible. Even the recent 'Reflection' technique, where the model critiques and revises its own answer, greatly cuts down on hallucination for a lot of queries. And if you've used it, GPT-4 is so much better at inferring truthful responses. It comes down to usability when shit hits the fan; you're not going to be searching Wikipedia for how to get clean drinking water.
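To make that concrete, here's a minimal sketch of the reflection idea as I understand it; `ask` is a stand-in for whatever model call you have (local Alpaca, the GPT-4 API, etc.), not any specific library:

```python
def reflect_answer(ask, question: str) -> str:
    """Two-pass 'Reflection' sketch: draft, self-critique, then revise.

    `ask` is any callable that sends a prompt string to an LLM and
    returns its text reply; plug in your own model here.
    """
    draft = ask(question)
    critique = ask(
        f"Question: {question}\n\nDraft answer: {draft}\n\n"
        "List any factual errors or unsupported claims in the draft."
    )
    # Feed the critique back in so the final answer corrects itself.
    return ask(
        f"Question: {question}\n\nDraft answer: {draft}\n\n"
        f"Critique: {critique}\n\n"
        "Rewrite the answer, fixing every issue raised in the critique."
    )
```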
I think it's a great way to retrieve information without any network access.
Anjz OP t1_jdtpqcq wrote
Reply to comment by AbeWasHereAgain in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
It is, and I'd imagine other companies hiring devs from OpenAI, or even OpenAI devs divulging information to open source, to create something as good as GPT-4.
Even the instruction data distilled from the GPT-3 API, like what Stanford used for training Alpaca, was hugely useful.
Anjz OP t1_jdtnx32 wrote
Reply to comment by ArcticWinterZzZ in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Wikipedia will tell you the history of fishing, but it won't tell you how to fish.
For example, GPT-4 was trained on public web knowledge from the fishing subreddit, fishing forums, Stack Exchange, etc., and even Wikipedia itself, so it infers based on the knowledge and data from those websites. You can ask it for the best spots to fish, what lures to use, how to tell if a fish is edible, or how to cook a fish like a five-star restaurant.
Imagine that, localized. It's beyond a copy of Wikipedia; it's collective intelligence.
Right now our capability to run AI locally tops out at something like Alpaca 7B/13B for the most coherent output, but in the near future that won't be the case. We might have something similar to GPT-4 running locally before long.
Anjz t1_jdsynyi wrote
Reply to comment by Sigma_Atheist in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
You mean you don't like MODOK?
How about we just name it Dan? Dan's a cool guy.
Anjz t1_jcsktsf wrote
Reply to comment by throwaway957280 in [P] The next generation of Stanford Alpaca by [deleted]
It's probably untested in the courts, and there are so many loopholes and variables, too. What counts as a competing AI model? Companies usually just spew a bunch of stuff into their terms of use, some of which has no legal basis.
Anjz t1_jc758w9 wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Blows my mind, they used a large language model to train a small one.
>Fine-tuning a 7B LLaMA model took 3 hours on 8 80GB A100s, which costs less than $100 on most cloud compute providers.
Now imagine what's possible with GPT-4 training a smaller language model on a bigger instruction sample, with corporate backing to run hundreds of A100s at once for days at a time.
We're already within reach of exponential growth for low-powered devices; it's not going to take years like people have predicted.
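For anyone curious what "a large model training a small one" looks like mechanically, here's a stripped-down sketch of Alpaca-style instruction data generation; `ask_teacher`, the seed tasks, and the output filename are all hypothetical stand-ins:

```python
import json

# Hypothetical seed instructions; Alpaca started from 175 human-written tasks.
SEED_TASKS = [
    "Explain how to distill water using household items.",
    "Summarize the plot of Moby-Dick in two sentences.",
]

def build_instruction_dataset(ask_teacher, samples_per_seed: int = 3) -> list[dict]:
    """Prompt a large 'teacher' model (e.g. a GPT-3/4 API call) to produce
    instruction/response pairs, which become fine-tuning data for a small model."""
    examples = []
    for seed in SEED_TASKS:
        for _ in range(samples_per_seed):
            response = ask_teacher(f"### Instruction:\n{seed}\n\n### Response:\n")
            examples.append({"instruction": seed, "input": "", "output": response})
    return examples

# The resulting JSON is what a supervised fine-tuning script would consume:
# json.dump(build_instruction_dataset(my_teacher), open("alpaca_data.json", "w"))
```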
Anjz t1_jc66z62 wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
This works really well; it feels so much more coherent than the untuned LLaMA.
Wish they'd released the model weights so we could try this on our own devices; looking forward to that.
Anjz t1_jedk8g5 wrote
Reply to comment by inglandation in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
I gave GPT-4 a task to name 5 novel camelids and it gave me:
Guanacorn - A playful name that combines "guanaco" with "unicorn," giving it a mythical and fantastical appeal.
Vicunimbus - A name inspired by "vicuña" and "nimbus," invoking the image of a camelid that roams among the clouds.
Zephyrella - A name that combines "zephyr" (a light wind) with "llama," suggesting a graceful and ethereal creature.
Dromedazzle - A name that blends "dromedary" with "dazzle," evoking a sense of wonder and awe.
Moonvicuna - A name inspired by the moon, symbolizing the beauty and serenity of the night.
Alpacalypse - A playful and imaginative name that combines "alpaca" with "apocalypse," giving it a bold and adventurous vibe.
Personally, I like Alpacalypse.
Also, I think this will replace whoever makes Pokemon names.