blueSGL t1_jc5s56i wrote
Reply to comment by phire in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Less than $100 to get this sort of performance out of a 7B parameter model, and per the LLaMA paper they stopped training the 7B and 13B parameter models early.
Question now is just how much better small models can get. (A lawyer/doctor/therapist in everyone's pocket, completely private?)
blueSGL t1_jc5rpta wrote
Reply to comment by v_krishna in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
could even have it regenerate the conversation prior to the vocal synth if the character fails to mention the keyword (e.g. map) in the conversation.
You know, like a percentage chance skill check. (I'm only half joking)
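A rough sketch of that loop (Python; `generate_dialogue` is a hypothetical stand-in for whatever LLM call the game engine actually makes, not a real API):

```python
def generate_dialogue(prompt: str) -> str:
    """Stand-in for the game's actual LLM call."""
    raise NotImplementedError

def dialogue_mentioning(prompt: str, keyword: str, max_attempts: int = 5) -> str:
    """Regenerate the conversation until the required keyword (e.g. 'map')
    shows up, before anything gets handed to the voice synth."""
    for _ in range(max_attempts):
        text = generate_dialogue(prompt)
        if keyword.lower() in text.lower():
            return text
    # Last resort: tell the model outright that the hint has to be in there.
    return generate_dialogue(prompt + f"\nThe character must mention the {keyword}.")
```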
blueSGL t1_jb6h9jc wrote
Reply to comment by CertainMiddle2382 in What might slow this down? by Beautiful-Cancel6235
>Seeing the large variance in the hardware cost/performance of current models, Id think the progression margin for software optimization alone is huge.
>I believe we already have the hardware required for one ASI.
Yep, how many computational "ah-ha" moment tricks are we away from running much better models on the same hardware?
Look at Stable Diffusion and how the memory requirement fell through the floor. We're already seeing something similar with LLaMA now that it's in public hands (via links from pull requests on Facebook's GitHub, lol): tricks are already getting implemented in front ends for LLMs that allow for lower VRAM usage.
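For a concrete flavour of the kind of trick meant here, a sketch using Hugging Face diffusers (fp16 weights plus attention slicing are two of the changes that pushed Stable Diffusion's VRAM needs down; the model name and prompt are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights in fp16 rather than fp32, roughly halving the VRAM footprint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compute attention in slices instead of one big matrix: a bit slower, but it
# fits on cards with far less memory than the original release needed.
pipe.enable_attention_slicing()

image = pipe("a watercolor painting of a lighthouse").images[0]
```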
blueSGL t1_jaq3156 wrote
Reply to comment by NanditoPapa in Figure: One robot for every human on the planet. by GodOfThunder101
> A properly designed robot can open doors, use tools, climb stairs, lift boxes, and more without a human-like form.
can it do that in "designed for humans" spaces while being general purpose enough to switch between tasks?
blueSGL t1_jaq2ray wrote
Reply to comment by EnomLee in Figure: One robot for every human on the planet. by GodOfThunder101
> compete at the national, or maybe even international level
speed of light hasn't changed. Networks get better throughput but latency remains.
For work where you need dexterity and reflexes, locally piloted will be better (though not everything will need that level of feedback).
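Back-of-envelope numbers (assumed distance and fiber speed, purely to illustrate the latency floor that no amount of bandwidth removes):

```python
# Light in optical fiber travels at roughly 2/3 of c.
fiber_speed_km_s = 200_000   # ~2e5 km/s
distance_km = 10_000         # a rough intercontinental hop

one_way_ms = distance_km / fiber_speed_km_s * 1_000
round_trip_ms = 2 * one_way_ms
print(round_trip_ms)         # ~100 ms before any routing or processing delay
```

~100 ms of round-trip lag is already well past what fine manipulation tolerates, which is the point about local piloting.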
blueSGL t1_jad4xt4 wrote
Reply to comment by dasnihil in Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
> Once people figure out how to recursively call the LLM inside of a larger system that's keeping track of longterm memory/goals/tools/modalities/etc it will suddenly be a lot smarter
something along these lines?
blueSGL t1_jaa1x49 wrote
Reply to comment by jaydayl in Using this in the near future with AR glasses would be great. So much time is wasted finding the correct aisle in a shop or in a mall. by Dalembert
Sticking staples in the corners away from the entrance so you need to traverse the entire floor is a common trick.
Also changing up the locations of other products, so you always know where the staples are but have to search for everything else.
blueSGL t1_ja6pgm2 wrote
Reply to comment by dwarfarchist9001 in Large language models generate functional protein sequences across diverse families by MysteryInc152
Listening to Neel Nanda talk about how models form structures to solve problems that commonly present in training, it's no wonder they're able to pick up on patterns better than humans; that's what they're designed for.
And I believe that training models with no intention of running them, purely to see what (if any) hidden underlying structures humanity has collectively missed, is called something like 'microscope AI'.
blueSGL t1_ja65i86 wrote
Reply to comment by aquarain in Caught between Microsoft's and Google's search war, the ad industry grapples with a 'exciting and terrifying' new reality by marketrent
> There is no search war.
yep nothing to see here, move along...
https://www.cnet.com/tech/services-and-software/chatgpt-caused-code-red-at-google-report-says/
blueSGL t1_ja5d7u0 wrote
Reply to comment by Zer0D0wn83 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
> Check out Apple's AI-read audiobooks
ElevenLabs... https://web.archive.org/web/20230125023726/https://www.youtube.com/watch?app=desktop&v=VTTtLMbRwA4
blueSGL t1_ja27gk8 wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
You need to request access.
blueSGL t1_ja1vmoa wrote
Reply to comment by Yuli-Ban in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
What about this: with the same prompt/model/seed/...'settings'... combination you can pull the same image out of an image model as someone else.
I can easily see there being a time when people generate [music/tvshows/movies/etc] themselves but share the created media and have other people vote on and rank it.
e.g. head over to a website that hosts ratings for... AI generated Simpsons episodes and share all the 'settings' needed to load into your own system to recreate it.
Then you can browse popular generated content, circa whatever month you happen to be in, or all time, or whatever other metrics you can think of.
Everyone has the capability to generate new stuff and then has the ability to share it. Good stuff gets popular and becomes zeitgeist-y for a while, bad stuff just exists.
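That reproducibility already exists for image models; a minimal sketch with diffusers, where the shared 'settings' (model, prompt, seed, steps, guidance) should let someone else pull out the same image (model name and values here are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# These are the 'settings' you would post alongside your rating.
settings = {
    "prompt": "the Simpsons living room at golden hour, animation cel style",
    "seed": 1234,
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
}

generator = torch.Generator("cuda").manual_seed(settings["seed"])
image = pipe(
    settings["prompt"],
    num_inference_steps=settings["num_inference_steps"],
    guidance_scale=settings["guidance_scale"],
    generator=generator,
).images[0]
# Same model plus the same settings dict should reproduce the same image elsewhere.
```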
blueSGL t1_ja00p4i wrote
Reply to [R] [P] New ways of breaking app-integrated LLMs with prompt injection by taken_every_username
I first saw this mentioned 9 days ago by Gwern in a comment on LW:
>"... a language model is a Turing-complete weird machine running programs written in natural language; when you do retrieval, you are not 'plugging updated facts into your AI', you are actually downloading random new unsigned blobs of code from the Internet (many written by adversaries) and casually executing them on your LM with full privileges. This does not end well."
This raises the question: how are you supposed to sanitize this input whilst still keeping the models useful?
blueSGL t1_j9w5qm1 wrote
Reply to comment by ActuatorMaterial2846 in Open AI officially talking about the coming AGI and superintelligence. by alfredo70000
it's the AI effect writ large
> "AI is anything that has not been done yet."
blueSGL t1_j9uzuc6 wrote
Reply to comment by povlov0987 in World’s first on-device demonstration of Stable Diffusion on an Android phone by redditgollum
blueSGL t1_j9uzh3n wrote
Reply to Autonomous drones use AI and computer vision to harvest fruits and veggies. In last year's demo, they only flew one drone now they can fly an entire fleet. In 5 years' time it could become truly impressive. by Dalembert
another company that is looking to do things with 6 axis arms on a motorized gantry is Advanced Farm
blueSGL t1_j9umbty wrote
Reply to comment by TeamPupNSudz in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> which seems so extreme its almost outlandish.
reminder that GPT-3 was data-starved as per the Chinchilla scaling laws.
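Back-of-envelope with the ~20 tokens per parameter heuristic from the Chinchilla paper (the ratio is an approximation, not an exact law):

```python
gpt3_params = 175e9    # GPT-3 parameter count
gpt3_tokens = 300e9    # tokens GPT-3 was actually trained on, roughly

chinchilla_optimal = 20 * gpt3_params      # ~3.5 trillion tokens
print(chinchilla_optimal / gpt3_tokens)    # GPT-3 saw roughly 1/12th of the compute-optimal data
```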
blueSGL t1_j9ukv6h wrote
Reply to comment by beders in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I always found that silly.
What individual parts of the brain are conscious? Or is it only the brain as a gestalt that is conscious?
blueSGL t1_j9rf2n4 wrote
Reply to comment by HelloGoodbyeFriend in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Now come on, be fair. You know that's not the point I'm making at all.
It's people working in ML research being unable to accurately predict technological advancements, not user numbers.
You might find this section of an interview with Ajeya Cotra (of biological-anchors-for-forecasting-AI-timelines fame) interesting.
Starts at 29:14: https://youtu.be/pJSFuFRc4eU?t=1754
She talks about how several benchmarks were passed early last year that surveys of ML workers had put at a median of 2026.
She also casts doubt on people who work in the field but aren't specifically forecasting AGI/TAI as a source of useful information.
blueSGL t1_j9qru8n wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I want to know how many people's timelines predicted ChatGPT or DALL-E 2 or AlphaFold happening when they did.
Otherwise it's just the classic "predict a game changer is going to happen once I've retired"
blueSGL t1_j9qa8nk wrote
Reply to comment by gantork in "Robot waifus with their perfect hands" coming soon by DonOfTheDarkNight
Teledildonics was a thing in the 90s; shit is bound to have gotten a lot more advanced by now. The simple fact that VR headsets are common tells me that someone (hell, likely entire industries) already has something that works.
blueSGL t1_j9q9wvo wrote
Reply to comment by BigAlDogg in "Robot waifus with their perfect hands" coming soon by DonOfTheDarkNight
need noise canceling headphones specifically tuned to the sounds of stepper motors and pneumatic systems.
blueSGL t1_j9q9euw wrote
Reply to comment by Hodoss in And Yet It Understands by calbhollo
Can't stop the signal, Mal
or
the internet/AI treats censorship as damage and routes around it
blueSGL t1_j9mdxby wrote
Reply to comment by maskedpaki in Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
Again I think we are running up against a semantics issue.
What percentage of human activity would the thing need to cover to be classed as 'general'?
Because some people argue anything "below 100%" != 'general', and is thus 'narrow' by elimination.
Personally I think it's reasonable that if you load a system with all the ways ML currently works / all the published papers and task it with spitting out a more optimal system, it just might do so, all without being able to do a lot of the things that would be classed as human-level intelligence. There are whole swaths of data concerning human matters that it would not need to train on, or that the system would in no way need to be even middling-expert at.
blueSGL t1_jcb52b5 wrote
Reply to comment by gantork in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
Looks like Connor Leahy was right.
https://www.reddit.com/r/singularity/comments/10sb5x6/chatgpt_is_great_for_snippets_of_code_gpt4_can/