blueSGL
blueSGL t1_j9hu8f1 wrote
Reply to comment by spryes in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
A developer does not give two shits about any nature or wildlife in the way of the worksite unless 1. it directly impacts the project, or 2. they are forced to via regulation (arguably this could be seen as a subset of 1).
What makes you think ASI would be any different?
blueSGL t1_j9h0drv wrote
He should be listened to, and if anyone thinks he's wrong, I'd say they could make a lot of money selling their working alignment tech (which you'd need in order to prove him wrong) to any of the big players.
blueSGL t1_j9h04sj wrote
Reply to comment by obfuscate555 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
It's up to something like 6.5 million right now, so off by what, one order of magnitude?
blueSGL t1_j9gutu0 wrote
Reply to comment by limpbizkit4prez in [R] ChatGPT for Robotics: Design Principles and Model Abilities by CheapBreakfast9
> Why not just write the 5-10lines of code?
In order to write 5-10 lines of code, you need to know how to code.
I know how to code; if I can avoid writing more code than needed, I do.
blueSGL t1_j96yan4 wrote
Reply to comment by yoshiwaan in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Sorry, from what I understand it goes something like this:
The LLM processes the prompt and formats its output as per the initial few-shot demos.
That output is an intermediary step in plain text, including keywords that then get picked up by Toolformer.
Toolformer goes off, does the search, and returns predefined chunks formatted from the search results.
The prompt is then stuffed with those chunks and the question is asked again with the added retrieved search context.
(And I'm sure there is more pixie dust sprinkled in somewhere.)
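A minimal sketch of that loop, with both the LLM and the search call stubbed out. The `[SEARCH("...")]` keyword format and the two `fake_*` functions are assumptions made purely for illustration, not Toolformer's actual interface:

```python
import re

def fake_llm(prompt):
    # Stand-in for a real LLM call, with canned behavior for this example.
    if "Search results:" in prompt:
        return "The capital of France is Paris."
    # First pass: the model emits a plain-text tool keyword, as per the few-shot demos.
    return 'I need to look this up: [SEARCH("capital of France")]'

def fake_search(query):
    # Stand-in for a real search API; returns a predefined formatted chunk.
    return f"Result for '{query}': Paris is the capital of France."

def answer(question):
    # 1. LLM formats its output per the few-shot demos, possibly with a tool keyword.
    first_pass = fake_llm(question)
    match = re.search(r'\[SEARCH\("([^"]+)"\)\]', first_pass)
    if not match:
        return first_pass  # no tool call needed
    # 2. The wrapper picks up the keyword and runs the search.
    chunk = fake_search(match.group(1))
    # 3. The prompt is stuffed with the chunk and the question asked again.
    stuffed = f"{question}\nSearch results:\n{chunk}"
    return fake_llm(stuffed)

print(answer("What is the capital of France?"))
```

The whole trick is that the "tool call" is just plain text the wrapper knows how to spot; the model itself never executes anything.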
blueSGL t1_j96kgc1 wrote
Reply to comment by TFenrir in What’s up with DeepMind? by BobbyWOWO
It's all fine and good being a benevolent company that decides it's going to fund (but not release) research.
Are the people actually doing this research going to be happy grinding away at problems at a company without anything they've created being shared?
And seeing another research institute gain kudos for something they'd already created six months to a year prior, but which is locked in the Google vault?
blueSGL t1_j94yv6s wrote
Reply to comment by MysteryInc152 in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Any idea how they format the search results? Because out of all of it, that would seem to be the trickiest part. No idea if the Google summary text preview contains the answer or enough context to get the answer. If it needs to actually go to the website, the tool has no knowledge of how the website will be formatted or how long it is (potential context window issues).
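One crude way around the length problem is to truncate whatever the tool fetches before stuffing it into the prompt. This sketch fakes tokenization by counting whitespace-separated words; a real system would use the model's actual tokenizer, and the 512 default is an arbitrary assumption:

```python
def fit_to_context(page_text, max_tokens=512):
    # Crude stand-in for real tokenization: treat each word as one token.
    words = page_text.split()
    if len(words) <= max_tokens:
        return page_text
    # Keep only the first max_tokens words so the stuffed prompt stays in budget.
    return " ".join(words[:max_tokens]) + " ..."

# A 1000-word page gets clipped before being stuffed into the prompt.
snippet = fit_to_context("word " * 1000, max_tokens=50)
```

Naive truncation obviously risks cutting off the part of the page that held the answer, which is exactly why the formatting step is the tricky one.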
blueSGL t1_j94bno5 wrote
Reply to comment by MysteryInc152 in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Let me see if I get this right.
Toolformerzero is a layer between the LLM and the user.
That layer picks up keywords, performs the search, and then returns a predefined chunk formatted from the search results.
Then the LLM's prompt is stuffed with that chunk and the question is asked again?
And it just works?
blueSGL t1_j921j8u wrote
Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]
> Be the change you want to see in the subreddit.
For that to work I'd need to script up a bot, sign up to multiple VPNs, curate an army of aged accounts, and from a control panel flag new low-quality posts to be steadily hit with downvotes and new high-quality posts to be given upvotes.
Otherwise you are just fighting the masses that upvote the posts causing the problems and ignore the higher-quality ones.
A thought-provoking two-hour in-depth podcast with AI researchers working at the coal face: 8 upvotes. Yet another ChatGPT screenshot: hundreds of votes.
This is an issue on every sub on reddit.
blueSGL t1_j8yfre4 wrote
Reply to comment by Ezekiel_W in ChatGPT AI robots writing sermons causing hell for pastors by Ezekiel_W
> I thought truck drivers were going to go long before the rabbi, in terms of losing our positions to artificial intelligence.
That seems to be a common refrain from a lot of workers; they are not wrong, and this is just the start.
blueSGL t1_j8xfc70 wrote
Reply to comment by el_chaquiste in Microsoft Killed Bing by Neurogence
> I'm sure some people would pay for a version with longer memory, with eccentricities and all. the kinks.
come on man, the pun was right there!
blueSGL t1_j8vorqx wrote
Reply to comment by -ipa in What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
My point is more: if country X disallows AI on its home soil, there is nothing stopping a company shopping around in AI-friendly nations unless that too is prevented under the law.
blueSGL t1_j8us8wx wrote
Reply to comment by was_der_Fall_ist in What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
You have to wonder how it would be monetized. How much would you be willing to pay a month for a full-fledged digital assistant that was not shit and did not push products and services onto you?
You can bet that for employees at (at least) 3 companies, choosing the right price point and time to release is what's keeping them up at night.
They know that Cortana or Siri or (whatever Google calls theirs) will be out at some point soon.
blueSGL t1_j8urq1q wrote
Reply to comment by TwitchTvOmo1 in What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
> I haven't tried Bing yet but with ChatGPT it's always 5+ seconds.
>
> For a "realistic" conversation with an AI to be immersive, you need realistic response time.
"just a second..."
"keyboard clacking.... mouse clicks.... another mouse click.... more keyboard noises"
"Sorry about all this, the system is being slow today. Can I put you on hold?"
5 seconds is faster than some agents I've dealt with (not their fault, computer systems can be absolute shit at times).
blueSGL t1_j8urdhs wrote
Reply to comment by -ipa in What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
> I strongly believe that legislation must step in and protect the workforce for now, letting them use AI as a tool for the employee, but not to entirely replace a position. I'm all for progress, but this will again make the rich richer and the poor poorer.
What happens when the "call center" (AI servers) is in India, or whatever countries don't ban AI? They'd need to make sure laws prevented companies from outsourcing.
blueSGL t1_j8g6qtf wrote
Reply to comment by belarged in Is society in shock right now? by Practical-Mix-4332
> "how long until this is good enough to take over"
Depends on your definition of "Take over"
I suspect there are going to be a lot of layoffs in the call center sector as soon as a customer service LLM company gets spun up with a competitive rate for per-company fine-tunes and maintenance.
As soon as one company does it the rest will follow swiftly, leaving a skeleton crew of humans to verify large changes to accounts whilst everything else is handled automatically by an LLM and speech synthesis software.
The same thing will likely happen wherever there is a rigid formal structure, law for example. A lot of stuff happens outside the courtroom that could be automated, likely starting with the branches of law that don't hold people's lives in the balance (e.g. corporate mergers/acquisitions vs criminal trials).
blueSGL t1_j8et22l wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
I'm not going to decry tech that generates stuff based on past context without, you know, seeing the past context. It would be downright idiotic to do so.
It'd be like showing a screenshot of Google image search results where it's all pictures of shit, but cutting the search bar out of the screenshot and claiming it did it on its own and that you never asked for shit.
blueSGL t1_j8en280 wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
> but the chat brought up the relationship, love, and sex without the user ever mentioning it.
Without the full chat log you cannot say that; you just have to take their word that they didn't prompt some really weird shit before the screenshots started.
blueSGL t1_j8c26i1 wrote
Reply to comment by turnip_burrito in This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
> Experimental Settings
> As the Multimodal-CoT task requires generating the reasoning chains and leveraging the vision features, we use the T5 encoder-decoder architecture (Raffel et al., 2020). Specifically, we adopt UnifiedQA (Khashabi et al., 2020) to initialize our models in the two stages because it achieves the best fine-tuning results in Lu et al. (2022a). To verify the generality of our approach across different LMs, we also employ FLAN-T5 (Chung et al., 2022) as the backbone in Section 6.3. As using image captions does not yield significant performance gains in Section 3.3, we did not use the captions. We fine-tune the models up to 20 epochs, with a learning rate of 5e-5. The maximum input sequence length is 512. The batch sizes for the base and large models are 16 and 8, respectively. Our experiments are run on 4 NVIDIA Tesla V100 32G GPUs.
So the GPUs were used in training; there is nothing to say what the system requirements will be for inference.
blueSGL t1_j876jmh wrote
Reply to comment by FarFuckingOut in Recursive self-improvement (intelligence explosion) cannot be far away by Kaarssteun
It entirely depends on having a good discriminator. Look at the work going on in Stable Diffusion, where outputs of the model are fed back in for further fine-tuning.
Or some of the work on automated dataset creation for fine-tunes: prompting the model in certain ways so it 'self corrects', then collecting the output and using [correction + initial question] for fine-tunes.
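As a sketch, that self-correction data collection could look something like this. The canned model behavior and the prompt wording are made up purely to show the shape of the loop, not any particular paper's recipe:

```python
def fake_model(prompt):
    # Stand-in for a real LLM, with canned behavior for this example.
    if "Check your answer" in prompt:
        return "Correction: 2 + 2 = 4."
    # First pass: a confidently wrong answer.
    return "2 + 2 = 5."

def build_finetune_example(question):
    # 1. Ask the question and collect a (possibly wrong) first answer.
    first = fake_model(question)
    # 2. Prompt the model so it 'self corrects' its own output.
    corrected = fake_model(
        f"{question}\n{first}\nCheck your answer and correct it if wrong."
    )
    # 3. Keep [initial question + correction] as a fine-tuning pair.
    return {"prompt": question, "completion": corrected}

example = build_finetune_example("What is 2 + 2?")
```

The collected pairs would then be fed into a normal fine-tuning run, so the model learns to produce the corrected answer directly.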
blueSGL t1_j86n7e4 wrote
Reply to comment by levoniust in Everybody is always talking about AGI. I'm more curious about using the tools that we have now. by levoniust
Exactly my point: any teacher gearing up to respond to ChatGPT by molding coursework around its shortcomings is going to have an endless string of 'new and improved' models coming their way, scuppering the plan, if they don't project out capabilities and plan accordingly.
blueSGL t1_j860ynw wrote
Reply to Everybody is always talking about AGI. I'm more curious about using the tools that we have now. by levoniust
Honestly, with the speed things are moving at, whatever the changes are, they are going to be trying to hit a moving target.
ChatGPT can answer things confidently and incorrectly.
Now what if you have the same system but it can check its work against the internet?
What about the next model that has even more fine-tuning around ranked search results (e.g. trust math results from Wolfram Alpha over anywhere else, etc.) and maybe even more emergent capabilities?
Whatever the new structure is, it needs to be formatted in such a way that it's not kneecapped should capabilities increase from here. E.g. generating an essay via ChatGPT and getting the students to grade and correct it completely falls by the wayside once the generated document is 100% factually correct.
blueSGL t1_j800g8l wrote
it depends what level of abstraction you are taking from the raw actions within the program.
A lot of 3D stuff that can be automated already is, you can write scripts.
Having an AI 'script writer' helper that takes in natural language and produces a Python script can already be done. It's my go-to thing for testing chatbots: asking them to generate simple scripts for Maya (the you.com one got a bit better at that recently).
If however you are asking for something like 'create me a full sci-fi environment' or 'rig this model' or 'animate this armature like this' and it just does it, well, we are not there yet. There are scripts, asset libraries, etc. that streamline these processes, but nothing end-to-end driven by natural language with zero manual input from a human.
blueSGL t1_j7rqa0q wrote
Reply to Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
I think solutions are going to be found for a lot of things people currently say need AGI, some time before AGI itself is created.
blueSGL t1_j9i64l3 wrote
Reply to comment by Akimbo333 in OpenAI has privately announced a new developer product called Foundry by flowday
CallcenterGPT