ReasonablyBadass t1_itom1bp wrote
Reply to [D] Simple Questions Thread by AutoModerator
Simple question: in chain-of-thought reasoning, does the LLM autogenerate its own prompt for the next step? Only the example chains are "hand-made", correct?
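For reference, a minimal sketch of how standard few-shot CoT prompting is usually set up (`generate` is a placeholder, not a real API): the exemplar chains are hand-written, and the model then produces its own reasoning steps and final answer in a single completion; no new prompt is auto-generated between steps.

```python
# Minimal sketch of few-shot chain-of-thought prompting.
# The exemplar chain below is hand-written; the model generates the
# final reasoning chain and answer itself in one completion.

HAND_MADE_EXEMPLARS = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.
"""

def chain_of_thought_prompt(question: str) -> str:
    # Only the exemplars are hand-made; the model continues after "A:".
    return f"{HAND_MADE_EXEMPLARS}\nQ: {question}\nA:"

def generate(prompt: str) -> str:
    # Placeholder for any LLM completion call.
    raise NotImplementedError("plug in your model's completion call here")

# completion = generate(chain_of_thought_prompt("..."))
# The completion contains both the generated reasoning and the answer.
```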
ReasonablyBadass t1_itolj2g wrote
Basic question: chain-of-thought prompting already generates its own prompts for the next step, right? So this also generates answers?
ReasonablyBadass t1_it3dvxo wrote
I mean, why? We already have large text corpora. The whole point of YouTube is visual data, no?
ReasonablyBadass t1_iss72s6 wrote
Reply to comment by Hussar_Regimeny in NASA outlines case for making sole-source SLS award to Boeing-Northrop joint venture by jeffsmith202
That thing had to have its tiles reglued by hand after every launch. It was a failure.
ReasonablyBadass t1_ise5lb3 wrote
Reply to Man plays his saxophone through 9-hour, "very, very complex" brain surgery to remove tumor by BitterFuture
Not sure I would call this uplifting, but certainly damn impressive.
ReasonablyBadass t1_irzfwml wrote
Reply to comment by franciscrot in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
If stochastic noise is added in the process, "reverse engineering" the prompt shouldn't be possible, right?
Since, as per your last question, the same prompt would generate different images.
Actually, come to think of it, don't the systems spit out multiple images for a prompt for the user to choose from?
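A toy illustration of why that makes exact prompt recovery ill-posed (pure NumPy, made-up stand-ins, not a real diffusion model): one prompt maps to many outputs, so a single image does not pin down the prompt.

```python
# Toy sketch: a "generator" that adds stochastic noise to a prompt
# embedding. The same prompt yields a different output each call,
# so inverting one output back to the exact prompt is ill-posed.
import hashlib
import numpy as np

def embed(prompt: str) -> np.ndarray:
    # Deterministic stand-in for a text encoder.
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(8)

def generate_image(prompt: str, seed: int) -> np.ndarray:
    noise = np.random.default_rng(seed).standard_normal(8)
    return embed(prompt) + 0.5 * noise  # stochastic sampling noise

imgs = [generate_image("a cat", seed) for seed in range(4)]
print(np.round(np.std(imgs, axis=0), 2))  # nonzero spread: one prompt, many outputs
```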
ReasonablyBadass t1_irvdh0j wrote
That would be caption generation, I believe, and it has been around for a while.
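For anyone who wants to try it, a minimal sketch using the Hugging Face transformers image-to-text pipeline (the model name is just one common choice):

```python
# Minimal image captioning sketch with Hugging Face transformers.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
result = captioner("photo.jpg")  # local path or URL
print(result[0]["generated_text"])
```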
ReasonablyBadass t1_ir8rdzr wrote
Reply to comment by FlimsyGooseGoose in Many scientists see fusion as the future of energy – and they're betting big. by filosoful
Just need an efficient muon source...
ReasonablyBadass t1_ir6zjcx wrote
Reply to [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
And since ML is largely matrix multiplication, we get faster ML, which leads to better matrix multiplication techniques...
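For a concrete taste of what AlphaTensor searches for: Strassen's classic trick multiplies 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applied recursively to blocks that beats O(n^3). A quick sketch:

```python
# Strassen's algorithm on 2x2 matrices: 7 multiplications instead of 8.
# AlphaTensor searches for decompositions like this at larger sizes.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```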
ReasonablyBadass t1_iqwjjjl wrote
Pretty sure we've already broken 30% using perovskite? The real question is longevity.
ReasonablyBadass t1_iqv5mze wrote
Reply to Large Language Models Can Self-improve by Dr_Singularity
Guys, relax. This is just finetuning for a few percentage points of improvement.
ReasonablyBadass t1_iqshswt wrote
I think the issues are: 1) still relatively expensive equipment and 2) you need actual room to enjoy this.
For killer apps I could see: 3D design and modelling software that allows engineers to work in 3D environments. Or, more broadly: 3D data processing.
ReasonablyBadass t1_ittjo61 wrote
Reply to [N] OpenAI Gym and a bunch of the most used open source RL environments have been consolidated into a single new nonprofit (The Farama Foundation) by jkterry1
I lack the background to decide if that is a good thing or not. Can someone more informed enlighten me?
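For context: the user-facing result is Gymnasium, the Farama Foundation's maintained successor to OpenAI Gym. A minimal sketch of its current API (assuming `pip install gymnasium`):

```python
# Gymnasium: the Farama Foundation's maintained fork of OpenAI Gym.
# Main visible changes: new import name, reset() returns (obs, info),
# and step() separates `terminated` from `truncated`.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # random policy, just for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```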