Borrowedshorts t1_jed5gja wrote
Reply to The next step of generative AI by nacrosian
I think multi-step tasks and the ability to understand context across applications are the next paradigm to solve. GPT-4 with plug-ins can do this in a rudimentary sense, but I think it will take specific training and architecture within the generative model itself to start to replace FTE workers.
Borrowedshorts t1_jeabhvm wrote
Reply to comment by Trackest in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
ITER is a complete disaster. If people thought NASA's SLS program was bad, ITER is at least an order of magnitude worse. I agree AI development is going extremely fast. I disagree there's much we can do to stop it or even slow it down much. I agree with Sam Altman's take: it's better for these AIs to get into the wild now, while the stakes are low, than to experience that for the first time when these systems are far more capable. It's inevitable that it's going to happen; it's better to make our mistakes now.
Borrowedshorts t1_je9zb9x wrote
Reply to comment by Trackest in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
ITER is a complete joke. CERN is doing okay, but doesn't seem to fit the mold of AI research in any way. There's really no basis for holding these up as the models AI research should follow.
Borrowedshorts t1_je74dwd wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
This is why you never sign an open letter even if you do agree with it. There's a very high chance of something going wrong.
Borrowedshorts t1_jdygg1l wrote
Reply to comment by gunbladezero in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
I would never have guessed Sofia Coppola no matter how many questions you gave me, so I don't know that it performed all that poorly.
Borrowedshorts t1_jdwdkxz wrote
Reply to comment by 94746382926 in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
I don't disagree.
Borrowedshorts t1_jdvp5mr wrote
Reply to comment by tatleoat in Can AI run a F500 company? Any interesting articles out there? by D2MAH
AI can just treat it as a game, the game of accumulating capital, and probably be far more effective than any single human ever has been. It might make Rockefeller or JP Morgan look small in comparison.
Borrowedshorts t1_jdvojtc wrote
Reply to comment by bjdkdidhdnd in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
Nearly 50% of managers of publicly traded companies come from business schools. They must be doing something right.
Borrowedshorts t1_jdu1o78 wrote
So if you're using this for academic research, you can put in your original prompt and then tell it to only return references with a confidence score > 0.5. Neat little trick.
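A hypothetical example of what that follow-up instruction might look like (the wording and the 0-to-1 scale are my own sketch, not a fixed feature or API):

```text
[your original research question here]

For each reference you cite, assign a confidence score between 0 and 1
reflecting how likely it is that the reference actually exists as cited.
Only return references with a confidence score > 0.5, and show the score
next to each one.
```

How well the model calibrates those scores will vary, so the returned references still need to be checked by hand.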
Borrowedshorts t1_jds6tbd wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
Math is hard for people too, and I don't think GPT-4 is worse than the average person when it comes to math. In many cases, math requires abstract multi-step processing, which is something LLMs typically aren't trained on. If these models were trained on processes rather than just content, they'd likely be able to go through the steps required to perform mathematical operations. Even without specific training, LLMs are starting to pick up the ability to perform multi-step calculations, but we're obviously not all the way there yet.
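As a toy illustration (my own example, not from the original comment) of why even basic arithmetic is a multi-step process with intermediate state to track, rather than a single lookup:

```python
# Multi-digit multiplication decomposed into explicit steps: each partial
# product is an intermediate result that must be carried forward, which is
# the kind of state an LLM has to track implicitly when doing math.
def stepwise_multiply(a: int, b: int) -> int:
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        partial = a * int(digit_char) * (10 ** place)  # one step per digit of b
        total += partial  # running state carried between steps
    return total

print(stepwise_multiply(123, 456))  # 56088, matching 123 * 456
```

A model answering "123 × 456" in one shot has to get every one of those partial products and the running sum right internally, which is why training on the process, not just the answer, matters.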
Borrowedshorts t1_jdqyly5 wrote
Reply to comment by No_Ninja3309_NoNoYes in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
80% is a wild stab, just as any projection is a wild stab, but Goertzel has studied the problem as much as anyone.
Borrowedshorts t1_jd1qkg4 wrote
Reply to comment by TheSecretAgenda in How long till until humanoid bots in supermarkets? by JosceOfGloucester
Not happening. This entire project was derived from work on legs. I'm not a fan of the backwards leg design myself, but they've had this project ongoing for 10 years, and for over half of that time, they didn't even have an upper body.
Borrowedshorts t1_jd1iyx0 wrote
Reply to Replacing the CEO by AI by e-scape
It wouldn't be a high bar to meet, and most CEOs are just glorified figureheads anyway. Replacing them would enable a ton of cost savings and likely better decision-making.
Borrowedshorts t1_jc7lyrg wrote
Rollable phones. I don't know how foldable phones are seemingly winning that battle right now, because they're ugly and impractical, with a noticeable crease that makes an expensive phone look like a cheap one.
The other big thing will be AI capability at the edge. There's a huge gap between the performance phones need for contemporary applications and what they'll need to run AI applications on the edge device. I really think this will become the driver for upgrading to the latest phones, since even my mid-tier device already does everything I need.
Borrowedshorts t1_jadchbs wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Humans don't have anywhere close to a 32,000-token context window, at least in terms of producing useful output from learned context. You don't need that big a context window; you break the problem down into manageable steps to solve it.
Borrowedshorts t1_ja3hcfq wrote
Yes, probably. You don't need to learn anything to be generally intelligent if you've already been trained on the entirety of human knowledge.
Borrowedshorts t1_ja3gt9c wrote
Maybe mud brick houses?
Borrowedshorts t1_j9zly5c wrote
Reply to comment by NanditoPapa in Been reading Ray Kurzweil’s book “The Singularity is Near”. What should I read as a prerequisite to comprehend it? by Golfer345
This is actually wrong. As someone who has read it in full, Das Kapital will tell you exactly the conditions that will take place as they are the same conditions happening now.
Borrowedshorts t1_j9var8b wrote
Why are people so skeptical of published results? This is how exponential progress works: smaller models today can perform better than larger models from a couple of years ago.
Borrowedshorts t1_j9ldhl5 wrote
Reply to comment by dwarfarchist9001 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yes they actually do.
Borrowedshorts t1_j9kcmhm wrote
Reply to comment by turnip_burrito in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Humans finetune to the test as well.
Borrowedshorts t1_j9kao68 wrote
Reply to comment by Lawjarp2 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Why is it recommended to study 1.5 months for the Series 7 if it's just a multiple-choice test?
Borrowedshorts t1_j9ka0ta wrote
Reply to comment by WithoutReason1729 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
I don't think that's true, but I do believe it was finetuned on the specific dataset to achieve the SOTA result they did.
Borrowedshorts t1_j9k8nor wrote
It's still garbage. They raised the conversation limit by 1, big freaking deal. I won't use it until they remove conversation limits completely.
Borrowedshorts t1_jegu043 wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
The British have always been near the forefront of computing technology. This seems to fit the mold.