gaudiocomplex t1_j4yjsa8 wrote
Reply to comment by technofuture8 in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
Just an evocative way of saying I wasn't claiming Sankar knows anything about this space as an SME. I'm saying he's in a very tight circle of people who are in the know and big secrets are hard to keep.
gaudiocomplex t1_j4yae51 wrote
Reply to comment by [deleted] in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
- No more of a conspiracy theory than your poor reading of human nature. And:
- You don't need credibility if you have ears and an ass in the right place, you stupid fuck.
gaudiocomplex t1_j4y4cmm wrote
Reply to comment by [deleted] in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
I stopped reading when I realized you're a cunt. So, a few words in. 🤷‍♂️
Edit: ah what the hell I feel like jumping in on at least the first part. I read that much.
It just goes to show how very little you understand about the world (which also explains the cuntiness, no doubt) if you can't grasp that many Silicon Valley CEOs are quite chummy with each other. They go to the same parties, restaurants, gyms, even the same book club. They sit on each other's boards.
Beyond that, Rippling isn't just another HR startup. It's a unicorn, and well ingrained in tech culture.
And as such, its C-suite has a level of access that could provide exactly the kind of information he'd get and carelessly post on Twitter... because who doesn't like breaking a big story?
gaudiocomplex t1_j4xljx5 wrote
Reply to comment by Magicdinmyasshole in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
Well, another problem here is that they've really just completely destroyed their own moat with 3.5. Unless, again... they know they have 4 and they're not worried about somebody else getting there in the interim. I don't know if there's much that's proprietary here for them... that's the head-scratcher for me.
gaudiocomplex t1_j4xkj75 wrote
Reply to comment by WaveyGravyyy in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
It may be multimodal. And that may have been the difference in achieving some semblance of AGI. That is 100% speculation, but I worked for a long time with an NLP company focused on human-level metadata editing of sound files at scale. There is plenty of data out there to feed into the machine.
But on a more certain level, you have to realize that language itself models reality, and when LLMs are able to model language more accurately, they're able to produce a more convincing reality. The errors and dumb mistakes it makes right now won't be happening anymore. We'll have a much more difficult time sussing out what's real and what's not. And the banal way it communicates now... I don't think that will be the case either.
gaudiocomplex t1_j4xi8tl wrote
Reply to OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
The CEO of Rippling already came out and said that 4 is basically AGI. My guess is he got drunk one night and spilled the beans on Twitter and then deleted the tweet when he realized he pissed off his silicon valley bros.
It's a pretty common belief right now in the right circles that 4 is going to be problematic for society. I think all indications point to 3.5 being a trial balloon for how the common folk will receive it. I've been in tech marketing for quite a long time, and I could not wrap my mind around the notion of introducing a half-cocked product (to describe the chatbot as lightweight is generous) when you have another one that is clearly superior only two quarters away.
And then to tease it as though 2022 is going to be a "sleepy year" by comparison? I don't think you need to look into the non-verbal cues here. It's pretty clear that Altman knows what's going on and he's sitting on something big.
What's problematic here is... If this is indeed AGI or an AGI proximate, there's not a lot that they're going to hold back if they're in competition with deepmind. There's too much money at stake to be the kind of careful they need to be.
Another thing that I'm not hearing about right now is if the Department of Defense is involved. It's hard to imagine AGI being privately developed without them putting their thumb on the scale.
Edit: grammar.
gaudiocomplex t1_j4tpirw wrote
It's a very short-lived take, I'll say. The rate of progress is going to be very fast unless we become unusually careful and deliberate about any progress. Soon enough, the bots will be so good that they'll be indistinguishable from human writers, and detection like this will be far more difficult.
gaudiocomplex t1_j3cw1wf wrote
Reply to comment by MajorUnderstanding2 in Now thatβs pretty significant! (By Anthropic) by MajorUnderstanding2
Thank you very much! That's awesome. I'll keep an eye on it.
gaudiocomplex t1_j3ctyon wrote
Just went to their website... Where is this available, or is it not ready for the public yet?
gaudiocomplex t1_j35k2cn wrote
Reply to comment by DoktoroKiu in The Expanding Dark Forest and Generative AI by Warriohuma
Thanks for the reply! And I largely agree. r/controlproblem has been a great sub to follow for some disturbing reads lately, if you're into that sort of thing.
Just wanted to add "Before I get that out of your mouth" was from voice to text for my kid this morning as I read this and she, 4, was trying to eat leaves.
Ironically enough, a robot wouldn't have been able to reproduce that weird blunder, so I guess you know I'm human.
gaudiocomplex t1_j34dds0 wrote
Reply to comment by Jaszuni in The Expanding Dark Forest and Generative AI by Warriohuma
Most people have no idea how it works and don't care for that reason. I suspect this change will be much more... let's just say public, and the technology more... nefarious-seeming. GPT-4 is coming out in a few months. You'll see. :)
gaudiocomplex t1_j31p3mt wrote
Reply to comment by Jaszuni in The Expanding Dark Forest and Generative AI by Warriohuma
The perception is that if an AI is writing it then the intent is likely to exploit you (and others like you at scale). Particularly in the commerce space: There are a lot of trust mechanisms that have to kick in before you use your hard-earned money on something.
gaudiocomplex t1_j31or6r wrote
I love this. A couple of things I'd like to see the author address in a follow-up:
- How long will it matter that the source of our content is human? At some point I think there will be a gesture toward considering AI a comparable mind, asking them for their experiences, and sharing in the joint qualms of existence. I do think there will be a rubber-band snap-back of sorts whenever we realize they're smarter than the collective whole of humanity, and then our ability to relate will be minimal. Also, again, the trust factor around being exploited goes up dramatically.
- I personally think SEO is on its way out because of these trust issues, and any verification system will be so suspect that people won't buy into it either. So what will people do when they want to exchange money for goods or services? There's an immense amount of trust that has to go into that change of hands, and meeting in person won't scale. You have to imagine the market will create a sort of Luddite influencer culture in response, right? ("I would never use AI, and this is me, and I'm real, and I can tell you what to buy because you can trust me" kind of thing.)
- What happens to the creative class and downstream professions generally? It's hard to imagine that the volume of jobs in this space will continue at its current clip.
Edit: deleted some weird text
gaudiocomplex t1_j31n52i wrote
I work in content generation and have worked with AI content-generation tools on the inside, and I will tell you that this is about as spot-on as it gets.
This part here really cuts to the heart of the matter.
> We're about to drown in a sea of pedestrian takes. An explosion of noise that will drown out any signal. Goodbye to finding original human insights or authentic connections under that pile of cruft.
It's hard to imagine a world where SEO in its current state is able to work, given that these bots will exploit the ranking algorithm in ways that probably won't serve the end user.
gaudiocomplex OP t1_j2588q4 wrote
Reply to comment by HugeHans in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Where am I doing anything detrimental here? Would love to know
gaudiocomplex OP t1_j2569gu wrote
Reply to comment by gleamingthenewb in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Interestingly enough, I'm actually from the little town where the Drake equation was developed.
gaudiocomplex OP t1_j255lq8 wrote
Reply to comment by iCantPauseItsOnline in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Somebody's butthurt they wasted a year in a crappy MA program.
gaudiocomplex OP t1_j254npn wrote
Reply to comment by ralpher1 in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Where to start... Tenure-track positions are few and far between. I make about three times more than I did in that field. It's well documented that academia is suffering, particularly in my area of expertise.
gaudiocomplex OP t1_j254dqz wrote
Reply to comment by South_Ear6167 in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Be sure to like and subscribe.
gaudiocomplex OP t1_j2545p4 wrote
Reply to comment by gleamingthenewb in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
This was super helpful for the anxiety. Thank you. :)
gaudiocomplex OP t1_j25417d wrote
Reply to comment by noniboi in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Don't be daft. Who said I was unsuccessful? I have a Master of Fine Arts from the best writing school in the country, eight awards from the Associated Press, was head writer for a humongous media outlet, and now I make $200k in a field where nobody makes that kind of money. I run a marketing firm on the side and make an extra $60k. The industries I pointed to are well documented to be in decline. It's not controversial; you're just stupidly behind the times.
gaudiocomplex OP t1_j24dqe3 wrote
Reply to comment by Celestrael in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
I'm free to ask a legitimate, heartfelt question about the future 🤷‍♂️
gaudiocomplex OP t1_j23xyas wrote
Reply to comment by Celestrael in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
And here you are wasting your time commenting 🤔
gaudiocomplex OP t1_j23rttr wrote
Reply to comment by daveescaped in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
I'd say the companies that use AI fundamentally from the outset will bypass the orgs that can't transform fast enough.
gaudiocomplex t1_j4ytdxy wrote
Reply to comment by TheKnightIsForPlebs in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
GPT-4.