TemetN

TemetN t1_iyuh1he wrote

Reply to comment by PotatoJosukeMan in I'm scared.... by [deleted]

Maybe? My default example would honestly be daycare, since there's been significant focus on automating healthcare (though it's largely on specific areas). But I'll note here that if your concern is just getting out of your country, it's probable that either one would work (presuming it would otherwise anyway).

1

TemetN t1_iyufw55 wrote

If you want to leave ASAP, tech is a fine area to focus on - you can probably still get out on that unless things move even faster. And honestly, once you're rendered obsolete, UBI should be taking off.


If on the other hand your question is more 'what is the most future-proof job I can take, in terms of being among the last to be automated', then the answer is likely either a job that humans would hesitate to automate, or one that is physically complex and context-sensitive.

1

TemetN t1_iys1ys4 wrote

An old friend of mine (an acupuncturist) swore by this, but I'd still want a larger study, and that's before I scrutinize it for any other problems. Hard to tell if something works without adhering to gold standards.

8

TemetN t1_ixkw80g wrote

As others have mentioned, parts of America's infrastructure are reinforced - of particular note, the internet specifically was born out of ARPAnet, and substantial portions of it are still likely to be reinforced. It might not even go down, just... you probably wouldn't have access to it.


More pointedly, a lot of the actual high-risk areas for such an event (the one that immediately jumps to mind is transformers) can be protected with minimal preparation, and we'd have warning (this is something that gets tracked). What this amounts to in practice is that it'd likely cause an absolutely immense amount of damage, but the major areas of vulnerability would largely be able to mitigate impacts through disconnection/reinforcement/etc. As a result, while it'd be a disaster the likes of which has no modern equivalent, it'd be recoverable.


If you meant more on a personal level? It'd largely be shutting things down/disconnecting them.

1

TemetN t1_ixdcnuf wrote

This is an interesting one, since both Millennials and Gen Z set records for mental health problems/report rates. But given the difference in situations, I'm not entirely sure whether it's skewed by culture or by differences in access to reporting (or at least I don't think I've seen data on that).

That, and it could also be because Millennials were the first generation economically less well off than their parents. Still, all in all, the nature of news has definitely changed enormously since the 90s, and it could very well have had a large impact on the attitude of the public in more areas than have been looked at.

1

TemetN t1_ix8qpuh wrote

To date the only leak I believe is the one about model size, and even that may very well have changed by now, it's been so long (and since new scaling laws came out). I am admittedly anticipating it though (or more accurately, frustrated it isn't out yet and tamping down enthusiasm).

3

TemetN t1_ix8q9be wrote

Technically such cases could be a disaster in the nations where they set precedent - this type of case is entirely capable of effectively ending generative AI models in nations with such legal precedents, but development would just continue in other nations.


I occasionally wonder if jumping to attempts to prevent data use was a deliberate attempt to destroy generative models, or if it's just people lashing out. In either case, these cases are potentially very dangerous, but yes, the models and companies would likely just head elsewhere.

4

TemetN t1_ix1fdw6 wrote

Reply to comment by -ZeroRelevance- in 2023 predictions by ryusan8989

This. Plus I think that volition is unlikely to be simply emergent, which means that it's likely to take its own research. And I don't see a lot of call for, or effort at researching in such a direction (Numenta? Mostly Numenta).

5

TemetN t1_ix0s6u3 wrote

Reply to comment by SoylentRox in 2023 predictions by ryusan8989

Kind of and not really? I (along with everyone else) was awaiting DALLE-2, but the explosion did come out of left field. That said, I don't think I had a prediction on that, and my only predictions prior to that were either high level (AGI median 2024) or framed differently (I have a number of predictions on Metaculus from that period for example).

As for whether they're 'too conservative', honestly while it'd be nice, I can't (or at least won't) make predictions without some basis for extrapolation. So things that are out of the blue (such as the aforementioned explosion of image generation models) aren't really likely to show up in that context. I can acknowledge they happen, but they aren't easily modeled generally speaking.

7

TemetN t1_ix0okxi wrote

Reply to comment by michael_mullet in 2023 predictions by ryusan8989

I'm (repeatedly) on record as expecting AGI (as in the Metaculus weak operationalization) by 2025. So while I broadly agree with this, I do think it only applies to a relatively specific sense of the term, closer to its original use, rather than the more volitional one.

11

TemetN t1_ix0dajz wrote

  1. Progress on generative audio/video to a point similar to where generative images were last summer.
  2. Gato 2 (or whatever they call the scaled Gato they're working on) drops, confirms scale is all we need.
  3. Breakthroughs in data (one or more of synthetic, access to more through opening up video content, transfer learning, etc).
  4. Model size begins to grow significantly again.
  5. Further expansion (as in new cities) for robotaxis, I'd particularly watch Waymo.
  6. Rapid increase in competition in cultured meat.
  7. Further integration of generative models into other products.
  8. Something comes out of the investment into public R&D in ML.

There's honestly a lot of other stuff on my bingo card too that I'm less certain of (and to be fair, this stuff is mostly just 'things I think are substantially more likely than not'). But past this I'll also be watching for things like repeatable ignition, early immunotherapy results, a humanoid robotics jump, a fault-tolerant scalability breakthrough in quantum computing, etc.

47

TemetN t1_iwrqcxl wrote

This is proliferating faster than the public seems to realize - I've seen people 'predicting' on tech who think that level 4 isn't even here. We're going to see mass adoption in major cities within the next couple of years honestly, and I wouldn't be surprised to see level 5 later this decade.

6

TemetN t1_iwhul0v wrote

AI supercomputers are not the same as normal supercomputers (though the author may not have intended to refer to normal supercomputers given the response). They run at different precision, and AI supercomputers are already way ahead. That said, a lot of this stuff already did happen.

2

TemetN t1_iw80gra wrote

You're at once giving them too much and too little credit. While it's unusual to see a midterm result like this, it doesn't (for now at least) appear to have ended Trumpism in the GOP. Ended Trump perhaps, but that's not the same thing. And honestly, I sincerely doubt they'd have managed to get anything done except propaganda without the presidency anyway. As it is, as sad as it sounds, we're going to have to keep voting ad nauseam against this nonsense until people drop the whole attacks-on-democracy idea.


On the plus side, though, the participation is an implicit endorsement of American democracy. In a lot of ways this is a sort of 'social trust' issue. Democracy is basically just people willing to work through the system, which requires expecting the system to be at least minimally above board.

1

TemetN t1_ivzqvpz wrote

Hype article. Don't get me wrong, I'd be happy if it were true, because I was one of the (many) people disappointed by OpenAI abandoning its scale obsession - and frankly, cutting training costs that much would possibly be the most significant part of such a model (it'd be an absolutely huge change to the field). Nonetheless, this is... dubiously sourced, let's say, despite how interesting the whole Gwern rumor thing was.

13