TemetN
TemetN t1_iyufw55 wrote
Reply to I'm scared.... by [deleted]
If you want to leave ASAP, tech is a fine area to focus on - you can probably still get out on that, unless things move even faster. And honestly, once you're rendered obsolete, UBI should be taking off.
If on the other hand your question is more 'what is the most future-proof job I can take, in terms of being among the last to be automated', then the answer is likely either a job that humans would hesitate to automate, or one that is physically complex and context-sensitive.
TemetN t1_iys1ys4 wrote
Reply to Researchers claim a human trial with 90 people has shown a simple laser therapy improves short-term memory by 25%. The treatment, called transcranial photobiomodulation (tPBM), has had claims in previous studies to also improve reaction times, accuracy and attention by lughnasadh
An old friend of mine (an acupuncturist) swore by this, but I'd still want a larger study, and that's before I scrutinize it for any other problems. Hard to tell if something works without adhering to gold standards.
TemetN t1_iydt1ci wrote
Ongoing discussion - it's a process, not an instant event (well, barring an extreme version of the intelligence explosion model anyway). Now, after all that, I really have no idea. Discussion of topics that came out of it?
TemetN t1_iyb7ibh wrote
Reply to comment by maxtility in From NeurIPS 2022 poster session: "[Google] Minerva author on AI solving math: IMO gold by 2026 seems reasonable, superhuman math in 2026 not crazy" by maxtility
I find it interesting when that happens, thanks for the heads-up (reposted it to the question in question).
TemetN t1_iyb320n wrote
Reply to From NeurIPS 2022 poster session: "[Google] Minerva author on AI solving math: IMO gold by 2026 seems reasonable, superhuman math in 2026 not crazy" by maxtility
Was this intended as a comment on the Metaculus prediction? Yes though, I'm currently predicting a bit before that (community is a bit after).
TemetN t1_iy0jqkm wrote
Better medical care.
In truth, it would be interesting to see AI reactions to all these posts at various periods.
TemetN t1_ixvnvvg wrote
Reply to comment by cy13erpunk in Sharing pornographic deepfakes to be illegal in England and Wales by Shelfrock77
I partially agree with you? I don't think they can police deepfake porn effectively, but I do think they could police harassment with it effectively. Those are two very different situations. Which mostly suggests it'd probably be easier to police this under the standards of something like revenge porn.
TemetN t1_ixkw80g wrote
Reply to If a solar flare were to wipe most if not all technology, what plans/countermeasure could be taken to slow rebuild things like the internet? by Zak_the_Reaper
As others have mentioned, parts of America's infrastructure are reinforced - of particular note, the internet was born out of ARPAnet, and substantial portions of it are still likely to be reinforced. It might not even go down, just... you probably wouldn't have access to it.
More pointedly, a lot of the actual high-risk areas for such an event (the one that immediately jumps to mind is transformers) can be protected with minimal preparation, and we'd have warning (this is something that gets tracked). What this amounts to in practice is that it'd likely cause an absolutely immense amount of damage, but major areas of vulnerability would largely be able to mitigate impacts through disconnection, reinforcement, etc. As a result, while it'd be a disaster with no modern equivalent, it'd be recoverable.
If you meant more on a personal level? It'd largely be shutting things down/disconnecting them.
TemetN t1_ixdcnuf wrote
Reply to comment by XoxoForKing in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
This is an interesting one, since both Millennials and Gen Z set records for mental health problems/report rates. But given the difference in situations, I'm not entirely sure (or at least I don't think I've seen data on) whether it's skewed by culture or by differences in access to reporting.
That, and it could also be because Millennials were the first generation economically worse off than their parents. Still, all in all, the nature of news has definitely changed enormously since the 90s, and it could very well have had a large impact on public attitudes in more areas than have been studied.
TemetN t1_ix8s1yh wrote
Reply to Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
It's also partially a time thing - while I haven't seen direct data on it, I suspect the mental health impacts of the pandemic drove a lot of people into doomerism. Or at least I've seen a lot more of it in the last couple of years.
TemetN t1_ix8qpuh wrote
Reply to GPT-4 is Almost Here, And it Looks Better than Anything Else - As GPT-3 remains a lot ambiguous, the new model could be a fraction of the futuristic bigger models that are yet to come. by izumi3682
To date the only leak I believe is the one about model size, and even that may very well have changed by now, it's been so long (and new scaling laws have come out since). I am admittedly anticipating it though (or more accurately, frustrated it isn't out yet and tamping down my enthusiasm).
TemetN t1_ix8q9be wrote
Reply to comment by ChadFuckingThunder in This Copyright Lawsuit Could Shape the Future of Generative AI by Gari_305
Technically, such cases could spell disaster for the nations that host them - this type of case is entirely capable of effectively ending generative AI models wherever such legal precedents are set, but development would simply continue in other nations.
I occasionally wonder if jumping straight to attempts to prevent data use was a deliberate effort to destroy generative models, or if it's just people lashing out. In either case, these cases are potentially very dangerous, but yes, the models and companies would likely just head elsewhere.
TemetN t1_ix6ws1k wrote
Reply to Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
Still centered at 2024 on there, but I did make an adjustment down from 2025 early in the year. It is worth noting, however, that this is a specific operationalization.
TemetN t1_ix1fdw6 wrote
Reply to comment by -ZeroRelevance- in 2023 predictions by ryusan8989
This. Plus I think that volition is unlikely to be simply emergent, which means that it's likely to take its own research. And I don't see a lot of call for, or effort at researching in such a direction (Numenta? Mostly Numenta).
TemetN t1_ix1f7fl wrote
Reply to comment by -ZeroRelevance- in 2023 predictions by ryusan8989
Still not sure if it'll come out this year - or more precisely I think it's more likely than not to come out this year (if only slightly).
TemetN t1_ix0s6u3 wrote
Reply to comment by SoylentRox in 2023 predictions by ryusan8989
Kind of and not really? I (along with everyone else) was awaiting DALLE-2, but the explosion did come out of left field. That said, I don't think I had a prediction on that, and my only predictions prior to that were either high level (AGI median 2024) or framed differently (I have a number of predictions on Metaculus from that period for example).
As for whether they're 'too conservative', honestly while it'd be nice, I can't (or at least won't) make predictions without some basis for extrapolation. So things that are out of the blue (such as the aforementioned explosion of image generation models) aren't really likely to show up in that context. I can acknowledge they happen, but they aren't easily modeled generally speaking.
TemetN t1_ix0okxi wrote
Reply to comment by michael_mullet in 2023 predictions by ryusan8989
I'm (repeatedly) on record as expecting AGI (as in the Metaculus weak operationalization) by 2025. So while I broadly agree with this, I do think it only applies to a relatively specific sense of the term, closer to its original use rather than the more volitional one.
TemetN t1_ix0dajz wrote
Reply to 2023 predictions by ryusan8989
- Progress in generative audio/video to a point similar to where generative images were last summer.
- Gato 2 (or whatever they call the scaled Gato they're working on) drops, confirms scale is all we need.
- Breakthroughs in data (one or more of synthetic, access to more through opening up video content, transfer learning, etc).
- Model size begins to grow significantly again.
- Further expansion (as in new cities) for robotaxis; I'd particularly watch Waymo.
- Rapid increase in competition in cultured meat.
- Further integration of generative models into other products.
- Something comes out of the investment into public R&D in ML.
There's honestly a lot of other stuff on my bingo card too that I'm less certain of (and to be fair, this stuff is mostly just 'things I think are substantially more likely than not'). But past this I'll also be watching for things like repeatable ignition, early immunotherapy results, a humanoid robotics jump, a fault-tolerant scalability breakthrough in quantum computing, etc.
TemetN t1_iwrqcxl wrote
Reply to Motional and Lyft will launch a robotaxi service in Los Angeles - The Autonomous Vehicle operator is a joint venture between Hyundai and Aptiv. LA will be its second robotaxi market with Lyft, after launching a service in Las Vegas earlier this year. by izumi3682
This is proliferating faster than the public seems to realize - I've seen people 'predicting' on tech think that level 4 isn't even here. We're going to see mass adoption in major cities within the next couple years honestly, and I wouldn't be surprised to see level 5 later this decade.
TemetN t1_iwhul0v wrote
Reply to comment by Phoenix5869 in My predictions for the next 30 years by z0rm
AI supercomputers are not the same as normal supercomputers (though the author may not have intended to refer to normal supercomputers, given the response). Different precision; AI supercomputers are already way ahead. That said, a lot of this stuff has already happened.
TemetN t1_iw80gra wrote
Reply to Does anyone else feel like we just avoided a high-tech evil fascist dystopia due to the midterm election results? by [deleted]
You're at once giving them too much and too little credit. While it's unusual to see a midterm result like this, it doesn't (for now at least) appear to have ended Trumpism in the GOP. Ended Trump perhaps, but that's not the same thing. And honestly, I sincerely doubt they'd have managed to get anything done beyond propaganda without the presidency anyway. As it is, sad as it sounds, we're going to have to keep voting ad nauseam against this nonsense until people drop the whole attacking-democracy idea.
On the plus side, though, the turnout is an implicit endorsement of American democracy. In a lot of ways this is a sort of 'social trust' issue. Democracy is basically just people willing to work through the system, which requires expecting the system to be at least minimally above board.
TemetN t1_iw08nun wrote
Reply to The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by Dr_Singularity
Saw this elsewhere, but while nice if true (and the Gwern thing is interesting) it seems like a hype article. It would be a huge deal if they really did manage to do that to training costs though.
TemetN t1_ivzqvpz wrote
Reply to The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Hype article. Don't get me wrong, I'd be happy if it were true, because I was one of the (many) people disappointed by OpenAI abandoning its scale obsession - and frankly, cutting training costs that much would possibly be the most significant part of such a model (it'd be an absolutely huge change to the field). Nonetheless, this is... dubiously sourced, let's say, despite how interesting the whole Gwern rumor thing was.
TemetN t1_ivpj7wb wrote
Reply to According To This New AI Research At MIT, Machine Learning Models Trained On Synthetic Data Can Outperform Models Trained On Real Data In Some Cases, Which Could Eliminate Some Privacy, Copyright, And Ethical Concerns by Shelfrock77
Frankly, I'm not concerned about copyright, but synthetic data is a promising area given how data-hungry models have gotten under the new scaling laws.
TemetN t1_iyuh1he wrote
Reply to comment by PotatoJosukeMan in I'm scared.... by [deleted]
Maybe? My default example would honestly be daycare, since there's been significant focus on automating healthcare (though it's largely in specific areas). But I'll note here that if your concern is just getting out of your country, it's probable that either one would work (presuming they would otherwise, anyway).