TemetN
TemetN t1_j56on15 wrote
As I said elsewhere, better than what Hassabis said, but I'll wait and see. And the mention of copyright pandering does not fill me with confidence.
TemetN t1_j56nsxf wrote
That seems at odds with Hassabis' recent statements, but admittedly it'd be a relief if at least the rest of Google tried to push forward.
TemetN t1_j47ntuo wrote
Reply to What advancements in AI technology will have the biggest impact on our daily lives in the next 5-10 years? by No-Meeting-7740
As impressive as AI's potential in drug development and medical care is, it's worth remembering that both of these fields are highly sclerotic, and that drug development in particular requires a lengthy clinical testing process. As a result, they're unlikely to have the biggest impact on our daily lives (particularly in the shorter part of that time frame; the longer end has slightly more potential).

In practice, there are a few areas where people are more likely to interact with AI significantly in the range we're discussing here: basically automation/robotics and software. The latter will move much faster than the former (which is still going to take substantial rollout time). Nonetheless, one of the growing indicators is the gradual rollout we're seeing of robotaxis, which will likely be commonplace within cities by then.
TemetN t1_j3hwecv wrote
This is one of the areas where I actually agree with the alarmism - it isn't even hypothetical: this tech already causes problems in its current deployment. While I still expect that everything is going to wind up monitored in the future, this area needs regulation because it's both directly dangerous in practice and prone to mistakes.
TemetN t1_j25ivw1 wrote
I agree with the title's premise, but the article's discussion of attempts to attack the public over copyright, if anything, disturbs me. Honestly, I'm entirely fine with AI output remaining outside copyright, or with doing away with the system altogether.
TemetN t1_j1y6gry wrote
It's an interesting point; I suppose my response would be to ask whether, and how much, demand there is for them. I mean, it's certainly doable, but the question is whether people would want to pay for them. At least in the intermediate term, once costs come down further, they might proliferate just out of novelty.
TemetN t1_j1vpsqx wrote
Reply to When do you think we will be able to cure serious mental health issues like major depression, schizophrenia, alzheimer's, personality disorders etc... by Ortamis
Define 'cure', I guess. There are already treatments showing more promise than SSRIs, but I'm not sure how effective they'll end up being. I suppose I'd expect this once we get a better handle on these conditions and how to treat them. Someone mentioned nanomedicine, which is a decent idea, but I'm honestly unsure what breakthrough would accomplish this. I'd probably still say I expect it within a couple of decades, due to the integration of AI into R&D gradually speeding up progress toward the singularity, but medical advancement tends to be delayed by testing/approval (to be clear, that's why I say a couple of decades rather than one).
TemetN t1_j1vpe38 wrote
Reply to Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
Kind of a good news/bad news thing. We've actually had the ability to control video games with our minds for a while (there was a Kickstarter for it a couple of years ago), but on the other hand, that also shows that advances still need to be made in BCI. AI could certainly help, but the big area to watch here is BCI. More pointedly, it's probably going to take most of this decade to integrate AI into R&D - it takes a while to get the full benefit out of a new technology.
TemetN t1_j1tcy8g wrote
Reply to Will the singularity require political revolution to be of maximum benefit? If so, what ideas need to change? by OldWorldRevival
In the intermediate term, UBI is necessary to mitigate the damage of the transitory period between when mass automation occurs and when scarcity is eliminated. In the long term, frankly, more protections are needed for individual freedom; I expect further challenges there, and not just in the intermediate term.
TemetN t1_j1tcf2c wrote
This seems like a bit of a mess honestly, as if you tossed together a few economic variants and some strange random ones. Frankly, I suspect that transhumanism, for example, will coexist with other options. Further, some of these (such as UBI) are transitory policies until scarcity is done away with, rather than political ideologies.

In the long run I expect the 'winning' ideology to be one based on personal freedom and the protection thereof, since absent scarcity there's not a lot of call for much else in the way of government.
TemetN t1_j1rwj3k wrote
Reply to Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
Self-interest. Why in the world would they not sell it to everyone? Further, this argument ignores how many of these systems actually work. Health benefits, for example, are covered in most developed nations - and even in the US most of the population has access to them. Fundamentally these arguments rely on the reader being unwilling to analyze them; in practice, tech will continue to improve society.
TemetN t1_j0vezph wrote
Reply to Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
I agree with the starting premise, but the implicit assumption that it'll be able to rapidly and recursively self-improve is dubious in my view. An intelligence explosion seems like the least likely way to reach the singularity, honestly.

That said, yes, people are getting carried away about what AGI is and will mean, when in practice both the more operationalized and the broader definitions will most likely be met within a year or two.
TemetN t1_j0rdzzr wrote
Reply to comment by 96suluman in Will agi immediately lead to singularity? by 96suluman
I've tried to game out a long timeline, and it just doesn't work. Even presuming a horribly destructive world war, or running into an as-yet-unseen bottleneck, I can't game out anything close to that. The problem with assuming such a thing is that what has already been demonstrated individually indicates we should reach AGI shortly. So while I could see it being delayed to later in the decade, anything past that seems like a one-percent-or-less situation.
TemetN t1_j0qzrb3 wrote
Reply to Will agi immediately lead to singularity? by 96suluman
Improbable. I've increasingly come around to the view that the integration of narrow AI into R&D is the form the building blocks of the singularity will take. By contrast, while I expect AGI by the middle of the decade, I think it's more likely to be along the lines of the weak definition (à la Metaculus, or its historical meaning). What this means is that we're likely to see a gradual increase in the pace of technological advancement, but it's likely to start in parallel with AGI.
TemetN t1_j0jgd3t wrote
Reply to how to stay on top of advances? by [deleted]
Last Week in AI, multiple subs that post news articles, a collection of serious media sources (The Economist, IEEE Spectrum, etc.), and the occasional poke around arXiv.
TemetN t1_j0hjyad wrote
Funding. I'd probably run a company composed of a combination of moonshots and common-sense studies that should've already been run (admittedly not largely in AI; there's a lot of stuff in healthcare/nutrition that is just based on assumptions). The moonshots would include things like 'scale is all you need' and chip design, as well as more out-there ideas.

Honestly, I'd probably prove that yes, you can burn through hundreds of billions of dollars, but hopefully discover something along the way that would fund further testing.
TemetN t1_j03mm5q wrote
This made me think - honestly, I normally consider near-future predictions to be on much stabler ground than later ones, but I usually predict on a scale like this only over the course of multiple years. And I realized after running through this that actually trying to predict a list of specific events a year out is... problematic in a lot of ways. Still, I suppose I can give it a try.
- Gridlock prevails in US politics; Moore v. Harper does not overturn electoral law (narrow ruling, no precedent set, or outright rejection).
- Gato 2 and GPT4 both drop, AI continues its march into public awareness.
- Generative audio finally gets its time in the spotlight.
- A breakthrough in synthetic data is made.
- Ditto a breakthrough in transfer learning.
- Chinese unrest continues, but the CCP keeps the lid on.
- Ukraine war drags on.
- No coup in Russia, though Putin health issues are possible.
- Further breakthroughs in fusion, fueling speculation of implementation this decade despite cautions.
- 2023 is hotter than 2022, and wildfires grow.
- The Iran protests grow.
- This flu season is one of the worst on modern record.
- Increased investment in fission (SMRs), alongside continued growth in solar/wind.
There are quite a few others that are sort of 'maybe that year' or continuing situations - something that meets Metaculus' weak AGI standards, breakthroughs in aging, new mRNA vaccines, etc.
TemetN t1_j02ccz0 wrote
I don't expect significant (or even necessarily detectable above background shifts) changes in the labor force participation rate by 2025. By 2030, that's another matter - I expect the labor force participation rate to fall at least into the 50% range by then. Also, obviously it won't be ChatGPT itself having this effect (at least unless they train something new under the same name or the like).
TemetN t1_izyce6v wrote
Reply to comment by eatingcheeseeater in I don't want AI to do all our jobs for us by [deleted]
If you're asking just for general AI-based sci-fi stuff, maybe Orion's Arm? I've only glanced at it, but it seems to be in that general area. Not sure if you're after something else.
TemetN t1_izy6b52 wrote
Reply to I don't want AI to do all our jobs for us by [deleted]
Honestly, I feel like you're contradicting yourself here: you say you want to have a positive impact on the world, but you get angry that AI is having one. I think what you're after here is more accurately captured by your focus on success - and honestly, that can come from more than jobs. I'm sure things like competitions and other endeavors of human comparison will still exist, but the state of the world is such that it's incredibly important to push forward and fix the existing problems - over 95% of the world is suffering from health issues, and most people do not live anything resembling a good life. So yes, I understand that the human mind wants something more, but it can get it from areas other than employment.
TemetN t1_izf375j wrote
Reply to Microsoft CTO Kevin Scott: “2023 is going to be the most exciting year that the AI community has ever had” by ThePlanckDiver
It's a fair argument, and not just because of how fast this year was in terms of major model drops - public funding for AI research is finally starting, and unlike other areas of tech, AI continued to attract investment this past year, all while people joined the field en masse.
TemetN t1_izcgvck wrote
Reply to comment by asschaos in How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
That seems very dubious. Why in the world would the government collapse? It isn't in keeping with either previous behavior or any of the implications of the situation. If anything, even with regard to concern over outcomes, you're likely looking at the opposite of what you might want to be concerned about (there's a reason for the sayings about governments expanding their authority during crises). More practically, it's likely they'll be rather useless until they gradually fumble their way to things that work.
TemetN t1_izcbxkm wrote
Reply to How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
Messily. Based on previous economic crises I expect politicians to run around like headless chickens in response to the cratering labor force participation rate before eventually implementing UBI to handle the transition.
TemetN t1_izaj1tx wrote
Reply to Ben & Jerry's owner may launch ice cream made from cow-free dairy | The potential rise of lab-grown milk could result in amazing advances in the world of ice cream by chrisdh79
Honestly, I'm back and forth on this - it's an interesting area, but I'm allergic to yeast. To be fair, I'm unclear on whether there's actually any in the product, but as is I'm kind of taking a wait-and-see approach, despite my generally appreciative attitude toward modern advancements in (healthy) food.
TemetN t1_j6c4vhe wrote
Reply to Assume the future's history books will in hindsight agree about what the first publicly-released AGI was. At the time of that AGI's release, what percentage of early-adopters will consider it to be AGI? by Z3F
Asked when? Because I suspect a lot of people will suddenly recall something different.

And honestly, I don't think most people will really think it through that way; it's worth remembering that just because someone uses one of these systems doesn't mean they necessarily pay attention to futurology.

There's also the point that there may very well be AGI well before there's publicly released AGI, particularly if DeepMind manages it first, which would complicate the question, since anyone paying attention would already be aware of it by then.