Recent comments in /f/singularity
Aromatic_Highlight27 OP t1_jegyip1 wrote
Reply to comment by TemetN in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
A pill printer, meaning people would be able to manufacture drugs at home? Even ignoring feasibility, do you think this kind of device would itself be legal? That seems even worse than an AI prescriber to me. Also, do you think this kind of capability will be available by the mid-2020s as well?
MarcusSurealius t1_jegyff1 wrote
Reply to comment by MajesticIngenuity32 in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
If it's from the government it's probably PropagandaGPT.
[deleted] t1_jegybpe wrote
[deleted]
TemetN t1_jegy8rk wrote
Reply to comment by Aromatic_Highlight27 in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
That's an interesting question, but honestly I think it's even harder to answer, since it's largely a matter of social/cultural change. I'd particularly note, in this case, how messy and incoherent our drug laws are in America.
In practice I might actually expect something like a pill printer to render the question moot rather than it being resolved some other way.
simmol t1_jegy6d4 wrote
Reply to Opinions on TaskMatrix.ai by iuwuwwuwuuwwjueej
Basically, I feel that if you are going to give LLMs many more capabilities through third-party plugins, then you should probably use a weaker version of the LLM to save computational power. The amount of computation involved in answering a single prompt is much higher for an LLM with a larger number of parameters than for one with a smaller number. In exchange, you seemingly get better/more accurate answers by using GPT-4 versus, say, GPT-3. But if the third-party apps can compensate for the LLM in thousands of different ways, it would be prudent to use GPT-3 with TaskMatrix.ai as opposed to GPT-4. At least that is how I see it.
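The cost argument above can be sketched with a back-of-envelope calculation. A common rule of thumb (an assumption here; real costs vary with architecture, caching, and hardware) is that a dense transformer needs roughly 2 × N FLOPs per generated token, where N is the parameter count. The larger-model size below is hypothetical, since GPT-4's parameter count is undisclosed:

```python
def flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token,
    using the ~2 * N rule of thumb for dense transformers."""
    return 2.0 * n_params

GPT3_PARAMS = 175e9        # published GPT-3 parameter count
BIG_MODEL_PARAMS = 1.0e12  # hypothetical larger model (GPT-4's size is not public)

# The per-token cost ratio is just the parameter ratio under this model.
ratio = flops_per_token(BIG_MODEL_PARAMS) / flops_per_token(GPT3_PARAMS)
print(f"Larger model costs ~{ratio:.1f}x more compute per token")
```

So if plugins can close most of the quality gap, the smaller model answers each prompt at a small fraction of the compute.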
Aromatic_Highlight27 OP t1_jegy36p wrote
Reply to comment by Pallidus127 in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
Do you really have this kind of trust in the CURRENT systems? I'm not thinking of knowledge here, but of reasoning capabilities. Current systems still have a lot of limitations and make mistakes, don't they? Of course a human expert can also be wrong, but are we really at the point where a machine error is less likely, and less likely to be catastrophic? Keep in mind I'm comparing pure AI with AI-assisted doctors.
Also, since you say you'd already trust a medical AI, could you please tell me which one is already powerful enough to have earned that trust from you?
MaisonIvoire t1_jegxy8i wrote
Reply to comment by SprayOnMe43 in What do I do? by SprayOnMe43
English/Philosophy majors can work in lots of different positions, even non-writing oriented ones. And experience and internships will matter more than your degree.
Also seconding someone else: don’t take career advice from this sub.
On another note, where are you considering studying English and Philosophy? I’m looking to return to university to study the same dual major.
[deleted] t1_jegxu9x wrote
[deleted]
Educational-Net303 t1_jegxm3h wrote
This is just connecting GPT to Hugging Face models. OpenAI probably experimented with this years ago, considering GPT-4's vision abilities.
earthsworld t1_jegxl57 wrote
oh cool, the daily "imagine the video games, bro!" post.
Andynonomous t1_jegxl0f wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Ok, so there were a bunch of fake signatures. The post you linked also says "Edit: Just to clarify, the open letter is real and most of the signatures are real (Elon Musk, Gary Marcus, Emad (stablediffusion creator) all did sign it and fully support to ban research on GPT models stronger than GPT4 for at least 6 months". So my point stands.
Krunkworx t1_jegxh31 wrote
Reply to comment by MajesticIngenuity32 in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Oi u dizzy bruv?! Do you wanna go some ya silly muppet.
Pallidus127 t1_jegxed9 wrote
Reply to comment by Aromatic_Highlight27 in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
Personally, I’d want a biopsy to confirm, but after that I’d follow the AI prescribed course of treatment.
If a human doctor disagreed, I’d want them to chat and figure out why. The theoretical “medical model” is going to know A LOT more than the human doctor, but maybe the human doctor has made a creative leap to some conclusion. So let them talk and find out why they disagree.
johngrady77 OP t1_jegx7fk wrote
Reply to comment by HeinrichTheWolf_17 in What I wrote in an article three months ago, and what happened today. by johngrady77
Very true.
AsthmaBeyondBorders t1_jegx580 wrote
Reply to comment by sillprutt in Sam Altman's tweet about the pause letter and alignment by yottawa
About 1% of the general population are psychopaths. About 12% of corporate C-suite are psychopaths. It's their values that have a higher priority as of today.
PolishSoundGuy t1_jegx4cc wrote
Reply to comment by broadenandbuild in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
I agree with you that it should be open, but it’s not “entirely trained on public data”. For the model to be useful, someone has to feed it prompts and ideal responses; in practice that was tens of thousands of people employed by OpenAI. They fine-tuned the model into what it is today.
simmol t1_jegx2xm wrote
Imo, the emphasis in education should be less on details and more on grasping the big picture. Right now, the system is such that students put a lot of emphasis on knowing all the details in college, then build on that knowledge to grasp the big picture only after 5-10 years of employment in the same industry. Given that AI will handle a lot of these details, an education system that emphasizes knowledge at this refined level is obsolete. And if you de-emphasize the details, you can spend a lot more time on the big picture, and as such accelerate students' understanding and progression toward essentially managerial roles.
TinyBurbz t1_jegx1b7 wrote
Reply to The Luddites by scarlettforever
Imagine thinking eliminating labor and thus the bargaining power of the lower classes would somehow HARM the status quo.
activatore t1_jegwzwc wrote
Reply to comment by TMWNN in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
You get it, dude; don’t listen to them. I had this revelation recently too, while trying to explain the implications of AI. I realized they couldn’t understand what I was talking about simply because they don’t actually generate novel things. Predictors can be very intelligent, but at the end of the day they are not much more than big encyclopedias drawing on genuine thinkers’ knowledge. It is not wrong to acknowledge this and not waste your time speaking into the air.
MasterFruit3455 t1_jegwqrj wrote
Reply to This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
It's called a coinflip.
Aromatic_Highlight27 OP t1_jegwpz1 wrote
Reply to comment by TemetN in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
Let's put it another way. How long before it will be legal for a hospital (or a company), say, to make diagnoses and prescribe drugs without human doctors being involved at any point in the process?
HarbingerDe t1_jegwjgf wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>If I propose to end slavery in the 1800s, your objection of "who would pick the cotton!?" is not a rebuttal.
Typical right-wing / conservative move of, "uhh actually we're totally the ones who are against slavery... Yeah... It was us..."
The scenarios are not analogous at all.
>New horizons will be created. What they will be I cannot even begin to guess.
You are fundamentally at odds with the premise of this sub; that seems to be the biggest thing you're not grasping.
If you believe we're on the cusp of developing a self-improving entity that is more intelligent, more creative, and all-around more capable than a human at any given task, then there cannot be any new horizons that an AI wouldn't be better positioned to take advantage of.
agorathird t1_jegwf73 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
It's not that it's naive; it's that you have not thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is equal to us, but also inherently superior due to its computational capacities. There is no need for us after that.
You literally are not describing any useful idea of AGI; in your responses you are only describing the most surface-level uses of text-only LLMs.
The r/futurology work-week stuff you talk about is possible right now with the current public models of ChatGPT. It's been possible for a while. But it's not implemented, due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing that change hasn't been critically dire for vast swaths of people thus far.
TemetN t1_jegw78p wrote
Reply to How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
Define people I guess? A fifth? Half? Almost all? Like another commenter said, some people are already comfortable, and it's worth a reminder that in certain cases machine surgeons have been shown to outperform human ones. That said, even after that takes off, and ignoring the considerations of how much of the population, a huge amount of comfort will depend on soft factors such as early societal reactions and media.
I do think people will at least start to be comfortable in significant numbers sooner rather than later. Mid-2020s, perhaps, for it to be relatively common (a fifth or so, enough not to be shocking), and by 2030 for general acceptance (a majority might consider one).
simmol t1_jegylbm wrote
Reply to How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
I would be very comfortable if there were layers of safety in play such that I am not getting an opinion from just a single machine. For example, multiple independent AIs that come to the same conclusion would be reassuring, and that can be done readily. A reflective module that checks those answers could be useful as well. Once you add multiple layers of protection and the system is proven to be very safe, I would no longer need a human doctor.
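The multi-AI safety layer described above is essentially majority voting with an escalation path. Here is a minimal sketch of that idea; the function and model names are invented for illustration, and the "models" are toy stand-ins, not real diagnostic systems:

```python
# Sketch of a consensus safety layer: query several independent models,
# accept a diagnosis only when enough of them agree, and otherwise
# escalate (e.g. to a human doctor or a reflective checker).
from collections import Counter

def consensus_diagnosis(models, patient_data, min_agreement=2):
    """Return a diagnosis only if at least `min_agreement` models concur,
    else None to signal that the case should be escalated."""
    votes = Counter(model(patient_data) for model in models)
    diagnosis, count = votes.most_common(1)[0]
    return diagnosis if count >= min_agreement else None

# Toy stand-ins for three independent diagnostic AIs.
model_a = lambda data: "condition_x"
model_b = lambda data: "condition_x"
model_c = lambda data: "condition_y"

result = consensus_diagnosis([model_a, model_b, model_c], {"symptom": "cough"})
print(result)  # two of three models agree, so "condition_x" is returned
```

A reflective module would slot in as a second gate on the accepted diagnosis; the key design point is that disagreement never silently resolves to one model's answer.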