Recent comments in /f/singularity

t1_jegylbm wrote

I would be very comfortable if there are layers of safety in play, such that I'm not getting an opinion from just a single machine. For example, multiple independent AIs that come to the same conclusion would be reassuring, and that can be done readily. A reflective module that checks these answers could be useful as well (something like the sketch below). Once you add multiple layers of protection and the system is proven to be very safe, then I no longer need a human doctor.
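Just to make the idea concrete, here's a minimal Python sketch of what that layering could look like. Everything here is hypothetical: the model names and the `ask()` helper are stand-ins for whatever real model APIs you'd actually call.

```python
from collections import Counter

def ask(model_name: str, prompt: str) -> str:
    """Placeholder for a call to one independent diagnostic model."""
    raise NotImplementedError("wire a real model API in here")

def layered_diagnosis(prompt: str, models: list[str], min_agree: int) -> str | None:
    # Layer 1: ask several independent models and look for consensus.
    answers = [ask(m, prompt) for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes < min_agree:
        return None  # no consensus -> escalate to a human doctor

    # Layer 2: a separate "reflective" model audits the consensus answer.
    verdict = ask("reflector", f"Check this diagnosis for errors: {answer}")
    return answer if verdict.strip().upper().startswith("OK") else None

# Usage (hypothetical):
# diagnosis = layered_diagnosis(symptoms, ["model_a", "model_b", "model_c"], min_agree=2)
```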

2

OP t1_jegyip1 wrote

A pill printer, meaning people would be able to manufacture drugs at home? Even ignoring the feasibility, do you think devices like that would themselves be legal? Seems even worse than an AI prescriber to me. Also, do you think this kind of capability will be available by the mid-2020s as well?

1

t1_jegy8rk wrote

That's an interesting question, but I think it's probably even harder to answer, honestly, since that's largely a matter of social/cultural change. I'd particularly note how messy and incoherent America's drug laws are in this case.

In practice I might actually expect something like a pill printer to render this obsolete, rather than the change happening some other way.

1

t1_jegy6d4 wrote

Basically, I feel like if you're going to give LLMs many more capabilities through third-party plugins, then you should probably use a weaker version of the LLM to save computational power. The amount of computation involved in answering a single prompt is much higher for an LLM with a large number of parameters than for a smaller one. You seemingly get better/more accurate answers by using GPT-4 over, say, GPT-3, but if third-party apps can compensate for the LLM in thousands of different ways, it would be prudent to use GPT-3 with TaskMatrix.ai as opposed to GPT-4. At least that's how I see it (rough numbers below).
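For a rough sense of scale: a common rule of thumb is that transformer inference costs on the order of 2 FLOPs per parameter per generated token. The sketch below uses that rule with GPT-3's published 175B parameter count; GPT-4's size is unpublished, so the 1T figure is purely an assumption for illustration.

```python
def inference_flops(n_params: float, n_tokens: int) -> float:
    """Rough forward-pass cost: ~2 FLOPs per parameter per generated token."""
    return 2 * n_params * n_tokens

TOKENS = 500                               # a medium-length answer
gpt3 = inference_flops(175e9, TOKENS)      # ~1.8e14 FLOPs
gpt4ish = inference_flops(1e12, TOKENS)    # ~1e15 FLOPs (assumed size)
print(f"A 1T-parameter model costs ~{gpt4ish / gpt3:.1f}x more per answer")
```

Under those assumptions, every plugin-augmented answer from the bigger model costs roughly 6x the compute, which is the whole trade-off being argued here.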

10

OP t1_jegy36p wrote

Do you really have this kind of trust in the CURRENT systems? I'm not thinking of knowledge here, but of reasoning capabilities. Current systems do have a lot of limitations and make mistakes, don't they? Of course a human expert can also be wrong, but are we really at the point where a machine error is less likely, and less likely to be catastrophic? Keep in mind I'm comparing pure AI vs. AI-assisted doctors.

Also, since you say you'd already trust a medical AI, can you please tell me which one is already powerful enough to have earned that trust from you?

1

t1_jegxy8i wrote

English/Philosophy majors can work in lots of different positions, even non-writing oriented ones. And experience and internships will matter more than your degree.

Also seconding someone else: don’t take career advice from this sub.

On another note, where are you considering studying English and Philosophy? I’m looking to return to university to study the same dual major.

1

t1_jegxl0f wrote

Ok, so there were a bunch of fake signatures. The post you linked also says "Edit: Just to clarify, the open letter is real and most of the signatures are real (Elon Musk, Gary Marcus, Emad (stablediffusion creator) all did sign it and fully support to ban research on GPT models stronger than GPT4 for at least 6 months". So my point stands.

1

t1_jegxed9 wrote

Personally, I’d want a biopsy to confirm, but after that I’d follow the AI prescribed course of treatment.

If a human doctor disagreed, I'd want the two of them to talk and figure out why. The theoretical “medical model” is going to know A LOT more than the human doctor, but maybe the human doctor has made a creative leap to some conclusion. So let them talk and find out why they disagree.

5

t1_jegx4cc wrote

I agree with you that it should be open, but it's not "entirely trained on public data". For the model to be useful, someone has to feed it prompts and ideal responses, and that was actually tens of thousands of people employed by OpenAI. They fine-tuned the model into what it is today.

9

t1_jegx2xm wrote

Imo, the emphasis in education should be less on details and more on grasping the big picture. Right now, the system is such that students put a lot of emphasis on knowing all the details in college and then build on that knowledge to grasp the big picture once they've been employed in the same industry for at least 5-10 years. Given that AI will handle a lot of these details, the current education system, which emphasizes gaining knowledge at this refined level, is obsolete. And if you de-emphasize the details, you can spend a lot more time on the big picture, and as such accelerate students' understanding and progression toward essentially managerial roles.

4

t1_jegwzwc wrote

You get it, dude, don't listen to them. I had this revelation recently too, while trying to explain the implications of AI. I realized that they couldn't understand what I was talking about simply because they don't actually generate novel things. Predictors can be very intelligent, but at the end of the day they're not much more than big encyclopedias drawing on genuine thinkers' knowledge. It's not wrong to acknowledge this and not waste your time speaking into the air.

1

t1_jegwjgf wrote

>If I propose to end slavery in the 1800s, your objection of "who would pick the cotton!?" is not a rebuttal.

Typical right-wing / conservative move of, "uhh actually we're totally the ones who are against slavery... Yeah... It was us..."

The scenarios are not analogous at all.

>New horizons will be created. What they will be I cannot even begin to guess.

You are fundamentally at odds with the premise of the sub; this seems to be the biggest thing you're not grasping.

If you believe we're on the cusp of developing a self-improving entity that is more intelligent, more creative, and all-around more capable than a human at any given task, then there cannot be any new horizons that an AI wouldn't be better able to take advantage of.

2

t1_jegwf73 wrote

It's not naive; you have not thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is equal to us but also inherently superior due to its computational capacities. There is no need for us after that.

You are literally not describing any useful idea of AGI; in your responses you only describe the most surface-level uses of text-modality-only LLMs.

The r/futurology work-week stuff you talk about is possible right now with current public models of ChatGPT. It's been possible for a while. But it's not implemented, due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing that change hasn't been critically dire for vast swaths of people thus far.

2

t1_jegw78p wrote

Define "people", I guess? A fifth? Half? Almost all? Like another commenter said, some people are already comfortable, and it's worth remembering that in certain cases machine surgeons have been shown to outperform human ones. That said, even after that takes off, and setting aside how much of the population we mean, a huge amount of comfort will depend on soft factors such as early societal reactions and media coverage.

I do think people will at least start to be comfortable in significant numbers sooner rather than later: perhaps the mid-2020s for it to be relatively common (a fifth or so, enough not to be shocking), and by 2030 for general acceptance (a majority might consider one).

2