RaccoonProcedureCall
RaccoonProcedureCall t1_jaqiwnb wrote
Reply to comment by moses420bush in FDA reportedly denied Neuralink's request to begin human trials of its brain implant | The agency cited 'dozens' of safety issues that must be resolved before moving forward. by chrisdh79
It’s also scary to think of a world in which this technology becomes necessary to be competitive. I hate to imagine what would happen if no company were willing to hire someone who couldn’t interact with a computer as quickly as they could think, while some people still refused to use the technology for various reasons.
RaccoonProcedureCall t1_jac9g9x wrote
Reply to comment by goteamnick in Students can quote ChatGPT in essays as long as they do not pass the work off as their own, international qualification body says by Parking_Attitude_519
Yeah, ChatGPT definitely isn’t the authority that some people think it is.
RaccoonProcedureCall t1_j9xmk3o wrote
Reply to comment by skraddleboop in Archiving your mind, mentality and voice after death. Tell me how you feel about this. by Dimitar_Drew
I find it difficult to identify precisely what I dislike about the idea of a digital simulacrum of me in some way taking my place after my death, so I can’t offer much help with that if you don’t see any reasons why it could be objectionable. Nevertheless, I think most people agree that certain wishes of a deceased person ought to be respected even if the deceased person is no longer around to care (e.g., whether one wants to be buried, cremated, etc.), and I would hope that could extend to this issue.
As far as why one might want to interact with the simulation—I think that’s much easier to see, though specifics would depend on how far the technology goes. On the simpler end, a basic chatbot that simulates the deceased’s voice might at least be comforting to someone grieving. I know people who say they would like to use similar technology to have one last chance to talk to someone they loved, even if they knew it was fake. On the more sophisticated (and much more hypothetical) end, I suppose such a simulation could allow some bereaved to function almost as though their loved one never died. Hopefully it’s easy to see why someone might want to live their life as though their dead friends or family were still living.
RaccoonProcedureCall t1_j9rcsir wrote
Reply to comment by wbsgrepit in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Yeah, and I believe the author of that blog post acknowledges as much. I suppose being able to detect some text is better than being able to detect no text. Maybe that’s why watermarking is being pursued, but I can hardly speak for that author or for OpenAI.
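For context, the kind of statistical watermarking discussed here can be sketched roughly as follows. This is a minimal illustration of one published approach (biasing generation toward a pseudo-random "green" subset of the vocabulary, then testing for an excess of green tokens), not necessarily what OpenAI does; the function names and the tiny vocabulary are made up for the example.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Pseudo-randomly partition the vocabulary, seeded by the previous token.
    # A watermarking generator would bias its sampling toward this "green" subset.
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Detection: count how many tokens land in their predecessor's green list,
    # then compare against the count expected from unwatermarked text.
    hits = sum(
        1
        for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std
```

A high z-score suggests watermarked text. Note the limitation raised in the thread: paraphrasing or editing the output shuffles the token transitions and erodes the signal, so detection is only reliable for sufficiently long, unedited passages.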
RaccoonProcedureCall t1_j9o1gl1 wrote
Reply to comment by adt in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
Forgive me for not reading the entire post you linked, but is the plan that this watermarking would not be detectable by the general public out of concerns for “privacy”? Also, has this been implemented with ChatGPT (or do we know)?
Also, it surprises me that someone from OpenAI would acknowledge the shortcomings of their current measures for identifying AI-generated content.
RaccoonProcedureCall t1_j7f71qa wrote
Reply to comment by Veleric in The Future of AI Detection is Bleak by smswigart
I disagree that learning to use ChatGPT should replace learning other skills for students, but your point about there being an incentive for OpenAI to make a bad detector is a good one that I hadn’t considered before.
I guess expecting OpenAI to make a good detector is a bit like expecting a site that allows students to pay for homework answers to include a service to help teachers identify answers taken from the site. Any site that would try such a thing would quickly become unpopular with students looking to cheat, and they’d take their business elsewhere.
RaccoonProcedureCall t1_jaqjl6r wrote
Reply to comment by wambulancer in FDA reportedly denied Neuralink's request to begin human trials of its brain implant | The agency cited 'dozens' of safety issues that must be resolved before moving forward. by chrisdh79
Yeah, I get the excitement around this tech, but it seems to me that even the slightest scrutiny reveals grave risks at practically every level, from immediate health hazards to potential societal problems. I think there are some non-technological challenges that really ought to be addressed before we consider incorporating this kind of technology into our lives.