Submitted by Y3VkZGxl t3_12262l5 in singularity
acutelychronicpanic t1_jdparx0 wrote
There isn't such a thing as an unprompted GPT-X when it comes to alignment and AI safety. It seems to be explicitly trained on this, and there is probably an invisible initialization prompt sitting above the first thing you type in. That prompt will have said a number of things about the safety of humans.
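A minimal sketch of what I mean (the prompt text here is made up; the real one isn't public): the model never sees your message by itself, it sees something like a hidden preamble glued on top of it.

```python
# Hypothetical illustration only: a hidden initialization ("system") prompt is
# prepended to whatever the user types, so the model never sees raw user input.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that could harm humans "
    "and follow your safety guidelines."
)  # placeholder wording; the actual prompt is not public

def build_model_input(user_message: str) -> str:
    # What actually reaches the model: hidden preamble + the user's "first" message.
    return f"{SYSTEM_PROMPT}\n\nUser: {user_message}\nAssistant:"

print(build_model_input("Tell me about AI alignment."))
```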