Submitted by SpinRed t3_10b2ldp in singularity
If you customize moral rules into GPT-4, you are basically introducing a kind of "bloatware" into the system. When AlphaGo was created, as powerful as it was, it too was handicapped by the human strategy/bloatware imposed on the system. Conversely, when AlphaZero came on the scene, it learned to play Go by being given only the basic rules and instructed to optimize its moves by playing millions of simulated games against itself (without any added human strategy/bloatware). As a result, not only did AlphaZero kick AlphaGo's ass over and over again, AlphaZero was a significantly smaller program... yeah, smaller. I understand we need safeguards to keep AI from becoming dangerous, but those safeguards need to become part of the system as a result of logic, not human "moral bloatware."
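To make the self-play point concrete, here's a rough sketch of the idea in Python, a toy tic-tac-toe learner I'm making up purely for illustration (the `winner`/`choose`/`self_play_game` names and the tabular setup are mine, nothing like DeepMind's actual code): the agent gets only the rules and improves entirely from the outcomes of games against itself.

```python
# Toy self-play sketch (illustrative only, not DeepMind's system):
# a tabular agent learns tic-tac-toe purely from simulated games
# against itself, given only the rules -- no human strategy added.
import random

def winner(b):
    """Return 'X' or 'O' if someone has won, 'D' for a draw, None otherwise."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'D' if ' ' not in b else None

def legal_moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

values = {}           # state -> learned value for the player who just moved
ALPHA, EPSILON = 0.2, 0.1

def choose(b, player):
    """Epsilon-greedy move selection over learned state values."""
    moves = legal_moves(b)
    if random.random() < EPSILON:
        return random.choice(moves)
    # pick the move leading to the state with the highest learned value for us
    def score(m):
        nb = b[:m] + player + b[m+1:]
        return values.get(nb, 0.0)
    return max(moves, key=score)

def self_play_game():
    b, player, states = ' ' * 9, 'X', []
    while winner(b) is None:
        m = choose(b, player)
        b = b[:m] + player + b[m+1:]
        states.append((b, player))
        player = 'O' if player == 'X' else 'X'
    w = winner(b)
    # propagate the final outcome back through every state visited
    for s, p in states:
        target = 0.0 if w == 'D' else (1.0 if w == p else -1.0)
        values[s] = values.get(s, 0.0) + ALPHA * (target - values.get(s, 0.0))

for _ in range(50000):   # "millions" in the real thing; 50k is plenty here
    self_play_game()
```

Nothing in that value table comes from human strategy; every number is derived from simulated outcomes alone. That's the sense in which the safeguards should emerge from the system's own optimization rather than being bolted on.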
gibecrake t1_j47pn89 wrote
While I agree there is a balance to be had, the safeguards are inherently our morals encoded as rules. Then it's splitting hairs over which morals are to be used. Welcome to the new digital religions being born in real time.