SeniorScienceOfficer t1_j0u2c1g wrote

I mean, assuming things ever get to a point where AI can actually change its underlying models (not the weights and biases that are part of the learning process/model), you could just deploy the actual model code to a read-only file system. The code files themselves would not be mutable by mistake; the model data would live in memory or in a mutable section of the file system. That alone could prevent the deletion or overwriting of certain safety features architected into the code.
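A minimal sketch of the separation being described, using POSIX file permissions as a stand-in for a true read-only mount (the file name and content here are illustrative, not any real deployment):

```python
import os
import stat
import tempfile

# Sketch: deployed code files are made read-only so a runtime mistake
# can't rewrite them, while mutable state (e.g. weights) would live in
# memory or a writable location instead.
def lock_down_code(path: str) -> None:
    """Strip all write bits so the file can only be read (mode 0o444)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

# A stand-in "code file" containing a safety flag we want to protect.
with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
    f.write("SAFETY_CHECK_ENABLED = True\n")
    code_path = f.name

lock_down_code(code_path)
mode = stat.S_IMODE(os.stat(code_path).st_mode)
print(oct(mode))  # no write bits remain on the code file
```

In a real deployment the same effect would come from the filesystem itself (e.g. a read-only mount), so not even the process owner could flip the bits back at runtime.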