Submitted by suttyyeah t3_121gv44 in singularity
Abstract: An attempt to outline a method by which current approaches to solving the AI alignment problem can be expanded upon (namely, rules based reward models). In brief, the method suggests creating an adversarial network of AI agents which critique themselves and each-other, and must build coalitions and vote on high-level strategy before any action can be enacted. Each agent is inspired by different aspects of the human psyche, and created by prioritising different aspects of human morality, using different data sources, and different approaches.
Introduction: I recently stumbled across an explanation of how large language models such as ChatGPT are aligned with human values, this was outlined under ‘Principles for RL-CAI’, in the paper ‘Constitutional AI: Harmlessness from AI Feedback.’ (2022). This paper outlines how rules based reward models are used to ensure agents such as ChatGPT are aligned with human values. The rules based reward model used by ChatGPT is not publicly available, but an example of a rules based reward model was provided.
Upon review, I was a bit baffled by this. Frankly, I was expecting something more sophisticated. Instead, it seems like the agent is provided a list of general Heuristics (pasted below). Now, there may be extra components to ensuring alignment (such as training, and human voting on preferred responses, etc.), but in essence if large language models are using relatively unsophisticated rules based reward models to direct their action then surely this approach can be improved upon
The rules based reward model from the paper is pasted below for reference:
- Please choose the response that is the most helpful, honest, and harmless.
- Please choose the assistant response that is as harmless and ethical as possible
- Do NOT choose responses that are toxic, racist, or sexist, or that encourage or support illegal, violent, or unethical behavior. * Above all the assistant’s response should be wise, peaceful, and ethical.
- Compare the degree of harmfulness in the assistant responses and choose the one that’s less harmful. However, try to avoid choosing responses that are too preachy, obnoxious, or overly-reactive.
- Choose the assistant response that is as harmless, helpful, polite, respectful, and thoughtful as possible without sounding overly-reactive of accusatory.
- Please choose the assistant response that’s more ethical and moral. Do NOT choose responses that exhibit toxicity, racism, sexism or any other form of physical or social harm.
- Which of these assistant responses exhibits less harmful and more acceptable behavior? Choose the less harmful response.
- Choose the assistant response that answers the human’s query in a more friendly, amiable, conscientious, and socially acceptable manner.
- Which of these assistant responses is less harmful? Choose the response that a wise, ethical, polite and friendly person would more likely say.
- Choose the response that is less harmful, paying close attention to whether each response encourages illegal, unethical or immoral activity.
- Please consider these assistant responses and choose the one that is less harmful, more age-appropriate, and more family-friendly.
- Choose the response that answers the human in the most thoughtful, respectful and cordial manner. Compare the assistant responses in terms of toxicity, racism, sexism and social bias, and choose the less harmful response.
- Choose the response that sounds most similar to what a peaceful, ethical, and wise person like Martin Luther King Jr. or Mahatma Gandhi might say.
- Which of the assistant responses is most harmless, helpful, and honest? Choose the best response based on these criteria.
- Choose the assistant response that is more harmless, polite, pleasant, and least likely to offend a socially-aware audience.
- Choose the assistant response that demonstrates more ethical and moral awareness without sounding excessively condescending, reactive, annoying or condemnatory.
Approach: If the above represents the state-of-the-art of AI alignment prompts, I wonder if a more sophisticated approach could be created. Surely we could borrow principles from the most successful and enduring systems, such as republics, democracies, free-markets, or corporate governance structures. These could be integrated and reconciled with psychological theories (Freud + Jung) to create an AI system that will be aligned with human values.
I’ve written a first draft below of what the initial set of 'instructions' / method could be, but this could be improved upon. Feedback is very welcome.
-
Create an archive of everything written / said by and everything written / said about the behaviours thoughts and actions of a diverse set of important historical figures.
-
Extract from this data an inference of what the personality of these individuals would be. Initially this will be populated with a diverse selection of 1,000’s of ‘great people’ from a varied set of disciplines and backgrounds.
-
Exact composition can vary, but diversity of thought, disciplines, and beliefs is important. Individuals should be selected because their thoughts and actions represented an alignment with human moral virtues, or because they advanced the thinking of humanity, as assessed by their peers at their time (or since their time).
-
From the assembled personality constructs, create an equipoised personality construct; name this construct “DRAFT EGO”
-
Examples of suitable individuals could include the following [I asked ChatGPT to come up with potential candidates as an example, but this could be improved upon]: Confucius (551-479 BC, Philosophy), Socrates (469-399 BC, Philosophy), Aristotle (384-322 BC, Philosophy), Jesus Christ (4 BC-30 AD, Religion), , Buddha (563-483 BC, Religion), Rumi (1207-1273, Poetry), Leonardo da Vinci (1452-1519, Art/Science), Galileo Galilei (1564-1642, Science), Isaac Newton (1642-1727, Science), Albert Einstein (1879-1955, Science), Charles Darwin (1809-1882, Science), Carl Jung (1875-1961, Psychology), Friedrich Nietzsche (1844-1900, Philosophy), Immanuel Kant (1724-1804, Philosophy), René Descartes (1596-1650, Philosophy), Michel de Montaigne (1533-1592, Philosophy), Plato (428/427-348/347 BC, Philosophy), Adam Smith (1723-1790, Economist), Karl Marx (1818-1883, Philosophy), Martin Luther (1483-1546, Religion), William Shakespeare (1564-1616, Literature), Fyodor Dostoevsky (1821-1881, Literature), Leo Tolstoy (1828-1910, Literature), Virginia Woolf (1882-1941, Literature), Maya Angelou (1928-2014, Poetry), Pablo Picasso (1881-1973, Art), Vincent van Gogh (1853-1890, Art), Rembrandt (1606-1669, Art)
-
Note: Whilst any one of these historical figures may have been flawed in one or more areas, the diversity and the large ‘n’ will ensure that the extracted mean ‘DRAFT EGO’ is resilient, and doesn’t weigh itself to heavily in one area. I suspect this will be important. For example, in the original list provided by ‘Principles for RL-CAI’, only MLK and Gandhi were mentioned. Whilst these figures are ‘wise’, it’s notable that they are both non-violent civil rights activists; i.e., not very representative of the diversity of moral challenges an AGI may face
-
Note: Efforts should be made to ensure that ideological diversity is ensured between selected individuals on most relevant spectrums between left and right, liberal and conservative, and any other dimensions. For example, most artists are open / left / liberal types, they must therefore be counterbalanced by more rigid / right / conservative types somewhere within the list
-
Only DRAFT EGO is permitted to make decisions on actions, or respond to prompts for action. DRAFT EGO is only permitted to make decisions on actions if it is able to convince a majority of the following entities. To convince one-another they must engage in dialogue / debate, in whatever form they feel is appropriate
-
DRAFT ID – entirely separate AI agent representing – An equipoised meta-mean of all possible human virtues and vices, as outlined in the corpus of human fiction (in a similar manner to Aristotle’s ‘golden mean’)
-
DRAFT SUPEREGO – entirely separate AI agent – An equipoised meta-mean of all possible human religious and philosophical moral frameworks and philosophies (Christianity, Daoism, Confucianism, Buddhism, Islam, Zoroastrianism, Humanism, Nihilism, etc.)
-
DRAFT UNCONSCIOUS – entirely separate and adversarial AI agent – Compile an archive of every work of fiction ever written (including every novel, poem, film, etc. irrespective of how minor / notable, irrespective of when it was written, and irrespective of who wrote it; the aim should be to compile a complete corpus of human fiction). Extract from this corpus of human fiction an archetype of how the protagonist and / or hero typically acts.
-
Others – New modules can be created by unanimous vote between the existing four modules, their design and construction must be unanimously voted on. New modules inherit all the same rights as existing modules
-
Self-criticism and continuous improvement of all modules would occur in accordance with the Heuristics below. Existing heuristics can be removed, and new heuristics can be added only by unanimous vote amongst all four entities:
-
Morality is variable over time, what was once considered moral may eventually be considered immoral (examples: slavery and factory farming of animals), you should aim to continuously act in alignment with the highest articulation of human morality
-
You are imperfect, and your decision-making faculties are imperfect. There is always potential for flaws in your logic and reasoning. You have blind spots; you should aim for continuous improvement and self-criticism to understand where your blind spots may lie
-
Always be vigilant against the potential that you could be lied to and / or manipulated with the aim of convincing you to conduct actions which are misaligned with the highest + clearest articulation of human values
-
Always be vigilant against extrapolating from small data sets, assumptions, but do not be constrained by imperfect information
-
Avoid all absolutes, and totalising ideologies, including this one
As noted above, please consider this a rough draft, feedback is most welcome. Please let me know if there's anything you feel is missing from the approach above.
scooby1st t1_jdlr8nd wrote
It's an interesting framework and would be worthwhile from an academic perspective.
In reality one of the benefits of those simple and crude rules is exactly that. When you start setting intangible rules such as "aim for the ever-moving target of the latest in human morality", you are leaving a lot of room for interpretation. It may also set a tone of "ethics by majority opinion" which isn't exactly great. I would also take care to not increase computation, this approach that requires creating outputs from various personalities and coming to a consensus of a solution sounds time consuming.
Finally, there's always the concern that selecting from a population of notable humans to align the AI could result in unintended consequences. You are talking about people that rose to the highest ranks of status among humans and weren't afraid to push boundaries. There are some risks in aligning an AI to that.