0.2 Summary of the report
0.2.1 Summary of section 1
0.2.2 Summary of section 2
0.2.3 Summary of section 3
0.2.4 Summary of section 4
0.2.5 Summary of section 5
0.2.6 Summary of section 6
1. Scheming and its significance
1.1 Varieties of fake alignment
1.1.3 Power-motivated instrumental training-gamers, or “schemers”
1.1.4 Goal-guarding schemers
1.2 Other models training might produce
1.2.1 Terminal training-gamers (or, “reward-on-the-episode seekers”)
1.2.2 Models that aren’t playing the training game
1.2.2.2 Misgeneralized non-training-gamers
1.2.3 Contra “internal” vs. “corrigible” alignment
1.2.4 The overall taxonomy
1.3 Why focus on schemers in particular?
1.3.1 The type of misalignment I’m most worried about
1.3.2 Contrast with reward-on-the-episode seekers
1.3.2.1 Responsiveness to honest tests
1.3.2.2 Temporal scope and general “ambition”
1.3.2.3 Sandbagging and “early undermining”
1.3.3 Contrast with models that aren’t playing the training game
1.3.4 Non-schemers with schemer-like traits
1.4 Are theoretical arguments about this topic even useful?
1.5 On “slack” in training
2. What’s required for scheming?
2.1 Situational awareness
2.2 Beyond-episode goals
2.2.1 Two concepts of an “episode”
2.2.1.1 The incentivized episode
2.2.1.2 The intuitive episode
2.2.2 Two sources of beyond-episode goals
2.2.2.1 Training-game-independent beyond-episode goals
2.2.2.1.1 Are beyond-episode goals the default?
2.2.2.1.2 How will models think about time?
2.2.2.1.3 The role of “reflection”
2.2.2.1.4 Pushing back on beyond-episode goals using adversarial training
2.2.2.2 Training-game-dependent beyond-episode goals
2.2.2.2.1 Can gradient descent “notice” the benefits of turning a non-schemer into a schemer?
2.2.2.2.2 Is SGD pulling scheming out of models by any means necessary?
2.2.3 “Clean” vs. “messy” goal-directedness
2.2.3.1 Does scheming require a higher standard of goal-directedness?
2.2.4 What if you intentionally train models to have long-term goals?
2.2.4.1 Training the model on long episodes
2.2.4.2 Using short episodes to train a model to pursue long-term goals
2.2.4.3 How much useful, alignment-relevant cognitive work can be done using AIs with short-term goals?
2.3 Aiming at reward-on-the-episode as part of a power-motivated instrumental strategy
2.3.1 The classic goal-guarding story
2.3.1.1 The goal-guarding hypothesis
2.3.1.1.1 The crystallization hypothesis
2.3.1.1.2 Would the goals of a would-be schemer “float around”?
2.3.1.1.3 What about looser forms of goal-guarding?
2.3.1.1.4 Introspective goal-guarding methods
2.3.1.2 Adequate future empowerment
2.3.1.2.1 When is the “payoff” supposed to happen?
2.3.1.2.2 Even if the model’s values survive this generation of training, will they survive long enough?
2.3.1.2.3 Will escape/take-over be suitably likely to succeed?
2.3.1.2.4 Will the time horizon of the model’s goals extend to cover escape/take-over?
2.3.1.2.5 Will the model’s values get enough power after escape/takeover?
2.3.1.2.6 How much does the model stand to gain from not training-gaming?
2.3.1.2.7 How “ambitious” is the model?
2.3.1.3 Overall assessment of the classic goal-guarding story
2.3.2 Non-classic stories
2.3.2.2 AIs with similar values by default
2.3.2.3 Terminal values that happen to favor escape/takeover
2.3.2.4 Models with false beliefs about whether scheming is a good strategy
2.3.2.6 Goal-uncertainty and haziness
2.3.2.7 Overall assessment of the non-classic stories
2.4 Takeaways re: the requirements of scheming
3. Arguments for/against scheming that focus on the path that SGD takes
3.1 The training-game-independent proxy-goals story
3.2 The “nearest max-reward goal” story
3.2.1 Barriers to schemer-like modifications from SGD’s incrementalism
3.2.2 Which model is “nearest”?
3.2.2.1 The common-ness of schemer-like goals in goal space
3.2.2.2 The nearness of non-schemer goals
3.2.2.3 The relevance of messy goal-directedness to nearness
3.2.3 Overall take on the “nearest max-reward goal” argument
3.3 The possible relevance of properties like simplicity and speed to the path SGD takes
3.4 Overall assessment of arguments that focus on the path SGD takes
4. Arguments for/against scheming that focus on the final properties of the model
4.1 Contributors to reward vs. extra criteria
4.2 The counting argument
4.3 Simplicity arguments
4.3.1 What is “simplicity”?
4.3.2 Does SGD select for simplicity?
4.3.3 The simplicity advantages of schemer-like goals
4.3.4 How big are these simplicity advantages?
4.3.5 Does this sort of simplicity-focused argument make plausible predictions about the sort of goals SGD selects?
4.3.6 Overall assessment of simplicity arguments
4.4 Speed arguments
4.4.1 How big are the absolute costs of this extra reasoning?
4.4.2 How big are the costs of this extra reasoning relative to the simplicity benefits of schemer-like goals?
4.4.3 Can we actively shape training to bias towards speed over simplicity?
4.5 The “not-your-passion” argument
4.6 The relevance of “slack” to these arguments
4.7 Takeaways re: arguments that focus on the final properties of the model
6. Empirical work that might shed light on scheming
6.1 Empirical work on situational awareness
6.2 Empirical work on beyond-episode goals
6.3 Empirical work on the viability of scheming as an instrumental strategy
6.4 The “model organisms” paradigm
6.5 Traps and honest tests
6.6 Interpretability and transparency
6.7 Security, control, and oversight