Joe Carlsmith Audio

Full audio for "Scheming AIs: Will AIs fake alignment during training in order to get power?"

November 15, 2023
Joe Carlsmith
Chapters
2:14
0. Introduction
13:02
0.1 Preliminaries
16:52
0.2 Summary of the report
17:21
0.2.1 Summary of section 1
19:42
0.2.2 Summary of section 2
36:13
0.2.3 Summary of section 3
40:03
0.2.4 Summary of section 4
51:42
0.2.5 Summary of section 5
54:53
0.2.6 Summary of section 6
56:12
1. Scheming and its significance
57:09
1.1 Varieties of fake alignment
58:10
1.1.1 Alignment fakers
1:00:10
1.1.2 Training-gamers
1:05:54
1.1.3 Power-motivated instrumental training-gamers, or “schemers”
1:07:29
1.1.4 Goal-guarding schemers
1:13:22
1.2 Other models training might produce
1:14:01
1.2.1 Terminal training-gamers (or, “reward-on-the-episode seekers”)
1:16:57
1.2.2 Models that aren’t playing the training game
1:17:35
1.2.2.1 Training saints
1:19:03
1.2.2.2 Misgeneralized non-training-gamers
1:22:08
1.2.3 Contra “internal” vs. “corrigible” alignment
1:23:01
1.2.4 The overall taxonomy
1:23:51
1.3 Why focus on schemers in particular?
1:24:29
1.3.1 The type of misalignment I’m most worried about
1:27:42
1.3.2 Contrast with reward-on-the-episode seekers
1:28:01
1.3.2.1 Responsiveness to honest tests
1:31:09
1.3.2.2 Temporal scope and general “ambition”
1:34:31
1.3.2.3 Sandbagging and “early undermining”
1:40:28
1.3.3 Contrast with models that aren’t playing the training game
1:46:27
1.3.4 Non-schemers with schemer-like traits
1:48:34
1.3.5 Mixed models
1:51:50
1.4 Are theoretical arguments about this topic even useful?
1:54:17
1.5 On “slack” in training
2:00:48
2. What’s required for scheming?
2:01:37
2.1 Situational awareness
2:09:35
2.2 Beyond-episode goals
2:09:45
2.2.1 Two concepts of an “episode”
2:09:53
2.2.1.1 The incentivized episode
2:14:48
2.2.1.2 The intuitive episode
2:21:01
2.2.2 Two sources of beyond-episode goals
2:22:05
2.2.2.1 Training-game-independent beyond-episode goals
2:23:59
2.2.2.1.1 Are beyond-episode goals the default?
2:25:34
2.2.2.1.2 How will models think about time?
2:28:42
2.2.2.1.3 The role of “reflection”
2:31:28
2.2.2.1.4 Pushing back on beyond-episode goals using adversarial training
2:33:17
2.2.2.2 Training-game-dependent beyond-episode goals
2:35:19
2.2.2.2.1 Can gradient descent “notice” the benefits of turning a non-schemer into a schemer?
2:39:23
2.2.2.2.2 Is SGD pulling scheming out of models by any means necessary?
2:41:48
2.2.3 “Clean” vs. “messy” goal-directedness
2:50:50
2.2.3.1 Does scheming require a higher standard of goal-directedness?
2:57:57
2.2.4 What if you intentionally train models to have long-term goals?
2:58:42
2.2.4.1 Training the model on long episodes
3:01:52
2.2.4.2 Using short episodes to train a model to pursue long-term goals
3:06:06
2.2.4.3 How much useful, alignment-relevant cognitive work can be done using AIs with short-term goals?
3:14:44
2.3 Aiming at reward-on-the-episode as part of a power-motivated instrumental strategy
3:15:42
2.3.1 The classic goal-guarding story
3:16:48
2.3.1.1 The goal-guarding hypothesis
3:17:30
2.3.1.1.1 The crystallization hypothesis
3:22:03
2.3.1.1.2 Would the goals of a would-be schemer “float around”?
3:25:10
2.3.1.1.3 What about looser forms of goal-guarding?
3:30:59
2.3.1.1.4 Introspective goal-guarding methods
3:33:00
2.3.1.2 Adequate future empowerment
3:33:50
2.3.1.2.1 When is the “pay off” supposed to happen?
3:36:45
2.3.1.2.2 Even if the model’s values survive this generation of training, will they survive long enough?
3:40:45
2.3.1.2.3 Will escape/take-over be suitably likely to succeed?
3:42:34
2.3.1.2.4 Will the time horizon of the model’s goals extend to cover escape/take-over?
3:44:24
2.3.1.2.5 Will the model’s values get enough power after escape/takeover?
3:45:52
2.3.1.2.6 How much does the model stand to gain from not training-gaming?
3:49:04
2.3.1.2.7 How “ambitious” is the model?
3:54:11
2.3.1.3 Overall assessment of the classic goal-guarding story
3:55:11
2.3.2 Non-classic stories
3:55:30
2.3.2.1 AI coordination
4:00:31
2.3.2.2 AIs with similar values by default
4:02:26
2.3.2.3 Terminal values that happen to favor escape/takeover
4:06:33
2.3.2.4 Models with false beliefs about whether scheming is a good strategy
4:08:08
2.3.2.5 Self-deception
4:10:21
2.3.2.6 Goal-uncertainty and haziness
4:12:54
2.3.2.7 Overall assessment of the non-classic stories
4:14:43
2.4 Take-aways re: the requirements of scheming
4:15:26
2.5 Path dependence
4:18:57
3. Arguments for/against scheming that focus on the path that SGD takes
4:21:01
3.1 The training-game-independent proxy-goals story
4:25:37
3.2 The “nearest max-reward goal” story
4:30:44
3.2.1 Barriers to schemer-like modifications from SGD’s incrementalism
4:32:14
3.2.2 Which model is “nearest”?
4:32:51
3.2.2.1 The common-ness of schemer-like goals in goal space
4:36:06
3.2.2.2 The nearness of non-schemer goals
4:41:15
3.2.2.3 The relevance of messy goal-directedness to nearness
4:42:53
3.2.3 Overall take on the “nearest max-reward goal” argument
4:43:45
3.3 The possible relevance of properties like simplicity and speed to the path SGD takes
4:45:56
3.4 Overall assessment of arguments that focus on the path SGD takes
4:47:14
4. Arguments for/against scheming that focus on the final properties of the model
4:47:50
4.1 Contributors to reward vs. extra criteria
4:50:24
4.2 The counting argument
4:57:11
4.3 Simplicity arguments
4:57:22
4.3.1 What is “simplicity”?
5:01:06
4.3.2 Does SGD select for simplicity?
5:02:35
4.3.3 The simplicity advantages of schemer-like goals
5:04:39
4.3.4 How big are these simplicity advantages?
5:13:07
4.3.5 Does this sort of simplicity-focused argument make plausible predictions about the sort of goals SGD selects?
5:15:36
4.3.6 Overall assessment of simplicity arguments
5:16:10
4.4 Speed arguments
5:18:02
4.4.1 How big are the absolute costs of this extra reasoning?
5:21:01
4.4.2 How big are the costs of this extra reasoning relative to the simplicity benefits of schemer-like goals?
5:22:45
4.4.3 Can we actively shape training to bias towards speed over simplicity?
5:26:07
4.5 The “not-your-passion” argument
5:28:27
4.6 The relevance of “slack” to these arguments
5:29:19
4.7 Takeaways re: arguments that focus on the final properties of the model
5:30:49
5. Summing up
5:45:50
6. Empirical work that might shed light on scheming
5:50:50
6.1 Empirical work on situational awareness
5:52:20
6.2 Empirical work on beyond-episode goals
5:55:45
6.3 Empirical work on the viability of scheming as an instrumental strategy
5:57:30
6.4 The “model organisms” paradigm
5:58:46
6.5 Traps and honest tests
6:02:06
6.6 Interpretability and transparency
6:03:51
6.7 Security, control, and oversight
6:06:24
6.8 Other possibilities
More Info

This is the full audio for my report "Scheming AIs: Will AIs fake alignment during training in order to get power?"

(I’m also posting audio for individual sections of the report on this podcast, but the ordering was getting messed up on various podcast apps, and I think some people might want one big audio file regardless, so here it is. I’m going to be posting the individual sections one by one, in the right order, over the coming days.)

Full text of the report here: https://arxiv.org/abs/2311.08379
Summary here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power
