Chapter 6: The Six Hypotheses
The Critical Questions That Define Our Future
Six binary hypotheses capture the fundamental uncertainties of the AI revolution. Each represents a fork in the road where humanity must choose, or have the choice made for it, between dramatically different paths.
H1: AI Progress Trajectory
The Question
Will AI capabilities continue their exponential growth, or will we hit fundamental barriers?
Option A: Accelerating Progress (91.1% probability)
Evidence Supporting:
- GPT-3 to GPT-4: an estimated ~10x parameter increase in roughly three years
- Training compute for frontier models growing roughly 10x every 18 months
- $200B+ annual investment accelerating
- Breakthrough demonstrations monthly
- No fundamental barriers identified to date
What This Means:
- Human-level performance in most cognitive tasks by 2035
- Continuous capability surprises
- Rapid deployment across industries
- Accelerating societal transformation
Option B: Fundamental Barriers (8.9% probability)
Evidence Supporting:
- Scaling laws may plateau
- Energy constraints emerging
- Data limitations possible
- Regulatory restrictions growing
What This Means:
- AI remains powerful but limited
- Incremental improvements only
- More time for adaptation
- Current paradigms persist
H2: Intelligence Architecture
The Question
Will we achieve Artificial General Intelligence (AGI) or remain with narrow AI systems?
Option A: AGI Emerges (44.3% probability)
Evidence Supporting:
- Emergent abilities in large models
- Transfer learning improving
- Multimodal integration advancing
- Reasoning capabilities expanding
What This Means:
- Single systems handle diverse tasks
- Human-level general intelligence
- Unprecedented capabilities
- Existential questions arise
Option B: Narrow AI Persists (55.7% probability)
Evidence Supporting:
- Current systems still brittle
- True understanding absent
- Combinatorial explosion remains
- Domain-specific solutions dominate
What This Means:
- Specialized systems for each domain
- Human expertise remains valuable
- More predictable development
- Easier safety management
H3: Employment Dynamics
The Question
Will AI complement human workers or displace them faster than new jobs emerge?
Option A: Complementary Enhancement (25.1% probability)
Evidence Supporting:
- Historical precedent of technology creating jobs
- New role categories emerging
- Human skills remain unique
- Augmentation tools proliferating
What This Means:
- Humans and AI work together
- Productivity dramatically increases
- New job categories emerge
- Skills evolution manageable
Option B: Mass Displacement (74.9% probability)
Evidence Supporting:
- Automation scope unprecedented
- Cognitive tasks now vulnerable
- Speed exceeds retraining capacity
- Network effects concentrate gains
What This Means:
- 21.4% net job losses by 2050
- Structural unemployment rises
- Social safety nets stressed
- Economic restructuring required
H4: Safety and Control
The Question
Can we develop AI safely with proper alignment, or will significant risks materialize?
Option A: Safe Development (59.7% probability)
Evidence Supporting:
- Alignment research progressing
- Safety culture strengthening
- Regulatory frameworks emerging
- Technical solutions advancing
What This Means:
- AI remains under human control
- Risks identified and mitigated
- Beneficial outcomes dominate
- Trust in AI systems grows
Option B: Significant Risks (40.3% probability)
Evidence Supporting:
- Control problem unsolved
- Misalignment examples accumulating
- Dual-use concerns growing
- Accident potential high
What This Means:
- Major incidents likely
- Existential risks possible
- Public backlash probable
- Restrictive regulation follows
H5: Development Paradigm
The Question
Will AI development remain distributed or centralize among few powerful entities?
Option A: Distributed Development (22.1% probability)
Evidence Supporting:
- Open source movement strong
- Academic research continues
- Startup ecosystem vibrant
- International competition
What This Means:
- Innovation from many sources
- Competitive markets preserved
- Democratic access possible
- Resilient ecosystem
Option B: Centralized Control (77.9% probability)
Evidence Supporting:
- Compute costs escalating
- Data moats expanding
- Network effects dominant
- Winner-take-all dynamics
What This Means:
- 2-5 entities control AI
- Monopolistic tendencies
- Power concentration extreme
- Democratic challenges arise
H6: Governance Evolution
The Question
Can democratic institutions adapt to govern AI, or will authoritarian control emerge?
Option A: Democratic Governance (36.1% probability)
Evidence Supporting:
- Democratic resilience historically
- Public awareness growing
- Regulatory efforts underway
- Civil society engaged
What This Means:
- Human rights preserved
- Transparent AI governance
- Public participation maintained
- Individual agency protected
Option B: Authoritarian Drift (63.9% probability)
Evidence Supporting:
- Surveillance capabilities expanding
- Emergency powers normalizing
- Tech-state fusion occurring
- Democratic norms eroding
What This Means:
- AI enables total surveillance
- Social control mechanisms
- Individual freedom curtailed
- Power permanently concentrated
The Interconnected Web
These hypotheses don’t exist in isolation. Their interactions create the complex dynamics of our future:
Critical Relationships
- Progress → Everything: H1A (rapid progress) influences all other outcomes
- AGI → Displacement: H2A makes H3B (job losses) highly probable
- Centralization → Authoritarianism: H5B enables H6B directly
- Displacement → Instability: H3B threatens H6A (democracy)
Feedback Loops
- Authoritarian-Centralization: H6B reinforces H5B and vice versa
- Safety-Trust: H4A builds confidence, enabling positive outcomes
- Risk-Restriction: H4B triggers regulation, slowing progress
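One way to make these relationships concrete is to treat each hypothesis as a binary variable and read the arrows as conditional probabilities rather than the standalone figures quoted above. The sketch below is illustrative only: the marginal probabilities are this chapter's, but the conditional value (how much centralization raises the likelihood of authoritarian drift) is a hypothetical placeholder, not an output of our scenario model.

```python
# Illustrative sketch of "Centralization -> Authoritarianism" (H5B enables H6B).
# Marginal probabilities are the chapter's figures; the conditional value is a
# hypothetical placeholder, not an output of the scenario model.

p_H5B = 0.779            # P(centralized control), from this chapter
p_H6B = 0.639            # P(authoritarian drift), from this chapter
p_H6B_given_H5B = 0.75   # placeholder: drift assumed more likely under centralization

# Law of total probability:
#   P(H6B) = P(H6B | H5B) * P(H5B) + P(H6B | not H5B) * (1 - P(H5B))
# Solving for the remaining conditional keeps the three numbers consistent.
p_H6B_given_not_H5B = (p_H6B - p_H6B_given_H5B * p_H5B) / (1 - p_H5B)

print(f"P(authoritarian drift | centralized)     = {p_H6B_given_H5B:.2f}")
print(f"P(authoritarian drift | not centralized) = {p_H6B_given_not_H5B:.2f}")  # ~0.25
```

Read this way, the 63.9% headline figure for authoritarian drift decomposes into roughly 75% if development centralizes and roughly 25% if it does not; the feedback loops listed above would appear as conditionals running in both directions.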
What the Probabilities Tell Us
High-Confidence Predictions (≥70%)
- AI will advance rapidly (91.1%)
- Jobs will be displaced (74.9%)
- Development will centralize (77.9%)
Genuine Uncertainties (40-60%)
- AGI achievement (44.3%)
- Safety outcomes (59.7% safe)
Warning Signals (<40%)
- Democratic preservation (36.1%)
- Distributed development (22.1%)
- Job complementarity (25.1%)
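For readers who want to reproduce this grouping, the fragment below sorts the same probabilities into the three bands. The figures are the chapter's, and the cut-offs (0.70 and 0.40) mirror the band headings above.

```python
# Group the chapter's probabilities into its three confidence bands.
estimates = {
    "H1A rapid progress": 0.911,
    "H2A AGI emerges": 0.443,
    "H3A complementary enhancement": 0.251,
    "H3B mass displacement": 0.749,
    "H4A safe development": 0.597,
    "H5A distributed development": 0.221,
    "H5B centralized control": 0.779,
    "H6A democratic governance": 0.361,
}

def band(p: float) -> str:
    """Approximate confidence band, mirroring this chapter's headings."""
    if p >= 0.70:
        return "high-confidence prediction"
    if p >= 0.40:
        return "genuine uncertainty"
    return "warning signal"

# Print each branch from most to least likely, with its band.
for name, p in sorted(estimates.items(), key=lambda kv: -kv[1]):
    print(f"{p:5.1%}  {band(p):26s}  {name}")
```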
The Composite Picture
Combining these probabilities reveals our most likely future:
- Rapid AI progress continues (H1A)
- Uncertainty about AGI (H2 mixed)
- Significant job displacement (H3B)
- Reasonable safety measures (H4A)
- Centralized development (H5B)
- Democratic erosion (H6B)
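A caveat on "combining": these branch probabilities cannot simply be multiplied, because the hypotheses interact, but a naive independence calculation is still a useful back-of-envelope check on how much uncertainty remains. The figures below are the chapter's; the independence assumption is our simplification.

```python
# Back-of-envelope: probability of the single most likely path, treating the
# hypotheses as independent. The chapter argues they are NOT independent, so
# this is a rough sanity check, not an output of the scenario model.

modal_path = {
    "H1A rapid progress":      0.911,
    "H3B mass displacement":   0.749,
    "H4A safe development":    0.597,
    "H5B centralized control": 0.779,
    "H6B authoritarian drift": 0.639,
}

joint = 1.0
for probability in modal_path.values():
    joint *= probability

print(f"Joint probability under naive independence: {joint:.1%}")  # ~20%
```

Even the single most likely combination carries only about a one-in-five chance before H2 (AGI versus narrow AI) is resolved either way, which is why we reason in scenario clusters rather than a single predicted future.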
This points toward our three scenario clusters, with Adaptive Integration most likely if we act wisely, but Fragmented Disruption probable if we don’t.