The Stanford EA Manifesto
Stanford Effective Altruism
I. The Pivotal Moment
We are living through a hinge of history. The technologies being built today, in labs down the road, in offices across the Bay, will reshape what it means to be human. You can feel it in the air: something unprecedented is happening.
- AI capabilities accelerating fast: foundation models, autonomous agents, scientific AI solving problems that stumped researchers for decades
- Biotechnology entering a new era: gene editing, synthetic biology, the first real shots at aging itself
- Space becoming accessible: private launch, satellite constellations, the early infrastructure for a multiplanetary species
- Energy transformation underway: solar cost curves decreasing, a nuclear renaissance with tech giants investing billions, fusion attracting unprecedented private investment, the possibility of abundance
This decade may matter more than any that came before it.
II. Peril and Promise
The future is radically uncertain. What comes next could be a renaissance of human flourishing unlike anything in history, or it could be catastrophe. The difference lies in choices being made right now, by people alive today.
- If we navigate wisely: abundance, diseases cured, suffering reduced, human potential unlocked at scale
- If we fumble the transition: loss of human agency, geopolitical destabilization, catastrophic or even existential risks realized
- We aren't content to "muddle through" — we want the future to go as well as possible
- The window for shaping these trajectories is limited. Decisions that matter are being made now.
III. Techno-Humanism: A Third Way
We reject both blind techno-optimism and fearful retreat. We have entered an age in which powerful technologies deliver tremendous benefits and corrosive effects in equal measure: social media connects billions while fragmenting attention and eroding truth; AI augments human capability while concentrating power and displacing workers. The path forward demands wisdom, not ideology without nuance.
- Technology as a tool, not a destiny — outcomes depend on how we build, deploy, and govern
- Differential acceleration: speed up what helps humanity; forecast and address negative externalities
- Philosophy, ethics, and empirical rigor must guide this generation of development: not just what we can build, but what we should
IV. Our Stance: Moral Ambition
We aim to build for a tomorrow that leverages innovation and technology to genuinely improve human wellbeing. We take seriously that there are unprecedented challenges and opportunities.
- High agency: We are young but capable, adapting quickly, leveraging new tools. We aim to do great work. We are resourceful and we don't give up
- Reject complacency: waiting is itself a choice, often with catastrophic consequences
- Scope sensitivity: billions of lives hang in the balance of decisions being made today, across the globe, in factory farms, and in future generations
- Ambition tempered by humility: in a confusing time of polarized media and rapid change, we could be wrong. We try to reason carefully, account for our biases, and update on evidence. When we fail, we strive to admit the bad call, then get up and try again
V. Standing on Shoulders
We inherit a movement that has already changed the world and is not slowing down. The previous generation of effective altruists built the intellectual frameworks and institutional infrastructure we now deploy at scale.
- Doing Charity Better: organizations like GiveWell, Coefficient Giving, and Giving What We Can have directed billions of dollars to evidence-backed interventions, saving hundreds of thousands of lives, and shifted philanthropic culture toward rigorous accountability and effectiveness. GiveWell has moved $2.6 billion to evidence-backed interventions, averting an estimated 340,000 deaths; Giving What We Can has secured over $2.5 billion in lifetime pledges from 10,000+ members across 95 countries; Coefficient Giving (formerly Open Philanthropy) has deployed over $4 billion since 2014
- AI Safety: taken from a fringe academic concern to mainstream research priority at every major lab
- Biosecurity: prescient warnings about pandemic preparedness, vindicated, with work ongoing
- 80,000 Hours: career capital as leverage, making high-impact paths legible and achievable
VI. Who We Are
We founded SERI (Stanford Existential Risks Initiative). We incubated MATS (ML Alignment Theory Scholars), now one of the premier alignment research programs. We launched EA Virtual Programs, reaching thousands globally. We built the Stanford Alt Protein Project. Our alumni now work at top AI labs, lead nonprofits, and help shape policy.
We are the next generation of high-agency, impact-oriented leaders. Stanford gives us unparalleled access: technical depth, proximity to the builders and funders, and the urgency that comes from being at ground zero of the transformation.
- Moving quickly in what may be decisive years for human history
- Diverse backgrounds united by commitment to impact
- Students, researchers, future founders and future policymakers, building skills that compound
VII. What We Work On
At Stanford and in the EA community at large, we aim to identify pressing global problems on the Pareto frontier of scale, tractability, and neglectedness: problems of vast scope where progress is achievable and which remain underserved by others. Some examples:
- AI Safety & Governance: alignment research, interpretability, policy frameworks, dangerous capability evaluations
- Biosecurity & Pandemic Preparedness: preventing the next catastrophe, whether natural or engineered
- Global Catastrophic Risk: nuclear security, climate tail risks, war & geopolitics, emerging technological threats
- Global Health & Development: saving lives through access to medicine and improving wellbeing through global development; proven interventions that remain tragically neglected
- Animal Welfare: confronting the vast scale of factory farming and the cruel treatment of tens of billions of animals annually
- Long-term Institutional Design: building structures that can govern wisely across decades and centuries
We act as proactive pre-professionals: building networks, developing a portfolio of meaningful projects and a track record of impact, and preparing ourselves for high-leverage positions to tackle high-impact problems like these.
We see technology as a force multiplier. The answer is not to retreat from progress but to help steer it.
VIII. Our Approach
Effective altruism is a question, not an ideology: "How can we do the most good?" The answer requires evidence, reasoning, and the willingness to change course when we're wrong.
- Cause prioritization over cause loyalty: we go where we can help most, not where it feels comfortable
- Scout mindset: seek the ground truth, update on evidence, hold beliefs provisionally, small identities, big empathy
- Careers as leverage: your 80,000 working hours are your single biggest resource for impact; we plan and test career paths, and we motivate and connect each other
- Community as infrastructure: ambitious action requires coordination, support, and shared knowledge
- Bias toward action, tempered by epistemic humility: move fast and be courageous in trying to do good, but recognize that the world is complicated and well-meaning interventions can backfire; know what you don't know, and anchor your identity in trying hard to be less wrong
IX. Join Us
The work is urgent. The problems are hard. And there is room for many more hands.
The future is not yet written. Let's write it unreasonably well.