The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.
What's more, a more recent dynamic has created even better funding opportunities, which I witnessed in a recent grantmaking round.
1/ Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures.
But they’ve recently stopped funding several categories of work:
a. Republican think tanks
b. Post-alignment work like digital sentience
c. The rationality community
d. High school outreach
2/ They're also not fully funding:
e. Technical safety non-profits
f. Many non-US think tanks
g. Political campaigns (foundations can't donate to these)
h. Nuclear security
i. Other organisations they've decided are below their funding bar
3/ So I estimate the funding bar is 1.5 to 3x higher within these categories.
And even among organisations that are funded by Good Ventures, there's a lot of value in not having all your funding come from one source.
4/ There are some new donors spinning up, so it's likely that opportunities will decline a bit over the next few years.
So if you're interested in donating to AI safety, now seems like a pretty good time to do it.
5/ Some concrete funding ideas that seem worth considering:
@apolloaisafety and @METR_Evals are leading AI evals orgs with large and growing budgets.
SecureBio from @kesvelt is one of the best orgs doing AIxBio. They received $250k but I would have been happy to see them get $1m.
The Centre for AI Safety, led by @DanHendrycks, isn't getting GV funding but has been responsible for some of the bigger AI policy wins, and advises xAI.
Any non-US AI policy groups, like @LongResilience (which helped the UK become a leader in AI policy), @securite_ia (which is helping to start AI safety work in France), or @longtermgov in the EU / UN.
Lightcone (which houses @lesswrong) has a big funding gap.
@MATSprogram could use more funding to accept more fellows.
@nontrivial gets super talented high school students working on pressing problems and has a $1m funding gap.
@HorizonIPS gets more technical talent into govt and isn't fully funded.
More in the full post:
6/ If you just want a quick place to donate, consider this fund by Longview:
Or the new AI Risk Mitigation Fund:
More notes in the full post: