I signed the GWWC pledge to give 10% of my income to charity for the rest of my life. I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare), so I decided I would no longer donate 10% tithing to missionary work, Books of Mormon, temples and the like. Instead, I now give to things like cost-effective malaria nets, which save roughly one child's life per $5,500 spent, and to encouraging clean energy R&D. With this whole AI thing heating up, I would also consider donating to an AGI safety organization, but I haven't decided which one.
I disagree with the characterization of EA as a "religion". In fact, I find Effective Altruism to be a refreshingly secular, rational, and diverse group of people (including not just atheists but Christians, Jews, and even *gasp* non-utilitarians).
Effective Altruism started in the SF Bay Area, and it seems like every movement must have its detractors, so now we face
- a professional philosopher suggesting that donating money cost-effectively does "serious harm"
- conservatives suggesting that people in extreme poverty should pull themselves up by their own bootstraps / not have their children kept alive by un-earned anti-malaria nets
- venture capitalists like Vinod Khosla and Marc Andreessen (who hope to make a fortune on AI and AGI) telling everyone that EAs asking for a temporary pause in AGI development are just being "religious", without offering any counterargument to the actual risk factors.
I love these new AIs. As an "old" software developer (age 43, but programming since age 11), I find the AI capabilities that have suddenly appeared in the last few years exciting, and I look forward to using AI models as a professional developer. Video and audio deepfakes, image generation, cracking captchas, instantly writing unique poetry, GPT-4 passing some versions of the Turing test... wow. They'll be used for huge disinformation campaigns, but they're fun, amazing and very useful.
I'm also excited about AGI. I'd love to have my own personal AGI assistant modeled after Data from Star Trek TNG. But at the same time, these things have the potential to be really f**king dangerous, so I think well-thought-out regulations are needed and a culture of caution is good. GPT-5 won't be what kills us all, because GPT-5 won't be AGI. But tens of billions of dollars are being invested in AI, much of it going to OpenAI, whose mission statement was changed to say "Anything that doesn't help with [AGI] is out of scope". Basically, there are two ways this can go: either humans are able to control the AI agents, or they are not (AGIs control themselves). Both possibilities could go very badly, and even if we are able to fully control AGI v1.0, that doesn't prove v3.0 is safe too.
Now I (like most EAs) think that most likely everything will turn out okay, at least at first. I'm guessing there's roughly a 30% chance of catastrophe before 2100; many others think it's not that bad. But if there's even a 1% chance of AGI causing catastrophe, isn't that reason enough to proceed with caution and fund safety research?
Meanwhile, a key group opposed to "AGI alarmist" EAs is e/acc, or "Effective Accelerationism". e/accs have "faith" in the goodness of "the singularity":
- e/acc is about having faith in the dynamical adaptation process and aiming to accelerate the advent of its asymptotic limit, often referred to as the technocapital singularity
- Effective accelerationism aims to follow the "will of the universe": leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe and converting it to utility at grander and grander scales
- e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism
- Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates
- No need to worry about creating "zombie" forms of higher intelligence, as these will be at a thermodynamic/evolutionary disadvantage compared to conscious/higher-level forms of intelligence
Oh but do go on about EA being a "religion".