That Was The Week 2023 #45
A Tale of Two Weeks
It's Thanksgiving here in Palo Alto, and I should thank all the writers and producers whose work I read each week for the stimulation and provocation they provide. In case you do not all realize it, you are appreciated.
I always try to call out the creators of the content I curate at the top of That Was The Week, and I will continue to do so. Many appear every week.
Let us start this week with Brian Chesky, of Airbnb fame. His X post is apt and to the point.
I called this week's newsletter A Tale of Two Weeks. Friends kindly noted that it has only been one week. It felt like two, and to Brian Chesky's point, we learned a lot.
EA and e/acc are now part of everyday conversation in tech circles. And we are starting to understand that ideology (or philosophy) plays a significant role in strategy. People are forced to take sides.
Last week, the EA (effective altruism) camp looked in the ascendancy. But this week, over 700 OpenAI employees sided with Sam Altman and Greg Brockman, resulting in their return to lead the company. The e/acc camp won. And that leads many advocates of effective altruism to question its relevance to startups.
There are also some new acronyms or labels to learn. Marc Andreessen is reposting @beffjezos on X, mentioning Decels.
The opponents of effective accelerationism (e/acc) are now known as decelerators (Decels), and they are seen as wanting the power to stop innovation, which amounts to authoritarianism.
The e/acc lobby sees itself as humanistic at its core, standing for innovation. It regards the Decels as authoritarian because they position themselves as the moral arbiters of humanity.
Reading below, I also learned that similar schisms exist on TikTok. In this case, the acronym is BRG, and apparently it is similar to e/acc.
From Ryan Broderick:
Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley - and OpenAI, specifically - right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts and end up developing what are essentially two competing religions and could then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are...
Which means it's worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
I'm afraid I have to disagree with Ryan that the disputes are "stupid." Nor do I agree with his characterization of the two sides:
The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations - that they control - and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately
Last week's editorial explains the differences quite well. But suffice it to say that Ryan's definitions are way off. Both movements are ill-defined and have yet to be articulated thoroughly. It would be unfair to create a binary world of only two views or to assume any individual is a 100% clone of groupthink. Many good people are grappling with critical philosophical issues. Most have not defined themselves into a strict camp.
So why am I inclined to side with the e/acc camp?
Mainly because e/acc is against a fear-driven, elitist attempt to stop or slow down AI innovation; for me, that is sufficient. I see no evidence of anti-democratic or technocratic thinking. Indeed, with Altman and his Worldcoin project, there is evidence of the opposite: a genuine belief in using the wealth created by automation to serve human progress.
The entire European Enlightenment that modern democracy was predicated on was broadly e/acc-like.
Once we accept the EA view that self-imposed limits on innovation are required, especially limits placed in the hands of elites and argued for out of fear, we give up on the human ability to innovate freely using science alone.
Of course, science needs to align with human good. So far, there is no evidence that AI does not.
At the core of democracy is wealth creation. At the heart of wealth creation is innovation. Most innovators are, by nature, rule breakers. The right to break the rules and learn is at the very essence of civilization. Once the rules win, the rulers win. And rulers are usually self-interested and not aligned with human progress. For that reason, I'm with e/acc, not EA.
Much of this week's curated content has these threads running through it. Clarity of purpose around innovation is essential.