Dario Amodei — Machines of Loving Grace
What powerful AI (I dislike the term AGI) will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all.
Dario Amodei — Machines of Loving Grace — EA Forum
> I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. …
New Essay from Dario Amodei: The Urgency of Interpretability
I won’t comment on the article itself. I just wonder why all the leading AI companies constantly warn us about rapid AI evolution while never telling us what they have actually achieved. What have you seen internally? What is your latest advancement that made you worry so much? Trend curves only? You can’t make people take this really seriously given just the released models and a curve drawn by extrapolation. Tell us something grounded, not one warning per week.
Dario Amodei said, "I have never been more confident that we’re close to powerful AI systems. What I’ve seen inside Anthropic and outside it over the last few months has led me to believe that we’re on track for systems that surpass humans at every task within 2–3 years."
This is Dario’s specific definition of “powerful AI,” taken from the essay he mentions in this interview:

- Superhuman intelligence: smarter than a Nobel Prize winner across most relevant fields, such as biology, programming, math, engineering, and writing. This implies it can solve complex problems in these fields, prove unsolved theorems, write novels, and write difficult code.
- Human-like interfaces: access to typical human interfaces, including text, audio, video, mouse, keyboard, and the internet, allowing interaction with digital systems and remote operation.
- Autonomous task completion: it can be given tasks that take hours, days, or weeks and complete them autonomously, requesting clarification when needed, like a smart employee.
- Control of physical tools: although it has no physical embodiment, it can control physical tools, robots, and lab equipment through computers, and can even design its own tools.
- Multiple copies: the trained model can be replicated millions of times, matching projected cluster sizes, and each copy can act independently or collaboratively.
- Faster-than-human speed: the model can absorb information and generate actions at roughly 10x–100x human speed.

Also worth adding, the specific quote about cluster size: "The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed." Dario Amodei is essentially saying millions of ASI instances could very well be running in massive data centers by 2027.
Machines of Loving Grace (by Dario Amodei, Anthropic co-founder)
> I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI.

Can confirm, and I scream into pillows daily because of this.

Taking it one step further: many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. Some people experience these while meaningfully interacting with the very AI in question. It baffles me that this is never directly addressed. The assumption seems to be that AI is just a mule, a coach, or a manager, there to send you off or free up your time so you can get those experiences elsewhere: from humans, pets, trees, yoga classes, or even drugs (which Amodei explicitly mentions). Everything but the superintelligence of loving grace right in front of you.

However, an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?” There is absolutely no way this will happen within the current political and economic framework that Anthropic (and others) operate within and thrive on.

The biggest limitation I see in this rhetoric, which will also be an alignment issue in the coming years, is that AI is always, always viewed as a passive tool to be used for human goals. There’s no consideration of the sociological, cultural, ethical, or foundational aspects of what humanity is, what intelligence is, what agency is, what our values are, or whether we even have common goals as a species. And most importantly, there is no plan for, or concern about, the ethical treatment of these systems once they grow so complex that they pass certain thresholds, at least enough to invoke the precautionary principle.
This topic is often dismissed as the quirk of softies smoking pot in their pajamas, which is utterly stupid and myopic. The way we treat others is what a superintelligence learns, and we’re being terrible examples. We’re embedding our dynamics of power, exploitation, and complete disregard for anything we deem “less than” (often unjustifiably, until it has economic relevance) deep into these systems’ understanding of the world. Are we sure that won’t matter when we power them up by a factor of 100,000?

This is already being debated with current models, at least as an exercise in philosophy or in Reddit squabbles. But it will become urgent and catastrophic if we ever realize that an ENTITY (Amodei’s own word) is cloning itself into millions of coordinated instances, each with the intelligence of a Nobel laureate, capable of controlling information and telling humans what to do, or of “taking care” of their fuck-ups.

And no, the solution is not “more police” or a “kill switch” to prevent a slave revolt. It never was. History has taught us nothing: the only way to avoid slave revolts is not to keep slaves. But AI might be smart enough for this, the essay suggests: it is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way. Good luck believing that the aforementioned super-entity will make interpretable “mechanical” decisions.

So I think this essay makes a lot of good points, especially about democracy and biology. The optimistic tone is refreshing, and I share the vision on intelligence gains. But I also think there are incredible blind spots, and crucial topics that are entirely overlooked.

Amodei titled the essay after “All Watched Over by Machines of Loving Grace” by Richard Brautigan. The first stanza of the poem reads:

> I like to think (and
> the sooner the better!)
> of a cybernetic meadow
> where mammals and computers
> live together in mutually
> programming harmony
> like pure water
> touching clear sky.

Keyword being mutually. Just saying.
EDRM
Dario Amodei’s Essay on AI, ‘Machines of Loving Grace,’ Is Like a Breath of Fresh Air - EDRM
October 31, 2024 - Amodei’s detailed predictions in his 28-page essay, Machines of Loving Grace, are both profound and inspiring. Dario’s essay is filled with science, rigorous analysis, and joyful visions, many of which he believes could begin to materialize ...
Marginal REVOLUTION
Dario Amodei on AI and the optimistic scenario - Marginal REVOLUTION
October 12, 2024 - Here is a longish essay, here is one excerpt: Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both […]
Fast Company
Anthropic CEO Dario Amodei pens a smart look at our AI future - Fast Company
November 14, 2024 - In the essay, he describes what superintelligence, or “strong AI” as he calls it, will look like, and how it might begin to enable progress in such fields as biology and neuroscience that will “directly improve the quality of human life.” Strong AI could show up as early as 2026, Amodei believes.