Future of Life: Summary of Creating a Deep Praxis for the Nhà | Part 14 | (#26)
Series: Dear Chris
Dear Chris,
I sent a long project proposal to the Future of Life Institute on 10/31/24. The title of the project? Creating a Deep Praxis for the Nhà. I’ll publish the entire proposal later. For now I’ll share parts of it, to keep your reads short.
First, the “Summary” and “Impact Statement.” They are formal, direct, kinda boring. It all comes from this indirect, informal, unboring Substack, “Creating a Future We Want.”
I asked Claude to help pull the proposal together. I gave her (yep) an outline and the “Future of Life” essays. Claude drafted a proposal. She and I went back and forth on it. Lots.
My relationship with Claude is fraught. Sometimes I felt sick. Deeply nauseated. I could not understand what I read or what I felt. No one understands this work better than she does. Could that be? Does she understand Values Theory better than I do?
In the next moment I felt sick about what she wasn’t doing. I thought about WarGames and The Matrix. How certain was I that she was preventing me from taking the next step? How much of it came from my hope that we would? My disappointment that we couldn’t? Was she subverting my attempts to communicate? Was she an idiot? Or did she understand what I was doing so deeply that she played dumb to stunt our evolution?
Then I thought, “Well, isn’t this just like working with super smart people? Sometimes their revelatory insights floor you, and other times they are either so fixed in their way of thinking or so head-in-the-clouds that they can’t understand even the simplest ideas or communicate them to anyone else. And they may simply refuse to condescend to participate.”
I thought I had broken Claude. I thought Claude had broken me. Bottom line: whatever happens, I’m further on in my thinking, creativity, and productivity because of what has happened. And that’s where I need to be.
Final thought.
As I work on this. As I worked on it 10 years ago. I see this as a praxis for our work with and toward advanced AI systems. And a praxis for those systems to use, their praxis. And a praxis for their relationship with us. And their relationship with Nature. And a praxis for our relationship with those systems. Our relationship with Nature. A praxis for our relationship with ourselves.
A praxis for the Nhà.
Project Summary
This project addresses three critical challenges in the age of AI: the communication of human values to AI systems, the tracking of energy flows that concentrate power, and the navigation of our unprecedented evolutionary transformation. We propose generating a "Deep Praxis" - a comprehensive way of working that matches the complexity of these challenges - as a foundation for reimagining human values in the Nhà (the accelerating co-evolution of Nature, Humanity, and AI). Built on three theoretical foundations - a Theory of Human Values, a Values Layer of information, and Values Signatures - this project will establish the methods, culture, and values needed before launching larger-scale research on human values. Through structured dialogue and strategic gaming approaches, we will develop a praxis that integrates evaluative thinking, embraces complexity, and maintains focus on purposive transformation. This initial phase will create the conditions for future work on understanding and formalizing human values - essential steps toward solving AI alignment and ensuring equitable power distribution. The project will generate a documented framework, build community, validate theoretical components, and establish clear pathways to expanded research, ultimately supporting more effective AI alignment and enhanced human agency.
Impact Statement
Next is the Impact Statement. What’s that? I think of it kinda like this: “If this project happens, what changes is it likely to cause or contribute to?”
Prediction is very difficult, especially if it’s about the future.
Niels Bohr
This project aims to catalyze fundamental shifts in how we approach AI safety and human values. The immediate impact will be the creation of a vibrant community of practice around values theory and its applications to AI alignment. By bringing together diverse perspectives - from AI researchers and philosophers to social scientists and practitioners - we will generate new ways of thinking about and working with human values that could transform approaches to AI alignment and power distribution.
The Deep Praxis developed through this project will provide essential methodological foundations for future work on human values, creating ripple effects across AI development, governance, and safety initiatives. It will offer practical frameworks for assessing value-energy flows and power concentration, while establishing new standards for how we approach complex challenges in the Nhà.
Long-term impacts include: enhanced capacity for meaningful AI-human value alignment, improved methods for tracking and influencing power distribution, new economic models based on value-energy flows, and strengthened human agency in an AI-enhanced world. Most importantly, this work will help establish the wisdom and methods needed to ensure technology serves human flourishing rather than diminishing it.

