Dear Chris,
In 2015 I was trying to connect the Evaluation and AI communities. I was sure that Evaluation - a community, profession, and transdiscipline where I spent much of my time - was important to AI safety.
Local cafes had sponsored my espresso-fueled concentration on the power of Values for ten years. I wanted to understand the what, when, why, how, and where of Values.
I thought I was getting somewhere. I was excited. I told people. The wrong people. I was discouraged. Then I learned about the AI alignment problem. I was sure everything would change. I was fired up again.
I was giving presentations in the States and Europe. Hosting events at the office. Writing proposals. Applying for jobs. Editing a book. Writing a play. Reading everything. Cold calling researchers. Stopping colleagues in the hall. Looking for a water cooler. Meeting with the now-defunct (I guess?) IBM Watson team. I was dedicating all sorts of time-energy to connecting Evaluation and AI.
Successful? No. Naive and ineffective? So very. Evidence? I was a Social Scientist in the US EPA Administrator's Office, sure that IBM's sure-to-be dominant AI capabilities were sure-to-be the key to transforming US government into an effn (efficiently effective) bureaucracy.
In 2015 I mentioned my efforts to connect the AI and Evaluation communities to Max Tegmark, President of the Future of Life Institute (FLI).
The exchange was short. He was polite. He was preparing for a big conference, I think.1
Pura Vida!
Now, ten years later, Max and FLI are asking all of us a critical question.
I’ll include FLI’s description of the situation here. This text is particular to our time. Read 10 years ago, science fiction. Read 10 years from now, anachronistic.
We're [FLI] offering up to $4M to support projects that work to mitigate the dangers of AI-driven power concentration and move towards a better world of meaningful human agency.
AI development is on course to concentrate power within a small number of groups, organizations, corporations, and individuals. Whether this entails the hoarding of resources, media control, or political authority, such concentration would be disastrous for everyone. We risk governments tyrannising with Orwellian surveillance, corporate monopolies crushing economic freedom, and rampant decision automation subverting meaningful individual agency. To combat these threats, FLI is launching a new grants program of up to $4M to support projects that work to mitigate the dangers of AI-driven power concentration and move towards a better world of meaningful human agency.
The ungoverned acceleration of AI development is on course to concentrate further the bulk of power amongst a very small number of organizations, corporations, and individuals. This would be disastrous for everyone.
Power here could mean several things. It could mean the ownership of a decisive proportion of the world’s financial, labor or material resources, or at least the ability to exploit them. It could be control of public attention, media narratives, or the algorithms that decide what information we receive. It could simply be a firm grip on political authority. Historically, power has entailed some combination of all three. A world where the transformative capabilities of AI are rolled out unfairly or unwisely will likely see most if not all power centres seized, clustered and kept in ever fewer hands.
Such concentration poses numerous risks. Governments could weaponize Orwellian levels of surveillance and societal control, using advanced AI to supercharge social media discourse manipulation. Truth decay would be locked in and democracy, or any other meaningful public participation in government, would collapse. Alternatively, giant AI corporations could become stifling monopolies with powers surpassing elected governments. Entire industries and large populations would increasingly depend on a tiny group of companies – with no satisfactory guarantees that benefits will be shared by all. In both scenarios, AI would secure cross-domain power within a specific group and render most people economically irrelevant and politically impotent. There would be no going back. Another scenario would leave no human in charge at all. AI powerful enough to command large parts of the political, social, and financial economy is also powerful enough to do so on its own. Uncontrolled artificial superintelligences could rapidly take over existing systems, and then continue amassing power and resources to achieve their objectives at the expense of human wellbeing and control, quickly bringing about our near-total disempowerment or even our extinction.
What world would we prefer to see?
We must reimagine our institutions, incentive structures, and technology development trajectory to ensure that AI is developed safely, to empower humanity, and to solve the most pressing problems of our time. AI has the potential to unlock an era of unprecedented human agency, innovation, and novel methods of cooperation. Combatting the concentration of power requires us to envision alternatives and viable pathways to get there.
Open sourcing of AI models is sometimes hailed as a panacea. The truth is more nuanced: today’s leading technology companies have grown and aggregated massive amounts of power, even before generative AI, despite most core technology products having open source alternatives. Further, the benefits of “open” efforts often still favor entities with the most resources. Hence, open source may be a tool for making some companies less dependent upon others, but it is insufficient to mitigate the continued concentration of power or meaningfully help to put power into the hands of the general populace.2
Chris,
I was going to ask FLI for support to describe a collection of concepts, methods, tools, and philosophies that speak to a few of their topical interests, including preference aggregation; aligning economic, sociocultural, and governance forces; safe decentralization; and responsible and safe open release. Maybe that means something like bringing it all together as a book.
I’d take it. But now, as I draft these letters, that seems kinda silly. Anyone with ideas about this should ask for funding TO DO the work that enacts their ideas.
But for me, today, I have very little capacity to do the work, and I’m not affiliated with an institution qualified to receive this funding. I’m a homeschool dad.
But I’m just a homeschool dad! How can I keep up with SO MUCH changing so fast all at once?3 Some things must be ignored, and some things are most important… Max says don’t be a bystander (video below).4
Besides, I’ve got to write this down so that my girls, when they get older, will have evidence explaining why I was so nuts.
AI safety conference: https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
The call for proposals: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
Information overload and the bystander effect: https://thedecisionlab.com/reference-guide/psychology/information-overload
But who am I? How could I have anything to contribute to this conversation? Max says, be aware of the bystander effect. If you have an idea, tell us. I believe he said something similar at the end of his excellent book, Life 3.0.