Elon University’s Imagining the Digital Future Center released a report April 2 titled “Building a Human Resilience Infrastructure for the AI Age.” The report featured input from experts around the world, totaling 386 responses.
The report’s central message is that artificial intelligence will play a much larger role in everyday life within the next decade. Experts warn that the biggest risk is not a sudden disaster, but people slowly losing control and relying too heavily on AI for decisions.
The report states that individuals alone cannot handle these changes.
Instead, governments, companies, schools and communities must work together to create rules, teach AI skills and protect human thinking and relationships.
Lee Rainie, director of the Imagining the Digital Future Center, said the report was inspired by a desire to examine AI’s challenges in a more constructive way. He said that because people were already aware of the rising concerns about AI, the researchers focused on moving forward. “We began to ask about solutions and how human resiliency has always been tested,” Rainie said.
Rainie said the heart of the report is captured on its front page in a quote from Mel Sellick, founder of the Future Human Lab. The quote states, “We are the last generation that knows what human capacity felt like before it became inseparable from AI.”
Throughout the report, respondents acknowledge that while this idea may feel intimidating, it is becoming reality.
“We are entering a new world. We're never going to go back to a world before,” Rainie said. “AI is a big force in our lives, and so thinking about that really smartly and responding to it really wisely is the call for us all.”
A major focus of the report is how human resilience can adapt to the influence of AI tools, with emphasis on what experts call an “institutions-first” resilience agenda.
Rainie described this concept by saying that many experts believe institutions should take a larger role in preparing and guiding the public, rather than leaving individuals to navigate AI-related challenges on their own.
Alison Poltock, co-founder of AI Commons UK and a contributor to the report, echoed that concern.
“We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap,” Poltock said in the report.
Rainie said this moment presents an opportunity for society to rethink its relationship with institutions. He noted that trust in institutions — including public education, government and religious organizations — has declined in recent generations.
“We're basically saying we're going to have to rebuild the institutions that we've lost trust in,” Rainie said. “AI is going to create so many new challenges that we're going to have to reimagine what it is that institutions do for us.”
The report ultimately delivers a clear call to action, emphasizing that the next five to 10 years are critical. It urges immediate, coordinated efforts led by institutions to build systems that protect human agency, judgment and relationships.
“It's a pretty big thought, and it's a very big agenda,” Rainie said.