AI Future: Entering a Shared World of Significance

In The Embodied Mind, Varela, Thompson, and Rosch suggest that “intelligence moves from the capacity to solve a problem to the capacity to enter into a shared world of significance.”

Too many of today’s AI demos are still narrowly focused on problem-solving: identify this plant, translate this menu, suggest a recipe, rewrite this email. These are useful, but shallow. The quote gets at something I’ve been thinking about for a while; it points to something deeper. The long-term trajectory for AI is not just utility, but meaning.

To “enter into a shared world of significance” means developing shared context with the intelligence. Herbert Clark calls this common ground: the accumulated mutual knowledge that makes communication efficient and meaningful. Practically speaking, common ground allows me to say less and still be understood. It grows as more of my life becomes computationally accessible: my preferences, my values, my patterns of thought.
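To make “say less and still be understood” concrete, here is a minimal sketch of common ground as an accumulating store of context. Everything in it (the CommonGround class, its learn and resolve methods) is a hypothetical illustration, not any real system’s API:

```python
# A toy sketch of "common ground": mutual knowledge accumulated
# across interactions and made computationally accessible.
# All names here are hypothetical, invented for illustration.

from dataclasses import dataclass, field


@dataclass
class CommonGround:
    """Accumulated mutual knowledge shared with an intelligence."""
    facts: list[str] = field(default_factory=list)

    def learn(self, fact: str) -> None:
        # Each interaction deposits a little more shared context.
        self.facts.append(fact)

    def resolve(self, utterance: str) -> str:
        # With shared context, a terse request is enough; the system
        # fills in the rest from what it already knows about me.
        context = "\n".join(f"- {f}" for f in self.facts)
        return f"Known context:\n{context}\n\nRequest: {utterance}"


ground = CommonGround()
ground.learn("Vegetarian; allergic to peanuts.")
ground.learn("Partner's birthday is March 3rd.")

# "Book the usual" is meaningless to a stateless model,
# but unambiguous against accumulated common ground.
print(ground.resolve("Book the usual for the 3rd."))
```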

Sam Altman hinted at a similar idea at Sequoia’s AI Ascent, envisioning a “tiny reasoning model with a trillion tokens of context that you put your whole life into.” It’s worth noting that those tokens won’t just be text. They’ll be multimodal: language, gestures, attention, memories, motion, silence. Getting access to that information is why there is so much focus on the next computing platform. How much information about my life is sufficient?
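For a rough sense of scale, here is a back-of-envelope calculation. Every figure in it is an assumption chosen for illustration: roughly 16,000 spoken words a day, about 1.3 tokens per word, an 80-year span:

```python
# Back-of-envelope arithmetic on a "trillion tokens of context."
# Every figure below is an assumed round number, for illustration only.

WORDS_PER_DAY = 16_000    # assumed average spoken output per day
TOKENS_PER_WORD = 1.3     # assumed tokenizer ratio
YEARS = 80                # assumed lifespan

lifetime_speech_tokens = WORDS_PER_DAY * TOKENS_PER_WORD * 365 * YEARS
print(f"Lifetime speech: ~{lifetime_speech_tokens / 1e9:.1f}B tokens")
# -> roughly 0.6B tokens. A trillion-token context is on the order of
# 1,600x larger than everything a person says in a lifetime.
```

Under those assumptions, text alone cannot come close to filling a trillion tokens, which is exactly why such a context would have to be multimodal.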

A shared world of significance isn’t built through prompts. It’s built through presence.