When the humans are ready, the AGI will appear

Stelia's perspective on the philosophical barriers to artificial general intelligence

There is a saying: "When the pupil is ready, the master appears." At Stelia, we've spent years deploying AI systems that serve millions of users across four continents. This experience has led us to a fundamental realisation: the greatest challenge facing AGI adoption isn't computational power or algorithmic sophistication. It's the deeply human question of what we're willing to become.

Our position is that every generation, regardless of technological fluency, faces the same existential tension when confronting cognitive augmentation.

Technology adoption paradox

Through our global deployments, we've witnessed something profound. Technology readiness doesn't correlate with age as expected. Millennials demonstrate 54% readiness compared to Gen Z's 34%. This gap exists not because Millennials have superior technical skills, but because they experienced technology as empowerment rather than inevitability.

This reveals a crucial insight about human nature. Acceptance of transformative technology depends less on familiarity and more on perceived agency. Gen Z has grown up witnessing technology's capacity for manipulation, surveillance, and social disruption. Their scepticism represents philosophical wisdom, not technological incompetence.

Gen X serves as crucial "bridge builders" with 40% readiness. They've experienced both analogue and digital worlds, understanding transformation as choice rather than inevitability. Even Baby Boomers, at 30% readiness, demonstrate that adoption patterns reflect values alignment more than learning curves.

Universal concerns

Our philosophical analysis identifies the three concerns that transcend generational boundaries.

  • Essence preservation: Every human asks whether cognitive augmentation diminishes what makes us fundamentally human. This represents a legitimate philosophical question about identity and authenticity. If AI enhances our memory, are our recollections genuinely ours? When it augments our reasoning, do our conclusions reflect our character?
  • Autonomy anxiety: Across all demographics, people grapple with whether AI enhancement amplifies human agency or subverts it. The fear centers on control, not capability. Will we direct our enhanced cognition toward our chosen purposes? Or will the enhancement subtly redirect our intentions?
  • Authenticity questions: From teenagers to executives, the core concern remains consistent. If AI assists our thinking, are our achievements genuinely our own? This question touches the core of human dignity. We believe our accomplishments should reflect our effort, character, and growth.

Technology death chasm

History teaches us that revolutionary technologies face a critical adoption gap. This represents the chasm between early adopters and mainstream acceptance. Consider 3D television technology, which seemed destined for ubiquity around 2010.

Early adopters embraced the immersive experience. Manufacturers invested billions. Content creators adapted their workflows. Yet 3D TV died not from technical failure, but from human rejection.

Consumers found the glasses inconvenient. They questioned the added value. Ultimately, they decided the technology imposed more than it offered. The innovation failed because it prioritised technical capability over human dignity. It forced users to adapt to the technology rather than adapting technology to human needs.

This pattern repeats across transformative innovations. Google Glass failed when society rejected its invasion of privacy and social norms, despite impressive capabilities. The Segway couldn't overcome practical concerns about safety, storage, and social acceptance, despite revolutionary engineering.

AGI currently occupies this same critical gap. Early adopters see its potential. Mainstream adoption, however, requires addressing fundamental human concerns about identity, autonomy, and authenticity. Technical prowess alone won't suffice.

Gradual enhancement

We've learned that successful AI integration requires philosophical alignment before technical implementation. Our systems succeed because they enhance human judgment rather than replace it. They augment human capability while preserving human agency.

The five common conditions that facilitate acceptance across all generations are:

  • Perceived necessity: The technology must solve real problems, not create new dependencies. Our healthcare AI succeeds because it helps doctors make better decisions. It doesn't make decisions for them.
  • Gradual integration: Revolutionary change happens through evolutionary steps. Our enterprise systems introduce AI capabilities incrementally. This allows users to maintain control while experiencing benefits.
  • User control: People must direct the technology toward their chosen purposes. Our AI systems amplify human intentions rather than substituting algorithmic preferences.
  • Transparent operation: Users understand what the technology does and why. Mystery breeds distrust. Clarity builds confidence.
  • Social normalisation: Communities must embrace the technology as aligned with their values. This requires demonstrating respect for human dignity, not just technical capability.

Why wisdom trumps innovation

The "elusive" nature of transhuman AGI reflects predictable human wisdom. We instinctively resist changes that threaten our sense of self. This represents a truth to honour, not a barrier to overcome.

Our experience deploying AI across diverse cultures and generations has taught us that sustainable adoption requires philosophical coherence. People don't just evaluate what technology can do. They evaluate what it means for who they are and who they might become.

The fundamental question centres not on whether we can build AGI, but whether we can build AGI that honours human dignity while enhancing human capability.

Stelia's commitment

We're building AI that works with human nature rather than against it. Our systems succeed because they address philosophical concerns about identity, autonomy, and authenticity alongside technical requirements.

The future of AGI depends less on what we can build and more on what we choose to become. We believe that choice should remain fundamentally human. The technology should serve that choice rather than constrain it.

We measure success not by technical benchmarks alone, but by whether our AI systems enhance human flourishing across all generations. That is both our business model and our philosophical commitment.

Stelia's perspective draws from extensive experience deploying ethical AI systems across enterprise, media and entertainment, and retail applications worldwide.
