Lado Okhotnikov on Decentralised AI: Could Intelligence Live Inside Personal Devices?

May 6, 2026, Published 2:55 a.m. ET
Artificial intelligence today still depends on centralised cloud infrastructure. Model training, deployment, and governance are concentrated within hyperscale data centres — but this model is increasingly being reconsidered.
Industry practitioners, including Lado Okhotnikov, founder of Holiverse, point to decentralisation as a potential long-term direction for AI development. This raises a practical question: where should intelligence operate, and can it function effectively on personal devices?
What Exists Today
Personal devices already run AI locally. Smartphones, laptops, and wearables process speech recognition, biometrics, image analysis, and recommendations directly on-device, improving responsiveness and reducing dependence on constant cloud connectivity.
However, local execution does not equal decentralised AI. Model training, updates, and governance remain controlled by platform providers, leaving users with little influence over how that intelligence develops.
The gap is clear: local AI is still centrally governed, while decentralised AI largely operates outside everyday user needs. In practice, this creates a mismatch between where AI runs and where it is controlled.
How a Decentralised AI Device Could Work
A more realistic architecture is likely hybrid. In this model, a personal device hosts a local AI agent responsible for inference, contextual memory, and user-specific operations. The agent processes data locally and adapts to how each user interacts with their device.
Interaction with external networks would be selective. The device connects to external systems for updates, collaborative learning, verification, or access to shared resources when required and under defined permissions. Intelligence is executed locally, while coordination occurs at the network level.
This approach does not attempt to replicate cloud-scale training on consumer hardware. Large model training and global optimisation remain distributed tasks. The architectural shift concerns where decisions and context-sensitive processing occur.
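The hybrid model described above can be sketched in a few lines of code. This is an illustrative sketch only, not an implementation from Holiverse or any named platform: the class and method names (`LocalAgent`, `infer`, `sync`) are hypothetical, and the "inference" step is a placeholder. The point it illustrates is the separation of concerns: inference and contextual memory stay on the device, while network contact happens only for operations the user has explicitly permitted.

```python
# Hypothetical sketch of a hybrid local-agent loop.
# Inference and contextual memory stay on-device; network contact
# occurs only for explicitly permitted operations such as updates.

from dataclasses import dataclass, field

@dataclass
class LocalAgent:
    """On-device agent: local inference plus permissioned coordination."""
    permissions: set = field(default_factory=set)   # e.g. {"updates"}
    memory: list = field(default_factory=list)      # user-specific context

    def infer(self, prompt: str) -> str:
        # Placeholder for on-device model inference; the real model
        # would run on a local accelerator.
        self.memory.append(prompt)                  # adapt to this user locally
        return f"local-response:{prompt}"

    def sync(self, operation: str) -> bool:
        # Network access is selective: only permitted operations go out.
        return operation in self.permissions

agent = LocalAgent(permissions={"updates"})
print(agent.infer("weather?"))   # answered locally; nothing leaves the device
print(agent.sync("updates"))     # True: coordination the user has allowed
print(agent.sync("telemetry"))   # False: blocked unless explicitly permitted
```

The design choice worth noting is the default-deny `sync` check: coordination with the network is opt-in per operation, which mirrors the "under defined permissions" condition described above.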
The Science Behind the Shift
A number of technical building blocks for this model are beginning to take shape. Edge AI accelerators are expanding the feasibility of on-device inference, while compression methods such as quantisation and distillation make smaller models increasingly practical. Federated learning suggests potential pathways for collaborative model improvement without centralising data. Yet these elements still function mostly as separate advances rather than parts of a cohesive ecosystem.
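Federated learning, mentioned above, can be made concrete with a minimal sketch of its aggregation step. This is a simplified, pure-Python illustration of FedAvg-style averaging, not any production framework's API: each device trains locally and shares only model weights, never raw data, and a coordinator averages those weights in proportion to each device's data size.

```python
# Illustrative federated-averaging step (FedAvg-style aggregation).
# Devices share weight vectors, not raw data; the coordinator merges
# them weighted by how much local data each device trained on.

def federated_average(client_weights, client_sizes):
    """Return the size-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    merged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# Two devices with different amounts of local data: the device with
# more data (size 300) pulls the merged model toward its weights.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(merged)  # [2.5, 3.5]
```

Real systems add secure aggregation, compression, and client sampling on top of this step, but the privacy property is already visible here: only weights cross the network.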
Advances in distributed systems have also introduced mechanisms for coordination, verification, and trust across independent nodes, indicating how decentralised operation might be achieved without reliance on a single controlling entity.
The challenge increasingly appears to lie not only in feasibility, but in integration — aligning hardware capabilities, model design, learning frameworks, and governance structures into a coherent and scalable system.
Constraints and Open Questions
Important limitations remain. Local models are constrained in scale and capability compared with centralised systems, persistent on-device inference runs up against energy and thermal limits, and distributed environments raise difficult security questions around model integrity, adversarial updates, and trust.
Governance is similarly unresolved. Questions about update authority, behavioural limits, and the balance between local autonomy and network-level standards remain open; decentralisation does not eliminate these issues but changes how they must be addressed.
Why the Question Matters Now
Artificial intelligence is becoming increasingly personal, persistent, and integrated into daily life. Devices already capture rich context about users, prompting a reconsideration of where intelligence should reside. Local processing enables immediate adaptation to behaviour, while selective network connections provide access to shared knowledge and model updates.
Voices such as Lado Okhotnikov highlight the shift from theoretical discussion to practical architecture, in which the move toward decentralised AI involves aligning hardware, models, learning frameworks, and governance into a coherent system. The aim is not to replace centralised infrastructure, but to complement it with locally adaptive intelligence, recognising both its opportunities and constraints.


