# Core Concepts
Understanding the fundamental concepts of Huly Virtual Network is essential for building robust distributed systems.
Huly Network uses a hub-and-spoke architecture with three main components:
```mermaid
graph TB
    subgraph "Network (Hub)"
        N[Network Server]
        R[Router]
        REG[Container Registry]
    end
    subgraph "Agent 1"
        A1[Agent]
        C1A[Container A]
        C1B[Container B]
    end
    subgraph "Agent 2"
        A2[Agent]
        C2A[Container C]
        C2B[Container D]
    end
    subgraph "Clients"
        CLIENT1[Client 1]
        CLIENT2[Client 2]
    end
    CLIENT1 -->|Request Container| N
    CLIENT2 -->|Request Container| N
    N -->|Route to Agent| A1
    N -->|Route to Agent| A2
    A1 --> C1A
    A1 --> C1B
    A2 --> C2A
    A2 --> C2B
```
The Network is the central coordinator: it manages agent registration, routes container requests, and exposes discovery APIs:
```typescript
interface Network {
  // Agent management
  register(record: AgentRecord, agent: NetworkAgentApi): Promise<ContainerUuid[]>
  unregister(agentId: AgentUuid): Promise<void>
  ping(agentId: AgentUuid): Promise<void>

  // Container management
  get(client: ClientUuid, kind: ContainerKind, options: GetOptions): Promise<[ContainerUuid, ContainerEndpointRef]>
  release(client: ClientUuid, uuid: ContainerUuid): Promise<void>
  list(kind?: ContainerKind): Promise<ContainerRecord[]>

  // Communication
  request(target: ContainerUuid, operation: string, data?: any): Promise<any>

  // Discovery
  agents(): AgentRecord[]
  kinds(): ContainerKind[]
}
```
The network server listens on a TCP port (default: 3737) and accepts connections from agents and clients.
**Example: Starting a Network Server**

```typescript
import { NetworkImpl, TickManagerImpl } from '@hcengineering/network-core'
import { NetworkServer } from '@hcengineering/network-server'

const tickManager = new TickManagerImpl(1000) // 1000 ticks/sec
tickManager.start()

const network = new NetworkImpl(tickManager)
const server = new NetworkServer(
  network,
  tickManager,
  '*', // Bind to all interfaces
  3737 // Port
)

console.log('Network server running on port 3737')
```
Agents are worker processes that host containers on behalf of the network. Each agent declares which container kinds it can create:
```typescript
// Note: For production code, use serveAgent() on the client
// This example uses AgentImpl directly for educational purposes
const agent = new AgentImpl('my-agent' as AgentUuid, {
  session: async (options) => {
    /* create session container */
  },
  workspace: async (options) => {
    /* create workspace container */
  },
  query: async (options) => {
    /* create query container */
  }
})
```
Agents register with the network to announce availability:
```typescript
import { NetworkAgentServer } from '@hcengineering/network-client'

const agentServer = new NetworkAgentServer(
  tickManager,
  'localhost', // Network host
  '*', // Bind address
  3738 // Agent port for container connections
)
await agentServer.start(agent)

// Register with network
await client.register(agent)
```
Agents must ping the network at regular intervals to remain registered. If an agent fails to ping within the aliveTimeout (default: 3 seconds), the network considers it dead and removes the agent and its containers from the registry.
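The liveness rule can be sketched independently of the library. The following is a minimal illustration, not the actual Huly implementation; all names (`LivenessTracker`, `reap`) are made up for this example:

```typescript
// Illustrative liveness tracker: an agent is considered dead once its
// last ping is older than aliveTimeout. Names are hypothetical.
type AgentId = string

class LivenessTracker {
  private lastPing = new Map<AgentId, number>()

  constructor(private aliveTimeoutMs: number) {}

  ping(agent: AgentId, now: number): void {
    this.lastPing.set(agent, now)
  }

  // Returns agents that missed the deadline and drops them,
  // mirroring how the network would remove a dead agent's containers.
  reap(now: number): AgentId[] {
    const dead: AgentId[] = []
    for (const [agent, ts] of this.lastPing) {
      if (now - ts > this.aliveTimeoutMs) dead.push(agent)
    }
    for (const agent of dead) this.lastPing.delete(agent)
    return dead
  }
}

const tracker = new LivenessTracker(3000) // 3s aliveTimeout, as in the defaults above
tracker.ping('agent-1', 0)
tracker.ping('agent-2', 2500)
const dead = tracker.reap(4000) // agent-1 is 4000ms stale, agent-2 only 1500ms
```

With a 1-second `pingInterval` and a 3-second `aliveTimeout`, an agent gets roughly three missed pings of slack before it is reaped.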
Containers are the workhorses of the system: they execute requests and hold application state. Every container must implement the Container interface:
```typescript
interface Container {
  // Handle incoming requests
  request(operation: string, data?: any, clientId?: ClientUuid): Promise<any>

  // Health check
  ping(): Promise<void>

  // Cleanup and shutdown
  terminate(): Promise<void>

  // Event broadcasting support
  connect(clientId: ClientUuid, broadcast: (data: any) => Promise<void>): void
  disconnect(clientId: ClientUuid): void

  // Optional termination callback
  onTerminated?(): void
}
```
```mermaid
stateDiagram-v2
    [*] --> Creating: Client requests container
    Creating --> Active: Factory creates instance
    Active --> Referenced: Client holds reference
    Referenced --> Active: Client releases reference
    Active --> Terminating: Timeout expires (no references)
    Referenced --> Terminating: All clients released
    Terminating --> [*]: Cleanup complete
```
Key lifecycle events:

- The container is kept alive for `containerTimeout` after the last reference is released
- On expiry, `terminate()` is called and the container is removed from the network

On-demand containers are created when requested:
```typescript
// Note: For production code, use serveAgent() on the client
// This example uses AgentImpl directly for educational purposes
const agent = new AgentImpl('agent-1' as any, {
  'user-session': async (options: GetOptions) => {
    const uuid = options.uuid ?? generateUuid()
    const container = new UserSessionContainer(uuid, options)
    return {
      uuid,
      container,
      endpoint: `session://agent1/${uuid}` as any
    }
  }
})
```
Stateless containers are pre-created for high availability:
```typescript
const leaderContainer = new LeaderContainer('leader-001' as ContainerUuid)
agent.addStatelessContainer(
  'leader-001' as ContainerUuid,
  'leader' as ContainerKind,
  'leader://agent1/leader-001' as ContainerEndpointRef,
  leaderContainer
)
```
Multiple agents can register the same stateless container UUID. The network accepts the first and rejects others, enabling automatic failover.
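The first-registration-wins rule can be sketched as follows. This is an illustrative model only, with hypothetical names (`StatelessRegistry`, `dropAgent`), not the network's actual registry code:

```typescript
// Illustrative registry: the first agent to offer a stateless container UUID
// owns it; later offers are rejected. When the owner dies, a standby agent's
// re-registration succeeds, giving automatic failover.
class StatelessRegistry {
  private owners = new Map<string, string>() // containerUuid -> agentId

  register(containerUuid: string, agentId: string): boolean {
    if (this.owners.has(containerUuid)) return false // rejected: already owned
    this.owners.set(containerUuid, agentId)
    return true
  }

  // Free all entries owned by a dead agent
  dropAgent(agentId: string): void {
    for (const [uuid, owner] of this.owners) {
      if (owner === agentId) this.owners.delete(uuid)
    }
  }

  ownerOf(containerUuid: string): string | undefined {
    return this.owners.get(containerUuid)
  }
}

const registry = new StatelessRegistry()
const first = registry.register('leader-001', 'agent-1') // accepted
const second = registry.register('leader-001', 'agent-2') // rejected
registry.dropAgent('agent-1') // agent-1 dies
const failover = registry.register('leader-001', 'agent-2') // now accepted
```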
```typescript
import type { Container, ContainerUuid, ClientUuid } from '@hcengineering/network-core'

class DataStoreContainer implements Container {
  private data = new Map<string, any>()
  private connections = new Map<ClientUuid, (data: any) => Promise<void>>()

  constructor(readonly uuid: ContainerUuid) {}

  async request(operation: string, data?: any): Promise<any> {
    switch (operation) {
      case 'set':
        this.data.set(data.key, data.value)
        await this.broadcast({ type: 'dataChanged', key: data.key })
        return { success: true }
      case 'get':
        return { value: this.data.get(data.key) }
      default:
        return { error: 'Unknown operation' }
    }
  }

  async ping(): Promise<void> {}

  async terminate(): Promise<void> {
    this.data.clear()
    this.connections.clear()
  }

  connect(clientId: ClientUuid, broadcast: (data: any) => Promise<void>): void {
    this.connections.set(clientId, broadcast)
  }

  disconnect(clientId: ClientUuid): void {
    this.connections.delete(clientId)
  }

  private async broadcast(event: any): Promise<void> {
    const promises = Array.from(this.connections.values()).map((fn) => fn(event))
    await Promise.all(promises)
  }
}
```
Clients are applications or services that connect to the network to request and use containers:
```typescript
import { createNetworkClient } from '@hcengineering/network-client'

const client = createNetworkClient(
  'localhost:3737', // Network address
  3600 // Alive timeout in seconds (optional)
)
await client.waitConnection(5000) // Wait up to 5 seconds
```
Clients request containers by kind and optional criteria:
```typescript
// Get any container of this kind
const anyRef = await client.get('user-session' as ContainerKind, {})

// Get a specific container by UUID
const byUuid = await client.get('user-session' as ContainerKind, {
  uuid: 'session-123' as ContainerUuid
})

// Get a container matching labels
const byLabels = await client.get('workspace' as ContainerKind, {
  labels: ['premium', 'us-west']
})

// Get a container with extra data for the factory
const withExtra = await client.get('query-engine' as ContainerKind, {
  extra: { database: 'analytics', userId: 'user-456' }
})
```
```typescript
interface NetworkClient {
  // Container management
  get(kind: ContainerKind, request: GetOptions): Promise<ContainerReference>
  list(kind?: ContainerKind): Promise<ContainerRecord[]>

  // Agent management
  register(agent: NetworkAgent): Promise<void>
  unregister(agentId: AgentUuid): Promise<void>

  // Discovery
  agents(): AgentRecord[]
  kinds(): ContainerKind[]

  // Events
  onUpdate(listener: NetworkUpdateListener): () => void

  // Connection
  close(): Promise<void>
}
```
Huly Network supports multiple communication patterns:
Direct request to container with response:
```typescript
const result = await containerRef.request('processData', {
  value: 42
})
console.log(result) // { processed: true, result: 84 }
```
Send data without waiting for a response:

```typescript
await containerRef.request('logEvent', {
  event: 'user_login',
  timestamp: Date.now()
})
```
Container broadcasts events to all connected clients:
```typescript
// Client side
const connection = await containerRef.connect()
connection.on = async (event) => {
  console.log('Received:', event)
}

// Container side
connect(clientId: ClientUuid, broadcast: (data: any) => Promise<void>): void {
  this.clients.set(clientId, broadcast)
}

// Broadcast to all clients
for (const broadcast of this.clients.values()) {
  await broadcast({ type: 'update', data: changes })
}
```
Establish persistent connection for streaming:
```typescript
const connection = await containerRef.connect()

// Receive stream
connection.on = async (chunk) => {
  console.log('Chunk:', chunk)
}

// Send requests
await connection.request('subscribe', { topic: 'updates' })
await connection.request('getData', { range: [0, 100] })
```
The network uses reference counting to manage container lifecycles:
- `client.get()` increments the reference count
- `containerRef.close()` decrements it
- At zero references, the container survives for `containerTimeout` before being terminated

```typescript
// Acquire reference (ref count = 1)
const ref1 = await client.get('service' as any, { uuid: 'svc-1' })

// Acquire another reference to the same container (ref count = 2)
const ref2 = await client.get('service' as any, { uuid: 'svc-1' })

// Release first reference (ref count = 1)
await ref1.close()

// Release second reference (ref count = 0)
await ref2.close()

// Container kept alive for containerTimeout, then terminated
```
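The counting rule above can be modeled stand-alone. This sketch uses hypothetical names (`RefCounter`, `shouldTerminate`) and is not the network's actual bookkeeping code:

```typescript
// Illustrative reference counter: get() increments, close() decrements,
// and a container at zero references is terminated only after a grace
// period of containerTimeout.
class RefCounter {
  private refs = 0
  private zeroSince: number | null = null

  constructor(private containerTimeoutMs: number) {}

  get(): void {
    this.refs++
    this.zeroSince = null // any live reference cancels the grace period
  }

  close(now: number): void {
    this.refs = Math.max(0, this.refs - 1)
    if (this.refs === 0) this.zeroSince = now // grace period starts
  }

  shouldTerminate(now: number): boolean {
    return this.zeroSince !== null && now - this.zeroSince >= this.containerTimeoutMs
  }
}

const rc = new RefCounter(5000) // hypothetical 5s containerTimeout
rc.get() // ref count = 1
rc.get() // ref count = 2
rc.close(20) // ref count = 1
rc.close(30) // ref count = 0, grace period starts at t=30
const early = rc.shouldTerminate(1000) // still within containerTimeout
const late = rc.shouldTerminate(6000)  // grace period elapsed
```

Note that a `get()` arriving during the grace period rescues the container, which is why short-lived clients reconnecting to the same container do not pay creation cost each time.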
The network continuously monitors health:

**Agent Health:**

- Agents ping the network every `pingInterval` (default: 1 second)
- An agent that misses pings for `aliveTimeout` (default: 3 seconds) is considered dead

**Container Health:**

- Containers respond to `ping()` calls from the network

Proper cleanup on shutdown:
```typescript
// Container cleanup
async terminate(): Promise<void> {
  // 1. Notify connected clients
  await this.broadcastShutdown()
  // 2. Close external connections
  await this.database.close()
  // 3. Clear internal state
  this.data.clear()
  // 4. Release resources
  this.connections.clear()
}

// Agent cleanup
await agentServer.close()
tickManager.stop()

// Client cleanup
await containerRef.close()
await client.close()

// Server cleanup
await server.close()
tickManager.stop()
```
Containers are addressed by endpoint references:
```typescript
type ContainerEndpointRef = string & { _containerEndpointRef: true }
```
- **Direct Endpoint** (`tcp://host:port/uuid`): direct connection to the container
- **Routed Endpoint** (`agent://host:port:agentId/uuid`): connection through the agent
- **No-Connect Endpoint** (`noconnect://host:port/uuid`): request-only, no persistent connection
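The three address shapes can be parsed with a single regular expression. The sketch below is independent of the library and only illustrates the formats; the function name `parseEndpoint` and the returned shape are assumptions for this example:

```typescript
// Illustrative parser for the three endpoint formats:
//   tcp://host:port/uuid
//   agent://host:port:agentId/uuid
//   noconnect://host:port/uuid
type Parsed = { kind: string; host: string; port: number; uuid: string; agentId?: string }

function parseEndpoint(ref: string): Parsed {
  const m = /^(tcp|agent|noconnect):\/\/([^:/]+):(\d+)(?::([^/]+))?\/(.+)$/.exec(ref)
  if (m === null) throw new Error(`malformed endpoint: ${ref}`)
  const [, scheme, host, port, agentId, uuid] = m
  // Map the URI scheme to the endpoint kind used in the docs
  const kind = scheme === 'tcp' ? 'direct' : scheme === 'agent' ? 'routed' : 'noconnect'
  return { kind, host, port: Number(port), uuid, agentId }
}

const direct = parseEndpoint('tcp://10.0.0.5:3738/abc-123')
const routed = parseEndpoint('agent://10.0.0.5:3738:agent-1/abc-123')
```

Only the routed form carries an `agentId` segment, which is why the optional capture group sits between the port and the UUID.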
```typescript
import { parseEndpointRef, EndpointKind } from '@hcengineering/network-core'

const parsed = parseEndpointRef(endpoint)
console.log(parsed.kind) // EndpointKind.direct | routed | noconnect
console.log(parsed.host) // Host address
console.log(parsed.port) // Port number
console.log(parsed.uuid) // Container UUID
console.log(parsed.agentId) // Agent ID (for routed)
```
Containers are categorized by kind (string type):
```typescript
type ContainerKind = string & { _containerKind: true }
```
Examples:

- `'user-session'` - User session management
- `'workspace'` - Workspace containers
- `'query-engine'` - Query processing
- `'transactor'` - Transaction handling

Agents declare which kinds they support:
```typescript
// Note: For production code, use serveAgent() on the client
// This example uses AgentImpl directly for educational purposes
const agent = new AgentImpl('agent-1' as any, {
  'user-session': sessionFactory,
  workspace: workspaceFactory,
  'query-engine': queryFactory
})
```
Containers can have labels for fine-grained selection:
```typescript
// Create container with labels
await client.get('workspace' as any, {
  labels: ['premium', 'us-west', 'production']
})

// Labels enable:
// - Multi-tenancy (tenant ID as label)
// - Geographic routing (region labels)
// - Tier-based selection (free, premium, enterprise)
// - Environment separation (dev, staging, production)
```
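Label selection can be sketched as a subset check: a container matches a request when it carries every requested label. The helper below is illustrative only (the network's real selection logic is not shown here):

```typescript
// Illustrative label matcher: keep containers that carry all wanted labels.
interface ContainerInfo {
  uuid: string
  labels: string[]
}

function selectByLabels(containers: ContainerInfo[], wanted: string[]): ContainerInfo[] {
  return containers.filter((c) => wanted.every((label) => c.labels.includes(label)))
}

const pool: ContainerInfo[] = [
  { uuid: 'ws-1', labels: ['premium', 'us-west', 'production'] },
  { uuid: 'ws-2', labels: ['free', 'us-west', 'production'] },
  { uuid: 'ws-3', labels: ['premium', 'eu-central', 'production'] }
]

// Only ws-1 carries both 'premium' and 'us-west'
const matches = selectByLabels(pool, ['premium', 'us-west'])
```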
```typescript
interface GetOptions {
  uuid?: ContainerUuid // Specific container UUID
  extra?: Record<string, any> // Additional parameters for factory
  labels?: string[] // Labels for selection
}
```
Key concepts to remember: the Network coordinates, Agents host containers, Containers do the work, and Clients consume them through reference-counted handles.
These concepts form the foundation for building scalable, fault-tolerant distributed systems with Huly Network.
Need more help? Check the Troubleshooting Guide or Examples.