Deep Dive

Networking Architecture

Server-authoritative multiplayer with client-side prediction, server reconciliation, lag compensation, and entity interpolation.

Architecture overview

Astral Rift uses a listen-server model where one player hosts and all others connect as clients. The server is authoritative on all game state—positions, health, projectile hits. Clients predict their own movement locally for responsive controls, then reconcile when the server corrects them.

Listen Server (Host): authoritative simulation. Receives input RPCs → simulates → sends corrections down via replication.
Client A (Owner): predict + replay. Local input → buffer → reconcile.
Clients B and C (Proxy): interpolate. VInterpTo @ 15.0/s.

Owning client predicts locally, other clients interpolate replicated state.

Client-side prediction

The owning client applies input locally before sending it to the server, eliminating the round-trip delay for movement. Every input frame is buffered with a sequence number so the client can replay unacknowledged inputs after a server correction.

Input frame

struct FShipInputFrame {
    uint32 SequenceNumber;   // Monotonic, for ack tracking
    float  DeltaTime;        // Frame delta (validated: 0 < dt < 0.5)
    float  RotationInput;    // -1 to 1 (validated: |r| <= 1.1)
    bool   bThrusting;
    bool   bReversing;
    bool   bOverdrive;
};

The client sends this to the server via an unreliable RPC every tick (ServerReceiveInput). Unreliable is correct here—if an input is lost, the next one supersedes it. The server validates inputs (clamping delta time, rotation range) to prevent cheating.
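That validation step can be sketched outside the engine. This is a standalone approximation: the struct mirrors FShipInputFrame above, the thresholds come from its comments, but ValidateInput is an illustrative name rather than the project's actual function.

```cpp
#include <cmath>
#include <cstdint>

// Mirror of the replicated input frame above (plain C++, no UE types).
struct FShipInputFrame {
    uint32_t SequenceNumber;  // Monotonic, for ack tracking
    float    DeltaTime;       // Frame delta
    float    RotationInput;   // -1 to 1
    bool     bThrusting;
    bool     bReversing;
    bool     bOverdrive;
};

// Server-side sanity check: reject frames a modified client could use to
// simulate more time or turn faster than the game allows.
bool ValidateInput(FShipInputFrame& In) {
    if (In.DeltaTime <= 0.0f || In.DeltaTime >= 0.5f)
        return false;                                   // bogus frame time
    if (std::fabs(In.RotationInput) > 1.1f)
        return false;                                   // far out of range
    // Small float drift past ±1 is tolerated, then clamped before use.
    In.RotationInput = std::fmax(-1.0f, std::fmin(1.0f, In.RotationInput));
    return true;
}
```

The 1.1 tolerance accepts honest floating-point drift from the client while still rejecting anything a cheat would need.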

  1. Input: player action (thrust, rotate, fire)
  2. Build frame: stamp with sequence number + delta time; validated, buffered in PendingInputs
  3a. Local: apply instantly, so no latency is felt
  3b. Server: send via unreliable RPC every tick

Server reconciliation

When the server processes an input, it sends back a correction containing the authoritative position, velocity, and the last processed sequence number. The client reconciles by:

  1. Discarding all acknowledged inputs (seq ≤ LastProcessedSequence)
  2. Snapping to the server's authoritative position
  3. Replaying all remaining unacknowledged inputs on top
  4. Computing a visual smoothing offset to hide the snap

Correction struct

struct FShipStateCorrection {
    FVector_NetQuantize Position;    // Quantized for bandwidth
    FVector_NetQuantize Velocity;
    float Rotation;
    uint32 LastProcessedSequence;    // Which input was last applied
};

Smoothing

Input replay means most corrections are tiny (client and server ran the same deterministic physics). But when they diverge:

Correction < 50 units: SmoothingOffset absorbs the delta, decaying at 10.0/s. Ship visually glides to the corrected position over ~100ms. The player never notices.

Correction > 50 units: Hard snap. This only happens on teleport, respawn, or severe desync. No point smoothing a 500-unit correction.
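One way to sketch this smooth-vs-snap rule in isolation: the 50-unit threshold and 10.0/s decay rate are from the text, but the exponential-decay form and the function names are assumptions for illustration.

```cpp
#include <cmath>

const float kSnapThreshold = 50.0f;  // units; at or above this, hard snap
const float kDecayRate     = 10.0f;  // offset decays at 10.0/s

// On receiving a correction: keep the visual error as an offset if it is
// small enough to glide away, otherwise snap (offset = 0).
float OffsetForCorrection(float VisualPos, float CorrectedPos) {
    float Delta = VisualPos - CorrectedPos;
    return (std::fabs(Delta) < kSnapThreshold) ? Delta : 0.0f;
}

// Each frame: exponential decay toward zero. At 10.0/s, ~63% of the offset
// is gone after 100ms, matching the "glides over ~100ms" behavior.
float DecayOffset(float Offset, float DeltaTime) {
    return Offset * std::exp(-kDecayRate * DeltaTime);
}
```

The rendered position is the corrected position plus the decaying offset, so the simulation snaps immediately while the visuals catch up smoothly.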

This works because SimulateMovement is a pure static function—same inputs always produce the same outputs. Client and server run identical physics, so corrections are rare and small under normal conditions.

Entity interpolation

Other players' ships (SimulatedProxy) can't be predicted—we don't know their inputs. Instead, the server replicates their position and the client smoothly interpolates toward it.

// Each frame for SimulatedProxy ships:
CurrentPos = FMath::VInterpTo(CurrentPos, TargetPos, DeltaTime, 15.0f);
CurrentRot = FMath::RInterpTo(CurrentRot, TargetRot, DeltaTime, 15.0f);

The interpolation rate of 15.0 gives smooth movement without excessive lag. Higher values track more tightly but look jittery on lossy connections. 15.0 was tuned through PIE testing with simulated latency.

Lag compensation

The hardest networking problem: when you shoot where an enemy was on your screen, the server needs to validate that hit against where they actually were at the time you fired, not where they are now.

16 history snapshots · 266ms history window · 200ms max rewind

Every server tick, each ship records a position snapshot into a 16-entry ring buffer. When a projectile hits, the server:

  1. Reads the shooter's ping from PlayerState
  2. Computes the rewind timestamp: ServerTime - (Ping / 2)
  3. Interpolates the target's position from the history buffer at that timestamp
  4. Validates the hit against the rewound position (1.5x tolerance)
T = 1.000s (client): shooter fires; bullet spawned at the client's position.
T = 1.050s (server): hit detected 50ms later; the target has moved.
Rewind: where was the target at T = 1.000? Interpolating the history buffer gives (503, 199); the bullet at (502, 198) is within the hit radius.
Hit valid: damage applied from the shooter's perspective.

Server rewinds time to validate the hit from the shooter's perspective.

Bots (AIController) always have 0ms latency—no rewind needed. The 200ms max rewind supports up to 400ms round-trip time, which covers most internet connections.

RPC inventory

RPC                  Direction        Reliability  Purpose
ServerReceiveInput   Client → Server  Unreliable   Send input frame every tick
MulticastWeaponFire  Server → All     Unreliable   Fire sound + muzzle flash
MulticastDeath       Server → All     Reliable     Death explosion VFX
MulticastRespawn     Server → All     Reliable     Respawn visual
MulticastHitFlash    Server → All     Unreliable   Damage feedback
MulticastKillFeed    Server → All     Unreliable   Kill notification

Unreliable for cosmetic events (missing a fire sound is fine), reliable for state-changing events (a missed death would leave a ghost ship). This matches the standard UE5 networking pattern.
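In UE5 terms, the table's entries correspond to UFUNCTION replication specifiers. A sketch of representative declarations (class context omitted, parameter lists illustrative):

```cpp
// Client → server, unreliable: the next frame's input supersedes a lost one.
UFUNCTION(Server, Unreliable)
void ServerReceiveInput(FShipInputFrame InputFrame);

// Server → all, unreliable: cosmetic, safe to drop.
UFUNCTION(NetMulticast, Unreliable)
void MulticastWeaponFire();

// Server → all, reliable: state-changing, must arrive.
UFUNCTION(NetMulticast, Reliable)
void MulticastDeath();
```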

Why this matters

These are the same problems every multiplayer game solves—Valorant, Overwatch, Rocket League all use variations of this architecture. The tradeoffs are the interesting part: how much history to buffer, when to smooth vs snap, how much tolerance on lag-compensated hits. Each choice has a direct impact on how the game feels to play at different latencies.

The implementation is ~400 lines of C++, plus 12 automated behavior tests for prediction, reconciliation, and rewind correctness, and 49 multi-process headless networking tests that verify replication, input flow, and latency simulation with up to 3 simultaneous clients.