Bomberman Multiplayer
Authoritative multiplayer networking layer for Bomberman.
This page presents the networking layer added on top of the provided singleplayer Bomberman base.
The project uses an authoritative client-server model: clients send gameplay intent, the server owns the shared match state, and both sides stay compatible through one explicit wire contract defined in NetCommon.h.
This page explains the system-level contract and message flow. The networking layer is built around three deliberate decisions:

- a single explicit wire contract, defined in Net/NetCommon.h and shared by client and server;
- an authoritative server that owns the shared match state, with clients sending only gameplay intent;
- client-side prediction for the owning player, reconciled against authoritative corrections to stay responsive under latency.
At a high level, the system is split into a small number of clear responsibilities.
Net/NetCommon.h defines the shared wire contract used by both client and server. It contains the packet header, protocol version gate, message identifiers, channel assignments, payload sizes, and the helpers used to serialize and validate network messages.
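As a rough illustration of that contract style, the sketch below shows what the shared header and its validation path might look like. Only kProtocolVersion, PacketHeader::type, and PacketHeader::payloadSize are named in the project; the 3-byte layout, field widths, and helper names here are assumptions.

```cpp
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

// Hypothetical sketch of the NetCommon.h wire contract. The 3-byte header
// layout (u8 type + u16 little-endian payload size) is an assumption.
constexpr std::uint8_t kProtocolVersion = 9;

enum class MsgType : std::uint8_t {
    Hello = 1, Welcome, Reject, LevelInfo, LobbyState, LobbyReady,
    MatchLoaded, MatchStart, MatchCancelled, MatchResult,
    Input, Snapshot, Correction, BombPlaced, ExplosionResolved,
};

struct PacketHeader {
    MsgType       type;         // selects the payload parser
    std::uint16_t payloadSize;  // enables fast fixed-size validation
};

// Serialize the header into an outgoing byte buffer.
inline void writeHeader(std::vector<std::uint8_t>& out, MsgType type,
                        std::uint16_t payloadSize) {
    out.push_back(static_cast<std::uint8_t>(type));
    out.push_back(static_cast<std::uint8_t>(payloadSize & 0xff));
    out.push_back(static_cast<std::uint8_t>(payloadSize >> 8));
}

// Parse and validate: reject truncated or oversized packets before any
// payload bytes are decoded.
inline std::optional<PacketHeader> readHeader(const std::uint8_t* data,
                                              std::size_t len) {
    if (len < 3) return std::nullopt;
    PacketHeader h{static_cast<MsgType>(data[0]),
                   static_cast<std::uint16_t>(data[1] | (data[2] << 8))};
    if (len != 3u + h.payloadSize) return std::nullopt;  // fixed-size gate
    return h;
}
```

Keeping construction and validation next to each other like this is what lets the size check run before any per-message parsing code is reached.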
Net/PacketDispatch.h and Net/NetSend.h sit beside that contract as small shared infrastructure: one keeps receive-path parsing and typed dispatch consistent, and the other keeps channel-specific ENet sends explicit.
On the client side, Net/Client/NetClient.h owns the transport endpoint, handshake flow, and cached authoritative state. Net/Client/ClientPrediction.h exists to keep the owning player responsive under latency by predicting local movement and reconciling once authoritative correction arrives.
On the server side, Server/ServerSimulation.cpp and the surrounding server flow code accept input, advance the match on authoritative simulation ticks, and replicate the resulting world state back to clients through snapshots, corrections, and reliable gameplay events.
The design goal is not only to make multiplayer function. It is to keep the networking model readable, testable, and explainable while still supporting responsive moment-to-moment gameplay.
Net/NetCommon.h is the single shared protocol file used by both client and server. It acts as the source of truth for the wire contract and keeps the most important protocol rules in one place.
It defines the packet header, the protocol version gate (kProtocolVersion), the message identifiers, the channel assignments, the fixed payload sizes, and the helpers used to serialize and validate network messages.
This is intentionally dense code. For protocol work, explicitness is a strength: wire layout, validation rules, and packet construction stay close together. The same idea carries into the adjacent infrastructure helpers: Net/PacketDispatch.h keeps packet parsing and typed dispatch narrow, while Net/NetSend.h keeps channel selection visible instead of hiding it behind broader gameplay abstractions.
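The receive-path idea behind Net/PacketDispatch.h could be sketched as follows. Every message type declares its expected channel and fixed v9 payload size, and a packet is rejected before any payload decoding if either does not match. The numeric message ids, struct name, and function names here are illustrative assumptions; the channel ids and sizes come from the tables below.

```cpp
#include <cstdint>
#include <cstddef>

// Per-type expectations checked before dispatching to a typed handler.
struct MsgSpec {
    std::uint8_t  channel;      // the only channel this type is allowed on
    std::uint16_t payloadSize;  // exact fixed size for protocol v9
};

// v9 sizes from the payload table; other message types elided for brevity.
inline MsgSpec specFor(std::uint8_t type) {
    switch (type) {
        case 11: return {2, 21};   // MsgInput      on InputUnreliable
        case 12: return {3, 111};  // MsgSnapshot   on SnapshotUnreliable
        case 13: return {4, 20};   // MsgCorrection on CorrectionUnreliable
        default: return {0, 0};    // unknown: always rejected
    }
}

// Accept a packet only when channel and size both match the spec.
inline bool acceptPacket(std::uint8_t type, std::uint8_t channel,
                         std::size_t payloadLen) {
    const MsgSpec spec = specFor(type);
    return spec.payloadSize != 0 && channel == spec.channel &&
           payloadLen == spec.payloadSize;
}
```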
The protocol uses five ENet channels, each with one clear responsibility:
| Channel | ID | Purpose | Direction |
|---|---|---|---|
| ControlReliable | 0 | Reliable non-gameplay events | Client and Server |
| GameplayReliable | 1 | Reliable gameplay events that must stay ordered | Client and Server |
| InputUnreliable | 2 | Client input stream | Client to Server |
| SnapshotUnreliable | 3 | Authoritative world snapshots | Server to Client |
| CorrectionUnreliable | 4 | Owner-local authoritative corrections | Server to Client |
This split makes reliability and ordering decisions visible at the protocol level instead of hiding them inside higher-level gameplay code. That keeps message behaviour easier to reason about during implementation, debugging, and testing.
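One way to make that visibility concrete: because reliability is a property of the channel, the send helper can derive its delivery mode from the channel alone. The mapping function below is a sketch (its name is an assumption); on the real send path this decision would select ENET_PACKET_FLAG_RELIABLE versus no flags when creating the ENet packet.

```cpp
#include <cstdint>

// The five-channel split from the table above.
enum class Channel : std::uint8_t {
    ControlReliable      = 0,
    GameplayReliable     = 1,
    InputUnreliable      = 2,
    SnapshotUnreliable   = 3,
    CorrectionUnreliable = 4,
};

// Channels 0-1 carry reliable ordered traffic; channels 2-4 are unreliable
// by design. On the send path this maps to the ENet packet flags passed to
// enet_packet_create before enet_peer_send.
inline bool isReliable(Channel c) {
    return c == Channel::ControlReliable || c == Channel::GameplayReliable;
}
```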
These are the current payloads defined by the protocol:
| Payload type | Channel | Payload size (v9) | Purpose |
|---|---|---|---|
| MsgHello | ControlReliable | 18 B | Client handshake initiation with protocol version and player name |
| MsgWelcome | ControlReliable | 5 B | Server acceptance, assigned player id, and server tick rate |
| MsgReject | ControlReliable | 3 B | Explicit handshake rejection with reason and expected protocol version |
| MsgLevelInfo | ControlReliable | 8 B | Announces authoritative matchId and mapSeed for round bootstrap |
| MsgLobbyState | ControlReliable | 76 B | Current lobby roster, ready states, and lobby phase |
| MsgLobbyReady | ControlReliable | 1 B | Client ready toggle for its current seat |
| MsgMatchLoaded | ControlReliable | 4 B | Client acknowledgement that the round scene is loaded |
| MsgMatchStart | ControlReliable | 12 B | Reliable round start edge with authoritative timing ticks |
| MsgMatchCancelled | ControlReliable | 4 B | Cancels round bootstrap and returns to the lobby |
| MsgMatchResult | ControlReliable | 22 B | Final authoritative round result |
| MsgInput | InputUnreliable | 21 B | Batched client input stream keyed by sequence number |
| MsgSnapshot | SnapshotUnreliable | 111 B | Authoritative replicated world state |
| MsgCorrection | CorrectionUnreliable | 20 B | Owner-local correction for reconciliation |
| MsgBombPlaced | GameplayReliable | 16 B | Reliable gameplay event for bomb creation |
| MsgExplosionResolved | GameplayReliable | 497 B | Reliable gameplay event for blast resolution and resulting world changes |
These fields are especially important for understanding how the protocol keeps client and server aligned:
| Field or concept | Description |
|---|---|
| kProtocolVersion | Hard compatibility gate used during the initial handshake |
| PacketHeader::type | Selects the payload parser, expected channel, and size-validation path |
| PacketHeader::payloadSize | Enables fast fixed-size validation before any payload is decoded |
| matchId | Prevents stale round data from leaking across lobby and match transitions |
| serverTick | Anchors snapshots, corrections, and reliable gameplay events on the same timeline |
| baseInputSeq / lastProcessedInputSeq | Connects client input history to authoritative server acknowledgement |
| channel assignment | Encodes the reliability and ordering strategy directly into the protocol |
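A minimal sketch of how matchId, serverTick, and lastProcessedInputSeq might gate incoming authoritative data on the client (struct and function names are assumptions). Data from a previous round or an older tick is dropped, and the acknowledged input sequence tells prediction how far the server has caught up.

```cpp
#include <cstdint>

// Cached client-side view of the authoritative timeline (names assumed).
struct ClientNetState {
    std::uint32_t matchId       = 0;  // current round, from MsgLevelInfo
    std::uint32_t latestTick    = 0;  // newest serverTick applied so far
    std::uint32_t ackedInputSeq = 0;  // lastProcessedInputSeq from server
};

// Returns true if an incoming snapshot or correction should be applied.
inline bool acceptAuthoritative(ClientNetState& s, std::uint32_t msgMatchId,
                                std::uint32_t serverTick,
                                std::uint32_t lastProcessedInputSeq) {
    if (msgMatchId != s.matchId) return false;     // stale round data
    if (serverTick <= s.latestTick) return false;  // out of date; the
                                                   // unreliable channels can
                                                   // reorder or duplicate
    s.latestTick    = serverTick;
    s.ackedInputSeq = lastProcessedInputSeq;
    return true;
}
```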
The networking layer uses an asymmetric authority split.
The server owns the authoritative match state: player positions, bomb state, explosions, round flow, and the final outcome of gameplay decisions all resolve on the server timeline.
The client owns local input intent and may temporarily predict the owning player's movement for responsiveness. That prediction improves local feel, but it does not replace server authority.
At a high level:

- the server simulates, decides, and replicates the results;
- the client sends input intent, predicts locally where allowed, and presents the authoritative outcome.
This split is the core design decision behind the networking layer. It keeps the shared match state consistent across all players while still allowing the local player to feel responsive under latency.
This section gives the high-level flow of a multiplayer round, from bootstrap into active play.
The transition from lobby into live gameplay is explicit and synchronised on the server timeline.
The server controls admission, lobby synchronisation, and round start through explicit control messages. MsgLevelInfo, MsgMatchLoaded, and MsgMatchStart ensure both sides agree on the current matchId and timing before gameplay unlocks.
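The admission step of that flow might look like the sketch below: MsgHello carries the client's protocol version, and a mismatch produces an explicit MsgReject rather than a silent desync later. The struct fields and function name are assumptions; the version gate itself is kProtocolVersion from Net/NetCommon.h.

```cpp
#include <cstdint>
#include <variant>

constexpr std::uint8_t kProtocolVersion = 9;

// Hypothetical decoded forms of the two possible handshake replies.
struct Welcome { std::uint8_t playerId; std::uint8_t serverTickRate; };
struct Reject  { std::uint8_t reason;   std::uint8_t expectedVersion; };

// Server-side admission decision on receiving a MsgHello.
inline std::variant<Welcome, Reject>
handleHello(std::uint8_t clientVersion, std::uint8_t nextFreeSeat,
            std::uint8_t tickRate) {
    if (clientVersion != kProtocolVersion)
        return Reject{/*reason=*/1, kProtocolVersion};  // hard version gate
    return Welcome{nextFreeSeat, tickRate};
}
```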
Once a round is active, the client and server exchange a small number of defined streams:

- an unreliable client-to-server input stream (MsgInput), keyed by sequence number;
- unreliable server-to-client world snapshots (MsgSnapshot) and owner-local corrections (MsgCorrection);
- reliable, ordered gameplay events (MsgBombPlaced, MsgExplosionResolved) on the GameplayReliable channel.
These streams feed authoritative simulation on the server and prediction-aware presentation on the client.
Prediction exists to keep the owning player responsive under latency.
The client records and sends local input, may predict immediate local movement, and later reconciles once authoritative correction arrives. If replay cannot be trusted, the client falls back to recovery behaviour instead.
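The reconciliation step described above can be sketched as follows: the client keeps a history of sent inputs keyed by sequence number; when a correction arrives, it rewinds to the authoritative state, drops the inputs the server has already processed (per lastProcessedInputSeq), and replays the rest. The movement step and all names are simplified assumptions, not the project's actual simulation code.

```cpp
#include <cstdint>
#include <deque>

struct PredictedInput { std::uint32_t seq; float moveX; float moveY; };
struct PlayerState    { float x = 0; float y = 0; };

// Deliberately simplified movement step for the sketch.
inline PlayerState applyInput(PlayerState s, const PredictedInput& in) {
    s.x += in.moveX;
    s.y += in.moveY;
    return s;
}

// history holds unacknowledged inputs in sequence order.
inline PlayerState reconcile(std::deque<PredictedInput>& history,
                             std::uint32_t lastProcessedInputSeq,
                             PlayerState authoritative) {
    // Drop everything the server has already applied.
    while (!history.empty() && history.front().seq <= lastProcessedInputSeq)
        history.pop_front();
    // Rewind to the server's state, then replay the unacknowledged inputs.
    PlayerState predicted = authoritative;
    for (const auto& in : history)
        predicted = applyInput(predicted, in);
    return predicted;
}
```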
The deeper client-side behaviour lives in Client Multiplayer Netcode.
The networking layer is designed to be inspectable, not just functional.
The detailed methodology lives in Testing, but the important point here is that observability is part of the networking design itself.
MsgCorrection is a good example of the protocol style used across the networking layer: it combines the shared header with authoritative timing, acknowledged input progress, and owner-local repair state in one compact fixed-size packet.
Including the shared PacketHeader, the full packet is 23 B on the wire: 3 B of header plus a 20 B correction payload.
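One plausible byte layout for that 20 B payload is sketched below. serverTick and lastProcessedInputSeq come from the field table earlier on this page; the remaining owner-local repair fields (position and a flags word) are hypothetical stand-ins, as is the explicit-offset packing helper.

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Hypothetical decoded form of the 20 B MsgCorrection payload.
struct MsgCorrection {
    std::uint32_t serverTick;             // authoritative timeline anchor
    std::uint32_t lastProcessedInputSeq;  // input acknowledgement
    float         posX, posY;             // repaired owner position (assumed)
    std::uint32_t repairFlags;            // hypothetical extra repair state
};

constexpr std::size_t kCorrectionPayloadSize = 20;  // v9, per the table

// Fixed-offset serialization keeps the wire layout explicit: five 4-byte
// fields at offsets 0, 4, 8, 12, 16 account for all 20 payload bytes.
inline std::array<std::uint8_t, kCorrectionPayloadSize>
packCorrection(const MsgCorrection& m) {
    std::array<std::uint8_t, kCorrectionPayloadSize> out{};
    std::memcpy(out.data() + 0,  &m.serverTick, 4);
    std::memcpy(out.data() + 4,  &m.lastProcessedInputSeq, 4);
    std::memcpy(out.data() + 8,  &m.posX, 4);
    std::memcpy(out.data() + 12, &m.posY, 4);
    std::memcpy(out.data() + 16, &m.repairFlags, 4);
    return out;
}
```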