Is minimatch really just a mini? No, it is not! Despite its name, minimatch is built to scale.
minimatch can be deployed in a scalable configuration as shown in the following figure. Want to try it? See the Helm chart charts/minimatch-scaled in the repository.
Since the Frontend, Backend, and Redis are separate processes, configure them separately instead of using minimatch.NewMinimatchWithRedis.
Use statestore.NewRedisStore to configure Redis, passing a rueidis.Client and a rueidislock.Locker.
```go
// Create a Redis client
redis, err := rueidis.NewClient(rueidis.ClientOption{
	InitAddress: []string{"x.x.x.x:6379"},
})
if err != nil {
	log.Fatalf("failed to create Redis client: %v", err)
}
// Create a Redis locker client
locker, err := rueidislock.NewClient(rueidislock.LockerOption{
	ClientOption: rueidis.ClientOption{InitAddress: []string{"x.x.x.x:6379"}},
})
if err != nil {
	log.Fatalf("failed to create Redis locker: %v", err)
}
store := statestore.NewRedisStore(redis, locker)
```
The minimatch Frontend mainly provides the CreateTicket and WatchAssignment APIs. No matchmaking logic is required here. See the Scalable frontend example for a working example.
```go
sv := grpc.NewServer()
pb.RegisterFrontendServiceServer(sv, minimatch.NewFrontendService(store))
```
The minimatch Backend fetches created Tickets and performs matchmaking. Your matchmaking logic goes here. See the Scalable backend example for a working example.
```go
matchProfile := &pb.MatchProfile{...}
matchFunction := minimatch.MatchFunctionSimple1vs1
assigner := minimatch.AssignerFunc(dummyAssign)
backend, err := minimatch.NewBackend(store, assigner)
if err != nil {
	log.Fatalf("failed to create backend: %v", err)
}
backend.AddMatchFunction(matchProfile, matchFunction)
```
You can configure a read replica for GetTicket(s) as follows:
```go
primary, err := rueidis.NewClient(...)
replica, err := rueidis.NewClient(...)
statestore.NewRedisStore(primary, locker, statestore.WithRedisReadReplicaClient(replica))
```
If Redis becomes the bottleneck, storing Tickets and Assignments on separate Redis servers improves load balancing:
```go
redis1, err := rueidis.NewClient(...)
redis2, err := rueidis.NewClient(...)
statestore.NewRedisStore(redis1, locker, statestore.WithSeparatedAssignmentRedis(redis2))
```
minimatch achieved 5,000 assign/s under the following conditions:
- 1vs1 simple matchmaking
- Backend tick rate: 100ms
- Kubernetes cluster: GKE Autopilot (asia-northeast1 region)
- Total vCPU: 70 (includes loadtest attacker)
- Total memory: 245GB
- Attacker replicas: 50 (CPU: 500m, Mem: 1GiB)
- Frontend replicas: 50 (CPU: 500m, Mem: 1GiB)
- Backend replicas: 10 (CPU: 500m, Mem: 1GiB)
- Redis (primary): Google Cloud Memorystore for Redis Basic tier (max capacity: 1GB)
The 50th percentile ticket assignment time was stable at under 170 ms.
The code used for the load test is in loadtest/.