zetcd performance #94
There have been performance improvements since then on both etcd and zetcd, but another round of benchmarks would be needed to quantify the impact. If performance has gotten worse, there's a regression.
Thanks, Anthony. Are there any plans for these benchmarks? If so, should I convert this issue into a tracking issue for them?
I just ran zkboom on a deployment backed by a 5-node etcd v3.3.2 cluster, attempting to replicate the table shown here: https://coreos.com/blog/introducing-zetcd. I compiled the latest zetcd HEAD and vendored in etcd v3.3.2 and gRPC v1.7.5 as well. With a single zetcd replica and small numbers of concurrent clients, things are much faster than in the blog post. However, the write latencies are higher once you hit 50 concurrent clients. This may be expected: my 5-node cluster will have higher write latency than a 3-node or 1-node cluster (I don't know what was used in the blog post), and my etcd peer and client connections are secured with TLS, which could incur a performance hit. If I set an overall request limit of 500/s, I get an average write latency of 30ms, so it looks like running without a limit just hammers zetcd, and it can't really handle more than 500 writes a second on my setup.
To put this in perspective against a real 5-node ZooKeeper deployment, I can run zkboom with 50 connections and get 2.8k op/s, 5x higher throughput than zetcd and with much lower latency. Unless I've done something terribly wrong, it seems like zetcd is only useful if you're expecting fewer than 500 writes a second.
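For anyone wanting to reproduce a similar measurement without zkboom itself, here is a minimal Go sketch of the same idea: N concurrent ZooKeeper clients issuing creates against a zetcd endpoint, with an optional overall rate cap and average write latency reported at the end. This is not the zkboom tool; the endpoint address (127.0.0.1:2181), the samuel/go-zookeeper client library, and the connection/op counts are all assumptions chosen for illustration.

```go
// Rough load sketch (not zkboom): concurrent ZooKeeper writes against zetcd,
// optionally capped at an aggregate rate, reporting average write latency.
// Assumes zetcd is listening on 127.0.0.1:2181.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"github.com/samuel/go-zookeeper/zk"
)

func main() {
	const (
		conns      = 50  // concurrent clients
		opsPerConn = 100 // writes issued per client
		rateLimit  = 500 // aggregate writes/sec budget
	)

	var totalNanos, totalOps int64

	// A shared ticker crudely caps aggregate throughput at rateLimit ops/sec.
	ticks := time.Tick(time.Second / rateLimit)

	var wg sync.WaitGroup
	for c := 0; c < conns; c++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
			if err != nil {
				fmt.Println("connect:", err)
				return
			}
			defer conn.Close()
			for i := 0; i < opsPerConn; i++ {
				<-ticks // wait for a slot in the shared rate budget
				path := fmt.Sprintf("/bench-%d-%d", id, i)
				start := time.Now()
				if _, err := conn.Create(path, []byte("x"), 0, zk.WorldACL(zk.PermAll)); err != nil {
					continue // count only successful writes
				}
				atomic.AddInt64(&totalNanos, int64(time.Since(start)))
				atomic.AddInt64(&totalOps, 1)
			}
		}(c)
	}
	wg.Wait()

	if totalOps > 0 {
		fmt.Printf("writes: %d, avg latency: %v\n",
			totalOps, time.Duration(totalNanos/totalOps))
	}
}
```

Raising or removing the rate cap in a sketch like this is the kind of change that, per the numbers above, pushes zetcd past what it can absorb and inflates write latency.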
Is zetcd+etcd supposed to be faster than zk nowadays? I saw a few charts showing that etcd itself is faster than zk, and this benchmark is from zetcd 0.0.1; is it still relevant?