[FEATURE] Drop HAProxy as recommended until we can do further testing #1360

Open
ravindk89 opened this issue Oct 31, 2024 · 2 comments
Labels: kubernetes, tiny (Small, bite-sized fixes that require minimal effort)

@ravindk89 (Collaborator)
haproxy/haproxy#2076

Based on those reports from circa 2023, HAProxy has a fundamental incompatibility with certain MinIO S3 API calls around Tiering. At least one other user has reported general slowness or other issues with HAProxy.

The ticket does include an example HAProxy config we could start from and experiment with, to see whether we can resolve both the performance and the S3 API issues. According to a maintainer, the latter is not going away, since it stems from S3's implementation not respecting HTTP/1.1 (or something along those lines).

I think we have had customers and users run HAProxy before, but given the mixed results and compatibility concerns, we should drop it from the docs until we have a formally reviewed and tested setup where all APIs are known to work with normal/expected performance.

ravindk89 added the triage (Needs triage and scheduling) label Oct 31, 2024
ravindk89 self-assigned this Oct 31, 2024
ravindk89 added the tiny (Small, bite-sized fixes that require minimal effort) and kubernetes labels and removed the triage (Needs triage and scheduling) label Oct 31, 2024
@ravindk89 (Collaborator, Author)

Example config here:

defaults
  mode                    http
  log                     global
  option                  httplog
  option                  dontlognull
  option                  http-server-close   # removing this makes no difference
  retries                 3
  timeout http-request    10s                 # removing this makes no difference
  timeout queue           1m
  timeout connect         10s
  timeout client          300s
  timeout server          300s
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 10000

frontend ft_http
  bind :::80 v4v6
  mode http
  # removing all of the stats config makes no difference
  stats enable
  stats auth <snip>
  stats refresh 30s
  stats show-node
  stats uri  /haproxy_adm_panel
  stats admin if TRUE
  option forwardfor
  default_backend bk_http

backend bk_http
  server hydra1 <target1>:<port1> check
  server hydra2 <target2>:<port2> check
  balance roundrobin
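
If we do experiment with this, one possible direction (only a sketch using standard HAProxy directives, not a tested or verified fix for the S3 API or performance problems) would be to tune the backend along these lines:

backend bk_http
  balance leastconn          # spreads long-running S3 transfers more evenly than roundrobin
  option http-keep-alive     # keep connections open between requests
  http-reuse safe            # reuse idle server connections where it is safe to do so
  timeout server 300s        # generous timeout for large PUT/GET operations
  server hydra1 <target1>:<port1> check
  server hydra2 <target2>:<port2> check

Whether any of this changes the Tiering/S3 API behavior described in haproxy/haproxy#2076 would still need to be verified.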

@ravindk89 (Collaborator, Author)

Will also drive-by an update to the nginx guidance, recommending that client_max_body_size be set to something larger than the default 1 MiB:
https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

@pavelanni I think in your example you set it to 1gi, but is there some other value here that makes more sense? The body size generally shouldn't need to exceed the maximum part size plus some buffer anyway, right?
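
For reference, a minimal nginx reverse-proxy sketch (the hostname and upstream are placeholders, and both the unlimited value 0 and an explicit cap near the 5 GiB S3 maximum part size are options to discuss, not settled guidance):

server {
  listen 80;
  server_name minio.example.net;             # placeholder hostname

  # 0 disables the request-body size check entirely; an explicit value such as
  # 6g would cover the largest single S3 part (5 GiB) plus some headroom
  client_max_body_size 0;

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://minio-backend:9000;    # placeholder upstream
  }
}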
