
WIP: Feature/double hbone #1429

Draft · Stevenjin8 wants to merge 8 commits into master from feature/double-hbone

Conversation

@Stevenjin8 (Contributor) commented Jan 17, 2025

Initial double HBONE implementation.

Right now, the inner HBONE connection only holds one CONNECT tunnel; once the inner tunnel terminates, so does the outer tunnel (but not the outer HBONE connection). So when ztunnel receives its first connection to a double HBONE host (E/W gateway), it performs two TLS handshakes. Subsequent connections to the same host perform one TLS handshake.

This behavior is not great, but if we put the inner HBONE connection in the connection pool, we pin ourselves to a single pod in the remote cluster: ztunnel pools connections without being aware of the E/W gateway's routing decisions.

That said, I think this is a good place to stop, think about the control plane implementation, and get some feedback on how I'm approaching this.

NOTE: The TLS/certificate-related code changes are just for my own testing.
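For reviewers following along, here is a minimal sketch of the dial order described above. `tls_connect` and `h2_connect_tunnel` are hypothetical stubs standing in for ztunnel's rustls/h2 plumbing, not this PR's actual API:

```rust
use std::net::SocketAddr;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::TcpStream;

// Hypothetical stand-ins for the real TLS handshake and HTTP/2 CONNECT logic.
async fn tls_connect<S>(io: S, _expected_san: &str) -> std::io::Result<S> {
    Ok(io) // a real version performs a TLS handshake over `io`
}
async fn h2_connect_tunnel<S>(io: S, _authority: &str) -> std::io::Result<S> {
    Ok(io) // a real version sends an HTTP/2 CONNECT and returns the stream
}

async fn dial_double_hbone(
    gateway: SocketAddr, // E/W gateway address
    final_dst: &str,     // :authority of the inner CONNECT
) -> std::io::Result<impl AsyncRead + AsyncWrite> {
    // Outer TLS handshake with the E/W gateway (handshake #1).
    let tcp = TcpStream::connect(gateway).await?;
    let outer_tls = tls_connect(tcp, "ew-gateway-san").await?;
    // Outer HTTP/2 CONNECT: the reusable, pooled outer HBONE connection.
    let outer_tunnel = h2_connect_tunnel(outer_tls, final_dst).await?;
    // Inner TLS handshake through the tunnel (handshake #2): this is why
    // the first connection through a gateway costs two handshakes.
    let inner_tls = tls_connect(outer_tunnel, "backend-san").await?;
    // Inner HTTP/2 CONNECT carrying the application's TCP bytes; in this
    // PR the inner connection holds exactly one tunnel and is not pooled.
    h2_connect_tunnel(inner_tls, final_dst).await
}
```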

Tasks:

  • Implement double HBONE
  • Implement graceful shutdowns
  • Fix TLS code (I don't think I can do this without more control plane info since SANs are set for services)
  • Implement proper inner HBONE connection pooling
  • Tests

Some open questions:

  • Do I need to make any changes to metrics?
  • How should we do inner HBONE pooling? My suggestion is to have up to N inner HBONE connections per E/W gateway or per remote cluster (see the sketch after this list).
  • What are good ways to test for race conditions in connection termination? Right now, connections seem to end gracefully without race conditions, but that's just on my machine.
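A hedged sketch of that per-gateway cap; all names here are hypothetical, not ztunnel's actual pool types:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

const MAX_INNER_CONNS: usize = 4; // the "N" in the question above

struct InnerConn; // placeholder for an established inner HBONE connection

#[derive(Default)]
struct InnerHbonePool {
    // Keyed by E/W gateway (or remote cluster), NOT by backend pod:
    // keying by pod is exactly the pinning problem described above.
    conns: HashMap<SocketAddr, Vec<InnerConn>>,
}

impl InnerHbonePool {
    /// Reuse an existing inner conn if we are at the cap,
    /// otherwise tell the caller to dial a fresh one.
    fn checkout(&mut self, gateway: SocketAddr) -> Option<&mut InnerConn> {
        let conns = self.conns.entry(gateway).or_default();
        if conns.len() >= MAX_INNER_CONNS {
            conns.first_mut() // a real version would round-robin
        } else {
            None // caller dials a new inner conn and calls `insert`
        }
    }

    fn insert(&mut self, gateway: SocketAddr, conn: InnerConn) {
        self.conns.entry(gateway).or_default().push(conn);
    }
}
```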


@istio-testing (Contributor)

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@istio-testing added the do-not-merge/work-in-progress, needs-rebase, and size/L labels on Jan 17, 2025
@Stevenjin8 force-pushed the feature/double-hbone branch from e27a4a8 to 40bbcd0 on January 17, 2025 17:00
@istio-testing removed the needs-rebase label on Jan 17, 2025
@Stevenjin8 changed the title from "Feature/double hbone" to "WIP: Feature/double hbone" on Jan 17, 2025
@Stevenjin8 added the do-not-merge/hold label on Jan 17, 2025
@Stevenjin8 force-pushed the feature/double-hbone branch from 66db8ae to 99d622f on January 17, 2025 17:51
@@ -107,7 +112,7 @@ impl Outbound {
debug!(component="outbound", dur=?start.elapsed(), "connection completed");
}).instrument(span);

assertions::size_between_ref(1000, 1750, &serve_outbound_connection);
@Stevenjin8 (Contributor, Author):

How did we get these numbers?

Contributor:

By looking at the current size and adding a small amount of buffer.
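One quick way to read off that size, if useful (a temporary debug print, not necessarily what was done here):

```rust
// Prints the future's size in bytes so the assertion bounds
// can be picked with a little headroom.
println!("size = {}", std::mem::size_of_val(&serve_outbound_connection));
```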

@Stevenjin8 force-pushed the feature/double-hbone branch from 99d622f to 96bb4de on January 17, 2025 17:55
@@ -83,6 +83,26 @@ struct ConnSpawner {

// Does nothing but spawn new conns when asked
impl ConnSpawner {
async fn new_unpooled_conn(
@Stevenjin8 (Contributor, Author):

I think anything here can be done higher up, but things might change if we decide to implement pooling in this PR.

@bleggett (Contributor) commented Jan 17, 2025:
Yeah if we want double-hbone conns to be unpooled and thus need ~none of this surrounding machinery, then I'd be inclined to just start proxy/double-hbone.rs and use that directly, rather than complicating the purpose of this file.

(Could also just have a common HboneConnMgr trait or something too)
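For illustration, a rough sketch of such a trait (hypothetical names and stand-in types, not the PR's API): the pooled single-HBONE manager and an unpooled double-HBONE manager could both implement it, keeping callers agnostic to pooling.

```rust
struct WorkloadKey; // stand-in for pool::WorkloadKey
struct H2Stream;    // stand-in for an established CONNECT stream

trait HboneConnMgr {
    // Hand back a ready-to-use tunnel for `key`: checked out of a
    // pool or freshly dialed, depending on the implementation.
    async fn get_tunnel(&mut self, key: &WorkloadKey) -> std::io::Result<H2Stream>;
}
```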

Comment on lines 247 to 256
// This always drops ungracefully
// drop(conn_client);
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
// drain_tx.send(true).unwrap();
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
drain_tx.send(true).unwrap();
let _ = driver_task.await;
// this sleep is important, so we have a race condition somewhere
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
res
@Stevenjin8 (Contributor, Author):

Does anybody have info on how to properly drop/terminate H2 connections over streams with nontrivial drop behavior (e.g. shutting down TLS over HTTP/2 CONNECT)? Right now, I'm just dropping things and aborting tasks randomly until something works.

Contributor:

Are you asking about how to clean up after, for example, a RST_STREAM to the inner tunnel? Or something else?

@Stevenjin8 (Contributor, Author):

Kinda. I mostly mean the outer TLS stream, because that's what I've looked at. It seems like if I drop conn_client before driver_task terminates, the TCP connection will close without sending a TLS close_notify. So yes, I'm asking if there is a way to do cleanup explicitly rather than relying on implicit drops.

Contributor:

I see the code changed; do you still need help figuring this out?

@Stevenjin8 (Contributor, Author):

I'm still not confident in it. It works (on my machine), but I couldn't find any docs on proper connection termination/dropping.
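For reference, a hedged sketch of the ordering the thread converged on; the types are stand-ins, but the sequence is the point: signal drain, await the connection driver, and only then drop the client handle, so the h2 GOAWAY and the TLS close_notify actually make it onto the wire.

```rust
use tokio::sync::watch;
use tokio::task::JoinHandle;

struct ConnClient; // stand-in for the PR's ConnClient / h2 request sender

async fn shutdown_tunnel(
    conn_client: ConnClient,
    drain_tx: watch::Sender<bool>,
    driver_task: JoinHandle<()>,
) {
    drain_tx.send(true).ok();  // ask the driver task to wind down
    let _ = driver_task.await; // driver flushes GOAWAY and close_notify
    drop(conn_client);         // dropping earlier races the driver and
                               // can tear down TCP before close_notify
}
```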

@@ -217,12 +267,12 @@ impl OutboundConnection {
copy::copy_bidirectional(copy::TcpStreamSplitter(stream), upgraded, connection_stats).await
}

async fn send_hbone_request(
fn create_hbone_request(
@Stevenjin8 (Contributor, Author):

Git merge is getting confused here.

@@ -70,7 +70,7 @@ const IPV6_ENABLED: &str = "IPV6_ENABLED";

const UNSTABLE_ENABLE_SOCKS5: &str = "UNSTABLE_ENABLE_SOCKS5";

const DEFAULT_WORKER_THREADS: u16 = 2;
const DEFAULT_WORKER_THREADS: u16 = 40;
Contributor:

I may have missed it in the description, but why the change here?

@Stevenjin8 (Contributor, Author):

I was hoping it would make debugging async Rust easier (it didn't).


@Stevenjin8 force-pushed the feature/double-hbone branch from 1ea75fb to f1cc535 on January 21, 2025 17:07

// Inner HBONE
let upgraded = TokioH2Stream::new(upgraded);
// TODO: dst should take a hostname? and upstream_sans currently contains E/W Gateway certs
let inner_workload = pool::WorkloadKey {
@Stevenjin8 (Contributor, Author):

Will reorganize later.

@Stevenjin8 force-pushed the feature/double-hbone branch from a8856a4 to 565f41f on January 21, 2025 21:19
Protocol::TCP => None,
};
let (upstream_sans, final_sans) = match us.workload.protocol {
@Stevenjin8 (Contributor, Author) commented Jan 21, 2025:

My understanding from talking to @keithmattix is that Upstream.service_sans will be repurposed to contain the identities of remote pods/waypoints, so I should change the logic of the other protocols to use only us.workload.identity instead of us.workload_and_services_san.

Contributor:

Yes, I think this is correct; only the double HBONE codepath needs to be added/changed, because there are two SANs being considered: the E/W gateway SAN and the SANs of the backends. So what you have looks right to me.
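For illustration, a self-contained sketch of that SAN split; all types here are stand-ins approximating the PR, not its actual definitions:

```rust
enum Protocol {
    Tcp,
    Hbone,
    DoubleHbone,
}

struct Upstream {
    protocol: Protocol,
    workload_identity: String, // the backend workload's identity
    gateway_sans: Vec<String>, // E/W gateway identities (outer handshake)
    service_sans: Vec<String>, // remote pod/waypoint identities (inner handshake)
}

// Returns (SANs for the first handshake, SANs for the final handshake).
fn pick_sans(us: &Upstream) -> (Vec<String>, Vec<String>) {
    match us.protocol {
        // Double HBONE verifies two peers: the E/W gateway on the outer
        // TLS session, the actual backends on the inner one.
        Protocol::DoubleHbone => (us.gateway_sans.clone(), us.service_sans.clone()),
        // Everything else has a single handshake against the workload identity.
        _ => (vec![us.workload_identity.clone()], vec![]),
    }
}
```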

@@ -511,10 +578,10 @@ impl WorkloadHBONEPool {
#[derive(Debug, Clone)]
// A sort of faux-client, that represents a single checked-out 'request sender' which might
// send requests over some underlying stream using some underlying http/2 client
struct ConnClient {
sender: H2ConnectClient,
pub struct ConnClient {
@Stevenjin8 (Contributor, Author):

FIXME

Labels: do-not-merge/hold · do-not-merge/work-in-progress · size/L

4 participants