callro should route requests to the master, but instead routes them to the replica #488

Open
pavlua opened this issue Sep 5, 2024 · 2 comments

pavlua commented Sep 5, 2024

This looks like a bug in the new vshard.

Previously (in Tarantool 2 clusters), callro always routed requests to the master: when I called require('vshard').router.callro(1, 'dostring', {'return box.info.uuid'}), I always got the UUID of the master.

Now, with a Tarantool 3 cluster and vshard 0.1.28, the same call vshard.router.callro(1, 'dostring', {'return box.info.uuid'}) returns the UUID of a replica.

This breaks some logic in the crud module: under the hood, when we call crud.select with prefer_replica = false and balance = false, it uses the callro method, which now lands on a replica. For example:
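A minimal sketch of the affected call path, assuming a typical crud setup (the space name 'customers' and the condition are placeholders, not from my cluster):

local crud = require('crud')

-- With both flags set to false, crud issues the read through
-- vshard's callro, which is expected to land on the master.
local res, err = crud.select('customers', {{'==', 'id', 1}}, {
    prefer_replica = false,
    balance = false,
})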

Serpentian (Contributor) commented:

I cannot reproduce the behavior you're describing. The router still goes to the master for me by default.

We have a basic example of a vshard cluster in the example/ directory. Here's what I do:

$> make
tarantoolctl start storage_1_a
Starting instance storage_1_a...
<...>
unix/:./data/router_1.control> require('vshard').router.callro(1, 'dostring', {'return box.info.uuid'})
---
- 1e02ae8a-afc0-4e91-ba34-843a356b8ed7
...
unix/:./data/router_1.control> require('vshard').router.callro(1, 'dostring', {'return box.info.ro'})
---
- false
...

Here's the config:

{
    sharding = {
        ['cbf06940-0790-498b-948d-042b62cf3d29'] = { -- replicaset #1
            replicas = {
                ['8a274925-a26d-47fc-9e1b-af88ce939412'] = {
                    uri = 'storage:[email protected]:3301',
                    name = 'storage_1_a',
                    master = true
                },
                ['3de2e3e1-9ebe-4d0d-abb1-26d301b84633'] = {
                    uri = 'storage:[email protected]:3302',
                    name = 'storage_1_b'
                }
            },
        }, -- replicaset #1
        ['ac522f65-aa94-4134-9f64-51ee384f1a54'] = { -- replicaset #2
            replicas = {
                ['1e02ae8a-afc0-4e91-ba34-843a356b8ed7'] = {
                    uri = 'storage:[email protected]:3303',
                    name = 'storage_2_a',
                    master = true
                },
                ['001688c3-66f8-4a31-8e19-036c17d489c2'] = {
                    uri = 'storage:[email protected]:3304',
                    name = 'storage_2_b'
                }
            },
        }, -- replicaset #2
    }, -- sharding
    replication_connect_quorum = 0,
}

If you use zones to configure vshard, please check that the weight of the master is >= the weight of the replica. Also check the log for errors: in callro, the router will go to a replica instead of the master if the connection to the master is down or 3 sequential requests to it fail. But with properly working replicasets and correctly configured weights, callro should go to the master by default. A sketch of such a zone/weight configuration follows.
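For reference, here is a minimal sketch of a zoned configuration, reusing replicaset #1 from the config above. The zone numbers and the weights matrix are illustrative assumptions, not part of my example:

-- Illustrative only: the router and the master are in zone 1,
-- the replica is in zone 2. weights[a][b] is the distance from
-- zone a to zone b; for read requests the router prefers storages
-- with the smallest distance from its own zone.
local cfg = {
    zone = 1, -- this router's own zone
    weights = {
        [1] = { [2] = 10 }, -- from zone 1, zone 2 is "far"
        [2] = { [1] = 10 },
    },
    sharding = {
        ['cbf06940-0790-498b-948d-042b62cf3d29'] = {
            replicas = {
                ['8a274925-a26d-47fc-9e1b-af88ce939412'] = {
                    uri = 'storage:[email protected]:3301',
                    name = 'storage_1_a',
                    master = true,
                    zone = 1,
                },
                ['3de2e3e1-9ebe-4d0d-abb1-26d301b84633'] = {
                    uri = 'storage:[email protected]:3302',
                    name = 'storage_1_b',
                    zone = 2,
                },
            },
        },
    },
}
require('vshard').router.cfg(cfg)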

Serpentian added the needs feedback (Something is unclear with the issue) label on Sep 5, 2024
pavlua (Author) commented Sep 6, 2024

I prepared a more structured repro.
tt version: Tarantool CLI EE 2.4.0, linux/amd64. commit: d5f731b
tarantool --version: Tarantool Enterprise 3.1.1-0-g84de79644

  1. tt init
  2. tt create vshard_cluster --name bug-488-repro (1 replicaset, other parameters default)
  3. update the dependency to vshard == 0.1.28 in the rockspec (see the sketch after this list)
  4. tt build bug-488-repro
  5. tt start bug-488-repro
  6. tt connect bug-488-repro:router-001-a
  7. vshard.router.callro(1, 'dostring', {'return box.info.ro'})

Expected: false, actual: true.
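For reference, step 3 amounts to pinning the vshard dependency in the rockspec generated by tt. A minimal sketch, assuming a standard tt-generated rockspec (the package name and other fields are illustrative; only the dependencies entry matters):

-- bug-488-repro-scm-1.rockspec (illustrative)
package = 'bug-488-repro'
version = 'scm-1'
source = { url = '/dev/null' }
dependencies = {
    'vshard == 0.1.28',
}
build = { type = 'none' }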

sergepetrenko removed the needs feedback label on Sep 6, 2024