<?xml version="1.0" encoding="utf-8"?>
<manpage program="ovn-architecture" section="7" title="OVN Architecture">
<h1>Name</h1>
<p>ovn-architecture -- Open Virtual Network architecture</p>
<h1>Description</h1>
<p>
OVN, the Open Virtual Network, is a system to support logical network
abstraction in virtual machine and container environments. OVN complements
the existing capabilities of OVS to add native support for logical network
abstractions, such as logical L2 and L3 overlays and security groups.
Services such as DHCP are also desirable features. Just like OVS, OVN's
design goal is to have a production-quality implementation that can operate
at significant scale.
</p>
<p>
A physical network comprises physical wires, switches, and routers. A
<dfn>virtual network</dfn> extends a physical network into a hypervisor or
container platform, bridging VMs or containers into the physical network.
An OVN <dfn>logical network</dfn> is a network implemented in software that
is insulated from physical (and thus virtual) networks by tunnels or other
encapsulations. This allows IP and other address spaces used in logical
networks to overlap with those used on physical networks without causing
conflicts. Logical network topologies can be arranged without regard for
the topologies of the physical networks on which they run. Thus, VMs that
are part of a logical network can migrate from one physical machine to
another without network disruption. See <code>Logical Networks</code>,
below, for more information.
</p>
<p>
The encapsulation layer prevents VMs and containers connected to a logical
network from communicating with nodes on physical networks. For clustering
VMs and containers, this can be acceptable or even desirable, but in many
cases VMs and containers do need connectivity to physical networks. OVN
provides multiple forms of <dfn>gateways</dfn> for this purpose. See
<code>Gateways</code>, below, for more information.
</p>
<p>
An OVN deployment consists of several components:
</p>
<ul>
<li>
<p>
A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
OVN's ultimate client (via its users and administrators). OVN
integration requires installing a CMS-specific plugin and
related software (see below). OVN initially targets OpenStack as its
CMS.
</p>
<p>
We generally speak of ``the'' CMS, but one can imagine scenarios in
which multiple CMSes manage different parts of an OVN deployment.
</p>
</li>
<li>
An OVN Database physical or virtual node (or, eventually, cluster)
installed in a central location.
</li>
<li>
One or more (usually many) <dfn>hypervisors</dfn>. Hypervisors must run
Open vSwitch and implement the interface described in
<code>Documentation/topics/integration.rst</code> in the OVN source tree.
Any hypervisor platform supported by Open vSwitch is acceptable.
</li>
<li>
<p>
Zero or more <dfn>gateways</dfn>. A gateway extends a tunnel-based
logical network into a physical network by bidirectionally forwarding
packets between tunnels and a physical Ethernet port. This allows
non-virtualized machines to participate in logical networks. A gateway
may be a physical host, a virtual machine, or an ASIC-based hardware
switch that supports the <code>vtep</code>(5) schema.
</p>
<p>
Hypervisors and gateways are together called <dfn>transport nodes</dfn>
or <dfn>chassis</dfn>.
</p>
</li>
</ul>
<p>
The diagram below shows how the major components of OVN and related
software interact. Starting at the top of the diagram, we have:
</p>
<ul>
<li>
The Cloud Management System, as defined above.
</li>
<li>
<p>
The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
interfaces to OVN. In OpenStack, this is a Neutron plugin.
The plugin's main purpose is to translate the CMS's notion of logical
network configuration, stored in the CMS's configuration database in a
CMS-specific format, into an intermediate representation understood by
OVN.
</p>
<p>
This component is necessarily CMS-specific, so a new plugin needs to be
developed for each CMS that is integrated with OVN. All of the
components below this one in the diagram are CMS-independent.
</p>
</li>
<li>
<p>
The <dfn>OVN Northbound Database</dfn> receives the intermediate
representation of logical network configuration passed down by the
OVN/CMS Plugin. The database schema is meant to be ``impedance
matched'' with the concepts used in a CMS, so that it directly supports
notions of logical switches, routers, ACLs, and so on. See
<code>ovn-nb</code>(5) for details.
</p>
<p>
The OVN Northbound Database has only two clients: the OVN/CMS Plugin
above it and <code>ovn-northd</code> below it.
</p>
</li>
<li>
<code>ovn-northd</code>(8) connects to the OVN Northbound Database
above it and the OVN Southbound Database below it. It translates the
logical network configuration in terms of conventional network
concepts, taken from the OVN Northbound Database, into logical
datapath flows in the OVN Southbound Database below it.
</li>
<li>
<p>
The <dfn>OVN Southbound Database</dfn> is the center of the system.
Its clients are <code>ovn-northd</code>(8) above it and
<code>ovn-controller</code>(8) on every transport node below it.
</p>
<p>
The OVN Southbound Database contains three kinds of data: <dfn>Physical
Network</dfn> (PN) tables that specify how to reach hypervisor and
other nodes, <dfn>Logical Network</dfn> (LN) tables that describe the
logical network in terms of ``logical datapath flows,'' and
<dfn>Binding</dfn> tables that link logical network components'
locations to the physical network. The hypervisors populate the PN and
Port_Binding tables, whereas <code>ovn-northd</code>(8) populates the
LN tables.
</p>
<p>
OVN Southbound Database performance must scale with the number of
transport nodes. This will likely require some work on
<code>ovsdb-server</code>(1) as we encounter bottlenecks.
Clustering for availability may be needed.
</p>
</li>
</ul>
<p>
The remaining components are replicated onto each hypervisor:
</p>
<ul>
<li>
<code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
software gateway. Northbound, it connects to the OVN Southbound
Database to learn about OVN configuration and status and to
populate the PN table and the <code>Chassis</code> column in
<code>Binding</code> table with the hypervisor's status.
Southbound, it connects to <code>ovs-vswitchd</code>(8) as an
OpenFlow controller, for control over network traffic, and to the
local <code>ovsdb-server</code>(1) to allow it to monitor and
control Open vSwitch configuration.
</li>
<li>
<code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
conventional components of Open vSwitch.
</li>
</ul>
<pre fixed="yes">
CMS
|
|
+-----------|-----------+
| | |
| OVN/CMS Plugin |
| | |
| | |
| OVN Northbound DB |
| | |
| | |
| ovn-northd |
| | |
+-----------|-----------+
|
|
+-------------------+
| OVN Southbound DB |
+-------------------+
|
|
+------------------+------------------+
| | |
HV 1 | | HV n |
+---------------|---------------+ . +---------------|---------------+
| | | . | | |
| ovn-controller | . | ovn-controller |
| | | | . | | | |
| | | | | | | |
| ovs-vswitchd ovsdb-server | | ovs-vswitchd ovsdb-server |
| | | |
+-------------------------------+ +-------------------------------+
</pre>
<h2>Information Flow in OVN</h2>
<p>
Configuration data in OVN flows from north to south. The CMS, through its
OVN/CMS plugin, passes the logical network configuration to
<code>ovn-northd</code> via the northbound database. In turn,
<code>ovn-northd</code> compiles the configuration into a lower-level form
and passes it to all of the chassis via the southbound database.
</p>
<p>
Status information in OVN flows from south to north. OVN currently
provides only a few forms of status information. First,
<code>ovn-northd</code> populates the <code>up</code> column in the
northbound <code>Logical_Switch_Port</code> table: if a logical port's
<code>chassis</code> column in the southbound <code>Port_Binding</code>
table is nonempty, it sets <code>up</code> to <code>true</code>, otherwise
to <code>false</code>. This allows the CMS to detect when a VM's
networking has come up.
</p>
<p>
Second, OVN provides feedback to the CMS on the realization of its
configuration, that is, whether the configuration provided by the CMS has
taken effect. This feature requires the CMS to participate in a sequence
number protocol, which works the following way:
</p>
<ol>
<li>
When the CMS updates the configuration in the northbound database, as
part of the same transaction, it increments the value of the
<code>nb_cfg</code> column in the <code>NB_Global</code> table. (This is
only necessary if the CMS wants to know when the configuration has been
realized.)
</li>
<li>
When <code>ovn-northd</code> updates the southbound database based on a
given snapshot of the northbound database, it copies <code>nb_cfg</code>
from northbound <code>NB_Global</code> into the southbound database
<code>SB_Global</code> table, as part of the same transaction. (Thus, an
observer monitoring both databases can determine when the southbound
database is caught up with the northbound.)
</li>
<li>
After <code>ovn-northd</code> receives confirmation from the southbound
database server that its changes have committed, it updates
<code>sb_cfg</code> in the northbound <code>NB_Global</code> table to the
<code>nb_cfg</code> version that was pushed down. (Thus, the CMS or
another observer can determine when the southbound database is caught up
without a connection to the southbound database.)
</li>
<li>
The <code>ovn-controller</code> process on each chassis receives the
updated southbound database, with the updated <code>nb_cfg</code>. This
process in turn updates the physical flows installed in the chassis's
Open vSwitch instances. When it receives confirmation from Open vSwitch
that the physical flows have been updated, it updates <code>nb_cfg</code>
in its own <code>Chassis</code> record in the southbound database.
</li>
<li>
<code>ovn-northd</code> monitors the <code>nb_cfg</code> column in all of
the <code>Chassis</code> records in the southbound database. It keeps
track of the minimum value among all the records and copies it into the
<code>hv_cfg</code> column in the northbound <code>NB_Global</code>
table. (Thus, the CMS or another observer can determine when all of the
hypervisors have caught up to the northbound configuration.)
</li>
</ol>
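<p>
For example, the <code>ovn-nbctl</code>(8) utility builds on this sequence
number protocol: its <code>--wait</code> option and <code>sync</code>
command use the columns described above to block until a change has been
realized. (The switch names below are only illustrative.)
</p>
<pre fixed="yes">
$ ovn-nbctl --wait=sb ls-add ls0  # returns once the southbound DB is updated
$ ovn-nbctl --wait=hv ls-add ls1  # returns once all hypervisors are updated
$ ovn-nbctl --wait=hv sync        # waits for earlier changes to take effect
</pre>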
<h2>Chassis Setup</h2>
<p>
Each chassis in an OVN deployment must be configured with an Open vSwitch
bridge dedicated for OVN's use, called the <dfn>integration bridge</dfn>.
System startup scripts may create this bridge prior to starting
<code>ovn-controller</code> if desired. If this bridge does not exist when
ovn-controller starts, it will be created automatically with the default
configuration suggested below. The ports on the integration bridge include:
</p>
<ul>
<li>
On any chassis, tunnel ports that OVN uses to maintain logical network
connectivity. <code>ovn-controller</code> adds, updates, and removes
these tunnel ports.
</li>
<li>
On a hypervisor, any VIFs that are to be attached to logical networks.
The hypervisor itself, or the integration between Open vSwitch and the
hypervisor (described in
<code>Documentation/topics/integration.rst</code>) takes care of this.
(This is not part of OVN or new to OVN; this is pre-existing integration
work that has already been done on hypervisors that support OVS.)
</li>
<li>
On a gateway, the physical port used for logical network connectivity.
System startup scripts add this port to the bridge prior to starting
<code>ovn-controller</code>. This can be a patch port to another bridge,
instead of a physical port, in more sophisticated setups.
</li>
</ul>
<p>
Other ports should not be attached to the integration bridge. In
particular, physical ports attached to the underlay network (as opposed to
gateway ports, which are physical ports attached to logical networks) must
not be attached to the integration bridge. Underlay physical ports should
instead be attached to a separate Open vSwitch bridge (they need not be
attached to any bridge at all, in fact).
</p>
<p>
The integration bridge should be configured as described below.
The effect of each of these settings is documented in
<code>ovs-vswitchd.conf.db</code>(5):
</p>
<!-- Keep the following in sync with create_br_int() in
ovn/controller/ovn-controller.c. -->
<dl>
<dt><code>fail-mode=secure</code></dt>
<dd>
Avoids switching packets between isolated logical networks before
<code>ovn-controller</code> starts up. See <code>Controller Failure
Settings</code> in <code>ovs-vsctl</code>(8) for more information.
</dd>
<dt><code>other-config:disable-in-band=true</code></dt>
<dd>
Suppresses in-band control flows for the integration bridge. It would be
unusual for such flows to show up anyway, because OVN uses a local
controller (over a Unix domain socket) instead of a remote controller.
It's possible, however, for some other bridge in the same system to have
an in-band remote controller, and in that case this suppresses the flows
that in-band control would ordinarily set up. Refer to the documentation
for more information.
</dd>
</dl>
<p>
The customary name for the integration bridge is <code>br-int</code>, but
another name may be used.
</p>
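<p>
If system startup scripts create the integration bridge themselves, a
minimal <code>ovs-vsctl</code>(8) invocation along the following lines
applies the settings described above:
</p>
<pre fixed="yes">
$ ovs-vsctl -- add-br br-int \
            -- set Bridge br-int fail-mode=secure \
                                 other-config:disable-in-band=true
</pre>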
<h2>Logical Networks</h2>
<p>
Logical network concepts in OVN include <dfn>logical switches</dfn> and
<dfn>logical routers</dfn>, the logical version of Ethernet switches and IP
routers, respectively. Like their physical cousins, logical switches and
routers can be connected into sophisticated topologies. Logical switches
and routers are ordinarily purely logical entities, that is, they are not
associated or bound to any physical location, and they are implemented in a
distributed manner at each hypervisor that participates in OVN.
</p>
<p>
<dfn>Logical switch ports</dfn> (LSPs) are points of connectivity into and
out of logical switches. There are many kinds of logical switch ports.
The most ordinary kind represent VIFs, that is, attachment points for VMs
or containers. A VIF logical port is associated with the physical location
of its VM, which might change as the VM migrates. (A VIF logical port can
be associated with a VM that is powered down or suspended. Such a logical
port has no location and no connectivity.)
</p>
<p>
<dfn>Logical router ports</dfn> (LRPs) are points of connectivity into and
out of logical routers.  An LRP connects a logical router either to a
logical switch or to another logical router. Logical routers only connect
to VMs, containers, and other network nodes indirectly, through logical
switches.
</p>
<p>
Logical switches and logical routers have distinct kinds of logical ports,
so properly speaking one should usually talk about logical switch ports or
logical router ports. However, an unqualified ``logical port'' usually
refers to a logical switch port.
</p>
<p>
When a VM sends a packet to a VIF logical switch port, the Open vSwitch
flow tables simulate the packet's journey through that logical switch and
any other logical routers and logical switches that it might encounter.
This happens without transmitting the packet across any physical medium:
the flow tables implement all of the switching and routing decisions and
behavior. If the flow tables ultimately decide to output the packet at a
logical port attached to another hypervisor (or another kind of transport
node), then that is the time at which the packet is encapsulated for
physical network transmission and sent.
</p>
<h3>Logical Switch Port Types</h3>
<p>
OVN supports a number of kinds of logical switch ports. VIF ports that
connect to VMs or containers, described above, are the most ordinary kind
of LSP. In the OVN northbound database, VIF ports have an empty string for
their <code>type</code>. This section describes some of the additional
port types.
</p>
<p>
A <code>router</code> logical switch port connects a logical switch to a
logical router, designating a particular LRP as its peer.
</p>
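<p>
For illustration, a logical switch <code>ls0</code> could be connected to a
logical router <code>lr0</code> (both hypothetical names) with
<code>ovn-nbctl</code>(8) commands along these lines, creating an LRP and a
<code>router</code> LSP that names it as peer:
</p>
<pre fixed="yes">
$ ovn-nbctl lrp-add lr0 lrp0 00:00:00:00:ff:01 192.168.0.1/24
$ ovn-nbctl lsp-add ls0 ls0-to-lr0
$ ovn-nbctl lsp-set-type ls0-to-lr0 router
$ ovn-nbctl lsp-set-addresses ls0-to-lr0 router
$ ovn-nbctl lsp-set-options ls0-to-lr0 router-port=lrp0
</pre>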
<p>
A <code>localnet</code> logical switch port bridges a logical switch to a
physical VLAN. A logical switch may have one or more <code>localnet</code>
ports. Such a logical switch is used in two scenarios:
</p>
<ul>
<li>
With one or more <code>router</code> logical switch ports, to attach L3
gateway routers and distributed gateways to a physical network.
</li>
<li>
With one or more VIF logical switch ports, to attach VMs or containers
directly to a physical network. In this case, the logical switch is not
really logical, since it is bridged to the physical network rather than
insulated from it, and therefore cannot have independent but overlapping
IP address namespaces, etc. A deployment might nevertheless choose such
a configuration to take advantage of the OVN control plane and features
such as port security and ACLs.
</li>
</ul>
<p>
When a logical switch contains multiple <code>localnet</code> ports, the
following is assumed.
</p>
<ul>
<li>
Each chassis has a bridge mapping for one of the <code>localnet</code>
physical networks only.
</li>
<li>
To facilitate interconnectivity between VIF ports of the switch that are
located on different chassis with different physical network
connectivity, the fabric implements L3 routing between these adjacent
physical network segments.
</li>
</ul>
<p>
Note: nothing said above prevents a chassis from being connected to
multiple physical networks, as long as those networks are bridged to
different logical switches.
</p>
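<p>
As a sketch, a <code>localnet</code> port is typically created with
commands like the following, where <code>physnet1</code> and
<code>br-phys</code> are example names; the bridge mapping on each chassis
tells <code>ovn-controller</code> which local bridge carries that physical
network:
</p>
<pre fixed="yes">
$ ovn-nbctl lsp-add ls0 ln0
$ ovn-nbctl lsp-set-type ln0 localnet
$ ovn-nbctl lsp-set-addresses ln0 unknown
$ ovn-nbctl lsp-set-options ln0 network_name=physnet1
$ # on each chassis attached to physnet1:
$ ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-bridge-mappings=physnet1:br-phys
</pre>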
<p>
A <code>localport</code> logical switch port is a special kind of VIF
logical switch port. These ports are present in every chassis, not bound
to any particular one. Traffic to such a port will never be forwarded
through a tunnel, and traffic from such a port is expected to be destined
only to the same chassis, typically in response to a request it received.
OpenStack Neutron uses a <code>localport</code> port to serve metadata to
VMs. A metadata proxy process is attached to this port on every host and
all VMs within the same network will reach it at the same IP/MAC address
without any traffic being sent over a tunnel. For further details, see
the OpenStack documentation for networking-ovn.
</p>
<p>
LSP types <code>vtep</code> and <code>l2gateway</code> are used for
gateways. See <code>Gateways</code>, below, for more information.
</p>
<h3>Implementation Details</h3>
<p>
These concepts are details of how OVN is implemented internally. They
might still be of interest to users and administrators.
</p>
<p>
<dfn>Logical datapaths</dfn> are an implementation detail of logical
networks in the OVN southbound database. <code>ovn-northd</code>
translates each logical switch or router in the northbound database into a
logical datapath in the southbound database <code>Datapath_Binding</code>
table.
</p>
<p>
For the most part, <code>ovn-northd</code> also translates each logical
switch port in the OVN northbound database into a record in the southbound
database <code>Port_Binding</code> table. The latter table corresponds
roughly to the northbound <code>Logical_Switch_Port</code> table. It has
multiple types of logical port bindings, of which many types correspond
directly to northbound LSP types. LSP types handled this way include VIF
(empty string), <code>localnet</code>, <code>localport</code>,
<code>vtep</code>, and <code>l2gateway</code>.
</p>
<p>
The <code>Port_Binding</code> table has some types of port binding that do
not correspond directly to logical switch port types.  The most common is
<code>patch</code> port bindings, known as <dfn>logical patch ports</dfn>.
These port bindings always occur in pairs, and a packet that enters on
either side comes out on the other. <code>ovn-northd</code> connects
logical switches and logical routers together using logical patch ports.
</p>
<p>
Port bindings with types <code>vtep</code>, <code>l2gateway</code>,
<code>l3gateway</code>, and <code>chassisredirect</code> are used for
gateways. These are explained in <code>Gateways</code>, below.
</p>
<h2>Gateways</h2>
<p>
Gateways provide limited connectivity between logical networks and physical
ones. They can also provide connectivity between different OVN deployments.
This section will focus on the former, and the latter will be described in
detail in section <code>OVN Deployments Interconnection</code>.
</p>
<p>
OVN supports multiple kinds of gateways.
</p>
<h3>VTEP Gateways</h3>
<p>
A ``VTEP gateway'' connects an OVN logical network to a physical (or
virtual) switch that implements the OVSDB VTEP schema that accompanies Open
vSwitch. (The ``VTEP gateway'' term is a misnomer, since a VTEP is just a
VXLAN Tunnel Endpoint, but it is a well established name.) See <code>Life
Cycle of a VTEP gateway</code>, below, for more information.
</p>
<p>
The main intended use case for VTEP gateways is to attach physical servers
to an OVN logical network using a physical top-of-rack switch that supports
the OVSDB VTEP schema.
</p>
<h3>L2 Gateways</h3>
<p>
An L2 gateway simply attaches a designated physical L2 segment available on
some chassis to a logical network. The physical network effectively
becomes part of the logical network.
</p>
<p>
To set up an L2 gateway, the CMS adds an <code>l2gateway</code> LSP to an
appropriate logical switch, setting LSP options to name the chassis on
which it should be bound. <code>ovn-northd</code> copies this
configuration into a southbound <code>Port_Binding</code> record. On the
designated chassis, <code>ovn-controller</code> forwards packets
appropriately to and from the physical segment.
</p>
<p>
L2 gateway ports have features in common with <code>localnet</code> ports.
However, with a <code>localnet</code> port, the physical network becomes
the transport between hypervisors. With an L2 gateway, packets are still
transported between hypervisors over tunnels and the <code>l2gateway</code>
port is only used for the packets that are on the physical network. The
application for L2 gateways is similar to that for VTEP gateways, e.g. to
add non-virtualized machines to a logical network, but L2 gateways do not
require special support from top-of-rack hardware switches.
</p>
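<p>
As an example (with hypothetical switch, port, network, and chassis names),
the CMS configuration described above corresponds to commands like:
</p>
<pre fixed="yes">
$ ovn-nbctl lsp-add ls0 l2gw0
$ ovn-nbctl lsp-set-type l2gw0 l2gateway
$ ovn-nbctl lsp-set-addresses l2gw0 unknown
$ ovn-nbctl lsp-set-options l2gw0 network_name=physnet1 \
                                  l2gateway-chassis=chassis1
</pre>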
<h3>L3 Gateway Routers</h3>
<p>
As described above under <code>Logical Networks</code>, ordinary OVN
logical routers are distributed: they are not implemented in a single place
but rather in every hypervisor chassis. This is a problem for stateful
services such as SNAT and DNAT, which need to be implemented in a
centralized manner.
</p>
<p>
To allow for this kind of functionality, OVN supports L3 gateway routers,
which are OVN logical routers that are implemented in a designated chassis.
Gateway routers are typically used between distributed logical routers and
physical networks. The distributed logical router and the logical switches
behind it, to which VMs and containers attach, effectively reside on each
hypervisor. The distributed router and the gateway router are connected by
another logical switch, sometimes referred to as a ``join'' logical switch.
(OVN logical routers may be connected to one another directly, without an
intervening switch, but the OVN implementation only supports gateway
logical routers that are connected to logical switches. Using a join
logical switch also reduces the number of IP addresses needed on the
distributed router.) On the other side, the gateway router connects to
another logical switch that has a <code>localnet</code> port connecting to
the physical network.
</p>
<p>
The following diagram shows a typical situation. One or more logical
switches LS1, ..., LSn connect to distributed logical router LR1, which in
turn connects through LSjoin to gateway logical router GLR, which also
connects to logical switch LSlocal, which includes a <code>localnet</code>
port to attach to the physical network.
</p>
<pre fixed="yes">
LSlocal
|
GLR
|
LSjoin
|
LR1
|
+----+----+
| | |
LS1 ... LSn
</pre>
<p>
To configure an L3 gateway router, the CMS sets
<code>options:chassis</code> in the router's northbound
<code>Logical_Router</code> to the chassis's name. In response,
<code>ovn-northd</code> uses a special <code>l3gateway</code> port binding
(instead of a <code>patch</code> binding) in the southbound database to
connect the logical router to its neighbors. In turn,
<code>ovn-controller</code> tunnels packets to this port binding to the
designated L3 gateway chassis, instead of processing them locally.
</p>
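<p>
For example, assuming a gateway chassis whose name (its
<code>external-ids:system-id</code>) is <code>gw1</code>, a gateway router
<code>glr</code> could be configured as:
</p>
<pre fixed="yes">
$ ovn-nbctl lr-add glr
$ ovn-nbctl set Logical_Router glr options:chassis=gw1
</pre>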
<p>
DNAT and SNAT rules may be associated with a gateway router, which
provides a central location that can handle one-to-many SNAT (aka IP
masquerading). Distributed gateway ports, described below, also
support NAT.
</p>
<h3>Distributed Gateway Ports</h3>
<p>
A <dfn>distributed gateway port</dfn> is a logical router port that is
specially configured to designate one distinguished chassis, called the
<dfn>gateway chassis</dfn>, for centralized processing. A distributed
gateway port should connect to a logical switch that has an LSP that
connects externally, that is, either a <code>localnet</code> LSP or a
connection to another OVN deployment (see <code>OVN Deployments
Interconnection</code>). Packets that traverse the distributed gateway
port are processed without involving the gateway chassis when they can be,
but when needed they do take an extra hop through it.
</p>
<p>
The following diagram illustrates the use of a distributed gateway port. A
number of logical switches LS1, ..., LSn connect to distributed logical
router LR1, which in turn connects through the distributed gateway port to
logical switch LSlocal that includes a <code>localnet</code> port to attach
to the physical network.
</p>
<pre fixed="yes">
LSlocal
|
LR1
|
+----+----+
| | |
LS1 ... LSn
</pre>
<p>
<code>ovn-northd</code> creates two southbound <code>Port_Binding</code>
records to represent a distributed gateway port, instead of the usual one.
One of these is a <code>patch</code> port binding named for the LRP, which
handles as much of the traffic as possible.  The other is a port binding
with type <code>chassisredirect</code>, named
<code>cr-<var>port</var></code>. The <code>chassisredirect</code> port
binding has one specialized job: when a packet is output to it, the flow
table causes it to be tunneled to the gateway chassis, at which point
it is automatically output to the <code>patch</code> port binding. Thus,
the flow table can output to this port binding in cases where a particular
task has to happen on the gateway chassis. The
<code>chassisredirect</code> port binding is not otherwise used (for
example, it never receives packets).
</p>
<p>
The CMS may configure distributed gateway ports three different ways. See
<code>Distributed Gateway Ports</code> in the documentation for
<code>Logical_Router_Port</code> in <code>ovn-nb</code>(5) for details.
</p>
<p>
Distributed gateway ports support high availability. When more than one
chassis is specified, OVN only uses one at a time as the gateway chassis.
OVN uses BFD to monitor gateway connectivity, preferring the
highest-priority gateway that is online.
</p>
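<p>
One way to configure this (the names below are illustrative) is the
<code>lrp-set-gateway-chassis</code> command, which attaches a gateway
chassis with a priority to an LRP; here <code>gw1</code> is preferred and
<code>gw2</code> is the fallback:
</p>
<pre fixed="yes">
$ ovn-nbctl lrp-set-gateway-chassis lrp0 gw1 20
$ ovn-nbctl lrp-set-gateway-chassis lrp0 gw2 10
</pre>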
<h4>Physical VLAN MTU Issues</h4>
<p>
Consider the preceding diagram again:
</p>
<pre fixed="yes">
LSlocal
|
LR1
|
+----+----+
| | |
LS1 ... LSn
</pre>
<p>
Suppose that each logical switch LS1, ..., LSn is bridged to a physical
VLAN-tagged network attached to a <code>localnet</code> port on LSlocal,
over a distributed gateway port on LR1. If a packet originating on
LS<var>i</var> is destined to the external network, OVN sends it to the
gateway chassis over a tunnel. There, the packet traverses LR1's logical
router pipeline, possibly undergoes NAT, and eventually ends up at
LSlocal's <code>localnet</code> port. If all of the physical links in the
network have the same MTU, then the packet's transit across a tunnel causes
an MTU problem: tunnel overhead prevents a packet that uses the full
physical MTU from crossing the tunnel to the gateway chassis (without
fragmentation).
</p>
<p>
OVN offers two solutions to this problem, the
<code>reside-on-redirect-chassis</code> and <code>redirect-type</code>
options. Both solutions require each logical switch LS1, ..., LSn to
include a <code>localnet</code> logical switch port LN1, ..., LNn
respectively, that is present on each chassis. Both cause packets to be
sent over the <code>localnet</code> ports instead of tunnels. They differ
in which packets--some or all--are sent this way. The most prominent
tradeoff between these options is that
<code>reside-on-redirect-chassis</code> is easier to configure and that
<code>redirect-type</code> performs better for east-west traffic.
</p>
<p>
The first solution is the <code>reside-on-redirect-chassis</code> option
for logical router ports. Setting this option on an LRP from (e.g.) LS1 to
LR1 disables forwarding from LS1 to LR1 except on the gateway chassis. On
chassis other than the gateway chassis, this single change means that
packets that would otherwise have been forwarded to LR1 are instead
forwarded to LN1. The instance of LN1 on the gateway chassis then receives
the packet and forwards it to LR1. The packet traverses the LR1 logical
router pipeline, possibly undergoes NAT, and eventually ends up at
LSlocal's <code>localnet</code> port. The packet never traverses a tunnel,
avoiding the MTU issue.
</p>
<p>
This option has the further consequence of centralizing ``distributed''
logical router LR1, since no packets are forwarded from LS1 to LR1 on any
chassis other than the gateway chassis. Therefore, east-west traffic
passes through the gateway chassis, not just north-south. (The naive
``fix'' of allowing east-west traffic to flow directly between chassis over
LN1 does not work because routing sets the Ethernet source address to LR1's
source address. Seeing this single Ethernet source address originate from
all of the chassis will confuse the physical switch.)
</p>
<p>
Do not set the <code>reside-on-redirect-chassis</code> option on a
distributed gateway port. In the diagram above, it would be set on the
LRPs connecting LS1, ..., LSn to LR1.
</p>
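    <p>
      For example, assuming a hypothetical LRP named <code>lrp-ls1</code>
      that connects LS1 to LR1, the option could be set with:
    </p>
    <pre fixed="yes">
ovn-nbctl set Logical_Router_Port lrp-ls1 \
    options:reside-on-redirect-chassis=true
    </pre>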
<p>
The second solution is the <code>redirect-type</code> option for
distributed gateway ports. Setting this option to <code>bridged</code>
causes packets that are redirected to the gateway chassis to go over the
<code>localnet</code> ports instead of being tunneled. This option does
not change how OVN treats packets not redirected to the gateway chassis.
</p>
<p>
The <code>redirect-type</code> option requires the administrator or the
CMS to configure each participating chassis with a unique Ethernet address
for the locgical router by setting <code>ovn-chassis-mac-mappings</code> in
the Open vSwitch database, for use by <code>ovn-controller</code>. This
makes it more difficult to configure than
<code>reside-on-redirect-chassis</code>.
</p>
<p>
Set the <code>redirect-type</code> option on a distributed gateway port.
</p>
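    <p>
      For example, assuming a hypothetical distributed gateway port
      <code>lr1-dgp</code> and physical network name <code>physnet1</code>,
      the configuration might look like this (each chassis needs its own
      unique MAC):
    </p>
    <pre fixed="yes">
# On each chassis, in the local Open vSwitch database:
ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-chassis-mac-mappings="physnet1:aa:bb:cc:dd:ee:01"

# In the OVN Northbound database:
ovn-nbctl set Logical_Router_Port lr1-dgp options:redirect-type=bridged
    </pre>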
<h2>Life Cycle of a VIF</h2>
<p>
Tables and their schemas presented in isolation are difficult to
understand. Here's an example.
</p>
<p>
    A VIF on a hypervisor is a virtual network interface attached either
    to a VM or to a container running directly on that hypervisor (this is
    different from the interface of a container running inside a VM).
</p>
<p>
    The steps in this example often refer to details of the OVN Southbound
    and OVN Northbound database schemas. Please see <code>ovn-sb</code>(5) and
<code>ovn-nb</code>(5), respectively, for the full story on these
databases.
</p>
<ol>
<li>
A VIF's life cycle begins when a CMS administrator creates a new VIF
using the CMS user interface or API and adds it to a switch (one
implemented by OVN as a logical switch). The CMS updates its own
configuration. This includes associating unique, persistent identifier
<var>vif-id</var> and Ethernet address <var>mac</var> with the VIF.
</li>
<li>
The CMS plugin updates the OVN Northbound database to include the new
VIF, by adding a row to the <code>Logical_Switch_Port</code>
table. In the new row, <code>name</code> is <var>vif-id</var>,
<code>mac</code> is <var>mac</var>, <code>switch</code> points to
the OVN logical switch's Logical_Switch record, and other columns
are initialized appropriately.
</li>
<li>
<code>ovn-northd</code> receives the OVN Northbound database update. In
turn, it makes the corresponding updates to the OVN Southbound database,
by adding rows to the OVN Southbound database <code>Logical_Flow</code>
      table to reflect the new port: e.g., adding a flow to recognize that
      packets destined to the new port's MAC address should be delivered to
      it, and updating the flow that delivers broadcast and multicast packets to include
the new port. It also creates a record in the <code>Binding</code> table
and populates all its columns except the column that identifies the
<code>chassis</code>.
</li>
<li>
On every hypervisor, <code>ovn-controller</code> receives the
<code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
in the previous step. As long as the VM that owns the VIF is powered
off, <code>ovn-controller</code> cannot do much; it cannot, for example,
arrange to send packets to or receive packets from the VIF, because the
VIF does not actually exist anywhere.
</li>
<li>
Eventually, a user powers on the VM that owns the VIF. On the hypervisor
where the VM is powered on, the integration between the hypervisor and
Open vSwitch (described in
<code>Documentation/topics/integration.rst</code>) adds the VIF to the OVN
integration bridge and stores <var>vif-id</var> in
<code>external_ids</code>:<code>iface-id</code> to indicate that the
interface is an instantiation of the new VIF. (None of this code is new
in OVN; this is pre-existing integration work that has already been done
on hypervisors that support OVS.)
</li>
<li>
On the hypervisor where the VM is powered on, <code>ovn-controller</code>
notices <code>external_ids</code>:<code>iface-id</code> in the new
Interface. In response, in the OVN Southbound DB, it updates the
<code>Binding</code> table's <code>chassis</code> column for the
row that links the logical port from <code>external_ids</code>:<code>
iface-id</code> to the hypervisor. Afterward, <code>ovn-controller</code>
updates the local hypervisor's OpenFlow tables so that packets to and from
the VIF are properly handled.
</li>
<li>
Some CMS systems, including OpenStack, fully start a VM only when its
networking is ready. To support this, <code>ovn-northd</code> notices
      that the <code>chassis</code> column has been updated for the row in the
      <code>Binding</code> table and pushes this upward by updating the
<ref column="up" table="Logical_Switch_Port" db="OVN_NB"/> column
in the OVN Northbound database's <ref table="Logical_Switch_Port"
db="OVN_NB"/> table to indicate that the VIF is now up. The CMS,
if it uses this feature, can then react by allowing the VM's
execution to proceed.
</li>
<li>
On every hypervisor but the one where the VIF resides,
<code>ovn-controller</code> notices the completely populated row in the
<code>Binding</code> table. This provides <code>ovn-controller</code>
the physical location of the logical port, so each instance updates the
OpenFlow tables of its switch (based on logical datapath flows in the OVN
DB <code>Logical_Flow</code> table) so that packets to and from the VIF
can be properly handled via tunnels.
</li>
<li>
Eventually, a user powers off the VM that owns the VIF. On the
hypervisor where the VM was powered off, the VIF is deleted from the OVN
integration bridge.
</li>
<li>
On the hypervisor where the VM was powered off,
<code>ovn-controller</code> notices that the VIF was deleted. In
      response, it clears the <code>chassis</code> column in the
<code>Binding</code> table for the logical port.
</li>
<li>
On every hypervisor, <code>ovn-controller</code> notices the empty
      <code>chassis</code> column in the <code>Binding</code> table's row
for the logical port. This means that <code>ovn-controller</code> no
longer knows the physical location of the logical port, so each instance
updates its OpenFlow table to reflect that.
</li>
<li>
Eventually, when the VIF (or its entire VM) is no longer needed by
anyone, an administrator deletes the VIF using the CMS user interface or
API. The CMS updates its own configuration.
</li>
<li>
The CMS plugin removes the VIF from the OVN Northbound database,
by deleting its row in the <code>Logical_Switch_Port</code> table.
</li>
<li>
<code>ovn-northd</code> receives the OVN Northbound update and in turn
updates the OVN Southbound database accordingly, by removing or updating
the rows from the OVN Southbound database <code>Logical_Flow</code> table
and <code>Binding</code> table that were related to the now-destroyed
VIF.
</li>
<li>
On every hypervisor, <code>ovn-controller</code> receives the
<code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
in the previous step. <code>ovn-controller</code> updates OpenFlow
tables to reflect the update, although there may not be much to do, since
the VIF had already become unreachable when it was removed from the
<code>Binding</code> table in a previous step.
</li>
</ol>
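    <p>
      The CMS-facing and hypervisor-facing portions of the steps above can
      be sketched with equivalent CLI commands, assuming a hypothetical
      logical switch <code>sw0</code>, VIF id <code>vif1</code>, and tap
      interface <code>tap-vif1</code> on the integration bridge
      <code>br-int</code>:
    </p>
    <pre fixed="yes">
# Steps 1-2: the CMS plugin adds the VIF to the northbound database.
ovn-nbctl lsp-add sw0 vif1
ovn-nbctl lsp-set-addresses vif1 "00:00:00:00:00:01"

# Step 5: hypervisor integration attaches the instantiated interface.
ovs-vsctl add-port br-int tap-vif1 -- \
    set Interface tap-vif1 external_ids:iface-id=vif1

# Steps 9 and 13: teardown when the VM is powered off and the VIF
# is deleted.
ovs-vsctl del-port br-int tap-vif1
ovn-nbctl lsp-del vif1
    </pre>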
<h2>Life Cycle of a Container Interface Inside a VM</h2>
<p>
    OVN provides virtual network abstractions by converting information
    written in the OVN_NB database into OpenFlow flows on each hypervisor.
    Secure virtual networking for multiple tenants can only be provided if
    <code>ovn-controller</code> is the only entity that can modify flows in
    Open vSwitch. When the Open vSwitch integration bridge resides in the
    hypervisor, it is reasonable to assume that tenant workloads running
    inside VMs cannot make any changes to Open vSwitch flows.
</p>
<p>
    If the infrastructure provider trusts the applications inside the
    containers not to break out and modify the Open vSwitch flows, then
    containers can be run directly on hypervisors. This is also the case
    when containers are run inside VMs and the Open vSwitch integration
    bridge with flows added by <code>ovn-controller</code> resides in the
    same VM. In both of these cases, the workflow is the same as explained
    with an example