SUPPORTED CONFIGURATION KEYS
Both configuration directives and command-line switches are listed below.
A configuration consists of key/value pairs, separated by the ':' char.
Starting a line with the '!' symbol makes the interpreter ignore the whole
line, turning it into a comment. Please also refer to the QUICKSTART
document and the 'examples/' sub-tree for some examples.
Directives are sometimes grouped, like sql_table and print_output_file:
this is to stress that if multiple plugins are running as part of the same
daemon instance, such directives must be bound to the plugin they refer
to - in order to prevent undesired inheritance effects. In other words,
grouped directives share the same field in the configuration structure.
LEGEND of flags:
GLOBAL Can't be configured on individual plugins
NO_GLOBAL Can't be configured globally
NO_PMACCTD Does not apply to 'pmacctd' (it's likely it will not apply to 'uacctd' either)
NO_UACCTD Does not apply to 'uacctd'
NO_NFACCTD Does not apply to 'nfacctd' (it's likely it will not apply to 'sfacctd' either)
NO_SFACCTD Does not apply to 'sfacctd'
LIST OF DIRECTIVES:
KEY: debug (-d)
VALUES: [ true | false ]
DESC: enables debug (default: false).
KEY: daemonize (-D) [GLOBAL]
VALUES: [ true | false ]
DESC: daemonizes the process (default: false).
KEY: aggregate (-c)
VALUES: [ src_mac, dst_mac, vlan, cos, etype, src_host, dst_host, src_net, dst_net,
src_mask, dst_mask, src_as, dst_as, src_port, dst_port, tos, proto, none,
sum_mac, sum_host, sum_net, sum_as, sum_port, flows, tag, tag2, class,
tcpflags, in_iface, out_iface, std_comm, ext_comm, as_path, peer_src_ip,
peer_dst_ip, peer_src_as, peer_dst_as, local_pref, med, src_std_comm,
src_ext_comm, src_as_path, src_local_pref, src_med, mpls_vpn_rd,
mpls_label_top, mpls_label_bottom, mpls_stack_depth, sampling_rate,
src_host_country, dst_host_country, pkt_len_distrib, nat_event,
post_nat_src_host, post_nat_dst_host, post_nat_src_port, post_nat_dst_port,
fw_event, timestamp_start, timestamp_end ]
FOREWORDS: individual IP packets are uniquely identified by their header field values (a
rather large set of primitives!). The same applies to uni-directional IP flows, as
they carry at least enough information to discriminate where packets are coming
from and going to. Aggregates are instead used for the sole purpose of IP
accounting and hence can be identified by a custom and stripped-down set of
primitives.
The procedure to create an aggregate starting from IP packets or flows is: (a)
select only the primitives of interest (generic aggregation), (b) optionally
cast certain primitive values into broader logical entities, ie. IP addresses
into network prefixes or Autonomous System Numbers (spatial aggregation) and
(c) sum bytes/flows/packets counters whenever a new constituent IP packet or
flow is captured (temporal aggregation).
DESC: aggregate captured traffic data by selecting the specified set of primitives.
sum_<primitive> are compound primitives which join together both inbound and
outbound traffic into a single aggregate. The 'none' primitive allows making
a unique aggregate which accounts for the grand total of traffic flowing
through a specific interface. 'tag' and 'tag2' enable generation of tags when
tagging engines (pre_tag_map, post_tag) are in use. 'class' enables reception
of L7 traffic classes when the Packet/Flow Classification engine (classifiers)
is in use. (default: src_host)
NOTES: * Some primitives (ie. tag2, timestamp_start, timestamp_end) are not part of
any default SQL table schema shipped. Always check out documentation related
to the RDBMS in use (ie. 'sql/README.mysql') which will point you to extra
primitive-related documentation, if required.
* The list of aggregation primitives available to each specific pmacct daemon
can be printed via the -a command-line option, ie. "pmacctd -a".
* sampling_rate: if counters renormalization is enabled this field will report
a value of 1; otherwise it will report the rate pmacct would have applied if
renormalize counters was enabled.
* src_std_comm, src_ext_comm, src_as_path are based on reverse BGP lookups;
peer_src_as, src_local_pref and src_med are by default based on reverse BGP
lookups but can be alternatively based on other methods, for example maps
(ie. bgp_peer_src_as_type). Internet traffic is by nature asymmetric hence
reverse BGP lookups must be used with caution (ie. against own prefixes).
* timestamp_start and timestamp_end should not be mixed with pmacct support
for historical accounting, ie. breakdown of traffic in time-bins via the
sql_history feature; the two primitives have the effect of letting pmacct
act as a logger up to the msec level (if reported by the capturing method).
timestamp_start records the likes of libpcap packet timestamp, sFlow sample
arrival time, NetFlow/IPFIX observation time and flow first switched time;
timestamp_end currently only makes sense for logging flows via NetFlow and
IPFIX.
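As a sketch, two named memory plugins could aggregate the same traffic in two
different ways, one on a hosts/ports/protocol tuple and one on origin ASNs
(plugin names are illustrative):
...
plugins: memory[tuples], memory[asns]
aggregate[tuples]: src_host, dst_host, src_port, dst_port, proto
aggregate[asns]: src_as, dst_as
...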
KEY: aggregate_primitives [GLOBAL]
DESC: Expects full pathname to a file containing custom-defined primitives. Once
properly defined in this file (see 'examples/primitives.lst' for full syntax),
primitives can be used in 'aggregate' statements. The feature is currently
available only in nfacctd (for NetFlow v9/IPFIX), pmacctd and uacctd. IPFIX
variable-length fields, for example strings, are currently supported only by
defining a length equal to the maximum estimated length of the field.
KEY: aggregate_filter [NO_GLOBAL]
DESC: Per-plugin filtering applied against the original packet or flow. Aggregation
is performed slightly afterwards, upon successful match of this filter.
By binding a filter, in tcpdump syntax, to an active plugin, this directive
allows selecting which data is delivered to the plugin and aggregated
as specified by the plugin's 'aggregate' directive. See the following example:
...
aggregate[inbound]: dst_host
aggregate[outbound]: src_host
aggregate_filter[inbound]: dst net 192.168.0.0/16
aggregate_filter[outbound]: src net 192.168.0.0/16
plugins: memory[inbound], memory[outbound]
...
This directive can be used in conjunction with 'pre_tag_filter' (which, in
turn, allows filtering on tags). You will also need to force fragmentation handling
in the specific case in which a) none of the 'aggregate' directives is including
L4 primitives (ie. src_port, dst_port) but b) an 'aggregate_filter' runs a filter
which requires dealing with L4 primitives. For further information, refer to the
'pmacctd_force_frag_handling' directive.
KEY: pcap_filter (like tcpdump syntax) [GLOBAL, NO_NFACCTD]
DESC: this filter is global and applied to all incoming packets. It's passed to libpcap
and hence expects libpcap/tcpdump filter syntax. Being global, it doesn't offer
great flexibility, but it's the fastest way to drop unwanted traffic. It applies
only to pmacctd.
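For example, a filter in tcpdump syntax to account only TCP traffic while
excluding a given management host (address illustrative) could be:
...
pcap_filter: tcp and not host 192.168.0.1
...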
KEY: snaplen (-L) [GLOBAL, NO_NFACCTD]
DESC: specifies the maximum number of bytes to capture for each packet. This directive is of
key importance when enabling both the classification and connection tracking engines. In
fact, some protocols (mostly text-based, eg. RTSP, SIP, etc.) benefit from extra bytes
because they give more chances to successfully track data streams spawned by the control
channel. But it must also be noted that capturing a larger portion of each packet
requires more resources. The right value needs to be traded off. In case classification
is enabled, values under 200 bytes are often meaningless; 500-750 bytes are enough even
for text-based protocols. Default snaplen values are ok if classification is disabled.
For the uacctd daemon, this option doesn't apply to the packet snapshot length but rather
to the Netlink socket read buffer size. This should be reasonably large - at least 4KB,
which is the default value. For large uacctd_nl_size values snaplen could be further
increased.
KEY: plugins (-P)
VALUES: [ memory | print | mysql | pgsql | sqlite3 | mongodb | nfprobe | sfprobe | tee ]
DESC: plugins to be enabled. SQL plugins are available only if configured and compiled in.
'memory' enables the use of a memory table as backend; a client tool, 'pmacct', can then
fetch its content. mysql, pgsql and sqlite3 enable the use of MySQL, PostgreSQL and
SQLite 3.x (or BerkeleyDB 5.x with the SQLite API compiled-in) tables, respectively,
to store data. 'mongodb' enables use of the NoSQL document-oriented database MongoDB
(requires installation of the MongoDB C API driver, which is shipped separately from the
main package). 'print' prints aggregates to flat files or stdout in CSV or formatted output.
'nfprobe' acts as a NetFlow/IPFIX agent and exports collected data via NetFlow v1/v5/
v9 and IPFIX datagrams to a remote collector. 'sfprobe' acts as an sFlow agent and
exports collected data via sFlow v5 datagrams to a remote collector. Both 'nfprobe'
and 'sfprobe' apply only to the 'pmacctd' and 'uacctd' daemons. 'tee' acts as a replicator
for NetFlow/IPFIX/sFlow data (also transparently); it applies to the 'nfacctd' and 'sfacctd'
daemons only. Plugins can be either anonymous or named; configuration directives can
be either global or bound to a specific named plugin. An anonymous plugin is declared
as 'plugins: mysql' whereas a named plugin is declared as 'plugins: mysql[name]'.
Directives can then be bound to such a named plugin as: 'directive[name]: value'.
(default: memory)
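For example, an anonymous memory plugin can be mixed with a named mysql plugin,
with a directive bound to the named one only (the plugin name 'archive' is
illustrative):
...
plugins: memory, mysql[archive]
sql_table_version[archive]: 9
...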
KEY: [ nfacctd_pipe_size | sfacctd_pipe_size | pmacctd_pipe_size ]
DESC: Defines the size of the kernel socket used to read traffic data. On Linux systems, if
this configuration directive is not specified, the default socket size granted is defined
in /proc/sys/net/core/rmem_default; the maximum configurable socket size (which can
be changed via sysctl) is defined in /proc/sys/net/core/rmem_max instead.
KEY: bgp_daemon_pipe_size
DESC: Defines the size of the kernel socket used for BGP messaging. On Linux systems, if
this configuration directive is not specified, the default socket size granted is defined
in /proc/sys/net/core/rmem_default; the maximum configurable socket size (which can
be changed via sysctl) is defined in /proc/sys/net/core/rmem_max instead.
KEY: plugin_pipe_size
DESC: The Core process and each of the plugins run as separate processes. To exchange
data, they set up a communication channel structured as a circular queue (referred
to as pipe). This directive sets the total size, in bytes, of such a queue. Its default
size is set to 4MB. Whenever facing heavy traffic loads, this size can be adjusted
to store more data. In the following example the pipe between the Core process and
the plugin 'test' is set to 10MB, whereas the receiving socket of the Core process
is set to 2MB:
...
plugins: memory[test]
plugin_pipe_size[test]: 10240000
plugin_pipe_size[default]: 2048000
...
When enabling debug, log messages about obtained and target pipe sizes are printed.
If the obtained size is less than the target, the maximum socket size granted by
the Operating System may have to be increased. On Linux systems the default socket
size granted is defined in /proc/sys/net/core/rmem_default; the maximum configurable
socket size (which can be changed via sysctl) is defined in
/proc/sys/net/core/rmem_max instead.
(default: 4MB)
KEY: plugin_buffer_size
DESC: by defining the transfer buffer size, in bytes, this directive enables buffering
of data transfers between the core process and active plugins. It is disabled by default,
ie. the size of a buffer coincides with the size of a single element to be transferred,
which is convenient when testing out the package with small traffic loads. The
value has to be less than or equal to the size defined by 'plugin_pipe_size', and keeping
a 1:1000 ratio between the two is considered good practice. The 'plugin_pipe_size'
circular queue is hence partitioned into plugin_pipe_size/plugin_buffer_size slots.
Once a slot is filled, it is delivered to the plugin while the circular queue moves
on to the next buffer element. (default: 0)
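Following the suggested 1:1000 ratio, a 10MB pipe would pair with a 10KB buffer,
ie. (plugin name and values illustrative):
...
plugins: mysql[test]
plugin_pipe_size[test]: 10240000
plugin_buffer_size[test]: 10240
...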
KEY: plugin_pipe_backlog
VALUES: [0 <= value < 100]
DESC: Expects the value to be a percentage. It creates a backlog of buffers on the pipe
before actually releasing them to the plugin. The strategy helps optimize inter-process
communications where plugins are quicker at handling data than the Core process.
By default backlog is disabled; as with buffering in general, this feature should be
enabled with caution in lab and low-traffic environments. (default: 0)
KEY: files_umask
DESC: Defines the mask for newly created files (log, pid, etc.). A mask less than "002" is
currently not accepted for security reasons. (default: 077)
KEY: files_uid
DESC: Defines the system user id (UID) for files opened for writing (log, pid, etc.); this
is indeed possible only when running the daemon as super-user; by default this is left
untouched.
KEY: files_gid
DESC: Defines the system group id (GID) for files opened for writing (log, pid, etc.); this
is indeed possible only when running the daemon as super-user; by default this is left
untouched.
KEY: interface (-i) [GLOBAL, NO_NFACCTD]
DESC: interface on which 'pmacctd' listens. If this directive isn't supplied, a libpcap
function is used to select a valid device. [ns]facctd can achieve similar behaviour by
employing the [ns]facctd_ip directives; also, note that this directive is mutually
exclusive with 'pcap_savefile' (-I).
KEY: pcap_savefile (-I) [GLOBAL, NO_NFACCTD]
DESC: file in libpcap savefile format from which to read data (as an alternative to binding
to an interface). The file has to be correctly finalized in order to be read. As soon
as 'pmacctd' is finished with the file, it exits (unless the 'savefile_wait' option is
in place). The directive doesn't apply to [ns]facctd; to replay original NetFlow/sFlow
streams, a tool like TCPreplay can be used instead. The directive is mutually exclusive
with 'interface' (-i).
KEY: interface_wait (-w) [GLOBAL, NO_NFACCTD]
VALUES: [ true | false ]
DESC: if set to true, this option causes 'pmacctd' to wait for the listening device to become
available; it will retry opening the device every few seconds. Whenever set to
false, 'pmacctd' will exit as soon as any error (related to the listening interface) is
detected. (default: false)
KEY: savefile_wait (-W) [GLOBAL, NO_NFACCTD]
VALUES: [ true | false ]
DESC: if set to true, this option will cause 'pmacctd' to wait indefinitely for a signal (ie.
CTRL-C when not daemonized or 'killall -9 pmacctd' if it is) after being finished with
the supplied libpcap savefile (pcap_savefile). It's particularly useful when inserting
fixed amounts of data into memory tables by keeping the daemon alive. (default: false)
KEY: promisc (-N) [GLOBAL, NO_NFACCTD]
VALUES: [ true | false ]
DESC: if set to true, puts the listening interface in promiscuous mode. It's mostly useful when
running 'pmacctd' in a box which is not a router, for example, when listening for traffic
on a mirroring port. (default: true)
KEY: imt_path (-p)
DESC: specifies the full pathname where the memory plugin has to listen for client queries.
When multiple memory plugins are active, each one has to use its own file to communicate
with the client tool. Note that placing these files into a carefully protected directory
(rather than /tmp) is the proper way to control who can access the memory backend.
(default: /tmp/collect.pipe)
KEY: imt_buckets (-b)
DESC: defines the number of buckets of the memory table which is organized as a chained hash
table. A prime number is highly recommended. Read INTERNALS 'Memory table plugin' chapter
for further details.
KEY: imt_mem_pools_number (-m)
DESC: defines the number of memory pools the memory table is able to allocate; the size of each
pool is defined by the 'imt_mem_pools_size' directive. Here, a value of 0 instructs the
memory plugin to allocate new memory chunks as they are needed, potentially allowing the
memory structure to grow indefinitely. A value > 0 instructs the plugin to not try to
allocate more than the specified number of memory pools, thus placing an upper boundary
to the table size. (default: 16)
KEY: imt_mem_pools_size (-s)
DESC: defines the size of each memory pool. For further details read INTERNALS 'Memory table
plugin'. The number of memory pools is defined by the 'imt_mem_pools_number' directive.
(default: 8192).
KEY: syslog (-S)
VALUES: [ auth | mail | daemon | kern | user | local[0-7] ]
DESC: enables syslog logging, using the specified facility. (default: none, console logging)
KEY: logfile
DESC: enables logging to a file (bypassing syslog); expected value is a pathname (default: none,
console logging)
KEY: pidfile (-F) [GLOBAL]
DESC: writes the PID of the Core process to the specified file. PIDs of the active plugins
are written as well, employing the following syntax: 'path/to/pidfile-<plugin_type>-<plugin_name>'.
This is particularly useful to recognize which process is which on architectures where
pmacct does not support the setproctitle() function. (default: none)
KEY: networks_file (-n)
DESC: full pathname to a file containing a list of networks - and optionally ASN information,
BGP next-hop (peer_dst_ip) and IP prefix labels (read more about the file syntax in
examples/networks.lst.example). The purpose of the feature is to act as a resolver when
network, next-hop and/or peer/origin ASN information is not available through other
means (ie. BGP, IGP, telemetry protocol) or for the purpose of overriding such
information with custom/self-defined one. IP prefix labels rewrite the resolved
source and/or destination IP prefix into the supplied label; labels can be up to 15
characters long.
KEY: networks_file_filter
VALUES: [ true | false ]
DESC: Makes networks_file work as a filter in addition to its basic resolver functionality:
networks and hosts not belonging to defined networks are zeroed out. (default: false)
KEY: networks_mask
DESC: specifies the network mask - in bits - to apply to IP address values in the L3 header.
The mask is applied systematically and before evaluating the 'networks_file' content (if
any is specified).
KEY: networks_cache_entries
DESC: the Networks Lookup Table (the memory structure into which the 'networks_file' data is
loaded) is preceded by a Network Lookup Cache where lookup results are saved to speed
up later searches. The NLC is structured as a hash table, hence this directive sets
the number of buckets for the hash table. The default value should be suitable for
most common scenarios; however, when dealing with large-scale network definitions, it is
advisable to tune this parameter to improve performance. A prime number is highly
recommended.
KEY: ports_file
DESC: full pathname to a file containing a list of (known/interesting/meaningful) ports (one
per line; read more about the file syntax in the examples/ tree). The directive allows
rewriting to zero any port number not matching a port defined in the list. Indeed, this
makes sense only if aggregating on either the 'src_port' or 'dst_port' primitives.
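A sketch of such a setup, assuming a ports file listing one port number per line
(path and port values illustrative):
...
! /usr/local/pmacct/ports.lst contains 80, 443 and 8080, one per line
aggregate: src_host, dst_port
ports_file: /usr/local/pmacct/ports.lst
...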
KEY: sql_db
DESC: defines the SQL database to use. Remember that when using the SQLite3 plugin, this
directive refers to the full path to the database file (default: 'pmacct', SQLite 3.x
default: '/tmp/pmacct.db').
KEY: [ sql_table | print_output_file | mongo_table ]
DESC: In SQL and mongodb plugins this defines the table to use; in print plugin it defines the
file to write output to. Dynamic names are supported through the use of variables, which
are computed at the moment when data is purged to the backend. The list of supported
variables follows:
%d The day of the month as a decimal number (range 01 to 31).
%H The hour as a decimal number using a 24 hour clock (range 00 to 23).
%m The month as a decimal number (range 01 to 12).
%M The minute as a decimal number (range 00 to 59).
%s The number of seconds since Epoch, ie., since 1970-01-01 00:00:00 UTC.
%w The day of the week as a decimal, range 0 to 6, Sunday being 0.
%W The week number of the current year as a decimal number, range
00 to 53, starting with the first Monday as the first day of
week 01.
%Y The year as a decimal number including the century.
$ref Configured refresh time value for the plugin.
$hst Configured sql_history value, in seconds, for the plugin.
$peer_src_ip Record value for peer_src_ip primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$tag Record value for tag primitive (if primitive is not part of the
aggregation method then this will be set to a null value).
$tag2 Record value for tag2 primitive (if primitive is not part of the
aggregation method then this will be set to a null value).
SQL plugins notes:
Time-related variables require 'sql_history' to be specified in order to work correctly
(see the 'sql_history' entry in this document for further information) and the
'sql_refresh_time' setting to be aligned with 'sql_history', ie.:
sql_history: 5m
sql_refresh_time: 300
Furthermore, if the 'sql_table_schema' directive is not specified, tables are expected
to be already in place. This is an example on how to split accounted data among multiple
tables basing on the day of the week:
sql_history: 1h
sql_history_roundoff: h
sql_table: acct_v4_%w
The above directives will account data on an hourly basis (1h). The above sql_table
definition will cause Sunday data to be inserted into the 'acct_v4_0' table, Monday
data into the 'acct_v4_1' table, and so on. The switch between tables happens each day
at midnight: this behaviour is ensured by the use of the 'sql_history_roundoff' directive.
Ideally sql_refresh_time and sql_history values should be aligned for the dynamic tables
to work; sql_refresh_time with a value smaller than sql_history is also supported; whereas
the feature does not support values of sql_refresh_time greater than sql_history. The
maximum table name length is 64 characters.
Print plugin notes:
If a non-dynamic filename is selected, its existing content is overwritten in
case print_output_file_append is set to false (default). Scenarios where multiple
levels of directories need to be created in order to create the target file,
ie. "/path/to/%Y/%Y-%m/%Y-%m-%d/blabla-%Y%m%d-%H%M.txt", are supported. Shell
replacements are not supported though, ie. the '~' symbol to denote the user home
directory. print_history values are used for time-related variable substitution
of dynamic print_output_file names.
MongoDB plugin notes:
The table name is expected as <database>.<collection>. The default table is test.acct
Common notes:
The maximum number of variables it may contain is 32.
KEY: print_output_file_append
VALUES: [ true | false ]
DESC: If set to true, the print plugin will append to existing files instead of overwriting
them. When appending, and in case of an output format requiring a title (ie. csv,
formatted, etc.), the title is intuitively not re-printed. (default: false)
KEY: print_latest_file
DESC: It defines the full pathname to pointer(s) to latest file(s). Dynamic names are supported
through the use of variables, which are computed at the moment when data is purged to the
backend: refer to print_output_file for a full listing of supported variables; time-based
variables are not allowed. Three examples follow:
#1:
print_output_file: /path/to/spool/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/foo-latest
#2:
print_output_file: /path/to/spool/%Y/%Y-%m/%Y-%m-%d/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/latest/foo
#3:
print_output_file: /path/to/$peer_src_ip/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/latest/blabla-$peer_src_ip
For correct working of the feature, responsibility is put on the user. A file is reckoned
as latest if it's lexicographically greater than an existing one: this is generally fine
but requires dates to be in %Y%m%d format rather than %d%m%Y. Also, upon restart of the
daemon, if print_output_file is modified to a different location, good practice would be
to 1) manually delete latest pointer(s) or 2) move existing print_output_file files to
the new target location. Finally, if upgrading from pmacct releases before 1.5.0rc1, it
is recommended to delete existing symlinks.
KEY: sql_table_schema
DESC: full pathname to a file containing a SQL table schema. It allows creating the SQL table
if it does not exist; this directive makes sense only if a dynamic 'sql_table' is in use.
A configuration example where this directive could be useful follows:
sql_history: 5m
sql_history_roundoff: h
sql_table: acct_v4_%Y%m%d_%H%M
sql_table_schema: /usr/local/pmacct/acct_v4.schema
In this configuration, the content of the file pointed by 'sql_table_schema' should be:
CREATE TABLE acct_v4_%Y%m%d_%H%M (
[ ... PostgreSQL/MySQL specific schema ... ]
);
This setup, along with this directive, are mostly useful when the dynamic tables are not
closed in a 'ring' fashion (e.g., the days of the week) but 'open' (e.g., current date).
KEY: sql_table_version (-v)
VALUES: [ 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ]
DESC: defines the version of the SQL table. SQL table versioning was introduced to achieve two
goals: a) make tables work out-of-the-box for the SQL beginners, smaller installations
and quick try-outs; and in this context b) to allow introduction of new features over
time without breaking backward compatibility. For the SQL experts, the alternative to
versioning is 'sql_optimize_clauses' which allows custom mix-and-match of primitives:
in such a case you have to build yourself custom SQL schemas and indexes. Check in the
'sql/' sub-tree the SQL table profiles which are supported by the pmacct version you are
currently using (default: 1)
KEY: sql_table_type
VALUES: [ bgp ]
DESC: optionally combined with "sql_table_version", defines one of the supported SQL table
profiles. Currently this directive has to be defined to select one of the default BGP
table profiles.
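For example, selecting a default BGP table profile could look as follows (the
version number is illustrative; check the 'sql/' sub-tree for the profiles
actually supported by the version in use):
...
sql_table_type: bgp
sql_table_version: 1
...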
KEY: sql_data
VALUES: [ typed | unified ]
DESC: this switch makes sense only when using the PostgreSQL plugin with the supplied default
tables up to v5: the pgsql scripts in the sql/ tree, up to v5, will in fact create a
'unified' table along with multiple 'typed' tables. The 'unified' table has IP and MAC
addresses specified as standard CHAR strings, slower and not space-efficient but flexible;
'typed' tables use PostgreSQL's own types (inet, mac, etc.), resulting in a faster but
more rigid structure. Since v6, unified mode is being discontinued, leading to
simplification. The supplied 'typed' schema can still be customized, ie. to write IP
addresses in CHAR fields when making use of IP prefix labels, transparently to pmacct -
making this configuration switch deprecated. (default: 'typed')
KEY: [ sql_host | mongo_host | amqp_host ]
DESC: defines the backend server IP/hostname (default: localhost).
KEY: [ sql_user | mongo_user | amqp_user ]
DESC: defines the username to use when connecting to the server. In MongoDB, if both
mongo_user and mongo_passwd directives are omitted, authentication is disabled;
if only one of the two is specified, the other is set to its default value.
(default: pmacct).
KEY: [ sql_passwd | mongo_passwd | amqp_passwd ]
DESC: defines the password to use when connecting to the server. In MongoDB, if both
mongo_user and mongo_passwd directives are omitted, authentication is disabled;
if only one of the two is specified, the other is set to its default value.
(default: arealsmartpwd).
KEY: [ sql_refresh_time | print_refresh_time | mongo_refresh_time | amqp_refresh_time ] (-r)
DESC: time interval, in seconds, between consecutive executions of the plugin cache scanner. The
scanner purges data into the plugin backend. Note: internally all these config directives
write to the same variable; when using multiple plugins it is recommended to bind refresh
time definitions to specific plugins, ie.:
plugins: mysql[x], mongodb[y]
sql_refresh_time[x]: 900
mongo_refresh_time[y]: 300
Doing otherwise can cause unexpected behaviour.
KEY: sql_startup_delay
DESC: defines the time, in seconds, by which the first SQL cache scan event is delayed. This delay
is, in turn, propagated to the subsequent scans. It comes in useful in two scenarios: a) so
that multiple plugins can use the same 'sql_refresh_time' value, allowing them to spread
their writes across the length of the time-bin; b) with NetFlow, to keep the original flow
start time (nfacctd_time_new: false) while enabling the sql_dont_try_update feature (for
RDBMS efficiency purposes); in such a context, the sql_startup_delay value should be greater
than (ideally >= 2x) the NetFlow active flow timeout. (default: 0)
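For example, scenario a) above could be sketched as follows (plugin names and values are
illustrative): two MySQL plugins share the same refresh time, with the second one writing
halfway through the time-bin:
plugins: mysql[x], mysql[y]
sql_refresh_time[x]: 300
sql_refresh_time[y]: 300
sql_startup_delay[y]: 150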
KEY: sql_optimize_clauses
VALUES: [ true | false ]
DESC: enables the optimization of the statements sent to the RDBMS, essentially allowing one to a)
run stripped-down variants of the default SQL tables or b) run totally customized SQL tables
by freely mixing and matching the available primitives. In either case, you will need to build
the custom SQL table schema and indexes. As a rule of thumb, when NOT using this directive
always remember to specify which default SQL table version you intend to stick to by using
the 'sql_table_version' directive. (default: false)
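For example, a minimal custom-table setup could be sketched as follows (the table name is
illustrative; the matching table schema and indexes have to be created by hand):
plugins: mysql
aggregate: src_host, dst_host
sql_optimize_clauses: true
sql_table: acct_hosts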
KEY: [ sql_history | print_history | mongo_history | amqp_history ]
VALUES: #[s|m|h|d|w|M]
DESC: enables historical accounting by placing accounted data into configurable time-bins. It
will use the 'stamp_inserted' (base time of the time-bin) and 'stamp_updated' (last time
the time-bin was touched) fields. The supplied value defines the time slot length during
which counters are accumulated. For a nice effect, it's advisable to pair this directive
with 'sql_history_roundoff'. In nfacctd, where a flow can span across multiple time-bins,
flow counters can be pro-rated (seconds timestamp resolution) over involved time-bins by
setting nfacctd_pro_rating to true. Note that this value is fully disjoint from the
*_refresh_time directives which set the time intervals at which data has to be written to
the backend instead. The final effect is close to time slots in a RRD file. Examples of
valid values are: '300' or '5m' - five minutes, '3600' or '1h' - one hour, '14400' or '4h'
- four hours, '86400' or '1d' - one day, '1w' - one week, '1M' - one month.
KEY: [ sql_history_offset | print_history_offset | mongo_history_offset | amqp_history_offset ]
DESC: Sets an offset to the time-bins' basetime. If history is set to 30 mins (by default creating
10:00, 10:30, 11:00, etc. time-bins), with an offset of 900 seconds (so 15 mins) it will
create 10:15, 10:45, 11:15, etc. time-bins. It expects a positive value, in seconds.
(default: 0)
KEY: [ sql_history_roundoff | print_history_roundoff | mongo_history_roundoff |
amqp_history_roundoff ]
VALUES: [ m | h | d | w | M ]
DESC: enables alignment of minutes (m), hours (h), days of month (d), weeks (w) and months (M)
in print (to print_refresh_time) and SQL plugins (to sql_history and sql_refresh_time).
Suppose you go with 'sql_history: 1h', 'sql_history_roundoff: m' and it's 6:34pm. Rounding
off minutes gives you an hourly timeslot (1h) starting at 6:00pm; so, subsequent ones will
start at 7:00pm, 8:00pm, etc. Now, you go with 'sql_history: 5m', 'sql_history_roundoff: m'
and it's 6:37pm. Rounding off minutes will result in a first slot starting at 6:35pm; next
slot will start at 6:40pm, and then every 5 minutes (6:45pm ... 7:00pm, etc.). 'w' and 'd'
are mutually exclusive, that is: you can either reset the date to last Monday or reset the
date to the first day of the month.
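Pairing the two directives, hourly time-bins aligned to the top of the hour (as in the first
example above) can be sketched as:
sql_history: 1h
sql_history_roundoff: m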
KEY: sql_history_since_epoch
VALUES: [ true | false ]
DESC: enables the use of timestamps (stamp_inserted, stamp_updated) in the standard seconds since
the Epoch format. This directive requires changes to the default types for timestamp fields
in the SQL schema. (default: false)
MySQL: DATETIME ==> INT(8) UNSIGNED
PostgreSQL: timestamp without time zone ==> bigint
SQLite3: DATETIME ==> INT(8)
KEY: sql_recovery_logfile
DESC: enables recovery mode; recovery mechanism kicks in if the DB fails. It works by checking
for the successful result of each SQL query. By default it is disabled. By using this key
aggregates are recovered to the specified logfile. Data may be played later by either
'pmmyplay' or 'pmpgplay' tools. Each time the pmacct package is updated it is good practice
not to continue writing to old logfiles but to start new ones. Each plugin instance has to
write to a different logfile in order to avoid inconsistencies over data. And, finally, the
maximum size for a logfile is set to 2GB: if the logfile reaches such size, it is automatically
rotated (in a way similar to logrotate: the old file is renamed, appending a small sequential
integer to it, and a new file is started). See the INTERNALS 'Recovery modes' section for
details about this topic. SQLite 3.x note: because the database is file-based it's quite
useless to have a logfile, thus this feature is not supported. However, note that the
'sql_recovery_backup_host' directive allows specifying an alternate SQLite 3.x database
file.
KEY: sql_recovery_backup_host
DESC: enables recovery mode; recovery mechanism kicks in if DB fails. It works by checking for
the successful result of each SQL query. By default it is disabled. By using this key
aggregates are recovered to a secondary DB. See INTERNALS 'Recovery modes' section for
details about this topic. SQLite 3.x note: the plugin uses this directive to specify
the full path to an alternate database file (e.g., because you have multiple file
systems on a box) to use in case the primary backend fails.
KEY: [ sql_max_writers | print_max_writers | mongo_max_writers | amqp_max_writers ]
DESC: sets the maximum number of concurrent writer processes the plugin is allowed to start.
This setting allows pmacct to degrade gracefully during major backend locks/outages/
unavailability. The value is split as follows: up to N-1 concurrent processes will
queue up; the Nth process will go for a recovery mechanism, if configured (like:
sql_recovery_logfile, sql_recovery_backup_host for SQL plugins); writers beyond the
Nth will stop managing data (so, data will be lost at this stage) and an error message
is printed out. (default: 10)
KEY: [ sql_cache_entries | print_cache_entries | mongo_cache_entries | amqp_cache_entries ]
DESC: SQL and other plugins sport a Plugin Memory Cache (PMC) meant to accumulate bytes/packets
counters until the next purging event (for further insights take a look at 'sql_refresh_time').
This directive sets the number of PMC buckets. The default value is suitable for most common
scenarios, however when facing large-scale networks it's highly recommended to carefully
tune this parameter to improve performance. Use a prime number of buckets.
(default: sql_cache_entries: 32771, print_cache_entries: 16411)
KEY: sql_dont_try_update
VALUES: [ true | false ]
DESC: by default pmacct uses an UPDATE-then-INSERT mechanism to write data to the RDBMS; this
directive instructs pmacct to use a more efficient INSERT-only mechanism. This directive
is useful for gaining performance by avoiding UPDATE queries. Using this directive puts
some timing constraints, specifically sql_history == sql_refresh_time, otherwise it may
lead to duplicate entries and, potentially, loss of data. When used in nfacctd it also
requires nfacctd_time_new to be enabled. (default: false)
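A consistent INSERT-only setup in nfacctd, reflecting the timing constraints above, could be
sketched as (values are illustrative):
nfacctd_time_new: true
sql_refresh_time: 300
sql_history: 5m
sql_dont_try_update: true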
KEY: sql_use_copy
VALUES: [ true | false ]
DESC: instructs the plugin to build non-UPDATE SQL queries using COPY (in place of INSERT). While
providing the same functionality as INSERT, COPY is also more efficient. To have effect, this
directive requires 'sql_dont_try_update' to be set to true. It applies to PostgreSQL plugin
only. (default: false)
KEY: sql_delimiter
DESC: If sql_use_copy is true, uses the supplied character as delimiter. This is intended for cases
where the default delimiter is part of any of the supplied strings to be inserted into the
database. (default: ',')
KEY: sql_multi_values
DESC: enables the use of multi-values INSERT statements. The value of the directive is intended
to be the size (in bytes) of the multi-values buffer. The directive applies only to MySQL
and SQLite 3.x plugins. Inserting many rows at the same time is much faster (many times
faster in some cases) than using separate single-row INSERT statements. It's advisable
to check the size of this pmacct buffer against the size of the corresponding MySQL buffer
(max_allowed_packet). (default: none)
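For example, a 1MB multi-values buffer against MySQL could be sketched as (the value is
illustrative; check it against the server's max_allowed_packet):
plugins: mysql
sql_multi_values: 1048576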
KEY: [ sql_trigger_exec | print_trigger_exec | mongo_trigger_exec ]
DESC: defines the executable to be launched at fixed time intervals to post-process aggregates;
in SQL plugins, intervals are specified by the 'sql_trigger_time' directive; if no interval
is supplied, the 'sql_refresh_time' value is used instead: this will result in a trigger
being fired at each purging event. A number of environment variables are set in order to
allow the trigger to take actions; take a look at docs/TRIGGER_VARS to check them out. In
the print and mongodb plugins a simpler implementation is made: triggers can be fired each
time data is written to the backend (ie. print_refresh_time) and no environment variables
are passed over to the executable.
KEY: sql_trigger_time
VALUES: #[s|m|h|d|w|M]
DESC: specifies time interval at which the executable specified by 'sql_trigger_exec' has to
be launched; if no executable is specified, this key is simply ignored. Values need to be
in the 'sql_history' directive syntax (for example, valid values are '300' or '5m', '3600'
or '1h', '14400' or '4h', '86400' or '1d', '1w', '1M'; eg. if '3600' or '1h' is selected,
the executable will be fired each hour).
KEY: sql_preprocess
DESC: allows processing aggregates (via a comma-separated list of conditionals and checks) while
purging data to the RDBMS thus resulting in a powerful selection tier; aggregates filtered
out may be just discarded or saved through the recovery mechanism (if enabled). The set of
available preprocessing directives follows:
KEY: qnum
DESC: conditional. Subsequent checks will be evaluated only if the number of queries to be
created during the current cache-to-DB purging event is '>=' qnum value.
KEY: minp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of packets is '>=' minp value.
KEY: minf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of flows is '>=' minf value.
KEY: minb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the bytes counter is '>=' minb value. An interesting idea is to set its value
to a fraction of the link capacity. Remember that you have also a timeframe reference:
the 'sql_refresh_time' seconds.
For example, given the following parameters:
Link Capacity (LC) = 8Mbit/s, Threshold (TH) = 0.1%, Timeframe (TI) = 60s
minb = ((LC / 8) * TI) * TH -> ((8Mbit/s / 8) * 60s) * 0.1% = 60000 bytes
Given an 8Mbit/s link, all aggregates which have accounted for at least 60KB of traffic
in the last 60 seconds will be written to the DB.
KEY: maxp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of packets is '<' maxp value.
KEY: maxf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of flows is '<' maxf value.
KEY: maxb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the bytes counter is '<' maxb value.
KEY: maxbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of bytes per packet is '<' maxbpp value.
KEY: maxppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of packets per flow is '<' maxppf value.
KEY: minbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of bytes per packet is '>=' minbpp value.
KEY: minppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid
only if the number of packets per flow is '>=' minppf value.
KEY: fss
DESC: check. Enforces flow (aggregate) size dependent sampling, computed against the bytes
counter and returns renormalized results. Aggregates which have collected more than the
supplied 'fss' threshold in the last time window (specified by the 'sql_refresh_time'
configuration key) are sampled. Those under the threshold are sampled with probability
p(bytes). The method allows getting much more accurate samples compared to classic 1/N
sampling approaches, providing an unbiased estimate of the real bytes counter. It is
also advisable to hold the equality 'sql_refresh_time' = 'sql_history'.
For further references: http://www.research.att.com/projects/flowsamp/ and specifically
to the papers: N.G. Duffield, C. Lund, M. Thorup, "Charging from sampled network usage",
http://www.research.att.com/~duffield/pubs/DLT01-usage.pdf and N.G. Duffield and C. Lund,
"Predicting Resource Usage and Estimation Accuracy in an IP Flow Measurement Collection
Infrastructure", http://www.research.att.com/~duffield/pubs/p313-duffield-lund.pdf
KEY: fsrc
DESC: check. Enforces flow (aggregate) sampling under hard resource constraints, computed
against the bytes counter and returns renormalized results. The method selects only 'fsrc'
flows from the set of the flows collected during the last time window ('sql_refresh_time'),
providing an unbiased estimate of the real bytes counter. It is also advisable
to hold the equality 'sql_refresh_time' = 'sql_history'.
For further references: http://www.research.att.com/projects/flowsamp/ and specifically
to the paper: N.G. Duffield, C. Lund, M. Thorup, "Flow Sampling Under Hard Resource
Constraints", http://www.research.att.com/~duffield/pubs/DLT03-constrained.pdf
KEY: usrf
DESC: action. Applies the renormalization factor 'usrf' to the counters of each aggregate. It is
suitable for use in conjunction with uniform sampling methods (for example simple random
- e.g. sFlow, 'sampling_rate' directive - or simple systematic - e.g. sampled NetFlow by
Cisco and Juniper). The factor is applied to recovered aggregates too. It is also
advisable to hold the equality 'sql_refresh_time' = 'sql_history'. Before using this
action to renormalize counters generated by sFlow, also read the description of the
'sfacctd_renormalize' key.
KEY: adjb
DESC: action. Adds (or subtracts) 'adjb' bytes to the bytes counter, multiplied by the number of
packets in each aggregate. This is a particularly useful action when - for example - fixed
lower (link, llc, etc.) layer sizes need to be included in the bytes counter (as explained
by Q7 in the FAQS document).
KEY: recover
DESC: action. If previously evaluated checks have marked the aggregate as invalid, a positive
'recover' value makes the aggregate be handled through the recovery mechanism (if enabled).
KEY: sql_preprocess_type
VALUES: [ any | all ]
DESC: When multiple checks are to be evaluated, this directive tells whether aggregates on the
queue are valid if they match just one of the checks (any) or all of them (all). (default: any)
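Tying the preprocessing directives together, the following sketch (values are illustrative)
evaluates the minb and minbpp checks - requiring both to match - only when at least 100
queries would be generated by the purging event:
sql_preprocess: qnum=100, minb=60000, minbpp=64
sql_preprocess_type: all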
KEY: timestamps_secs
VALUES: [ true | false ]
DESC: Sets timestamp (timestamp_start, timestamp_end primitives) resolution to seconds, ie. prevents
the timestamp_start_residual, timestamp_end_residual fields from being populated (default: false).
KEY: mongo_insert_batch
DESC: When purging data to a MongoDB database, defines the number of elements to be inserted per
batch. This value depends on available memory: with 8GB RAM a max value of 35000 did work OK;
with 16GB RAM a max value of 75000 did work OK instead. (default: 10000)
KEY: mongo_indexes_file
DESC: full pathname to a file containing a list of indexes to apply to a MongoDB collection with
a dynamic name. If the collection does not exist, it is created. Index names are picked by
MongoDB. For example, to create collections with two indexes, 1) one using source/
destination IP addresses as key and 2) the other using source/destination TCP/UDP ports,
compile the file pointed to by this directive as:
src_host, dst_host
src_port, dst_port
KEY: amqp_exchange
DESC: Name of the AMQP exchange to publish data (default: pmacct).
KEY: amqp_exchange_type
DESC: Type of the AMQP exchange to publish data. Currently only 'direct' and 'fanout' types are
supported. (default: direct).
KEY: amqp_routing_key
DESC: Name of the AMQP routing key to attach to published data. Dynamic names are supported through
the use of variables, which are computed at the moment when data is purged to the backend. The
list of supported variables follows (default: acct):
$peer_src_ip Value of the peer_src_ip primitive of the record being processed.
$pre_tag Value of the tag primitive of the record being processed.
$post_tag Configured value of post_tag.
$post_tag2 Configured value of post_tag2.
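For example, a per-exporter routing key could be sketched as follows (the 'acct' prefix is
illustrative; the peer_src_ip primitive is assumed to be part of the aggregation method):
amqp_routing_key: acct_$peer_src_ip
With such a setting, records from exporter 10.0.0.1 would be published with routing key
'acct_10.0.0.1'.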
KEY: amqp_persistent_msg
VALUES: [ true | false ]
DESC: Marks messages as persistent so that queue contents do not get lost if RabbitMQ restarts.
Note from RabbitMQ docs: "Marking messages as persistent doesn't fully guarantee that a
message won't be lost. Although it tells RabbitMQ to save message to the disk, there is
still a short time window when RabbitMQ has accepted a message and hasn't saved it yet.
Also, RabbitMQ doesn't do fsync(2) for every message -- it may be just saved to cache and
not really written to the disk. The persistence guarantees aren't strong, but it is more
than enough for our simple task queue.".
KEY: print_markers
VALUES: [ true | false ]
DESC: Enables the use of START/END markers each time data is written to 'stdout'. The start marker
carries additional information about the current time-bin and configured refresh time (default: false)
KEY: print_output
VALUES: [ formatted | csv | json | event_formatted | event_csv ]
DESC: Defines the print plugin output format. 'formatted' enables tabular output; 'csv' is to enable
comma-separated values format, suitable for injection into 3rd party tools. 'json' is to enable
JavaScript Object Notation format, also suitable for injection into 3rd party tools and having
the extra benefit over 'csv' format of not requiring an 'event' version of the output ('json'
not requiring a table title). 'event' versions of the output strip trailing bytes and packets
counters. 'json' format requires compiling the package against Jansson library (downloadable at
the following URL: http://www.digip.org/jansson/) (default: formatted)
NOTES: * The Jansson library does not seem to have a concept of unsigned integers. Integers up to
32 bits are packed as 'I', ie. 64-bit signed integers, working around the issue. No workaround
is possible for unsigned 64-bit integers instead (ie. tag, tag2, packets, bytes).
KEY: print_output_separator
DESC: Defines the print plugin output separator. The value is expected to be a single character and
cannot be a space (if a space separator is wanted then 'formatted' output should be the
natural choice) (default: ',').
KEY: [ print_num_protos | sql_num_protos ]
VALUES: [ true | false ]
DESC: Defines whether IP protocols (ie. tcp, udp) should be looked up and presented in string format
or left numerical. The default is to look protocol names up. (default: false)
KEY: sql_num_hosts
VALUES: [ true | false ]
DESC: Defines whether IP addresses should be left numerical (in network byte order) or converted
into human-readable strings. Applies to MySQL and SQLite plugins only and assumes the INET_NTOA()
function is defined in the RDBMS (which for MySQL is always the case, while for SQLite it is not
by default). Since INET_NTOA() is used, unless redefined to some custom-made variant, this feature
works for IPv4 addresses only. The feature is also not compatible with making use of IP prefix
labels. Default setting is to convert IP addresses into strings. (default: false)
KEY: [ nfacctd_port | sfacctd_port ] (-l) [GLOBAL, NO_PMACCTD]
DESC: defines the UDP port where to bind nfacctd (nfacctd_port) and sfacctd (sfacctd_port) daemons
(default: nfacctd_port: 2100, sfacctd_port: 6343).
KEY: [ nfacctd_ip | sfacctd_ip ] (-L) [GLOBAL, NO_PMACCTD]
DESC: defines the IPv4/IPv6 address where to bind the nfacctd (nfacctd_ip) and sfacctd (sfacctd_ip)
daemons (default: all interfaces).
KEY: core_proc_name
DESC: defines the name of the core process. This is the equivalent of instantiating named plugins
but for the core process (default: "default")
KEY: [ nfacctd_allow_file | sfacctd_allow_file ] [GLOBAL, NO_PMACCTD]
DESC: full pathname to a file containing the list of IPv4/IPv6 addresses (one for each line) allowed
to send packets to the daemon. Current syntax does not implement network masks but individual
IP addresses only. The Allow List is intended to be small; firewall rules should be preferred
to long ACLs. (default: allow all)
KEY: nfacctd_time_secs [GLOBAL, NO_PMACCTD]
VALUES: [ true | false ]
DESC: makes 'nfacctd' expect times included in NetFlow header to be in seconds rather than msecs. This
knob makes sense for NetFlow up to v8 - as in NetFlow v9 and IPFIX different fields are reserved
for secs and msecs timestamps, increasing collector awareness. (default: false)
KEY: nfacctd_time_new [GLOBAL, NO_PMACCTD]
VALUES: [ true | false ]
DESC: makes 'nfacctd' ignore timestamps included in the NetFlow header and build new ones. This gets
particularly useful to assign flows to time-bins based on the flow arrival time at the collector
rather than the flow start time. An application for it is when historical accounting is enabled
('sql_history') and an INSERT-only mechanism is in use ('sql_dont_try_update', 'sql_use_copy').
(default: false)
KEY: nfacctd_pro_rating [NO_PMACCTD]
VALUES: [ true | false ]
DESC: if nfacctd_time_new is set to false (default) and historical accounting (ie. sql_history) is
enabled, this directive enables pro rating of NetFlow/IPFIX flows over time-bins, if needed.
For example, if sql_history is set to '5m' (so 300 secs), the considered flow duration is 1000
secs, its bytes counter is 1000 bytes and, for simplicity, its start time is at the base time
of t0, time-bin 0, then the flow is inserted in time-bins t0, t1, t2 and t3 and its bytes
counter is proportionally split among these time-bins: 300 bytes during t0, t1 and t2 and
100 bytes during t3. (default: false)
NOTES: If NetFlow sampling is enabled, it is recommended to have counters renormalization enabled
(nfacctd_renormalize set to true).
KEY: [ nfacctd_as_new | sfacctd_as_new | pmacctd_as | uacctd_as ] [GLOBAL]
VALUES: [ false | (true|file) | bgp | fallback ]
DESC: When 'false', it instructs nfacctd and sfacctd to populate 'src_as', 'dst_as', 'peer_src_as' and
'peer_dst_as' primitives from the NetFlow and sFlow datagrams respectively; when 'true' ('file' being
an alias of 'true') it instructs nfacctd and sfacctd to generate 'src_as' and 'dst_as' (only! ie.
no peer-AS) by looking up source and destination IP addresses against a networks_file. When 'bgp'
is specified, ASNs are looked up against the BGP RIB of the peer from which the NetFlow datagram
was received (see also bgp_agent_map directive). When 'fallback' is specified, lookup is done
against the winning longest match lookup method (sFlow/NetFlow <= BGP), which can be different
for source and destination IP prefix. Intuitively if 'fallback' is specified, IS-IS/IGP daemon
is enabled and IGP is the winning method then no BGP information will be attached to the prefixes.
In pmacctd and uacctd 'false' (maintained for backward compatibility), 'true' and 'file' expect a
'networks_file' to be defined; 'bgp' just works as described previously for nfacctd and sfacctd;
'fallback' is mapped to 'bgp' since no export protocol lookup method is available. (default: false)
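For example, a BGP-based ASN resolution setup in nfacctd could be sketched as follows (see
the bgp_daemon related keys for the full BGP configuration):
bgp_daemon: true
nfacctd_as_new: bgp
aggregate: src_as, dst_as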
KEY: [ nfacctd_net | sfacctd_net | pmacctd_net | uacctd_net ]
VALUES: [ netflow | sflow | mask | file | igp | bgp | fallback ]
DESC: Determines the method for performing IP prefix aggregation - hence directly influencing 'src_net',
'dst_net', 'src_mask', 'dst_mask' and 'peer_dst_ip' primitives. 'netflow' and 'sflow' get values
from NetFlow and sFlow protocols respectively; these keywords are only valid in nfacctd, sfacctd.
'mask' applies a defined networks_mask; 'file' selects a defined networks_file; 'igp' and 'bgp'
source values from IGP/IS-IS daemon and BGP daemon respectively. Default behaviour under pmacctd
and uacctd is for backward compatibility: 'mask' and 'file' are turned on if a networks_mask and
a networks_file are respectively specified by configuration. If they both are defined, the outcome
will be the intersection of their definitions. 'fallback' behaves in a longest-match-wins fashion:
in nfacctd and sfacctd lookup is done against a networks list (if networks_file is defined), the
sFlow/NetFlow protocol, IGP (if the IGP thread is started) and BGP (if the BGP thread is started)
with the following logic: networks_file < sFlow/NetFlow < IGP <= BGP; in pmacctd and uacctd lookup
is done against a networks list, IGP and BGP only (networks_file < IGP <= BGP).
(default: nfacctd: 'netflow'; sfacctd: 'sflow'; pmacctd and uacctd: 'mask', 'file')
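For example, aggregating to /24 networks via a fixed mask in pmacctd can be sketched as
(plugin choice and mask are illustrative):
plugins: memory
pmacctd_net: mask
networks_mask: 24
aggregate: src_net, dst_net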
KEY: use_ip_next_hop
VALUES: [ true | false ]
DESC: When IP prefix aggregation (ie. nfacctd_net) is set to 'netflow', 'sflow' or 'fallback' (in
which case the longest winning match is via 'netflow' or 'sflow'), populates the 'peer_dst_ip'
field from the NetFlow/sFlow IP next hop field if the BGP next-hop is not available. (default: false)
KEY: [ nfacctd_mcast_groups | sfacctd_mcast_groups ] [GLOBAL, NO_PMACCTD]
DESC: defines one or more IPv4/IPv6 multicast groups to be joined by the daemon. If multiple groups
are supplied, they are expected comma separated. A maximum of 20 multicast groups may be joined
by a single daemon instance. Some operating systems (noticeably Solaris, it seems) may also
require an interface to bind to, which - in turn - can be supplied by declaring an IP address
('nfacctd_ip' key).
KEY: [ nfacctd_disable_checks | sfacctd_disable_checks ] [GLOBAL, NO_PMACCTD]
VALUES: [ true | false ]
DESC: both nfacctd and sfacctd check the health of incoming NetFlow/sFlow datagrams - actually this
is limited to just verifying sequence number progression. You may want to disable such a
feature because of non-standard implementations. By default checks are enabled. (default: false)
KEY: pre_tag_map
DESC: full pathname to a file containing tag mappings. Tags can be internal-only (ie. for filtering
purposes, see the pre_tag_filter configuration directive) or exposed to users (ie. if 'tag'
and/or 'tag2' primitives are part of the aggregation method). Take a look at the examples/
sub-tree for all supported keys and detailed examples (pretag.map.example). Pre-Tagging is
evaluated in the Core Process and each plugin can have its own local pre_tag_map defined. The
result of the evaluation of pre_tag_map overrides any tags passed via NetFlow/sFlow by a
pmacct nfprobe/sfprobe plugin.
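A minimal sketch (addresses, tag values and the map location are illustrative; see
pretag.map.example for the complete and authoritative syntax):
! nfacctd configuration
pre_tag_map: /path/to/pretag.map
aggregate: tag, src_host, dst_host
! pretag.map contents
set_tag=100 ip=192.0.2.1
set_tag=200 ip=192.0.2.2
Here 'ip' matches the address of the NetFlow/sFlow exporter.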
KEY: maps_entries
DESC: defines the maximum number of entries a map (ie. pre_tag_map) can contain. The default value
is suitable for most scenarios, though tuning it could be required either to save on memory
or to allow for more entries. Refer to the specific map directives documentation in this file
to see which are affected by this setting. (default: 384)
KEY: maps_row_len
DESC: defines the maximum length of map (ie. pre_tag_map) rows. The default value is suitable for
most scenarios, though tuning it could be required either to save on memory or to allow for
more entries. (default: 256)
KEY: maps_refresh [GLOBAL]
VALUES: [ true | false ]
DESC: when enabled, this directive allows reloading map files without restarting the daemon instance.
For example, it can be particularly useful to reload pre_tag_map or networks_file entries in
order to reflect some change in the network. After having modified the map files, a SIGUSR2 has
to be sent (e.g.: in the simplest case "killall -USR2 pmacctd") to the daemon to notify the
change. If such a signal is sent to the daemon and this directive is not enabled, the signal is
silently discarded. The Core Process is in charge of processing the Pre-Tagging map; plugins are
devoted to the Networks and Ports maps instead. Because signals can be sent either to the whole
daemon (killall) or to just a specific process (kill), this mechanism also offers the advantage
of eliciting local reloads. (default: true)
KEY: maps_index [GLOBAL]
VALUES: [ true | false ]
DESC: enables indexing of maps to increase lookup speeds on large maps and/or at sustained lookup
rates. Indexes are automatically defined based on the structure and content of the map, up to
a maximum of 8. Indexing of pre_tag_map, bgp_peer_src_as_map and flows_to_rd_map is supported.
Only a sub-set of pre_tag_map fields are supported, including: ip, bgp_nexthop, vlan_id,
src_mac, mpls_vpn_rd, mpls_pw_id, src_as, dst_as, peer_src_as, peer_dst_as, input, output.
Only IP addresses, ie. no IP prefixes, are supported as part of the 'ip' field. Also, negations
are not supported (ie. 'in=-216' to match all but input interface 216). bgp_agent_map and
sampling_map implement a separate caching mechanism and hence do not leverage this feature.