SUPPORTED CONFIGURATION KEYS
Both configuration directives and commandline switches are listed below.
A configuration consists of key/value pairs, separated by the ':' char.
Starting a line with the '!' symbol causes the whole line to be ignored
by the interpreter, making it a comment. Please also refer to the QUICKSTART
document and the 'examples/' sub-tree for some examples.
Directives are sometimes grouped, like sql_table and print_output_file:
this is to stress that, if multiple plugins are running as part of the same
daemon instance, such directives must be cast to the plugin they refer
to - in order to prevent undesired inheritance effects. In other words,
grouped directives share the same field in the configuration structure.
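For example, a grouped directive like print_output_file can be cast to each of
two named plugins as follows (file paths are hypothetical):
  ...
  plugins: print[a], print[b]
  print_output_file[a]: /path/to/file_a.csv
  print_output_file[b]: /path/to/file_b.csv
  ...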
LEGEND of flags:
GLOBAL Can't be configured on individual plugins
NO_GLOBAL Can't be configured globally
NO_PMACCTD Does not apply to pmacctd
NO_UACCTD Does not apply to uacctd
NO_NFACCTD Does not apply to nfacctd
NO_SFACCTD Does not apply to sfacctd
NO_PMBGPD Does not apply to pmbgpd
NO_PMBMPD Does not apply to pmbmpd
ONLY_PMACCTD Applies only to pmacctd
ONLY_UACCTD Applies only to uacctd
ONLY_NFACCTD Applies only to nfacctd
ONLY_SFACCTD Applies only to sfacctd
ONLY_PMBGPD Applies only to pmbgpd
ONLY_PMBMPD Applies only to pmbmpd
MAP Indicates the input file is a map
LIST OF DIRECTIVES:
KEY: debug (-d)
VALUES: [ true | false ]
DESC: Enables debug (default: false).
KEY: debug_internal_msg
VALUES: [ true | false ]
DESC: Extra flag to enable debug of internal messaging between Core process
and plugins. It has to be enabled on top of 'debug' (default: false).
KEY: dry_run
VALUES: [ config | setup ]
DESC: Performs a dry run. With 'config', only the configuration is parsed
(reporting any config validation errors). With 'setup', on top of the
config validation also the daemon, its plugins and all their config
options are instantiated and validated.
KEY: daemonize (-D) [GLOBAL]
VALUES: [ true | false ]
DESC: Daemonizes the process (default: false).
KEY: aggregate (-c)
VALUES: [ src_mac, dst_mac, vlan, in_vlan, out_vlan, in_cvlan, out_cvlan, cos, etype,
src_host, dst_host, src_net, dst_net, src_mask, dst_mask, src_as, dst_as,
src_port, dst_port, tos, proto, none, sum_mac, sum_host, sum_net, sum_as,
sum_port, flows, flow_label, tag, tag2, label, class, tcpflags, in_iface,
out_iface, std_comm, ext_comm, lrg_comm, as_path, peer_src_ip, peer_dst_ip,
peer_src_as, peer_dst_as, local_pref, med, dst_roa, src_std_comm,
src_ext_comm, src_lrg_comm, src_as_path, src_local_pref, src_med, src_roa,
mpls_vpn_rd, mpls_pw_id, mpls_label_top, mpls_label_bottom, mpls_label_stack,
sampling_rate, sampling_direction, src_host_country, dst_host_country,
src_host_pocode, dst_host_pocode, src_host_coords, dst_host_coords,
nat_event, fw_event, post_nat_src_host, post_nat_dst_host, post_nat_src_port,
post_nat_dst_port, tunnel_src_mac, tunnel_dst_mac, tunnel_src_host,
tunnel_dst_host, tunnel_proto, tunnel_tos, tunnel_src_port, tunnel_dst_port,
tunnel_tcpflags, tunnel_flow_label, fwd_status, vxlan, nvgre, timestamp_start,
timestamp_end, timestamp_arrival, timestamp_export, export_proto_seqno,
export_proto_version, export_proto_sysid, path_delay_avg_usec,
path_delay_min_usec, path_delay_max_usec, srv6_seg_ipv6_list ]
FOREWORDS: Individual IP packets are uniquely identified by their header field values (a
rather large set of primitives!). Same applies to uni-directional IP flows, as
they have at least enough information to discriminate where packets are coming
from and going to. Aggregates are instead used for the sole purpose of IP
accounting and hence can be identified by an arbitrary set of primitives.
The process to create an aggregate starting from IP packets or flows is: (a)
select only the primitives of interest (generic aggregation), (b) optionally
cast certain primitive values into broader logical entities, ie. IP addresses
into network prefixes or Autonomous System Numbers (spatial aggregation) and
(c) sum aggregate bytes/flows/packets counters when a new tributary IP packet
or flow is captured (temporal aggregation).
DESC: Aggregate captured traffic data by selecting the specified set of primitives.
sum_<primitive> are compound primitives which sum ingress/egress traffic in a
single aggregate; current limitation of sum primitives: each sum primitive is
mutually exclusive with any other primitive, sum and non-sum alike. The 'none'
primitive allows making a single grand total aggregate for the traffic flowing
through. 'tag', 'tag2' and 'label' generate tags when tagging engines (pre_tag_map,
post_tag) are in use. 'class' enables L7 traffic classification.
NOTES: * The list of aggregation primitives available to each specific pmacct daemon,
along with their description, is available via the -a command-line option, ie.
"pmacctd -a".
* Some primitives (ie. tag2, timestamp_start, timestamp_end) are not part of
any default SQL table schema shipped. Always check out documentation related
to the RDBMS in use (ie. 'sql/README.mysql') which will point you to extra
primitive-related documentation, if required.
* peer_src_ip, peer_dst_ip: two primitives with an obscure name conceived to
be as generic as possible due to the many different use-cases around them:
peer_src_ip is the IP address of the node exporting NetFlow/IPFIX or sFlow;
peer_dst_ip is the BGP next-hop or IP next-hop (if use_ip_next_hop is set
to true).
* sampling_rate: if counters renormalization (ie. sfacctd_renormalize) is
enabled this field will report a value of one (1); otherwise it will report
the rate that is passed by the protocol or sampling_map. A value of zero (0)
means 'unknown' and hence no rate is applied to original counter values.
* sampling_direction: in case of sFlow, direction is derived in the following
way: if ds_index part of Source ID matches input interface of a Flow Sample,
it is inferred that sampling direction is 'ingress'; if ds_index matches the
output interface, it is inferred that sampling direction is 'egress'.
In the standard sFlow data model, every measurement comes from a particular
datasource defined by agent IP address, ds_class and ds_index, and written
as agent>ds_class:ds_index.
* src_std_comm, src_ext_comm, src_lrg_comm, src_as_path are based on reverse
BGP lookups; peer_src_as, src_local_pref and src_med are by default based on
reverse BGP lookups but can be alternatively based on other methods, for
example maps (ie. bgp_peer_src_as_type). Internet traffic is by nature
asymmetric hence reverse BGP lookups must be used with caution (ie. against
own prefixes).
* mpls_label_top, mpls_label_bottom primitives only include the MPLS label
value, stripped of EXP code-points (and BoS flag). Visibility into EXP values
can be achieved by defining a custom primitive to extract the full 3 bytes,
ie. 'name=mplsFullTopLabel field_type=70 len=3 semantics=raw' for NetFlow/
IPFIX. On the contrary mpls_label_stack does extract the full 3 bytes.
* mpls_vpn_rd primitive value can be sourced in multiple ways in case of
IPFIX/NFv9. The current preference is: flow_to_rd.map > RD in IPFIX/NFv9
data packet > RD in IPFIX/NFv9 option packets.
* timestamp_start, timestamp_end and timestamp_arrival let pmacct act as a
traffic logger up to the msec level (if reported by the capturing method).
timestamp_start records NetFlow/IPFIX flow start time or observation;
timestamp_end records NetFlow/IPFIX flow end time; timestamp_arrival
records libpcap packet timestamp and sFlow/NetFlow/IPFIX packet arrival
time at the collector. For historical accounting (enabled by the *_history
config directives, ie. kafka_history) the finest granularity for time-bins
is 1 minute: timestamp_start can be used for finer granularities,
ie. second (timestamps_secs set to true) or sub-second.
* tcpflags: in pmacctd, uacctd and sfacctd daemons TCP flags are ORed until
the aggregate is flushed - hence emulating the behaviour of NetFlow/IPFIX.
If a flag analysis is needed, packets with different flags (combinations)
should be isolated using a pre_tag_map/pre_tag_filter or aggregate_filter
features (see examples in QUICKSTART and review libpcap filtering syntax
via pcap-filter man page).
* export_proto_seqno reports about export protocol (NetFlow, sFlow, IPFIX)
sequence number and can be very relevant to detect packet loss. nfacctd and
sfacctd do perform simple non-contextual sequencing checks but these are
mainly limited to check out-of-order situations; proper contextual checking
can be performed as part of post-processing. A specific plugin instance,
separate from the main / accounting one, can be configured with 'aggregate:
export_proto_seqno' for the task. An example of a simple check would be to
find min/max sequence numbers, compute their difference and make sure it
does match to the amount of entries in the interval; the check can be then
windowed over time by using timestamps (ie. 'timestamp_export' primitive
and/or *_history config directives).
* timestamp_export is the observation time at the exporter. This is only
relevant in export protocols involving caching, ie. NetFlow/IPFIX. In all
other cases this would not be populated or be equal to timestamp_start.
* In nfacctd, the undocumented aggregation primitive class_frame allows applying
nDPI classification to NFv9/IPFIX packets with IE 315 (dataLinkFrameSection).
class primitive instead allows to leverage traditional classification using
NetFlow v9/IPFIX IE 94, 95 and 96 (applicationDescription, applicationId
and applicationName).
* vlan / in_vlan / out_vlan: in NetFlow / IPFIX and sFlow, where there is
indication (explicit or implicit, ie. expressing sample direction) of
ingress / egress sampling, 'vlan' checks both cases and reports the VLAN ID
of the first one returning a non-zero ID (ingress checked before egress); more
intuitively, in_vlan reports ingress VLAN ID if any and out_vlan reports
egress VLAN ID if any.
* srv6_seg_ipv6_list primitive is only available if using an encoding, like
JSON or Avro, that supports complex data (ie. arrays, maps, etc.).
DEFAULT: src_host
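As a minimal sketch of the 'aggregate' directive, counting traffic by host and
port pairs for a named plugin (the plugin name 'foo' is hypothetical):
  ...
  plugins: print[foo]
  aggregate[foo]: src_host, dst_host, src_port, dst_port, proto
  ...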
KEY: aggregate_primitives [GLOBAL, MAP]
DESC: Expects full pathname to a file containing custom-defined primitives. Once
defined in this file, primitives can be used in 'aggregate' statements. The
feature is currently available only in nfacctd, for NetFlow v9/IPFIX, pmacctd
and uacctd. Examples are available in 'examples/primitives.lst.example'. This
map does not support reloading at runtime.
DEFAULT: none
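A minimal sketch, re-using the custom primitive definition quoted in the NOTES
above for NetFlow/IPFIX (the file path is hypothetical):
  ...
  aggregate_primitives: /path/to/primitives.lst
  aggregate: mplsFullTopLabel
  ...
where /path/to/primitives.lst contains the definition line:
  name=mplsFullTopLabel field_type=70 len=3 semantics=raw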
KEY: aggregate_filter [NO_GLOBAL, NO_UACCTD]
DESC: Per-plugin filtering applied against the original packet or flow. Aggregation
is performed slightly afterwards, upon successful match of this filter.
By binding a filter, in tcpdump syntax, to an active plugin, this directive
allows to select which data has to be delivered to the plugin and aggregated
as specified by the plugin 'aggregate' directive. See the following example:
...
aggregate[inbound]: dst_host
aggregate[outbound]: src_host
aggregate_filter[inbound]: dst net 192.168.0.0/16
aggregate_filter[outbound]: src net 192.168.0.0/16
plugins: memory[inbound], memory[outbound]
...
This directive can be used in conjunction with 'pre_tag_filter' (which, in
turn, allows to filter tags). You will also need to force fragmentation handling
in the specific case in which a) none of the 'aggregate' directives is including
L4 primitives (ie. src_port, dst_port) but b) an 'aggregate_filter' runs a filter
which requires dealing with L4 primitives. For further information, refer to the
'pmacctd_force_frag_handling' directive.
DEFAULT: none
KEY: aggregate_unknown_etype [GLOBAL]
VALUES: [ true | false ]
DESC: By default, Ethernet frames with unknown EtherTypes for which pmacct has not
implemented decoding support are ignored by the aggregation engine. Enabling this
option allows such frames to be aggregated by the available Ethernet L2 header
fields ('src_mac', 'dst_mac', 'vlan', 'cos', 'etype'). This is currently
supported in pmacctd and uacctd; in sfacctd it only makes ARP packets pass
through.
DEFAULT: false
KEY: dtls_path [GLOBAL]
DESC: Full path to a directory containing files needed to establish a successful DTLS
session (key, certificate and CA file); a key.pem file can be generated with the
"certtool --generate-privkey --outfile key.pem" command-line; a self-signed
cert.pem certificate, having previously created the key, can be generated with
the "certtool --generate-self-signed --load-privkey key.pem --outfile cert.pem"
command-line; the ca-certificates.crt CA file can be copied from (ie. on Debian
or Ubuntu) "/etc/ssl/certs/ca-certificates.crt".
DEFAULT: none
KEY: writer_id_string
DESC: A "writer_id" field is added when sending data onto a Kafka or RabbitMQ broker,
this is meant to add contextual information about the collector producing data
(ie. $proc_name) or the specific batch of data (ie. PID of the writer process,
$writer_pid). Additional static information and separators can be supplied as
part of the string. Some variables are supported:
$proc_name The name of the process producing data. This maps to the plugin
name in case of 'kafka' and 'amqp' plugins and core_proc_name
when the write is made from the Core Process, ie. BGP, BMP and
Streaming Telemetry cases
$writer_pid The PID of the process producing data
$pmacct_build The build version of the collector producing data
Note: The '_' character is part of the variables' alphabet and hence it isn't a
valid separator between any two variables or between a variable and static
text. It can only be used as part of variables, like the ones defined above, or
of static text.
DEFAULT: $proc_name/$writer_pid
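For example, composing a writer_id out of static text plus variables, using '-'
as separator (the 'EU-pop1' site label is hypothetical):
  ...
  writer_id_string: EU-pop1-$proc_name-$writer_pid
  ...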
KEY: pcap_filter [GLOBAL, ONLY_PMACCTD, ONLY_PMBMPD]
DESC: This filter is global and applied to all incoming packets. It's passed to libpcap
and expects libpcap/tcpdump filter syntax. Being global it doesn't offer a great
flexibility but it's the fastest way to drop unwanted traffic.
DEFAULT: none
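For example, to capture only IPv4 traffic while excluding SSH, in standard
libpcap/tcpdump filter syntax:
  ...
  pcap_filter: ip and not port 22
  ...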
KEY: pcap_protocol [GLOBAL, ONLY_PMACCTD]
DESC: If set, specifies a specific packet socket protocol value to limit packet capture
to (for example, 0x0800 = IPv4). This option is only supported if pmacct was built
against a version of libpcap that supports pcap_set_protocol().
DEFAULT: none
KEY: pcap_arista_trailer_offset [GLOBAL, ONLY_PMACCTD]
DESC: Arista does set a trailer structure to convey extra info (ie. output interface, etc.) when
mirroring packets. This knob sets the byte offset from the end of the packet to indicate
where the trailer starts.
DEFAULT: none
KEY: pcap_arista_trailer_flag_value [GLOBAL, ONLY_PMACCTD]
DESC: When 'pcap_arista_trailer_offset' is set, specify the expected value in the arista trailer
flag field that indicates the output interface is present (this varies by chipset).
DEFAULT: 1
KEY: snaplen (-L) [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Specifies the maximum number of bytes to capture for each packet. This directive has
key importance to both classification and connection tracking engines. In fact, some
protocols (mostly text-based, eg. RTSP, SIP, etc.) benefit from extra bytes because
they give more chances to successfully track data streams spawned by the control
channel. But it must also be noted that capturing a larger packet portion requires
more resources. The right value needs to be traded off. In case classification is
enabled, values under 200 bytes are often meaningless. 500-750 bytes are enough even
for text-based protocols. The default snaplen value is OK if classification is disabled.
DEFAULT: 128 bytes
KEY: plugins (-P) [GLOBAL]
VALUES: [ memory | print | mysql | pgsql | sqlite3 | nfprobe | sfprobe | tee | amqp | kafka ]
DESC: Plugins to be enabled. memory, print, nfprobe, sfprobe and tee plugins are always
compiled in pmacct executables as they do not have external dependencies. Database
(ie. RDBMS, noSQL) and messaging ones (ie. amqp, kafka) do have external dependencies
and hence are available only if explicitly configured and compiled (see QUICKSTART).
'memory' plugin uses a memory table as backend and a client tool, 'pmacct', can fetch
the memory table content; the memory plugin is only good to prototype solutions, lab
environment without mass traffic generation and small/home production environments.
mysql, pgsql and sqlite3 plugins do output respectively to MySQL (or MariaDB via the
MySQL-compatible C API), PostgreSQL and SQLite 3.x (or BerkeleyDB 5.x via the SQLite
API compiled-in) databases to store data. 'print' plugin prints output data to flat-
files or stdout in JSON, Apache Avro, CSV or tab-spaced encodings. 'amqp' and 'kafka'
plugins allow to output data to RabbitMQ and Kafka brokers respectively. All these
plugins - to output to stdout, files, RDBMS and messaging brokers - are suitable for
production solutions and/or larger scenarios.
'nfprobe' plugin is a NetFlow/IPFIX agent and exports collected data via NetFlow v5/
v9 and IPFIX datagrams to a remote collector. 'sfprobe' plugin is a sFlow agent and
exports collected data via sFlow v5 datagrams to a remote collector. Both 'nfprobe'
and 'sfprobe' plugins can be run only via the pmacctd and uacctd daemons (in other
words, trans-codings such as collecting NetFlow v5 and re-exporting it as IPFIX are
not supported).
The 'tee' plugin is a replicator of NetFlow/IPFIX/sFlow data (also transparent); it
can be run only via nfacctd and sfacctd.
Plugins can be either anonymous or named; configuration directives can be global or
bound to a specific plugin when named. An anonymous plugin is declared as 'plugins:
mysql' in the config whereas a named plugin is declared as 'plugins: mysql[name]'.
Then directives can be bound to a specific named plugin as: 'directive[name]: value'.
DEFAULT: memory
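For example, declaring one anonymous memory plugin alongside a named print
plugin, with a directive bound only to the named one (file path hypothetical):
  ...
  plugins: memory, print[reports]
  print_output_file[reports]: /path/to/reports.csv
  ...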
KEY: [ nfacctd_pipe_size | sfacctd_pipe_size | pmacctd_pipe_size ] [GLOBAL, NO_UACCTD]
DESC: Defines the size of the kernel socket to read traffic data. The socket is highlighted
below with "XXXX":
XXXX
[network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
[__________pmacct___________]
On Linux systems, if this configuration directive is not specified default socket size
awarded is defined in /proc/sys/net/core/[rw]mem_default ; the maximum configurable
socket size is defined in /proc/sys/net/core/[rw]mem_max instead. Still on Linux, the
"drops" field of /proc/net/udp or /proc/net/udp6 can be checked to ensure its value
is not increasing.
DEFAULT: Operating System default
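For example, on Linux the default and maximum socket sizes can be inspected and the
maximum raised via sysctl before configuring the pipe size (values are illustrative):
  cat /proc/sys/net/core/rmem_default /proc/sys/net/core/rmem_max
  sysctl -w net.core.rmem_max=16777216
then, in the configuration:
  ...
  nfacctd_pipe_size: 16777216
  ...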
KEY: [ bgp_daemon_pipe_size | bmp_daemon_pipe_size ] [GLOBAL]
DESC: Defines the size of the kernel socket used for BGP and BMP messaging. The socket is
highlighted below with "XXXX":
XXXX
[network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
[__________pmacct___________]
On Linux systems, if this configuration directive is not specified default socket size
awarded is defined in /proc/sys/net/core/rmem_default ; the maximum configurable socket
size (which can be changed via sysctl) is defined in /proc/sys/net/core/rmem_max
instead.
DEFAULT: Operating System default
KEY: plugin_pipe_size
DESC: Core Process and each plugin instance are run into different processes. To exchange
data, a circular queue is set up and highlighted below with "XXXX":
XXXX
[network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
[__________pmacct___________]
This directive activates the so-called home-grown queue and sets the total size,
in bytes, of such queue. Its default size is set to 4MB. Whenever facing heavy
traffic loads, this size can be adjusted to hold more data. In the following
example, the queue between the Core process and the plugin 'test' is set to 10MB:
...
plugins: memory[test]
plugin_pipe_size[test]: 10240000
...
It is HIGHLY recommended NOT to use the home-grown queue implementation except
for quick test purposes. Please use the ZeroMQ implementation configurable with
plugin_pipe_zmq and plugin_pipe_zmq_profile knobs. Read more in the "Internal
buffering and queueing" section of QUICKSTART.
When enabling debug, log messages about obtained and target pipe sizes are printed.
If obtained is less than target, it could mean the maximum socket size granted by
the Operating System has to be increased. On Linux systems default socket size awarded
is defined in /proc/sys/net/core/[rw]mem_default ; the maximum configurable socket
size (which can be changed via sysctl) is defined in /proc/sys/net/core/[rw]mem_max
instead.
In case of data loss, messages containing the "missing data detected" string will be
logged - indicating the plugin affected and current settings.
DEFAULT: 4MB
KEY: plugin_buffer_size
DESC: By defining the transfer buffer size, in bytes, this directive enables buffering of
data transfers between core process and active plugins for the home-grown circular
queue implementation. Once a buffer is filled, it is delivered to the plugin. Setting
a larger value may improve throughput (ie. amount of CPU cycles required to transfer
data); setting a smaller value may improve latency, especially in scenarios with
little data influx. Buffering is disabled by default. The value has to be less/equal
to the size defined by 'plugin_pipe_size' and keeping a ratio between 1:100 and 1:1000
among the two is considered good practice; the circular queue of plugin_pipe_size size
is partitioned in chunks of plugin_buffer_size.
It is HIGHLY recommended NOT to use the home-grown queue implementation except
for quick test purposes. Please use the ZeroMQ implementation configurable with
plugin_pipe_zmq and plugin_pipe_zmq_profile knobs. Read more in the "Internal
buffering and queueing" section of QUICKSTART.
DEFAULT: Set to the size of the smallest element to buffer
KEY: plugin_pipe_zmq
VALUES: [ true | false ]
DESC: By defining this directive to 'true', a ZeroMQ queue is used for queueing and data
exchange between the Core Process and the plugins. This is the recommended approach
for internal pmacct queueing and buffering. This directive, along with all other
plugin_pipe_zmq_* directives, can be set globally or be applied on a per plugin
basis. Read more in the "Internal buffering and queueing" section of QUICKSTART.
DEFAULT: false
KEY: plugin_pipe_zmq_retry
DESC: Defines the interval of time, in seconds, after which a connection to the ZeroMQ
server (Core Process) should be retried by the client (Plugin) after a failure is
detected.
DEFAULT: 60
KEY: plugin_pipe_zmq_profile
VALUES: [ micro | small | medium | large | xlarge ]
DESC: Allows to select some standard buffering profiles. Following are the recommended
buckets in flows/samples/packets per second (the configured buffer value is
reported in brackets and is meant only to facilitate transitioning existing
deployments from plugin_buffer_size):
micro : up to 1K (0KB)
small : from 1K to 10-15K (10KB)
medium : from 10-15K to 100-125K (100KB)
large : from 100-125K to 250K (1MB)
xlarge : from 250K (10MB)
A symptom that the selected profile may be undersized is the missing data warnings
appearing in the logs; a symptom it is oversized instead is the latency in data
being purged out: in fact the buffer has to fill up in order to be released to the
plugin. The amount of flows/samples per second can be estimated as described in Q21
in the FAQS document; 'large' and 'xlarge' (and possibly also 'medium') profiles
may be counter-productive in case of a 'tee' plugin: excessive burstiness may cause
UDP drops due to small default kernel buffers. Should no profile fit the sizing,
the buffering value can be customised using the plugin_buffer_size directive.
DEFAULT: micro
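For example, enabling the recommended ZeroMQ queueing together with a buffering
profile for a named plugin (the plugin name 'sink' is hypothetical):
  ...
  plugins: kafka[sink]
  plugin_pipe_zmq[sink]: true
  plugin_pipe_zmq_profile[sink]: medium
  ...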
KEY: plugin_pipe_zmq_hwm
DESC: Defines the messages high watermark, that is, "The high water mark is a hard
limit on the maximum number of outstanding messages ZeroMQ shall queue in
memory for any single peer that the specified socket is communicating with. A
value of zero means no limit.". If configured, upon reaching the set watermark
value, exceeding data will be discarded and an error log message will be output.
DEFAULT: 0
KEY: plugin_exit_any
VALUES: [ true | false ]
DESC: Daemons gracefully shut down (core process and all plugins) if either the core
process or all the registered plugins bail out. Setting this to true makes the
daemon gracefully shut down in case any single one of the plugins bails out,
regardless of whether other plugins are still active.
DEFAULT: false
KEY: propagate_signals [GLOBAL]
VALUES: [ true | false ]
DESC: When a signal is sent to the Core Process, propagate it to all active plugins;
this may come handy in scenarios where pmacct is run inside a (Docker) container.
DEFAULT: false
KEY: files_umask
DESC: Defines the mask for newly created files (log, pid, etc.) and their related directory
structure. A mask less than "002" is not accepted for security reasons.
DEFAULT: 077
KEY: files_uid
DESC: Defines the system user id (UID) for files opened for writing (log, pid, etc.); this
is indeed possible only when running the daemon as super-user. This is also applied
to any intermediary directory structure which might be created. Both user string and
id are valid input.
DEFAULT: Operating System default (current user UID)
KEY: files_gid
DESC: Defines the system group id (GID) for files opened for writing (log, pid, etc.); this
is indeed possible only when running the daemon as super-user; this is also applied
to any intermediary directory structure which might be created. Both group string and
id are valid input.
DEFAULT: Operating System default (current user GID)
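For example, having newly created files owned by a dedicated user and group, with
group-readable permissions (the 'pmacct' user and group names are hypothetical):
  ...
  files_umask: 027
  files_uid: pmacct
  files_gid: pmacct
  ...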
KEY: pcap_interface (-i) [GLOBAL, ONLY_PMACCTD]
DESC: Interface on which 'pmacctd' listens. If this directive isn't supplied, a libpcap
function is used to select a valid device. [ns]facctd can achieve similar behaviour by
employing the [ns]facctd_ip directives; also, note that this directive is mutually
exclusive with 'pcap_savefile' (-I).
DEFAULT: Interface is selected by the Operating System
KEY: pcap_interface_wait (-w) [GLOBAL, ONLY_PMACCTD]
VALUES: [ true | false ]
DESC: If set to true, this option causes 'pmacctd' to wait for the listening device to become
available; it will retry opening the device every few seconds. When set to false,
'pmacctd' will exit as soon as any error (related to the listening interface) is
detected.
DEFAULT: false
KEY: pcap_savefile (-I) [GLOBAL, NO_UACCTD, NO_PMBGPD]
DESC: File in libpcap savefile format to read data from (as an alternative to live data
collection). As soon as the daemon processed the file, it exits (unless, in pmacctd,
'pcap_savefile_wait' is specified). The directive is mutually exclusive with reading
live traffic (ie. pcap_interface (-i) for pmacctd, [ns]facctd_ip (-L) and
[ns]facctd_port (-l) for nfacctd and sfacctd respectively, bmp_daemon_ip for pmbmpd).
If using a traffic daemon (ie. nfacctd) with a BMP thread (ie. bmp_daemon: true) and
wanting to feed both with a savefile, only one file can be supplied (that is, only a
single pcap_savefile can be specified in the config): if having multiple files, ie.
one with traffic data and one with BMP data, these can be merged using, for example,
Wireshark which offers options to prepend, merge chronologically and append data.
Note: reading libpcap savefiles uses the pcap_next_ex() call, which seems not to be
highly portable, ie. a capture produced on Linux does not always read on macOS.
Note: when using home-grown buffering (ie. not ZeroMQ), usleep() calls are placed
upon wrapping up one buffer and starting a new one. This may lead to under-using
the CPU and not the quickest processing experience; if a faster rate is wanted,
switch buffering to ZeroMQ ('plugin_pipe_zmq: true').
DEFAULT: none
KEY: pcap_savefile_wait (-W) [GLOBAL, NO_UACCTD, NO_PMBGPD]
VALUES: [ true | false ]
DESC: If set to true, this option will cause the daemon to wait indefinitely for a signal
(ie. CTRL-C when not daemonized or 'killall -9 pmacctd' if it is) after it has finished
processing the supplied libpcap savefile (pcap_savefile). This is particularly useful
when inserting fixed amounts of data into memory tables.
DEFAULT: false
KEY: pcap_savefile_delay (-Z) [GLOBAL, NO_UACCTD, NO_PMBGPD]
DESC: When reading from a pcap_savefile, sleep for the supplied amount of seconds before
      (re)playing the file. This is useful, for example, to let a BGP session be
      established and a RIB be finalised before playing a given file, or to buy time
      between replays so that a dump event can trigger.
DEFAULT: 0
KEY: pcap_savefile_replay (-Y) [GLOBAL, NO_UACCTD, NO_PMBGPD]
DESC: When reading from a pcap_savefile, replay content for the specified amount of times.
Other than for testing in general, this may be useful when playing templated-based
protocols, ie. NetFlow v9/IPFIX, to replay data packets that could not be parsed
the first time due to the template not being sent yet.
DEFAULT: 1
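A hedged nfacctd sketch combining delay and replay so that NetFlow v9/IPFIX data records exported before their template can be parsed on the second pass (file name is illustrative):

```
! wait 5 seconds before each (re)play, then play the file twice
pcap_savefile: /path/to/netflow_v9.pcap
pcap_savefile_delay: 5
pcap_savefile_replay: 2
```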
KEY: [ pcap_direction | uacctd_direction ] [GLOBAL, ONLY_PMACCTD, ONLY_UACCTD]
VALUES: [ "in", "out" ]
DESC: Defines the traffic capturing direction, with two possible values, "in" and "out".
      In pmacctd this is used to 1) determine which primitive to populate, in_iface or
      out_iface, with the pcap_ifindex value and 2) tag / filter data based on direction
      in pre_tag_map. Not all platforms support pcap_set_direction(); a quick test is to
      check whether tcpdump, ie. 'tcpdump -i <interface> -Q in', works as intended. In
      uacctd only the latter of the two use-cases applies.
DEFAULT: none
KEY: pcap_ifindex [GLOBAL, PMACCTD_ONLY]
VALUES: [ "sys", "hash", "map", "none" ]
DESC: Defines how to source the ifindex of the capturing interface. If "sys" then an
      if_nametoindex() call is made to the underlying OS and the result is used; if
      "hash" a hashing algorithm is used against the interface name to generate a unique
      number per interface; if "map" then ifindex definitions are expected as part of a
      pcap_interfaces_map (see below); if "none" no ifindex is sourced.
DEFAULT: none
KEY: pcap_interfaces_map [GLOBAL, PMACCTD_ONLY, MAP]
DESC: Allows listening for traffic data on multiple interfaces (compared to
      pcap_interface, where only a single interface can be defined). The map also allows
      defining ifindex and capturing direction on a per-interface basis; to include the
      computed ifindex in output data, set pcap_ifindex to 'map'. The map can be reloaded
      at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 pmacctd").
      A sample map is in examples/pcap_interfaces.map.example .
DEFAULT: none
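A sketch of a possible map, modeled on examples/pcap_interfaces.map.example (interface names and ifindex values are illustrative):

```
! pcap_interfaces.map: one interface per line
ifindex=100  ifname=eth0  direction=in
ifindex=200  ifname=eth1  direction=out
```

Referenced from the daemon configuration with, for example, 'pcap_interfaces_map: /path/to/pcap_interfaces.map' and 'pcap_ifindex: map'.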
KEY: promisc (-N) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, puts the listening interface in promiscuous mode. It is mostly
      useful when running 'pmacctd' on a box that is not a router, for example when
      listening for traffic on a mirror port.
DEFAULT: true
KEY: imt_path (-p)
DESC: Specifies the full pathname where the memory plugin has to listen for client queries.
When multiple memory plugins are active, each one has to use its own file to communicate
with the client tool. Note that placing these files into a carefully protected directory
(rather than /tmp) is the proper way to control who can access the memory backend.
DEFAULT: /tmp/collect.pipe
KEY: imt_buckets (-b)
DESC: Defines the number of buckets of the memory table which is organized as a chained hash
table. A prime number is highly recommended. Read INTERNALS 'Memory table plugin' chapter
for further details.
DEFAULT: 32771
KEY: imt_mem_pools_number (-m)
DESC: Defines the number of memory pools the memory table is able to allocate; the size of each
pool is defined by the 'imt_mem_pools_size' directive. Here, a value of 0 instructs the
memory plugin to allocate new memory chunks as they are needed, potentially allowing the
memory structure to grow indefinitely. A value > 0 instructs the plugin to not try to
allocate more than the specified number of memory pools, thus placing an upper boundary
to the table size.
DEFAULT: 16
KEY: imt_mem_pools_size (-s)
DESC: Defines the size of each memory pool. For further details read INTERNALS 'Memory table
plugin'. The number of memory pools is defined by the 'imt_mem_pools_number' directive.
DEFAULT: 8192
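As a sizing sketch, the upper boundary of the memory table is roughly imt_mem_pools_number * imt_mem_pools_size; the values below (illustrative) cap it at about 32 MB:

```
! 32 pools of 1 MB each, ~32 MB upper boundary
imt_mem_pools_number: 32
imt_mem_pools_size: 1048576
! a prime number of buckets is recommended
imt_buckets: 65537
```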
KEY: syslog (-S) [GLOBAL]
VALUES: [ auth | mail | daemon | kern | user | local[0-7] ]
DESC: Enables syslog logging, using the specified facility.
DEFAULT: none (logging to stderr)
KEY: logfile [GLOBAL]
DESC: Enables logging to a file (bypassing syslog); expected value is a pathname. The target
file can be re-opened by sending a SIGHUP to the daemon so that, for example, logs can
be rotated.
DEFAULT: none (logging to stderr)
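A minimal logging sketch (pathname is illustrative):

```
! log to file; after rotating it, send a SIGHUP (ie. 'killall -HUP pmacctd')
! so the daemon re-opens the target file
logfile: /var/log/pmacct/pmacctd.log
```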
KEY: amqp_host
DESC: Defines the AMQP/RabbitMQ broker IP. All amqp_* directives are used by the AMQP
      plugin of flow daemons only. Check out the *_amqp_host directives (ie.
      bgp_daemon_msglog_amqp_host) for the equivalents relevant to other RabbitMQ exports.
DEFAULT: localhost
KEY: [ bgp_daemon_msglog_amqp_host | bgp_table_dump_amqp_host | bmp_dump_amqp_host |
bmp_daemon_msglog_amqp_host | sfacctd_counter_amqp_host |
telemetry_daemon_msglog_amqp_host | telemetry_dump_amqp_host ] [GLOBAL]
DESC: See amqp_host. bgp_daemon_msglog_amqp_* directives are used by the BGP thread/daemon
to stream data out; bgp_table_dump_amqp_* directives are used by the BGP thread/daemon
to dump data out at regular time intervals; bmp_daemon_msglog_amqp_* directives are
used by the BMP thread/daemon to stream data out; bmp_dump_amqp_* directives are
used by the BMP thread/daemon to dump data out at regular time intervals;
sfacctd_counter_amqp_* directives are used by sfacctd to stream sFlow counter data out;
telemetry_daemon_msglog_amqp_* are used by the Streaming Telemetry thread/daemon to
stream data out; telemetry_dump_amqp_* directives are used by the Streaming Telemetry
thread/daemon to dump data out at regular time intervals.
DEFAULT: See amqp_host
KEY: amqp_vhost
DESC: Defines the AMQP/RabbitMQ server virtual host; see also amqp_host.
DEFAULT: "/"
KEY: [ bgp_daemon_msglog_amqp_vhost | bgp_table_dump_amqp_vhost | bmp_dump_amqp_vhost |
bmp_daemon_msglog_amqp_vhost | sfacctd_counter_amqp_vhost |
telemetry_daemon_msglog_amqp_vhost | telemetry_dump_amqp_vhost ] [GLOBAL]
DESC: See amqp_vhost; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_vhost
KEY: amqp_user
DESC: Defines the username to use when connecting to the AMQP/RabbitMQ server; see also
amqp_host.
DEFAULT: guest
KEY: [ bgp_daemon_msglog_amqp_user | bgp_table_dump_amqp_user | bmp_dump_amqp_user |
bmp_daemon_msglog_amqp_user | sfacctd_counter_amqp_user |
telemetry_daemon_msglog_amqp_user | telemetry_dump_amqp_user ] [GLOBAL]
DESC: See amqp_user; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_user
KEY: amqp_passwd
DESC: Defines the password to use when connecting to the server; see also amqp_host.
DEFAULT: guest
KEY: [ bgp_daemon_msglog_amqp_passwd | bgp_table_dump_amqp_passwd |
bmp_dump_amqp_passwd | bmp_daemon_msglog_amqp_passwd |
sfacctd_counter_amqp_passwd | telemetry_daemon_msglog_amqp_passwd |
telemetry_dump_amqp_passwd ]
[GLOBAL]
DESC: See amqp_passwd; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_passwd
KEY: amqp_routing_key
DESC: Name of the AMQP routing key to attach to published data. Dynamic names are supported
through the use of variables, which are computed at the moment when data is purged to
the backend. The list of variables supported is:
$peer_src_ip Value of the peer_src_ip primitive of the record being processed.
$tag Value of the tag primitive of the record being processed.
$tag2 Value of the tag2 primitive of the record being processed.
$post_tag Configured value of post_tag.
$post_tag2 Configured value of post_tag2.
See also amqp_host.
DEFAULT: 'acct'
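A hedged example of a dynamic routing key, assuming peer_src_ip is part of the aggregation method (otherwise the variable resolves to a null value):

```
! hypothetical: one routing key per exporter, ie. 'acct_10.0.0.1'
amqp_routing_key: acct_$peer_src_ip
```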
KEY: [ bgp_daemon_msglog_amqp_routing_key | bgp_table_dump_amqp_routing_key |
bmp_daemon_msglog_amqp_routing_key | bmp_dump_amqp_routing_key |
sfacctd_counter_amqp_routing_key | telemetry_daemon_msglog_amqp_routing_key |
telemetry_dump_amqp_routing_key ] [GLOBAL]
DESC: See amqp_routing_key; see also bgp_daemon_msglog_amqp_host. Variables supported by
the configuration directives described in this section:
$peer_src_ip BGP peer IP address (bgp_*) or sFlow agent IP address (sfacctd_*).
$bmp_router BMP peer IP address.
$telemetry_node Streaming Telemetry exporter IP address.
$peer_tcp_port BGP peer TCP port.
$bmp_router_port BMP peer TCP port.
$telemetry_node_port Streaming Telemetry exporter port.
DEFAULT: none
KEY: [ amqp_routing_key_rr | kafka_topic_rr ]
DESC: Performs round-robin load-balancing over a set of AMQP routing keys or Kafka topics.
The base name for the string is defined by amqp_routing_key or kafka_topic. This key
accepts a positive int value. If, for example, amqp_routing_key is set to 'blabla'
and amqp_routing_key_rr to 3 then the AMQP plugin will round robin as follows:
message #1 -> blabla_0, message #2 -> blabla_1, message #3 -> blabla_2, message #4
-> blabla_0 and so forth. This works in the same fashion for kafka_topic. By default
the feature is disabled, meaning all messages are sent to the base AMQP routing key
or Kafka topic (or the default one, if no amqp_routing_key or kafka_topic is being
specified).
      For Kafka it is advised to create topics in advance with a tool like kafka-topics.sh
      (ie. "kafka-topics.sh --zookeeper <zookeeper URL> --topic <topic> --create") even
if auto.create.topics.enable is set to true (default) on the broker. This is because
topic creation, especially on distributed systems, may take time and lead to data
loss.
DEFAULT: 0
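Following the behaviour described above, a sketch that spreads messages over three routing keys:

```
! messages rotate over acct_0, acct_1, acct_2
amqp_routing_key: acct
amqp_routing_key_rr: 3
```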
KEY: [ bgp_daemon_msglog_amqp_routing_key_rr | bgp_table_dump_amqp_routing_key_rr |
bmp_daemon_msglog_amqp_routing_key_rr | bmp_dump_amqp_routing_key_rr |
telemetry_daemon_msglog_amqp_routing_key_rr | telemetry_dump_amqp_routing_key_rr ]
[GLOBAL]
DESC: See amqp_routing_key_rr; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_routing_key_rr
KEY: amqp_exchange
DESC: Name of the AMQP exchange to publish data; see also amqp_host.
DEFAULT: pmacct
KEY: [ bgp_daemon_msglog_amqp_exchange | bgp_table_dump_amqp_exchange |
bmp_daemon_msglog_amqp_exchange | bmp_dump_amqp_exchange |
sfacctd_counter_amqp_exchange | telemetry_daemon_msglog_amqp_exchange |
telemetry_dump_amqp_exchange ] [GLOBAL]
DESC: See amqp_exchange; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange
KEY: amqp_exchange_type
DESC: Type of the AMQP exchange to publish data to. 'direct', 'fanout' and 'topic'
types are supported; "rabbitmqctl list_exchanges" can be used to check the
exchange type. Upon mismatch of exchange type, ie. exchange type is 'direct'
but amqp_exchange_type is set to 'topic', an error will be returned.
DEFAULT: direct
KEY: [ bgp_daemon_msglog_amqp_exchange_type | bgp_table_dump_amqp_exchange_type |
bmp_daemon_msglog_amqp_exchange_type | bmp_dump_amqp_exchange_type |
       sfacctd_counter_amqp_exchange_type | telemetry_daemon_msglog_amqp_exchange_type |
telemetry_dump_amqp_exchange_type ] [GLOBAL]
DESC: See amqp_exchange_type; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange_type
KEY: amqp_persistent_msg
VALUES: [ true | false ]
DESC: Marks messages as persistent and sets the Exchange as durable so as to prevent data
      loss if a RabbitMQ server restarts (it remains the consumer's responsibility to
      declare the queue durable). Note from RabbitMQ docs: "Marking messages as persistent does
not fully guarantee that a message won't be lost. Although it tells RabbitMQ to
save message to the disk, there is still a short time window when RabbitMQ has
accepted a message and hasn't saved it yet. Also, RabbitMQ doesn't do fsync(2) for
every message -- it may be just saved to cache and not really written to the disk.
The persistence guarantees aren't strong, but it is more than enough for our simple
task queue."; see also amqp_host.
DEFAULT: false
KEY: [ bgp_daemon_msglog_amqp_persistent_msg | bgp_table_dump_amqp_persistent_msg |
bmp_daemon_msglog_amqp_persistent_msg | bmp_dump_amqp_persistent_msg |
       sfacctd_counter_amqp_persistent_msg | telemetry_daemon_msglog_amqp_persistent_msg |
telemetry_dump_amqp_persistent_msg ] [GLOBAL]
VALUES: See amqp_persistent_msg
DESC: See amqp_persistent_msg; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_persistent_msg
KEY: amqp_frame_max
DESC: Defines the maximum size, in bytes, of an AMQP frame on the wire to request of the
      broker for the connection. 4096 is the minimum size, 2^31-1 the maximum; it may be
      necessary to increase the value from its default, especially when making use of
      amqp_multi_values, which produces larger batched messages. See also amqp_host.
DEFAULT: 131072
KEY: [ bgp_daemon_msglog_amqp_frame_max | bgp_table_dump_amqp_frame_max |
bmp_daemon_msglog_amqp_frame_max | bmp_dump_amqp_frame_max |
sfacctd_counter_amqp_frame_max | telemetry_daemon_msglog_amqp_frame_max |
telemetry_dump_amqp_frame_max ] [GLOBAL]
DESC: See amqp_frame_max; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_frame_max
KEY: amqp_heartbeat_interval
DESC: Defines the heartbeat interval in order to detect general failures of the RabbitMQ server.
The value is expected in seconds. By default the heartbeat mechanism is disabled with a
value of zero. According to RabbitMQ C API, detection takes place only upon publishing a
JSON message, ie. not at login or if idle. The maximum value supported is INT_MAX (or
2147483647); see also amqp_host.
DEFAULT: 0
KEY: [ bgp_daemon_msglog_amqp_heartbeat_interval | bgp_table_dump_amqp_heartbeat_interval |
bmp_daemon_msglog_amqp_heartbeat_interval | bmp_dump_amqp_heartbeat_interval |
sfacctd_counter_amqp_heartbeat_interval | telemetry_daemon_msglog_amqp_heartbeat_interval |
telemetry_dump_amqp_heartbeat_interval ] [GLOBAL]
DESC: See amqp_heartbeat_interval; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_heartbeat_interval
KEY: [ bgp_daemon_msglog_amqp_retry | bmp_daemon_msglog_amqp_retry |
sfacctd_counter_amqp_retry | telemetry_daemon_msglog_amqp_retry ] [GLOBAL]
DESC: Defines the interval of time, in seconds, after which a connection to the RabbitMQ
server should be retried after a failure is detected; see also amqp_host. See also
bgp_daemon_msglog_amqp_host.
DEFAULT: 60
KEY: kafka_topic
DESC: Name of the Kafka topic to attach to published data. Dynamic names are supported by
      kafka_topic through the use of variables, which are computed at the moment when data
      is purged to the backend. The list of supported variables (the same as for
      amqp_routing_key):
      $peer_src_ip	Value of the peer_src_ip primitive of the record being processed.
      $tag		Value of the tag primitive of the record being processed.
      $tag2		Value of the tag2 primitive of the record being processed.
$post_tag Configured value of post_tag.
$post_tag2 Configured value of post_tag2.
      It is advised to create topics in advance with a tool like kafka-topics.sh (ie.
      "kafka-topics.sh --zookeeper <zookeeper URL> --topic <topic> --create") even if
auto.create.topics.enable is set to true (default) on the broker. This is because
topic creation, especially on distributed systems, may take time and lead to data
loss.
DEFAULT: 'pmacct.acct'
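A hedged example of a dynamic topic name, assuming peer_src_ip is part of the aggregation method (otherwise the variable resolves to a null value):

```
! hypothetical: one topic per exporter, ie. 'pmacct.acct_10.0.0.1'
kafka_topic: pmacct.acct_$peer_src_ip
```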
KEY: kafka_config_file
DESC: Full pathname to a file containing directives to configure librdkafka. All knobs
whose values are string, integer, boolean, CSV are supported. Pointer values, ie.
for setting callbacks, are currently not supported through this infrastructure.
The syntax of the file is CSV and expected in the format: <type, key, value> where
'type' is one of 'global' or 'topic' and 'key' and 'value' are set according to
librdkafka doc https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
      Both 'key' and 'value' are passed on to librdkafka without any validation being
      performed; the 'value' field can also contain commas, as it is not parsed either.
      Examples are:
topic, compression.codec, snappy
global, socket.keepalive.enable, true
DEFAULT: none
KEY: kafka_broker_host
DESC: Defines one or multiple, comma-separated, Kafka brokers for the bootstrap process.
If only a single broker IP address is defined then the broker port is read via the
kafka_broker_port config directive (legacy syntax); if multiple brokers are defined
then each broker port, if not left to default 9092, is expected as part of this
directive, for example: broker1:10000,broker2 . When defining multiple brokers,
      if the host is IPv4, the value is expected as address:port . If IPv6, it is
      expected as [address]:port (although when defining a single broker this is not
      required, as the IPv6 address is detected and wrapped in '[' ']' symbols).
      Resolvable hostnames are also accepted; if a host resolves to multiple addresses,
      they will be round-robined for each connection attempt. SSL connections can be
      configured as ssl://broker3:9000,ssl://broker2 . All kafka_* directives are used
      by the Kafka plugin of flow daemons only. Check out the other *_kafka_broker_host
      directives (ie. bgp_daemon_msglog_kafka_broker_host) for the equivalents relevant
      to other Kafka exports.
DEFAULT: 127.0.0.1
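A sketch of the multi-broker syntax (broker names and ports are illustrative):

```
! IPv4 with non-default port, hostname on default 9092, IPv6 with port
kafka_broker_host: 192.0.2.1:10000,broker2,[2001:db8::1]:9092
! or, for SSL endpoints:
! kafka_broker_host: ssl://broker3:9000,ssl://broker2
```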
KEY: kafka_broker_port
DESC: Defines the Kafka broker port. See also kafka_broker_host.
DEFAULT: 9092
KEY: kafka_partition
DESC: Defines the Kafka broker topic partition ID. RD_KAFKA_PARTITION_UA, ie.
      ((int32_t)-1), selects the configured or default partitioner (slower than sending
      to a fixed partition). See also kafka_broker_host.
DEFAULT: -1
KEY: kafka_partition_dynamic
VALUES: [ true | false ]
DESC: Enables dynamic Kafka partitioning, ie. data is partitioned according to the value
of the Kafka broker topic partition key. See also kafka_partition_key.
DEFAULT: false
KEY: kafka_partition_key
DESC: Defines the Kafka broker topic partition key. A string of printable characters is
expected as value. Dynamic names are supported through the use of variables, which
are computed at the moment data is purged to the backend. The list of supported
variables follows:
$peer_src_ip Record value for peer_src_ip primitive (if primitive is not part
of the aggregation method then this will be set to a null value).
$tag Record value for tag primitive (if primitive is not part of the
aggregation method then this will be set to a null value).
$tag2 Record value for tag2 primitive (if primitive is not part of the
aggregation method then this will be set to a null value).
$src_host Record value for src_host primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$dst_host Record value for dst_host primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$src_port Record value for src_port primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$dst_port Record value for dst_port primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$proto Record value for proto primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
$in_iface Record value for in_iface primitive (if primitive is not part of
the aggregation method then this will be set to a null value).
DEFAULT: none
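A hedged sketch combining dynamic partitioning with a partition key, assuming src_host is part of the aggregation method (otherwise the variable resolves to a null value):

```
! hypothetical: hash messages to partitions by source host
kafka_partition_dynamic: true
kafka_partition_key: $src_host
aggregate: src_host, dst_host, proto
```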
KEY: [ bgp_daemon_msglog_kafka_broker_host | bgp_table_dump_kafka_broker_host |
bmp_daemon_msglog_kafka_broker_host | bmp_dump_kafka_broker_host |
sfacctd_counter_kafka_broker_host | telemetry_daemon_msglog_kafka_broker_host |
telemetry_dump_kafka_broker_host ] [GLOBAL]
DESC: See kafka_broker_host. bgp_daemon_msglog_kafka_* directives are used by the BGP thread/
daemon to stream data out; bgp_table_dump_kafka_* directives are used by the BGP thread/
daemon to dump data out at regular time intervals; bmp_daemon_msglog_kafka_* directives
are used by the BMP thread/daemon to stream data out; bmp_dump_kafka_* directives are
used by the BMP thread/daemon to dump data out at regular time intervals;
sfacctd_counter_kafka_* directives are used by sfacctd to stream sFlow counter data
out; telemetry_daemon_msglog_kafka_* are used by the Streaming Telemetry thread/daemon
to stream data out; telemetry_dump_kafka_* directives are used by the Streaming Telemetry
thread/daemon to dump data out at regular time intervals.
DEFAULT: See kafka_broker_host
KEY: [ bgp_daemon_msglog_kafka_broker_port | bgp_table_dump_kafka_broker_port |
bmp_daemon_msglog_kafka_broker_port | bmp_dump_kafka_broker_port |
sfacctd_counter_kafka_broker_port | telemetry_daemon_msglog_kafka_broker_port |
telemetry_dump_kafka_broker_port ] [GLOBAL]
DESC: See kafka_broker_port; see also bgp_daemon_msglog_kafka_broker_host.
DEFAULT: See kafka_broker_port
KEY: [ bgp_daemon_msglog_kafka_topic | bgp_table_dump_kafka_topic |
bmp_daemon_msglog_kafka_topic | bmp_dump_kafka_topic |
sfacctd_counter_kafka_topic | telemetry_daemon_msglog_kafka_topic |
telemetry_dump_kafka_topic ] [GLOBAL]
DESC: See kafka_topic; see also bgp_daemon_msglog_kafka_broker_host. Variables supported by
the configuration directives described in this section:
$peer_src_ip BGP peer IP address (bgp_*) or sFlow agent IP address (sfacctd_*).
$bmp_router BMP peer IP address.
$telemetry_node Streaming Telemetry exporter IP address.
$peer_tcp_port BGP peer TCP port.
$bmp_router_port BMP peer TCP port.
$telemetry_node_port Streaming Telemetry exporter port.
DEFAULT: none
KEY: [ bgp_daemon_msglog_kafka_topic_rr | bgp_table_dump_kafka_topic_rr |
bmp_daemon_msglog_kafka_topic_rr | bmp_dump_kafka_topic_rr |
telemetry_daemon_msglog_kafka_topic_rr | telemetry_dump_kafka_topic_rr ]
[GLOBAL]
DESC: See kafka_topic_rr; see also bgp_daemon_msglog_kafka_broker_host.
DEFAULT: See kafka_topic_rr
KEY: [ bgp_daemon_msglog_kafka_partition | bgp_table_dump_kafka_partition |
bmp_daemon_msglog_kafka_partition | bmp_dump_kafka_partition |
sfacctd_counter_kafka_partition | telemetry_daemon_msglog_kafka_partition |
telemetry_dump_kafka_partition ] [GLOBAL]
DESC: See kafka_partition; see also bgp_daemon_msglog_kafka_broker_host.
DEFAULT: See kafka_partition
KEY: [ bgp_daemon_msglog_kafka_partition_key |
bgp_table_dump_kafka_partition_key ] [GLOBAL]
DESC: Defines the Kafka broker topic partition key. A string of printable characters
is expected as value. Dynamic names are supported through the use of variables,
listed below:
$peer_src_ip The IP address of the BGP peer exporting data
$peer_tcp_port The TCP port of the BGP session. Useful in case of BGP x-connects
DEFAULT: none
KEY: [ bmp_daemon_msglog_kafka_partition_key |
bmp_dump_kafka_partition_key ] [GLOBAL]
DESC: Defines the Kafka broker topic partition key. A string of printable characters
is expected as value. Dynamic names are supported through the use of variables,
listed below:
$bmp_router The IP address of the router exporting data via BMP
$bmp_router_port The TCP port of the BMP session
DEFAULT: none
KEY: [ telemetry_daemon_msglog_kafka_partition_key |
telemetry_dump_kafka_partition_key ] [GLOBAL]
DESC: Defines the Kafka broker topic partition key. A string of printable characters
is expected as value. Dynamic names are supported through the use of variables,
listed below:
$telemetry_node The IP address of the node exporting Streaming Telemetry
$telemetry_node_port The TCP/UDP port of the Streaming Telemetry session
DEFAULT: none
KEY: [ bgp_daemon_msglog_kafka_retry | bmp_daemon_msglog_kafka_retry |
sfacctd_counter_kafka_retry | telemetry_daemon_msglog_kafka_retry ] [GLOBAL]
DESC: Defines the interval of time, in seconds, after which a connection to the Kafka
broker should be retried after a failure is detected.
DEFAULT: 60
KEY: [ bgp_daemon_msglog_kafka_config_file | bgp_table_dump_kafka_config_file |
bmp_daemon_msglog_kafka_config_file | bmp_dump_kafka_config_file |
sfacctd_counter_kafka_config_file | telemetry_daemon_msglog_kafka_config_file |
telemetry_dump_kafka_config_file ] [GLOBAL]