Imported Upstream version 1.6.0

tags/debian/1.6.0-1
Jonas Gröger 3 years ago
commit de3a707071
100 changed files with 48005 additions and 11648 deletions
  1. + 3      - 2      AUTHORS
  2. + 541    - 238    CONFIG-KEYS
  3. + 1      - 1      COPYING
  4. + 212    - 2      ChangeLog
  5. + 33     - 32     FAQS
  6. + 6      - 9      INSTALL
  7. + 0      - 4      KNOWN-BUGS
  8. + 3      - 0      Makefile.am
  9. + 663    - 205    Makefile.in
 10. + 0      - 3      NEWS
 11. + 330    - 131    QUICKSTART
 12. + 0      - 15     README
 13. + 0      - 10     TODO
 14. + 9      - 5      TOOLS
 15. + 77     - 0      UPGRADE
 16. + 0      - 52     acinclude.m4
 17. + 1134   - 127    aclocal.m4
 18. + 76     - 0      autogen.sh
 19. + 25     - 0      bin/configure-help-replace.sh
 20. + 30     - 0      bin/configure-help-replace.txt
 21. + 1530   - 0      config.guess
 22. + 1782   - 0      config.sub
 23. + 17067  - 4284   configure
 24. + 972    - 0      configure.ac
 25. + 0      - 1487   configure.in
 26. + 708    - 0      depcomp
 27. + 8      - 56     docs/INTERNALS
 28. + 6      - 7      docs/SIGNALS
 29. + 4      - 7      docs/TRIGGER_VARS
 30. + 86     - 27     examples/amqp/amqp_receiver.py
 31. + 0      - 38     examples/amqp/amqp_receiver_trace.py
 32. + 73     - 0      examples/kafka/kafka_consumer.py
 33. + 6      - 10     examples/pretag.map.example
 34. + 1      - 1      examples/primitives.lst.example
 35. + 1      - 1      include/extract.h
 36. + 1      - 1      include/fddi.h
 37. + 1      - 1      include/ieee802_11.h
 38. + 1      - 1      include/ip6.h
 39. + 1      - 1      include/llc.h
 40. + 1      - 1      include/sll.h
 41. + 478    - 202    install-sh
 42. + 9661   - 0      ltmain.sh
 43. + 76     - 0      m4/ac_check_typedef.m4
 44. + 14     - 0      m4/ac_linearize_path.m4
 45. + 7983   - 0      m4/libtool.m4
 46. + 384    - 0      m4/ltoptions.m4
 47. + 123    - 0      m4/ltsugar.m4
 48. + 23     - 0      m4/ltversion.m4
 49. + 98     - 0      m4/lt~obsolete.m4
 50. + 143    - 112    missing
 51. + 0      - 40     mkinstalldirs
 52. + 20     - 0      sql/README.export_proto
 53. + 1      - 1      sql/README.iface
 54. + 3      - 0      sql/README.mysql
 55. + 3      - 0      sql/README.pgsql
 56. + 3      - 0      sql/README.sqlite3
 57. + 14     - 4      sql/README.timestamp
 58. + 1      - 1      sql/pmacct-create-table_bgp_v1.pgsql
 59. + 1      - 1      sql/pmacct-create-table_bgp_v1.sqlite3
 60. + 4      - 4      sql/pmacct-create-table_v1.pgsql
 61. + 1      - 1      sql/pmacct-create-table_v1.sqlite3
 62. + 4      - 4      sql/pmacct-create-table_v2.pgsql
 63. + 1      - 1      sql/pmacct-create-table_v2.sqlite3
 64. + 4      - 4      sql/pmacct-create-table_v3.pgsql
 65. + 1      - 1      sql/pmacct-create-table_v3.sqlite3
 66. + 4      - 4      sql/pmacct-create-table_v4.pgsql
 67. + 1      - 1      sql/pmacct-create-table_v4.sqlite3
 68. + 4      - 4      sql/pmacct-create-table_v5.pgsql
 69. + 1      - 1      sql/pmacct-create-table_v5.sqlite3
 70. + 2      - 2      sql/pmacct-create-table_v6.pgsql
 71. + 1      - 1      sql/pmacct-create-table_v6.sqlite3
 72. + 2      - 2      sql/pmacct-create-table_v7.pgsql
 73. + 1      - 1      sql/pmacct-create-table_v7.sqlite3
 74. + 1      - 1      sql/pmacct-create-table_v8.sqlite3
 75. + 2      - 2      sql/pmacct-create-table_v9.pgsql
 76. + 1      - 1      sql/pmacct-create-table_v9.sqlite3
 77. + 89     - 40     src/Makefile.am
 78. + 1182   - 334    src/Makefile.in
 79. + 1      - 1      src/acct.c
 80. + 11     - 57     src/amqp_common.c
 81. + 9      - 18     src/amqp_common.h
 82. + 25     - 100    src/amqp_plugin.c
 83. + 2      - 1      src/amqp_plugin.h
 84. + 8      - 0      src/bgp/Makefile.am
 85. + 627    - 36     src/bgp/Makefile.in
 86. + 172    - 2618   src/bgp/bgp.c
 87. + 88     - 72     src/bgp/bgp.h
 88. + 51     - 606    src/bgp/bgp_aspath.c
 89. + 6      - 17     src/bgp/bgp_aspath.h
 90. + 63     - 132    src/bgp/bgp_community.c
 91. + 7      - 12     src/bgp/bgp_community.h
 92. + 63     - 162    src/bgp/bgp_ecommunity.c
 93. + 8      - 11     src/bgp/bgp_ecommunity.h
 94. + 20     - 18     src/bgp/bgp_hash.c
 95. + 4      - 5      src/bgp/bgp_hash.h
 96. + 350    - 244    src/bgp/bgp_logdump.c
 97. + 12     - 9      src/bgp/bgp_logdump.h
 98. + 711    - 0      src/bgp/bgp_lookup.c
 99. + 40     - 0      src/bgp/bgp_lookup.h
100. + 0      - 0      src/bgp/bgp_msg.c

+ 3 - 2   AUTHORS

@@ -1,5 +1,5 @@
-pmacct (Promiscuous mode IP Accounting package) v1.5.2
-pmacct is Copyright (C) 2003-2015 by Paolo Lucente
+pmacct (Promiscuous mode IP Accounting package) v1.6.0
+pmacct is Copyright (C) 2003-2016 by Paolo Lucente

Founder:

@@ -40,6 +40,7 @@ Thanks to the following people for their strong support along the time:
A.O. Prokofiev
Edwin Punt
Anik Rahman
+Job Snijders
Gabriel Snook
Rene Stoutjesdijk
Thomas Telkamp

+ 541 - 238   CONFIG-KEYS (file diff suppressed because it is too large)


+ 1 - 1   COPYING

@@ -1,5 +1,5 @@
pmacct (Promiscuous mode IP Accounting package)
-pmacct is Copyright (C) 2003-2008 by Paolo Lucente
+pmacct is Copyright (C) 2003-2016 by Paolo Lucente


GNU GENERAL PUBLIC LICENSE

+ 212 - 2   ChangeLog

@@ -1,5 +1,215 @@
-pmacct (Promiscuous mode IP Accounting package) v1.5.2
-pmacct is Copyright (C) 2003-2015 by Paolo Lucente
+pmacct (Promiscuous mode IP Accounting package) v1.6.0
+pmacct is Copyright (C) 2003-2016 by Paolo Lucente

1.6.0 -- 07-06-2016
+ Streamed telemetry daemon: quoting Cisco IOS-XR Telemetry Configuration
Guide at the time of this writing: "Streaming telemetry [ .. ] data
can be used for analysis and troubleshooting purposes to maintain the
health of the network. This is achieved by leveraging the capabilities of
machine-to-machine communication. [ .. ]" Streamed telemetry support comes
in two flavours: 1) a telemetry thread can be started in existing daemons,
ie. sFlow, NetFlow/IPFIX, etc. for the purpose of data correlation and 2)
a new daemon pmtelemetryd for standalone consumption of data. Streamed
telemetry data can be logged real-time and/or dumped at regular time
intervals to flat-files, RabbitMQ or Kafka brokers.
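
A minimal pmtelemetryd sketch follows; the telemetry_* directive names and
paths are assumptions patterned on the msglog/dump directive families and
should be verified against CONFIG-KEYS:

  ! assumed directive names; see CONFIG-KEYS for the authoritative list
  telemetry_daemon_msglog_file: /var/spool/pmacct/telemetry-$peer_src_ip.log
  telemetry_dump_file: /var/spool/pmacct/telemetry-$peer_src_ip-%H%M.dump
  telemetry_dump_refresh_time: 60
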
+ BMP daemon: introduced support for Route Monitoring messages. RM messages
"provide an initial dump of all routes received from a peer as well as an
ongoing mechanism that sends the incremental routes advertised and
withdrawn by a peer to the monitoring station". Like for BMP events, RM
messages can be logged real-time and/or dumped at regular time intervals
to flat-files, RabbitMQ and Kafka brokers. RM messages are also saved in a
RIB structure for IP prefix lookup.
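
For example, a minimal BMP collector that logs messages real-time and dumps
the RIB at regular intervals could be sketched as below (port, paths and
timer are illustrative):

  bmp_daemon: true
  bmp_daemon_port: 1790
  bmp_daemon_msglog_file: /var/spool/pmacct/bmp-$peer_src_ip.log
  bmp_dump_file: /var/spool/pmacct/bmp-$peer_src_ip-%H%M.dump
  bmp_dump_refresh_time: 60
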
+ uacctd: ULOG support switched to NFLOG, the newer and L3 independent Linux
packet logging framework. One of the key advantages of NFLOG is support for
IPv4 and IPv6 (whereas ULOG was restricted to IPv4 only). The code has been
contributed by Vincent Bernat ( @vincentbernat ).
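
Assuming an iptables-based setup, traffic can be steered to uacctd with an
NFLOG rule whose group matches the uacctd_group directive (rule and plugin
choice are illustrative):

  # send forwarded traffic to NFLOG group 1; ip6tables analogous for IPv6
  iptables -A FORWARD -j NFLOG --nflog-group 1

  ! uacctd side: read from the same NFLOG group
  uacctd_group: 1
  plugins: memory
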
+ build system: it was modernized so as not to rely on specific, old versions
of automake and autoconf, as was the case until 1.5. Among other things,
pkg-config and libtool are leveraged and an autogen.sh script is generated.
The code has been contributed by Vincent Bernat ( @vincentbernat ).
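
With the modernized build system, building from a git checkout reduces to
the usual autotools sequence (a sketch; release tarballs ship a pre-built
configure script):

  ./autogen.sh
  ./configure
  make
  make install
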
+ sfacctd: RabbitMQ and Kafka support was introduced to log sFlow counters
real-time and/or dump them at regular time intervals. This is in addition
to the existing support for flat-files.
+ maps_index: several improvements were carried out in the area of indexing
of maps: optimizations to pretag_index_fill() and pretag_index_lookup() to
improve lookup speeds; optimized id_entry structure, ie. by splitting key
and non-key parts, and hashing key in order to consume less memory; added
duplicate entry detection (cause of sudden index destruction);
pretag_index_destroy() destroys hash keys for each index entry, solving a
memory leak issue. Thanks to Job Snijders ( @job ) for his support.
+ Introduced 'export_proto_seqno' aggregation primitive to report on
sequence number of the export protocol (ie. NetFlow, sFlow, IPFIX). This
feature may enable more advanced offline analysis of packet loss, out-of-
order arrivals, etc. over time windows than the basic online analytics
provided by the daemons.
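
As a usage sketch, the primitive plugs into the usual aggregate directive;
peer_src_ip and the print plugin are chosen here only for illustration, to
keep sequence numbers apart per exporter:

  plugins: print
  aggregate: peer_src_ip, export_proto_seqno
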
+ log.c: logging moved from standard output (stdout) to standard error
(stderr) so to not conflict with stdout printing of statistics (print
plugin). Thanks to Jim Westfall ( @jwestfall69 ) for his support.
+ print plugin: introduced a new print_output_lock_file config directive
to lock standard output (stdout) output so to prevent multiple processes
(instances of the same print plugin or different instances of print plugin)
overlap output. Thanks to Jim Westfall ( @jwestfall69 ) for his support.
! pkt_handlers.c: heuristics in the NetFlow v9/IPFIX VLAN handler were improved
for the case of flows in egress direction. Also IP protocol checks were
removed for UDP/TCP ports and TCP flags in case the export protocol is
NetFlow v9/IPFIX. Thanks to Alexander Ponamarchuk for his support.
! Code refactoring: improved re-usability of much of the BGP code (so to
make it possible to use it as a library for some BMP daemon features, ie.
Route Monitoring messages support); consolidated functions to handle log
and print plugin output files; improved log messages to always include
process name and type.
! fix, bpf_filter.c: issue compiling against libpcap 1.7.x; introduced a
check for existing bpf_filter() in libpcap in order to prevent namespace
conflicts.
! fix, tmp_net_own_field default value changed to true. This knob can be
still switched to false for this release but is going to be removed soon.
! fix, cfg.c, cfg_handlers.c, pmacct.c: some configuration directives and
pmacct CL parameters requiring string parsing, ie. -T -O -c, are now
passed through tolower().
! fix, MongoDB plugin: removed version check around mongo_create_index()
and now defaulting to latest MongoDB C legacy driver API. This is due to
some versioning issue in the driver.
! fix, timestamp_arrival: primitive was reporting incorrect results (ie.
always zero) if timestamp_start or timestamp_end were not also specified
as part of the same aggregation method. Many thanks to Vincent Morel for
reporting the issue.
! fix, thread stack: a value of 0, default, leaves the stack size to the
system default or pmacct minimum (8192000) if system default is too low.
Some systems may throw an error if the defined size is not a multiple of
the system page size.
! fix, nfacctd: improved NetFlow v9/IPFIX parsing. Added new length checks
and fixed some existing checks. Thanks to Robert Wuttke ( @Benocs ) for his
support.
! fix, pretag_handlers.c: BPAS_map_bgp_nexthop_handler() and BPAS_map_bgp_
peer_dst_as_handler() were not setting a func_type.
! fix, JSON support: Jansson 2.2 does not have json_object_update_missing()
function, which was introduced in 2.3. This is now provided as part of a
jansson.c file and compiled in conditionally, if needed. Jansson 2.2 is
still shipped along by some recent OS releases. Thanks to Vincent Bernat
( @vincentbernat ) for contributing the patch.
! fix, log.c: use a format string when calling syslog(). Passing directly a
potentially uncontrolled string could crash the program if the string
contains formatting parameters. Thanks to Vincent Bernat ( @vincentbernat )
for contributing the patch.
! fix, sfacctd.c: default value for config.sfacctd_counter_max_nodes was set
after sf_cnt_link_misc_structs(). Thanks to Robin Douine for his support
resolving the issue.
! fix, sfacctd.c: timestamp was consistently being reported as null in sFlow
counters output. Thanks to Robin Douine for his support resolving the issue.
! fix, SQL plugins: $SQL_HISTORY_BASETIME environment variable was reporting a
wrong value (next basetime) in the sql_trigger_exec script. Thanks to Rain
Nõmm for reporting the issue.
! fix, pretag.c: in pretag_index_fill(), replaced memcpy() with hash_dup_key();
also a missing res_fdata initialization in pretag_index_lookup() was solved;
these issues were originating false negatives upon lookup. Thanks to Rain
Nõmm for his support.
! fix, ISIS daemon: hash_* functions renamed into isis_hash_* to avoid name
space clashes with their BGP daemon counter-parts.
! fix, kafka_common.c: rd_kafka_conf_set_log_cb moved to p_kafka_init_host()
due to crashes seen in p_kafka_connect_to_produce(). Thanks to Paul Mabey
for his support resolving the issue.
! fix, bgp_lookup.c: bgp_node_match_* were not returning any match in
bgp_follow_nexthop_lookup(). Thanks to Tim Jackson ( @jackson-tim ) for his
support resolving the issue.
! fix, sql_common.c: crashes observed when nfacctd_stitching was set to true
and nfacctd_time_new was set to false. Thanks to Jaroslav Jirásek
( @jjirasek ) for his support solving the issue.
- SQL plugins: sql_recovery_logfile feature was removed from the code due
to lack of support and interest. Along with it, also pmmyplay and pmpgplay
tools have been removed.
- pre_tag_map: removed support for mpls_pw_id due to lack of interest.

1.5.3 -- 14-01-2016
+ Introduced the Kafka plugin: Apache Kafka is publish-subscribe messaging
rethought as a distributed commit log. Its qualities being: fast, scalable,
durable and distributed by design. pmacct Kafka plugin is designed to
send aggregated network traffic data, in JSON format, through a Kafka
broker to 3rd party applications.
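
A minimal sketch of the plugin in use (broker address and topic name are
placeholders); the examples/kafka/kafka_consumer.py script added in this
release can serve as the consuming end:

  plugins: kafka
  kafka_broker_host: 127.0.0.1
  kafka_broker_port: 9092
  kafka_topic: pmacct.acct
  kafka_refresh_time: 60
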
+ Introduced Kafka support to BGP and BMP daemons, in both their msglog
and dump flavors (ie. see [bgp|bmp]_daemon_msglog_kafka_broker_host and
[bgp_table|bmp]_dump_kafka_broker_host and companion config directives).
+ Introduced support for a Kafka broker to be used for queueing and data
exchange between Core Process and plugins. plugin_pipe_kafka directive,
along with all other plugin_pipe_kafka_* directives, can be set globally
or apply on a per plugin basis - similarly to what was done for RabbitMQ
(ie. plugin_pipe_amqp). Support is currently restricted only to print
plugin.
+ Added a new timestamp_arrival primitive to expose NetFlow/IPFIX records
observation time (ie. arrival at the collector), in addition to flows
start and end times (timestamp_start and timestamp_end respectively).
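
For example, to report export-time and collector-arrival timestamps side by
side (the field list is illustrative):

  aggregate: src_host, dst_host, timestamp_start, timestamp_end, timestamp_arrival
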
+ plugin_pipe_amqp: feature extended to the plugins missing it: nfprobe,
sfprobe and tee.
+ Introduced bgp_table_dump_latest_file: defines the full pathname to
pointer(s) to latest file(s). Update of the latest pointer is done
evaluating files modification time. Many thanks to Juan Camilo Cardona
( @jccardonar ) for proposing the feature.
+ Introduced pmacctd_nonroot config directive to allow to run pmacctd
from a user with non root privileges. This can be desirable on systems
supporting a tool like setcap, ie. 'setcap "cap_net_raw,cap_net_admin=ep"
/path/to/pmacctd', to assign specific system capabilities to unprivileged
users. Patch is courtesy by Laurent Oudot ( @loudot-tehtris ).
+ Introduced plugin_pipe_check_core_pid: when enabled (default), validates
the sender of data at the plugin side. Useful when plugin_pipe_amqp or
plugin_pipe_kafka are enabled and hence a broker sits between the daemon
Core Process and the Plugins.
+ A new debug_internal_msg config directive to specifically enable debug
of internal messaging between Core process and plugins.
! bgp_table_dump_refresh_time, bmp_dump_refresh_time: max allowed value
raised to 86400 from 3600.
! [n|s]facctd_as_new renamed [n|s]facctd_as; improved input checks to all
*_as (ie. nfacctd_as) and *_net (ie. nfacctd_net) config directives.
! pkt_handlers.c: NF_sampling_rate_handler(), SF_sampling_rate_handler()
now perform a renormalization check at last (instead of at first) so to
report the case of unknown (0) sampling rate.
! plugin_pipe_amqp_routing_key: default value changed to '$core_proc_name-
$plugin_name-$plugin_type'. Also, increased flexibility for customizing
the key with the use of variables (values computed at startup).
! Improved amqp_receiver.py example with CL arguments and better exception
handling. Also removed file amqp_receiver_trace.py, example is now merged
in amqp_receiver.py.
! fix, BGP daemon: several code optimizations and a few starving conditions
fixed. Thanks to Markus Weber ( @FvDxxx ) for his peer index round-robin
patch; thanks also to Job Snijders ( @job ) for his extensive support in
this area.
! fix, BMP daemon: greatly improved message parsing and segment reassembly;
RabbitMQ broker support found broken; several code optimizations are also
included.
! fix, bgp_table.c: bgp_table_top(), added input check to prevent crashes
in cases table contains no routes.
! fix, networks_file: missing atoi() for networks_cache_entries. Patch is
courtesy by Markus Weber ( @FvDxxx ).
! fix, plugin_pipe_amqp_routing_key: check introduced to prevent multiple
plugins to bind to the same RabbitMQ exchange, routing key combination.
Thanks to Jerred Horsman for reporting the issue.
! fix, MongoDB plugin: added a custom oid fuzz generator to prevent
concurrent inserts to fail; switched from deprecated mongo_connect() to
mongo_client(); added MONGO_CONTINUE_ON_ERROR flag to mongo_insert_batch
along with more verbose error reporting. Patches are all courtesy by
Russell Heilling ( @xchewtoyx ).
! fix, nl.c: increments made too early after introduction of MAX_GTP_TRIALS.
Affected: pmacctd processing of GTP in releases 1.5.x. Patch is courtesy
by TANAKA Masayuki ( @tanakamasayuki ).
! fix, pkt_handlers.c: improved case for no SAMPLER_ID, ALU & IPFIX in
NF_sampling_rate_handler() on par with NF_counters_renormalize_handler().
! fix, SQL scripts: always use "DROP TABLE IF EXISTS" for both PostgreSQL
and SQLite. Patches are courtesy by Vincent Bernat ( @vincentbernat ).
! fix, plugin_hooks.c: if p_amqp_publish_binary() calls were done while a
sleeper thread was launched, a memory corruption was observed.
! fix, util.c: mkdir() calls in mkdir_multilevel() now default to mode 777
instead of 700; this allows more play with files_umask (by default 077).
Thanks to Ruben Laban for reporting the issue.
! fix, BMP daemon: solved a build issue under MacOS X. Patch is courtesy by
Junpei YOSHINO ( @junpei-yoshino ).
! fix, util.c: self-defined Malloc() can allocate more than 4GB of memory;
function is also now renamed pm_malloc().
! fix, PostgreSQL plugin: upon purge, call sql_query() only if status of
the entry is SQL_CACHE_COMMITTED. Thanks to Harry Foster ( @harryfoster )
for his support resolving the issue.
! fix, building system: link pfring before pcap to prevent failures when
linking. Patch is courtesy by @matthewsf .
! fix, plugin_common.c: memory leak discovered when pending queries queue
was involved (ie. cases where print_refresh_time > print_history). Thanks
to Edward Henigin for reporting the issue.

1.5.2 -- 07-09-2015
+ Introduced support for a RabbitMQ broker to be used for queueing and

+ 33 - 32   FAQS

@@ -1,24 +1,30 @@
pmacct (Promiscuous mode IP Accounting package)
-pmacct is Copyright (C) 2003-2015 by Paolo Lucente
+pmacct is Copyright (C) 2003-2016 by Paolo Lucente

Q1: What is pmacct project homepage ?
-A: It is http://www.pmacct.net/ . There isn't any official mirror site.
+A: pmacct homepage is http://www.pmacct.net/


-Q2: 'pmacct', 'pmacctd', 'nfacctd', 'sfacctd', 'uacctd' -- but what do they mean ?
+Q2: 'pmacct', 'pmacctd', 'nfacctd', 'sfacctd', 'uacctd' and 'pmtelemetryd' -- but
+    what do they mean ?
A: 'pmacct' is intended to be the name of the project; 'pmacctd' is the name of the
libpcap-based IPv4/IPv6 accounting daemon; 'nfacctd' is the name of the NetFlow
(versions supported NetFlow v1 to v9) and IPFIX accounting daemon; 'sfacctd' is
the name of the sFlow v2/v4/v5 accounting daemon; 'uacctd' is the name of the
-Linux Netlink ULOG-based accounting daemon.
+Linux Netlink NFLOG-based accounting daemon (historically, it was using ULOG,
+hence its name); 'pmtelemetryd' is the name of the streamed telemetry collector
+daemon, where quoting Cisco IOS-XR Telemetry Configuration Guide at the time of
+this writing "Streaming telemetry [ .. ] data can be used for analysis and
+troubleshooting purposes to maintain the health of the network. This is achieved
+by leveraging the capabilities of machine-to-machine communication. [ .. ]".


Q3: Does pmacct stand for Promiscuous mode IP Accounting package ?
A: That is not entirely correct today, though it was originally. pmacct was born
as a libpcap-based project only. Over time it evolved to include NetFlow first,
-sFlow shortly afterwards and ULOG more recently - striving to maintain a consistent
-implementation over the set, unless technical considerations prevent that to happen
-for specific cases.
+sFlow shortly afterwards and NFLOG more recently - this is striving to maintain
+a consistent implementation over the set, unless technical considerations
+prevent that from happening for specific cases.


Q4: What are pmacct main features?
@@ -26,11 +32,11 @@ A: pmacct can collect, replicate and export network data. Collect in memory tabl
store persistently to RDBMS (MySQL, PostgreSQL, SQLite 3.x), noSQL databases
(key-value: BerkeleyDB 5.x via SQLite API or document-oriented: MongoDB) and
flat-files (csv, formatted, JSON output), publish to message exchanges via AMQP
-(ie. to insert in ElasticSearch, Cassandra or CouchDB). Export speaking sFlow v5,
-NetFlow v1/v5/v9 and IPFIX. pmacct is able to perform data aggregation, offering
-a rich set of primitives to choose from; it can also filter, sample, renormalize,
-tag and classify at L7. pmacct integrates a BGP daemon join routing visibility and
-network traffic information.
+(ie. to insert in ElasticSearch, Cassandra or CouchDB) and Kafka brokers. Export
+speaking sFlow v5, NetFlow v1/v5/v9 and IPFIX. pmacct is able to perform data
+aggregation, offering a rich set of primitives to choose from; it can also filter,
+sample, renormalize, tag and classify at L7. pmacct integrates a BGP daemon to join
+routing visibility and network traffic information.


Q5: Does any of the pmacct daemons logs to flat files?
@@ -41,7 +47,7 @@ A: Yes, but in a specific way. In other tools flat-files are typically used to l
By inception, pmacct always aimed to a single-stage approach instead, ie. offer data
reduction techniques and correlation tools to process network traffic data on the fly,
so to immediately offer the desired view(s) of the traffic. pmacct writes to files in
-text-format (either csv or formatted via its 'print' plugin, see QUICKSTART doc for
+text-format (json, csv or formatted via its 'print' plugin, see QUICKSTART doc for
further information) so to maximize potential integration with 3rd party applications
while keeping low the effort of customization.

@@ -66,10 +72,14 @@ A: CPU cycles are proportional to the amount of traffic (packets, flows, samples
ones. Kernel-to-userspace copies are critical and hence the first to be optimized;
for this purpose you may look at the following solutions:

-libpcap-mmap, http://public.lanl.gov/cpw/ : a libpcap version which supports mmap()
-on the linux kernel 2.[46].x . Applications, like pmacctd, need just to be linked
-against the mmap()ed version of libpcap to work correctly.

+Linux kernel has support for mmap() since 2.4. The kernel needs to
+be 2.6.34+ or compiled with option CONFIG_PACKET_MMAP. You need at
+least a 2.6.27 to get compatibility with 64bit. Starting from 3.10,
+you get 20% increase of performance and packet capture rate. You
+also need a matching libpcap library. mmap() support has been added
+in 1.0.0. To take advantage of the performance boost from Linux
+3.10, you need at least libpcap 1.5.0.
PF_RING, http://www.ntop.org/PF_RING.html : it's a new type of network socket that
improves the packet capture speed; it's available for Linux kernels 2.[46].x; it's
kernel based; has libpcap support for seamless integration with existing applications.
@@ -222,16 +232,7 @@ A: When IPv6 code is enabled, sfacctd and nfacctd will try to fire up an IPv6 so
commandline, use the following: 'nfacctd [ ... options ... ] -L 192.168.0.14'.


-Q15: 32 bit counters are not large enough to me, in fact i see them rolling over and
-returning inconsistent results. What to do ?
-A: pmacct >= 0.9.2 optionally supports 64 bits counters via a '--enable-64bit' switch
-while configuring the package for compilation. It will affect all counters: bytes,
-packets and flows. Use such switch only when required as 32 bits counters allow to
-save some memory. Usually, overflowing counters are recognizable by unexpected
-fluctuations in the counters value - caused, as said, by one or multiple rollovers.

-Q16: SQL table versions, what they are -- why and when do i need them ? Also, can i
+Q15: SQL table versions, what they are -- why and when do i need them ? Also, can i
customize SQL tables ?
A: pmacct tarball gets with so called 'default' tables (IP and BGP); they are built
by SQL scripts stored in the 'sql/' section of the tarball. Default tables enable
@@ -246,7 +247,7 @@ A: pmacct tarball gets with so called 'default' tables (IP and BGP); they are bu
key. You will then be responsible for building the custom schema and indexes.


-Q17: What is the best way to kill a running instance of pmacct avoiding data loss ?
+Q16: What is the best way to kill a running instance of pmacct avoiding data loss ?
A: Two ways. a) Simply kill a specific plugin that you don't need anymore: you will
have to identify it and use the 'kill -INT <process number> command; b) kill the
whole pmacct instance: you can either use the 'killall -INT <daemon name>' command
@@ -262,7 +263,7 @@ A: Two ways. a) Simply kill a specific plugin that you don't need anymore: you w
the SO_REUSEADDR socket option.


-Q18: I find interesting store network data in a SQL database. But i'm actually hitting
+Q17: I find it interesting to store network data in a SQL database. But i'm actually hitting
poor performances. Do you have any tips to improve/optimize things ?
A: Few hints are summed below in order to improve SQL database performances. They are
not really tailored to a specific SQL engine but rather of general applicability.
@@ -298,7 +299,7 @@ A: Few hints are summed below in order to improve SQL database performances. The
in case of unsecured shutdowns (remember power failure is a variable ...).


-Q19: I've configured the server hosting pmacct with my local timezone - which includes
+Q18: I've configured the server hosting pmacct with my local timezone - which includes
DST (Daylight Saving Time). Is this allright?
A: In general, it's a good rule to run the backend part of any accounting system as UTC;
pmacct uses the underlying system clock, especially in the SQL plugins to calculate
@@ -306,7 +307,7 @@ A: In general, it's good rule to run the backend part of any accounting system a
but not recommended.


-Q20: I'm using the 'tee' plugin with transparent mode set to true and keep receiving
+Q19: I'm using the 'tee' plugin with transparent mode set to true and keep receiving
"Can't bridge Address Families when in transparent mode. Exiting ..." messages,
why?

@@ -320,7 +321,7 @@ A: It means you can't receive packets on an IPv4 address and transparently repli
IP address (nfacctd_ip), if IPv4 is used.


-Q21: I've enabled IPv6 support in pmacct with --enable-ipv6. Even though the daemon
+Q20: I've enabled IPv6 support in pmacct with --enable-ipv6. Even though the daemon
binds to the "::" address, i don't receive NetFlow/IPFIX/sFlow/BGP data sent via
IPv4, why?


+ 6 - 9   INSTALL

@@ -1,5 +1,5 @@
pmacct (Promiscuous mode IP Accounting package)
-pmacct is Copyright (C) 2003-2009 by Paolo Lucente
+pmacct is Copyright (C) 2003-2016 by Paolo Lucente

QUICK INSTALLATION:

@@ -13,15 +13,12 @@ Read accepted options with "<name> -h" and run with "<name> [options]".

(*) x.y.z is the release version.

-The rationale behind pmacct compilation is that by default all features
-are turned off (IPv6, MySQL, PostgreSQL, SQLite, multi-threading, 64bit
-counters) making the package dependant only on a working libpcap. So if
-you need any of these don't forget to turn them on manually.
+The rationale behind compiling pmacct is that by default all features are
+turned off (ie. IPv6, MySQL, PostgreSQL, SQLite, etc.) making the package
+dependent only on a working libpcap. If you need any of these don't forget
+to turn them on manually (the full list of compile options can be checked
+via "./configure --help").

-Some features will get turned on by default in future once they will
-pick up sufficient ground (ie. IPv6) or will be sufficiently tested to
-be considered stable (ie. multi-threading).
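
As a concrete sketch, enabling a couple of optional features could look like
this (verify flag names with "./configure --help"):

  ./configure --enable-mysql --enable-ipv6
  make
  make install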

BASIC INSTALLATION:


+ 0 - 4   KNOWN-BUGS

@@ -1,4 +0,0 @@
-pmacct (Promiscuous mode IP Accounting package)
-pmacct is Copyright (C) 2003-2008 by Paolo Lucente
-
-None.

+ 3 - 0   Makefile.am

@@ -1 +1,4 @@
SUBDIRS = src
+ACLOCAL_AMFLAGS = -I m4
+EXTRA_DIST = include sql examples docs \
+	CONFIG-KEYS FAQS QUICKSTART README.md TOOLS UPGRADE

+ 663 - 205   Makefile.in

@@ -1,6 +1,9 @@
# Makefile.in generated automatically by automake 1.4-p6 from Makefile.am
# Makefile.in generated by automake 1.11.6 from Makefile.am.
# @configure_input@

# Copyright (C) 1994, 1995-8, 1999, 2001 Free Software Foundation, Inc.
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software
# Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
@@ -10,93 +13,319 @@
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.


SHELL = @SHELL@

srcdir = @srcdir@
top_srcdir = @top_srcdir@
@SET_MAKE@
VPATH = @srcdir@
prefix = @prefix@
exec_prefix = @exec_prefix@

bindir = @bindir@
sbindir = @sbindir@
libexecdir = @libexecdir@
datadir = @datadir@
sysconfdir = @sysconfdir@
sharedstatedir = @sharedstatedir@
localstatedir = @localstatedir@
libdir = @libdir@
infodir = @infodir@
mandir = @mandir@
includedir = @includedir@
oldincludedir = /usr/include

DESTDIR =

am__make_dryrun = \
{ \
am__dry=no; \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \
| grep '^AM OK$$' >/dev/null || am__dry=yes;; \
*) \
for am__flg in $$MAKEFLAGS; do \
case $$am__flg in \
*=*|--*) ;; \
*n*) am__dry=yes; break;; \
esac; \
done;; \
esac; \
test $$am__dry = yes; \
}
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@

top_builddir = .

ACLOCAL = @ACLOCAL@
AUTOCONF = @AUTOCONF@
AUTOMAKE = @AUTOMAKE@
AUTOHEADER = @AUTOHEADER@

INSTALL = @INSTALL@
INSTALL_PROGRAM = @INSTALL_PROGRAM@ $(AM_INSTALL_PROGRAM_FLAGS)
INSTALL_DATA = @INSTALL_DATA@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
transform = @program_transform_name@

pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
subdir = .
DIST_COMMON = $(am__configure_deps) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(top_srcdir)/configure AUTHORS COPYING \
ChangeLog INSTALL config.guess config.sub depcomp install-sh \
ltmain.sh missing
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/ac_linearize_path.m4 \
$(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \
$(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \
$(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno config.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-dvi-recursive install-exec-recursive \
install-html-recursive install-info-recursive \
install-pdf-recursive install-ps-recursive install-recursive \
installcheck-recursive installdirs-recursive pdf-recursive \
ps-recursive uninstall-recursive
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \
distclean-recursive maintainer-clean-recursive
AM_RECURSIVE_TARGETS = $(RECURSIVE_TARGETS:-recursive=) \
$(RECURSIVE_CLEAN_TARGETS:-recursive=) tags TAGS ctags CTAGS \
distdir dist dist-all distcheck
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
if test -d "$(distdir)"; then \
find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -rf "$(distdir)" \
|| { sleep 5 && rm -rf "$(distdir)"; }; \
else :; fi
am__relativize = \
dir0=`pwd`; \
sed_first='s,^\([^/]*\)/.*$$,\1,'; \
sed_rest='s,^[^/]*/*,,'; \
sed_last='s,^.*/\([^/]*\)$$,\1,'; \
sed_butlast='s,/*[^/]*$$,,'; \
while test -n "$$dir1"; do \
first=`echo "$$dir1" | sed -e "$$sed_first"`; \
if test "$$first" != "."; then \
if test "$$first" = ".."; then \
dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \
dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \
else \
first2=`echo "$$dir2" | sed -e "$$sed_first"`; \
if test "$$first2" = "$$first"; then \
dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \
else \
dir2="../$$dir2"; \
fi; \
dir0="$$dir0"/"$$first"; \
fi; \
fi; \
dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \
done; \
reldir="$$dir2"
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
distuninstallcheck_listfiles = find . -type f -print
am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
| sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
EXTRABIN = @EXTRABIN@
FGREP = @FGREP@
GEOIPV2_CFLAGS = @GEOIPV2_CFLAGS@
GEOIPV2_LIBS = @GEOIPV2_LIBS@
GEOIP_CFLAGS = @GEOIP_CFLAGS@
GEOIP_LIBS = @GEOIP_LIBS@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
JANSSON_CFLAGS = @JANSSON_CFLAGS@
JANSSON_LIBS = @JANSSON_LIBS@
KAFKA_CFLAGS = @KAFKA_CFLAGS@
KAFKA_LIBS = @KAFKA_LIBS@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAKE = @MAKE@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
MONGODB_CFLAGS = @MONGODB_CFLAGS@
MONGODB_LIBS = @MONGODB_LIBS@
MYSQL_CFLAGS = @MYSQL_CFLAGS@
MYSQL_CONFIG = @MYSQL_CONFIG@
MYSQL_LIBS = @MYSQL_LIBS@
NFLOG_CFLAGS = @NFLOG_CFLAGS@
NFLOG_LIBS = @NFLOG_LIBS@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PLUGINS = @PLUGINS@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PGSQL_CFLAGS = @PGSQL_CFLAGS@
PGSQL_LIBS = @PGSQL_LIBS@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RABBITMQ_CFLAGS = @RABBITMQ_CFLAGS@
RABBITMQ_LIBS = @RABBITMQ_LIBS@
RANLIB = @RANLIB@
SERVER_LIBS = @SERVER_LIBS@
THREADS_SOURCES = @THREADS_SOURCES@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
SQLITE3_CFLAGS = @SQLITE3_CFLAGS@
SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
VERSION = @VERSION@

abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
SUBDIRS = src
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_CLEAN_FILES =
DIST_COMMON = README AUTHORS COPYING ChangeLog INSTALL Makefile.am \
Makefile.in NEWS TODO acinclude.m4 aclocal.m4 configure configure.in \
install-sh missing mkinstalldirs
ACLOCAL_AMFLAGS = -I m4
EXTRA_DIST = include sql examples docs \
CONFIG-KEYS FAQS QUICKSTART README.md TOOLS UPGRADE

all: all-recursive

DISTFILES = $(DIST_COMMON) $(SOURCES) $(HEADERS) $(TEXINFOS) $(EXTRA_DIST)

TAR = tar
GZIP_ENV = --best
all: all-redirect
.SUFFIXES:
$(srcdir)/Makefile.in: Makefile.am $(top_srcdir)/configure.in $(ACLOCAL_M4)
cd $(top_srcdir) && $(AUTOMAKE) --gnu --include-deps Makefile
am--refresh: Makefile
@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
$(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --foreign Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
esac;

$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck

Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
cd $(top_builddir) \
&& CONFIG_FILES=$@ CONFIG_HEADERS= $(SHELL) ./config.status
$(top_srcdir)/configure: $(am__configure_deps)
$(am__cd) $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
$(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
$(am__aclocal_m4_deps):

$(ACLOCAL_M4): configure.in acinclude.m4
cd $(srcdir) && $(ACLOCAL)
mostlyclean-libtool:
-rm -f *.lo

config.status: $(srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(srcdir)/configure: $(srcdir)/configure.in $(ACLOCAL_M4) $(CONFIGURE_DEPENDENCIES)
cd $(srcdir) && $(AUTOCONF)
clean-libtool:
-rm -rf .libs _libs

distclean-libtool:
-rm -f libtool config.lt

# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
@@ -104,13 +333,14 @@ $(srcdir)/configure: $(srcdir)/configure.in $(ACLOCAL_M4) $(CONFIGURE_DEPENDENCI
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.

@SET_MAKE@

all-recursive install-data-recursive install-exec-recursive \
installdirs-recursive install-recursive uninstall-recursive \
check-recursive installcheck-recursive info-recursive dvi-recursive:
@set fnord $(MAKEFLAGS); amf=$$2; \
$(RECURSIVE_TARGETS):
@fail= failcom='exit 1'; \
for f in x $$MAKEFLAGS; do \
case $$f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
esac; \
done; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
@@ -121,22 +351,32 @@ check-recursive installcheck-recursive info-recursive dvi-recursive:
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| eval $$failcom; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"

mostlyclean-recursive clean-recursive distclean-recursive \
maintainer-clean-recursive:
@set fnord $(MAKEFLAGS); amf=$$2; \
$(RECURSIVE_CLEAN_TARGETS):
@fail= failcom='exit 1'; \
for f in x $$MAKEFLAGS; do \
case $$f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
esac; \
done; \
dot_seen=no; \
rev=''; list='$(SUBDIRS)'; for subdir in $$list; do \
rev="$$subdir $$rev"; \
test "$$subdir" != "." || dot_seen=yes; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
test "$$dot_seen" = "no" && rev=". $$rev"; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
@@ -145,175 +385,393 @@ maintainer-clean-recursive:
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| eval $$failcom; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
test "$$subdir" = . || ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done

ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
mkid -fID $$unique
tags: TAGS

ID: $(HEADERS) $(SOURCES) $(LISP)
list='$(SOURCES) $(HEADERS)'; \
unique=`for i in $$list; do echo $$i; done | \
awk ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
here=`pwd` && cd $(srcdir) \
&& mkid -f$$here/ID $$unique $(LISP)

TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) $(LISP)
tags=; \
TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
set x; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -f $$subdir/TAGS && tags="$$tags -i $$here/$$subdir/TAGS"; \
fi; \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS)'; \
unique=`for i in $$list; do echo $$i; done | \
awk ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(ETAGS_ARGS)$$unique$(LISP)$$tags" \
|| (cd $(srcdir) && etags $(ETAGS_ARGS) $$tags $$unique $(LISP) -o $$here/TAGS)

mostlyclean-tags:

clean-tags:
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique

GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"

distclean-tags:
-rm -f TAGS ID
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags

maintainer-clean-tags:

distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
distdir: $(DISTFILES)
$(am__remove_distdir)
test -d "$(distdir)" || mkdir "$(distdir)"
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
@list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
$(am__make_dryrun) \
|| test -d "$(distdir)/$$subdir" \
|| $(MKDIR_P) "$(distdir)/$$subdir" \
|| exit 1; \
dir1=$$subdir; dir2="$(distdir)/$$subdir"; \
$(am__relativize); \
new_distdir=$$reldir; \
dir1=$$subdir; dir2="$(top_distdir)"; \
$(am__relativize); \
new_top_distdir=$$reldir; \
echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \
echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \
($(am__cd) $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$new_top_distdir" \
distdir="$$new_distdir" \
am__remove_distdir=: \
am__skip_length_check=: \
am__skip_mode_fix=: \
distdir) \
|| exit 1; \
fi; \
done
-test -n "$(am__skip_mode_fix)" \
|| find "$(distdir)" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r "$(distdir)"
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
$(am__remove_distdir)

dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
$(am__remove_distdir)

dist-lzip: distdir
tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
$(am__remove_distdir)

dist-lzma: distdir
tardir=$(distdir) && $(am__tar) | lzma -9 -c >$(distdir).tar.lzma
$(am__remove_distdir)

dist-xz: distdir
tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
$(am__remove_distdir)

dist-tarZ: distdir
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__remove_distdir)

dist-shar: distdir
shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
$(am__remove_distdir)

dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__remove_distdir)

dist dist-all: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
$(am__remove_distdir)

# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
-rm -rf $(distdir)
GZIP=$(GZIP_ENV) $(TAR) zxf $(distdir).tar.gz
mkdir $(distdir)/=build
mkdir $(distdir)/=inst
dc_install_base=`cd $(distdir)/=inst && pwd`; \
cd $(distdir)/=build \
&& ../configure --srcdir=.. --prefix=$$dc_install_base \
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.lzma*) \
lzma -dc $(distdir).tar.lzma | $(am__untar) ;;\
*.tar.lz*) \
lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
*.tar.xz*) \
xz -dc $(distdir).tar.xz | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
GZIP=$(GZIP_ENV) gzip -dc $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
esac
chmod -R a-w $(distdir); chmod u+w $(distdir)
mkdir $(distdir)/_build
mkdir $(distdir)/_inst
chmod a-w $(distdir)
test -d $(distdir)/_build || exit 0; \
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& am__cwd=`pwd` \
&& $(am__cd) $(distdir)/_build \
&& ../configure --srcdir=.. --prefix="$$dc_install_base" \
$(AM_DISTCHECK_CONFIGURE_FLAGS) \
$(DISTCHECK_CONFIGURE_FLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) dvi \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) dist
-rm -rf $(distdir)
@banner="$(distdir).tar.gz is ready for distribution"; \
dashes=`echo "$$banner" | sed s/./=/g`; \
echo "$$dashes"; \
echo "$$banner"; \
echo "$$dashes"
dist: distdir
-chmod -R a+r $(distdir)
GZIP=$(GZIP_ENV) $(TAR) chozf $(distdir).tar.gz $(distdir)
-rm -rf $(distdir)
dist-all: distdir
-chmod -R a+r $(distdir)
GZIP=$(GZIP_ENV) $(TAR) chozf $(distdir).tar.gz $(distdir)
-rm -rf $(distdir)
distdir: $(DISTFILES)
-rm -rf $(distdir)
mkdir $(distdir)
-chmod 777 $(distdir)
@for file in $(DISTFILES); do \
d=$(srcdir); \
if test -d $$d/$$file; then \
cp -pr $$d/$$file $(distdir)/$$file; \
else \
test -f $(distdir)/$$file \
|| ln $$d/$$file $(distdir)/$$file 2> /dev/null \
|| cp -p $$d/$$file $(distdir)/$$file || :; \
fi; \
done
for subdir in $(SUBDIRS); do \
if test "$$subdir" = .; then :; else \
test -d $(distdir)/$$subdir \
|| mkdir $(distdir)/$$subdir \
|| exit 1; \
chmod 777 $(distdir)/$$subdir; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir=../$(distdir) distdir=../$(distdir)/$$subdir distdir) \
|| exit 1; \
fi; \
done
info-am:
info: info-recursive
dvi-am:
dvi: dvi-recursive
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
&& cd "$$am__cwd" \
|| exit 1
$(am__remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
distuninstallcheck:
@test -n '$(distuninstallcheck_dir)' || { \
echo 'ERROR: trying to run $@ with an empty' \
'$$(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
$(am__cd) '$(distuninstallcheck_dir)' || { \
echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-recursive
installcheck-am:
installcheck: installcheck-recursive
install-exec-am:
all-am: Makefile
installdirs: installdirs-recursive
installdirs-am:
install: install-recursive
install-exec: install-exec-recursive

install-data-am:
install-data: install-data-recursive
uninstall: uninstall-recursive

install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
install: install-recursive
uninstall-am:
uninstall: uninstall-recursive
all-am: Makefile
all-redirect: all-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) AM_INSTALL_PROGRAM_FLAGS=-s install
installdirs: installdirs-recursive
installdirs-am:


installcheck: installcheck-recursive
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:

clean-generic:

distclean-generic:
-rm -f Makefile $(CONFIG_CLEAN_FILES)
-rm -f config.cache config.log stamp-h stamp-h[0-9]*
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)

maintainer-clean-generic:
mostlyclean-am: mostlyclean-tags mostlyclean-generic
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive

mostlyclean: mostlyclean-recursive
clean-am: clean-generic clean-libtool mostlyclean-am

clean-am: clean-tags clean-generic mostlyclean-am
distclean: distclean-recursive
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f Makefile
distclean-am: clean-am distclean-generic distclean-libtool \
distclean-tags

clean: clean-recursive
dvi: dvi-recursive

distclean-am: distclean-tags distclean-generic clean-am
dvi-am:

distclean: distclean-recursive
-rm -f config.status
html: html-recursive

maintainer-clean-am: maintainer-clean-tags maintainer-clean-generic \
distclean-am
@echo "This command is intended for maintainers to use;"
@echo "it deletes files that may require special tools to rebuild."
html-am:

info: info-recursive

info-am:

install-data-am:

install-dvi: install-dvi-recursive

install-dvi-am:

install-exec-am:

install-html: install-html-recursive

install-html-am:

install-info: install-info-recursive

install-info-am:

install-man:

install-pdf: install-pdf-recursive

install-pdf-am:

install-ps: install-ps-recursive

install-ps-am:

installcheck-am:

maintainer-clean: maintainer-clean-recursive
-rm -f config.status

.PHONY: install-data-recursive uninstall-data-recursive \
install-exec-recursive uninstall-exec-recursive installdirs-recursive \
uninstalldirs-recursive all-recursive check-recursive \
installcheck-recursive info-recursive dvi-recursive \
mostlyclean-recursive distclean-recursive clean-recursive \
maintainer-clean-recursive tags tags-recursive mostlyclean-tags \
distclean-tags clean-tags maintainer-clean-tags distdir info-am info \
dvi-am dvi check check-am installcheck-am installcheck install-exec-am \
install-exec install-data-am install-data install-am install \
uninstall-am uninstall all-redirect all-am all installdirs-am \
installdirs mostlyclean-generic distclean-generic clean-generic \
maintainer-clean-generic clean mostlyclean distclean maintainer-clean
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic

mostlyclean: mostlyclean-recursive

mostlyclean-am: mostlyclean-generic mostlyclean-libtool

pdf: pdf-recursive

pdf-am:

ps: ps-recursive

ps-am:

uninstall-am:

.MAKE: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) ctags-recursive \
install-am install-strip tags-recursive

.PHONY: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) CTAGS GTAGS \
all all-am am--refresh check check-am clean clean-generic \
clean-libtool ctags ctags-recursive dist dist-all dist-bzip2 \
dist-gzip dist-lzip dist-lzma dist-shar dist-tarZ dist-xz \
dist-zip distcheck distclean distclean-generic \
distclean-libtool distclean-tags distcleancheck distdir \
distuninstallcheck dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am install-dvi \
install-dvi-am install-exec install-exec-am install-html \
install-html-am install-info install-info-am install-man \
install-pdf install-pdf-am install-ps install-ps-am \
install-strip installcheck installcheck-am installdirs \
installdirs-am maintainer-clean maintainer-clean-generic \
mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \
ps ps-am tags tags-recursive uninstall uninstall-am


# Tell versions [3.59,3.63) of GNU make to not export all variables.

+ 0
- 3
NEWS

@@ -1,3 +0,0 @@
NEWS:
see ChangeLog file


+ 330
- 131
QUICKSTART

@@ -1,5 +1,5 @@
pmacct (Promiscuous mode IP Accounting package)
pmacct is Copyright (C) 2003-2015 by Paolo Lucente
pmacct is Copyright (C) 2003-2016 by Paolo Lucente


TABLE OF CONTENTS:
@@ -8,21 +8,23 @@ II. Configuring pmacct for compilation and installing
III. Brief SQL (MySQL, PostgreSQL, SQLite 3.x) and noSQL (MongoDB) setup examples
IV. Running the libpcap-based daemon (pmacctd)
V. Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
VI. Running the ULOG-based daemon (uacctd)
VI. Running the NFLOG-based daemon (uacctd)
VII. Running the pmacct client (pmacct)
VIII. Running the RabbitMQ/AMQP plugin
IX. Internal buffering and queueing
X. Quickstart guide to packet/stream classifiers
XI. Quickstart guide to setup a NetFlow agent/probe
XII. Quickstart guide to setup a sFlow agent/probe
XIII. Quickstart guide to setup the BGP daemon
XIV. Quickstart guide to setup a NetFlow/sFlow replicator
XV. Quickstart guide to setup the IS-IS daemon
XVI. Quickstart guide to setup the BMP daemon
XVII. Running the print plugin to write to flat-files
XVIII. Quickstart guide to setup GeoIP lookups
XIX. Using pmacct as traffic/event logger
XX. Notes on how to troubleshoot
IX. Running the Kafka plugin
X. Internal buffering and queueing
XI. Quickstart guide to packet/stream classifiers
XII. Quickstart guide to setup a NetFlow agent/probe
XIII. Quickstart guide to setup a sFlow agent/probe
XIV. Quickstart guide to setup the BGP daemon
XV. Quickstart guide to setup a NetFlow/sFlow replicator
XVI. Quickstart guide to setup the IS-IS daemon
XVII. Quickstart guide to setup the BMP daemon
XVIII. Quickstart guide to setup Streamed Telemetry accounting
XIX. Running the print plugin to write to flat-files
XX. Quickstart guide to setup GeoIP lookups
XXI. Using pmacct as traffic/event logger
XXII. Miscellaneous notes and troubleshooting tips


I. Plugins included with pmacct distribution
@@ -32,7 +34,8 @@ plugins. Here is a list of plugins included in the official pmacct distribution:
'memory': data is stored in a memory table and can be fetched via the pmacct
command-line client tool, 'pmacct'. This plugin also makes it easy to
inject data into 3rd party tools like GNUplot, RRDtool or a Net-SNMP
server.
server. The plugin is good for prototype solutions and smaller-scale
environments.
'mysql': a working MySQL installation can be used for data storage.
'pgsql': a working PostgreSQL installation can be used for data storage.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the SQLite
@@ -44,15 +47,25 @@ plugins. Here is a list of plugins included in the official pmacct distribution:
'amqp': data is sent to a RabbitMQ message exchange, running AMQP protocol,
for delivery to consumer applications or tools. Popular consumers
are ElasticSearch, Cassandra and CouchDB.
'kafka': data is sent to a Kafka broker for delivery to consumer applications
or tools.
'tee': applies to nfacctd and sfacctd daemons only. It's a featureful packet
replicator for NetFlow/IPFIX/sFlow data.
'nfprobe': applies to pmacctd and uacctd daemons only. Exports collected data via
NetFlow v5/v9 or IPFIX.
'sfprobe': applies to pmacctd and uacctd daemons only. Exports collected data via
sFlow v5.


II. Configuring pmacct for compilation and installing
The simplest way to configure the package for compilation is to let the configure
script to probe default headers and libraries for you. Switches you are likely to
want enabled are already set so, ie. 64 bits counters and multi-threading (pre-
requisite for the BGP and IGP daemon codes). SQL plugins and IPv6 support are by
default disabled instead. A few examples will follow; as usual to get the list of
available switches, you can use the following command-line:
script probe default headers and libraries for you. A first round of guessing
is done via pkg-config; then, for some libraries, "typical" default locations
are checked, ie. /usr/local/lib. Switches you are likely to want enabled are
already set, ie. 64-bit counters and multi-threading (a pre-requisite for
the BGP, BMP and IGP daemon codes). SQL plugins and IPv6 support are instead
disabled by default. A few examples follow; as usual, to get the list of
available switches, you can use the following command-line:

shell> ./configure --help

@@ -65,10 +78,19 @@ Examples on how to enable the support for (1) MySQL, (2) PostgreSQL, (3) SQLite,
(4) shell> ./configure --enable-mongodb
(5) shell> ./configure --enable-mysql --enable-pgsql

Then to compile and install simply:
Then, to compile and install, simply type:

shell> make; make install

But, for example, should you want to compile pmacct with PostgreSQL support and
have installed PostgreSQL in /usr/local/postgresql and pkg-config is unable to
help, you can supply this non-default location as follows (assuming you are
running the bash shell):

shell> export PGSQL_LIBS="-L/usr/local/postgresql/lib -lpq"
shell> export PGSQL_CFLAGS="-I/usr/local/postgresql/include"
shell> ./configure --enable-pgsql

Once daemons are installed you can check:
* how to instrument each daemon via its usage help page:
shell> pmacctd -h
@@ -196,8 +218,7 @@ CREATE DATABASE pmacct;

USE pmacct;

DROP TABLE IS EXISTS acct;

DROP TABLE IF EXISTS acct;
CREATE TABLE acct (
vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
@@ -257,8 +278,9 @@ mongo_table: pmacct.acct
...

MongoDB release >= 2.2.0 is recommended. Installation of the MongoDB C driver 0.8,
also referred as legacy, is required. Version 0.9 of the driver is not supported
yet. The driver can be downloaded from http://api.mongodb.org/c/ .
also referred to as legacy, is required. Version 0.9 of the driver and later (also
referred to as current) are not supported (yet). The legacy driver can be downloaded
at the following URL: https://github.com/mongodb/mongo-c-driver-legacy .

IV. Running the libpcap-based daemon (pmacctd)
@@ -441,24 +463,25 @@ name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str

VI. Running the ULOG-based daemon (uacctd)
VI. Running the NFLOG-based daemon (uacctd)
All examples about pmacctd are also valid for uacctd with the exception of directives
that apply exclusively to libpcap. If you've skipped examples in section 'IV', please
read them before continuing. All configuration keys available are in the CONFIG-KEYS
document.

The Linux ULOG infrastructure requires a couple parameters in order to work properly.
These are the ULOG multicast group (uacctd_group) to which captured packets have to be
sent to and the Netlink buffer size (uacctd_nl_size). The default buffer settings (4KB)
typically works OK for small environments. If the uacctd user is not already familiar
with the iptables ULOG target, it is adviceable to start with a tutorial, like the one
at the following URL ("6.5.15. ULOG target" section):
The daemon depends on the package libnetfilter-log-dev (in Debian/Ubuntu, or its
equivalent in the preferred Linux distribution). The Linux NFLOG infrastructure
requires a couple of parameters in order to work properly: the NFLOG multicast
group (uacctd_group) to which captured packets have to be sent and the Netlink
buffer size (uacctd_nl_size). The default buffer setting (128KB) typically works
OK for small environments. The traffic is captured with an iptables rule, for
example in one of the following ways:

http://www.faqs.org/docs/iptables/targets.html
* iptables -t mangle -I POSTROUTING -j NFLOG --nflog-group 5
* iptables -t raw -I PREROUTING -j NFLOG --nflog-group 5

Apart from determining how and what traffic to capture with iptables, which is a topic
outside the scope of this document, the most relevant point is the "--ulog-nlgroup"
iptables setting has to match with the "uacctd_group" uacctd one.
outside the scope of this document, the most relevant point is that the "--nflog-nlgroup"
iptables setting has to match the "uacctd_group" one in uacctd.

A couple of examples follow:

@@ -466,7 +489,7 @@ Run uacctd reading configuration from a specified file.
shell> uacctd -f uacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on ULOG multicast group #5. Let's make
traffic); write to a local MySQL server. Listen on NFLOG multicast group #5. Let's make
pmacct divide data into historical time-bins of 5 minutes. Let's disable UPDATE queries
and hence align refresh time with the timeslot length. Finally, let's make use of a SQL
table, version 4:
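
A minimal sketch of such a configuration (directive values are illustrative, not
prescriptive; each key is documented in CONFIG-KEYS) could be:

daemonize: true
plugins: mysql
aggregate: sum_host
uacctd_group: 5
sql_table_version: 4
sql_history: 5m
sql_history_roundoff: m
sql_refresh_time: 300
sql_dont_try_update: true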
@@ -545,14 +568,20 @@ party applications. Requirements to use the plugin are:
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Once these elements are installed, pmacct can be configured for compilation as
follows (assumptions: Jansson is installed in /usr/local/lib and RabbitMQ server
and rabbitmq-c are installed in /usr/local/rabbitmq as base path):
Once these elements are installed, pmacct can be configured for compilation. pmacct
makes use of pkg-config to find library and header locations and checks some
"typical" default locations, ie. /usr/local/lib and /usr/local/include. So all
you need to do is:

./configure --enable-rabbitmq --enable-jansson

./configure --enable-rabbitmq \
--with-rabbitmq-libs=/usr/local/rabbitmq/lib/ \
--with-rabbitmq-includes=/usr/local/rabbitmq/include/ \
--enable-jansson
But, for example, should you have installed RabbitMQ in /usr/local/rabbitmq and
pkg-config is unable to help, you can supply this non-default location as follows
(assuming you are running the bash shell):

export RABBITMQ_LIBS="-L/usr/local/rabbitmq/lib -lrabbitmq"
export RABBITMQ_CFLAGS="-I/usr/local/rabbitmq/include"
./configure --enable-rabbitmq --enable-jansson

Then "make; make install" as usual. Following a configuration snippet showing a
basic RabbitMQ/AMQP plugin configuration (assumes: RabbitMQ server is available
@@ -565,8 +594,8 @@ aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
amqp_history: 5m
amqp_history_roundoff: m
! ..

pmacct will only declare a message exchange and provide a routing key, ie. it
@@ -583,7 +612,56 @@ Improvements to the basic Python script provided and/or examples in different
languages are very welcome at this stage.
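
For reference, the core of such a consumer can be sketched with the pika client
as follows (assumptions: a broker on localhost, the 'pmacct' exchange and 'acct'
routing key from the snippet above, and the default 'direct' exchange type):

import pika  # AMQP client library for Python

# Bind a throw-away queue to the exchange/routing key used by the amqp plugin;
# 'direct' is assumed as the exchange type here
conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
chan = conn.channel()
chan.exchange_declare(exchange='pmacct', exchange_type='direct')
queue = chan.queue_declare(queue='', exclusive=True).method.queue
chan.queue_bind(exchange='pmacct', queue=queue, routing_key='acct')

def on_message(ch, method, properties, body):
    print(body)  # each message body is a JSON-formatted aggregate

chan.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)
chan.start_consuming()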


IX. Internal buffering and queueing
IX. Running the Kafka plugin
Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
Its qualities: fast, scalable, durable and distributed by design. The pmacct
Kafka plugin is designed to send aggregated network traffic data, in JSON format,
through a Kafka broker to 3rd party applications. Requirements to use the plugin
are:

* A working Kafka broker (and ZooKeeper server): http://kafka.apache.org/
* Librdkafka: https://github.com/edenhill/librdkafka/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Once these elements are installed, pmacct can be configured for compilation. pmacct
makes use of pkg-config to find library and header locations and checks some
"typical" default locations, ie. /usr/local/lib and /usr/local/include. So all
you need to do is:

./configure --enable-kafka --enable-jansson

But, for example, should you have installed Kafka in /usr/local/kafka and
pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):

export KAFKA_LIBS="-L/usr/local/kafka/lib -lrdkafka"
export KAFKA_CFLAGS="-I/usr/local/kafka/include"
./configure --enable-kafka --enable-jansson

Then "make; make install" as usual. Following a configuration snippet showing a
basic Kafka plugin configuration (assumes: Kafka broker is available at 127.0.0.1
on port 9092; look all configurable directives up in the CONFIG-KEYS document):

! ..
plugins: kafka
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
kafka_topic: pmacct.acct
kafka_refresh_time: 300
kafka_history: 5m
kafka_history_roundoff: m
! ..

A basic consumer script, in Python, is provided as a sample; it declares a group_id,
binds it to the topic and shows consumed data on the screen. The script is located
in the pmacct default distribution tarball at examples/kafka/kafka_consumer.py and
requires the python-kafka Python module to be installed. Should this not be
available, the following page explains how to get it installed:

http://kafka-python.readthedocs.org/
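
For reference, the heart of such a consumer boils down to a few lines with the
python-kafka module (the broker address matches the snippet above; the group_id
is illustrative):

from kafka import KafkaConsumer

# Join a consumer group and bind to the topic used by the kafka plugin
consumer = KafkaConsumer('pmacct.acct',
                         group_id='pmacct-quickstart',
                         bootstrap_servers=['127.0.0.1:9092'])

for message in consumer:
    print(message.value)  # each message is a JSON-formatted aggregate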


X. Internal buffering and queueing
Two options are provided for internal buffering and queueing: 1) a home-grown circular
queue implementation available since day one of pmacct (configured via plugin_pipe_size
and documented in docs/INTERNALS) and 2) from release 1.5.2, use a RabbitMQ broker for
@@ -625,7 +703,7 @@ plugin_pipe_amqp_routing_key[blabla]: blabla-print
plugin_pipe_amqp_retry[blabla]: 60
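
For comparison, sizing the home-grown circular queue is done directly via buffer
directives; an illustrative (not prescriptive) sizing could be:

plugin_buffer_size: 10240
plugin_pipe_size: 10240000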


X. Quickstart guide to packet classifiers
XI. Quickstart guide to packet classifiers
pmacct 0.10.0 sees the introduction of a packet classification feature. The approach
is fully extensible: classification patterns are based on regular expressions (RE),
must be placed into a common directory and have a .pat file extension. Patterns for
@@ -692,7 +770,7 @@ e) Ok, we are done! Fire the pmacct collector daemon:
values; take the time to read about them in the CONFIG-KEYS document.


XI. Quickstart guide to setup a NetFlow agent/probe
XII. Quickstart guide to setup a NetFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities, through both
NetFlow and sFlow protocols. While NetFlow v5 is fixed by nature, v9 adds flexibility
by allowing the transport of custom information (for example, L7-classification tags to a
@@ -829,7 +907,7 @@ d) Ok, we are done ! Now fire both daemons:
shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf


XII. Quickstart guide to setup a sFlow agent/probe
XIII. Quickstart guide to setup a sFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities via sFlow; this
protocol is quite different from NetFlow: in short, it works by exporting portions of
sampled packets rather than building uni-directional flows as NetFlow does;
@@ -853,7 +931,7 @@ sfprobe_receiver: 1.2.3.4:6343
!...


XIII. Quickstart guide to setup the BGP daemon
XIV. Quickstart guide to setup the BGP daemon
The BGP daemon is run as a thread within the collector core process. The idea is
to receive data-plane information, ie. via NetFlow, sFlow, etc., and control
plane information, ie. full routing tables via BGP, from edge routers. Per-peer
@@ -878,7 +956,7 @@ med, etc., correctly populated while querying the memory table:
bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
nfacctd_as_new: bgp
nfacctd_as: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as
@@ -890,7 +968,7 @@ hence will never try to re-establish a fallen peering session.
For debugging purposes related to the BGP feed(s), bgp_daemon_msglog_* configuration
directives can be enabled in order to log BGP messaging.

XIIIa. Limiting AS-PATH and BGP community attributes length
XIVa. Limiting AS-PATH and BGP community attributes length
AS-PATH and BGP communities can by nature easily get long when represented as strings.
Sometimes only a small portion of their content is relevant to the accounting task and
hence a filtering layer was developed to take special care of these attributes. The
@@ -906,7 +984,7 @@ bgp_stdcomm_pattern: 12345:
A detailed description of these configuration directives is, as usual, included in
the CONFIG-KEYS document.

XIIIb. The source peer AS case
XIVb. The source peer AS case
The peer_src_as primitive adds useful insight in understanding where traffic enters
the observed routing domain; but asymmetric routing impacts accuracy delivered by
devices configured with either NetFlow or sFlow and the peer-as feature (as it only
@@ -942,7 +1020,7 @@ NOTES:
highlighted in this paragraph apply to these as well. Check CONFIG-KEYS out for the
src_[med|local_pref|as_path|std_comm|ext_comm]_[type|map] configuration directives.

XIIIc. Tracking entities on the own IP address space
XIVc. Tracking entities on one's own IP address space
It might happen that not all entities attached to the service provider network are
running BGP but rather they get IP prefixes redistributed into iBGP (different
routing protocols, statics, directly connected, etc.). These can be private IP
with a BGP standard community, this directive allows mapping the community to a
peer AS/origin AS couple as per the following example: XXXXX:YYYYY => Peer-AS=XXXXX,
Origin-AS=YYYYY.
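
For example, assuming communities of the form 65000:YYYYY carry this information,
a hypothetical mapping via the bgp_stdcomm_pattern_to_asn directive (see
CONFIG-KEYS) could look like:

bgp_stdcomm_pattern_to_asn: 65000: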

XIIId. Preparing the router to BGP peer
XIVd. Preparing the router to BGP peer
Once the collector is configured and started up, the remaining step is to let routers
export traffic samples to the collector and BGP peer with it. Configuring the same
source IP address across both NetFlow and BGP features allows the pmacct collector to
@@ -1007,7 +1085,7 @@ protocols bgp {
}
}

XIIIe. A working configuration example writing to a MySQL database
XIVe. A working configuration example writing to a MySQL database
The following setup is a realistic example for collecting an external traffic
matrix to the ASN level (ie. no IP prefixes collected) for an MPLS-enabled IP
carrier network. Samples are aggregated in a way which is suitable to get an
@@ -1039,7 +1117,7 @@ bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
bgp_aspath_radius: 3
bgp_follow_default: 1
nfacctd_as_new: bgp
nfacctd_as: bgp
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

@@ -1099,7 +1177,7 @@ Although table names are fixed in this example, ie. acct_bgp_5mins, it can be
highly advisable in real life to run dynamic SQL tables, ie. table names that
include time-related variables (see sql_table, sql_table_schema in CONFIG-KEYS).

XIIIf. Exporting routing tables and/or BGP messaging to files.
XIVf. Exporting routing tables and/or BGP messaging to files.
pmacct 1.5.0 introduces two new features: a) export/dump routing tables for
all BGP peers at regular time intervals and b) log BGP messaging, real-time,
with each of the BGP peers. Both features are useful for troubleshooting and
@@ -1111,10 +1189,19 @@ hijacks, etc.

Both features export data formatted as JSON messages, hence compiling pmacct
against libjansson is a requirement. Messages can be written to plain-text
files or pointed at AMQP exchanges (in which case compiling against RabbitMQ
is required; read more about this in the "Running the RabbitMQ/AMQP plugin"
section of this document):
files or pointed at AMQP or Kafka brokers (in which case compiling against
the RabbitMQ or Kafka libraries is required; read more in the "Running the
RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this
document, respectively):

shell> ./configure --enable-jansson

But, for example, should you have installed Jansson in /usr/local/jansson and
pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):

shell> export JANSSON_LIBS="-L/usr/local/jansson/lib -ljansson"
shell> export JANSSON_CFLAGS="-I/usr/local/jansson/include"
shell> ./configure --enable-jansson

A basic dump of BGP tables at regular intervals (60 secs) to plain-text files,
@@ -1128,7 +1215,7 @@ be rotated by an external tool/script) is configured as follows:

bgp_daemon_msglog_file: /path/to/spool/bgp/bgp-$peer_src_ip.log

XIIIg. BGP daemon implementation concluding notes
XIVg. BGP daemon implementation concluding notes
The implementation supports 4-bytes ASN, IPv4, IPv6, VPNv4 and VPNv6 (MP-BGP)
address families and ADD-PATH (draft-ietf-idr-add-paths); both IPv4 and IPv6
BGP sessions are supported. When storing data via SQL, BGP primitives can be
@@ -1141,7 +1228,7 @@ TCP MD5 signature for BGP messages is also supported. For a review of all knobs
and features see the CONFIG-KEYS document.


XIV. Quickstart guide to setup a NetFlow/sFlow replicator
XV. Quickstart guide to setup a NetFlow/sFlow replicator
A 'tee' plugin which is meant, in basic terms, to replicate NetFlow/sFlow data
to remote collectors. The plugin can also act transparently by preserving the
original IP address of the datagrams. Setting up a replicator is very easy. All
@@ -1168,9 +1255,9 @@ An example of content of a tee_receivers map, ie. /path/to/tee_receivers_a.lst,
is as follows ('id' is the pool ID and 'ip' a comma-separated list of receivers
for that pool):

id=1 ip=X.X.X.X:2100
id=1 ip=W.W.W.W:2100
id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100
! id=1 ip=X.X.X.X:2100 tag=0
! id=1 ip=W.W.W.W:2100 tag=0
! id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100 tag=100
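
A minimal replicator configuration referencing such a map could be sketched as
follows (paths and the instance name 'a' are illustrative; the pre_tag_map line
only matters for the selective teeing discussed next):

plugins: tee[a]
tee_receivers[a]: /path/to/tee_receivers_a.lst
tee_transparent: true
! pre_tag_map: /path/to/pretag.map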

Selective teeing allows filtering which pool of receivers has to receive which
@@ -1200,7 +1287,7 @@ and replicating data, two separate instances must be used (intuitively with the
replicator instance feeding the collector one).


XV. Quickstart guide to setup the IS-IS daemon
XVI. Quickstart guide to setup the IS-IS daemon
pmacct 0.14.0 integrates an IS-IS daemon into the IP accounting collectors part
of the toolset. This daemon is run as a thread within the collector core process.
The idea is to receive data-plane information, ie. via NetFlow, sFlow, etc., and
@@ -1212,9 +1299,9 @@ might be configured to get summarized in BGP while crossing cluster boundaries.
A pre-requisite for the use of the IS-IS daemon is that the pmacct package is
configured for compilation with threads; this line will do it:

./configure --enable-threads
shell> ./configure

XVa. Preparing the collector for the L2 P2P IS-IS neighborship
XVIa. Preparing the collector for the L2 P2P IS-IS neighborship
It's assumed the collector sits on an Ethernet segment and has no direct link
(L2) connectivity to an IS-IS speaker, hence the need to establish a GRE tunnel.
While extensive literature and OS-specific examples exist on the topic, a brief
@@ -1234,7 +1321,7 @@ isis_daemon_iface: gre2
isis_daemon_mtu: 1400
! isis_daemon_msglog: true
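
On a Linux collector, such a GRE tunnel could be sketched with iproute2 along
these lines (X.X.X.X being the remote IS-IS speaker and Y.Y.Y.Y the local
collector address; both illustrative, matching the config fragment above):

shell> ip tunnel add gre2 mode gre remote X.X.X.X local Y.Y.Y.Y ttl 255
shell> ip link set gre2 up
shell> ip link set gre2 mtu 1400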

XVb. Preparing the router for the L2 P2P IS-IS neighborship
XVIb. Preparing the router for the L2 P2P IS-IS neighborship
Once the collector is ready, the remaining step is to configure a remote router
for the L2 P2P IS-IS neighborship. The following bit of configuration (based on
Cisco IOS) will match the above fragment of configuration for the IS-IS daemon:
@@ -1256,24 +1343,25 @@ router isis
!


XVI. Quickstart guide to setup the BMP daemon
The BMP daemon thread is introduced in pmacct 1.5.1. The implementation is
based on the draft-ietf-grow-bmp-07 IETF document. To quote the document:
"BMP is intended to provide a more convenient interface for obtaining route
views for research purpose than the screen-scraping approach in common use
today. The design goals are to keep BMP simple, useful, easily implemented,
and minimally service-affecting.". The BMP daemon currently supports BMP
events and stats only, ie. initiation, termination, peer up, peer down and
stats reports messages. Route Monitoring is future (upcoming) work but routes
can be currently sourced via the BGP daemon thread (best path only or ADD-PATH),
making the two daemons complementary. The daemon enables to write BMP messages
to files or AMQP queues, real-time (msglog) or at regular time intervals (dump).
XVII. Quickstart guide to setup the BMP daemon
The BMP daemon thread was introduced in pmacct 1.5. The implementation was
originally based on the draft-ietf-grow-bmp-07 IETF document (whereas the
current review is against draft-ietf-grow-bmp-17). If unfamiliar with BMP, to
quote the document: "BMP is intended to provide a more convenient interface for
obtaining route views for research purpose than the screen-scraping approach in
common use today. The design goals are to keep BMP simple, useful, easily
implemented, and minimally service-affecting.". The BMP daemon currently
supports BMP data, events and stats, ie. initiation, termination, peer up,
peer down, stats and route monitoring messages. The daemon enables to write BMP
messages to files, AMQP and Kafka brokers, real-time (msglog) or at regular time
intervals (dump). Also, route monitoring messages are saved in a RIB structure
for IP prefix lookup.

Following a simple example on how to configure nfacctd to enable the BMP daemon
to a) log, in real-time, BGP stats and events received via BMP to a text-file
(bmp_daemon_msglog_file) and b) dump the same (ie. BGP stats and events received
via BMP) to a text-file and at regular time intervals (bmp_dump_refresh_time,
bmp_dump_file):
to a) log, in real-time, BGP stats, events and routes received via BMP to a
text-file (bmp_daemon_msglog_file) and b) dump the same (ie. BGP stats and
events received via BMP) to a text-file at regular time intervals
(bmp_dump_refresh_time, bmp_dump_file):

bmp_daemon: true
!
@@ -1282,8 +1370,8 @@ bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log
bmp_dump_file: /path/to/bmp-$peer_src_ip-%H%M.dump
bmp_dump_refresh_time: 60

Following is an example how a Cisco router running IOS should be configured
in order to export BMP data to a collector:
Following is an example of how a Cisco router running IOS/IOS-XE should be
configured in order to export BMP data to a collector:

router bgp 64512
bmp server 1
@@ -1291,7 +1379,7 @@ router bgp 64512
initial-delay 60
failure-retry-delay 60
flapping-delay 60
stats-reporting-period 60
stats-reporting-period 300
activate
exit-bmp-server-mode
!
@@ -1300,9 +1388,79 @@ router bgp 64512
neighbor Z.Z.Z.Z remote-as 64514
neighbor Z.Z.Z.Z bmp-activate all

Any equivalent examples using IOS-XR or JunOS are much welcome.
Following is an example of how a Cisco router running IOS-XR should be configured
in order to export BMP data to a collector:

router bgp 64512
neighbor Y.Y.Y.Y
bmp-activate server 1
neighbor Z.Z.Z.Z
bmp-activate server 1
!
!
bmp server 1
host X.X.X.X port 1790
initial-delay 60
initial-refresh delay 60
stats-reporting-period 300
!

Any equivalent example using Juniper JunOS and/or any other vendor implementing
BMP would be much welcome.
XVII. Running the print plugin to write to flat-files

XVIII. Quickstart guide to setup Streamed Telemetry accounting
Quoting Cisco IOS-XR Telemetry Configuration Guide at the time of this writing:
"Streaming telemetry lets users direct data to a configured receiver. This data
can be used for analysis and troubleshooting purposes to maintain the health of
the network. This is achieved by leveraging the capabilities of machine-to-
machine communication. The data is used by development and operations (DevOps)
personnel who plan to optimize networks by collecting analytics of the network
in real-time, locate where problems occur, and investigate issues in a
collaborative manner.". Streamed telemetry support comes in pmacct in two
flavours: 1) a telemetry thread can be started in existing daemons, ie. sFlow,
NetFlow/IPFIX, etc. for the purpose of data correlation and 2) a new daemon
pmtelemetryd for standalone consumption of data. Streamed telemetry data can be
logged real-time and/or dumped at regular time intervals to flat-files, RabbitMQ
or Kafka brokers.

From a configuration standpoint both the thread (ie. telemetry configured as part
of nfacctd) and the daemon (pmtelemetryd) are configured the same way, except the
thread must be explicitly enabled with a 'telemetry_daemon: true' config line.
Hence the following examples hold for both the thread and the daemon setups.

Following is a config example to receive telemetry data in JSON format over UDP
port 1620 and log it real-time to flat-files:

! Telemetry thread configuration
! telemetry_daemon: true
!
telemetry_daemon_port_udp: 1620
telemetry_daemon_decoder: json
!