

pmacct [IP traffic accounting : BGP : BMP : IGP : Streaming Telemetry]
pmacct is Copyright (C) 2003-2017 by Paolo Lucente

I.     Plugins included with pmacct distribution
II.    Configuring pmacct for compilation and installing
III.   Brief SQL (MySQL, PostgreSQL, SQLite 3.x) setup examples
IV.    Running the libpcap-based daemon (pmacctd)
V.     Running the NetFlow/IPFIX and sFlow daemons (nfacctd/sfacctd)
VI.    Running the NFLOG-based daemon (uacctd)
VII.   Running the pmacct client (pmacct)
VIII.  Running the RabbitMQ/AMQP plugin
IX.    Running the Kafka plugin
X.     Internal buffering and queueing
XI.    Quickstart guide to packet classification
XII.   Quickstart guide to setup a NetFlow/IPFIX agent/probe
XIII.  Quickstart guide to setup a sFlow agent/probe
XIV.   Quickstart guide to setup the BGP daemon
XV.    Quickstart guide to setup a NetFlow/IPFIX/sFlow replicator
XVI.   Quickstart guide to setup the IS-IS daemon
XVII.  Quickstart guide to setup the BMP daemon
XVIII. Quickstart guide to setup Streaming Telemetry collection
XIX.   Running the print plugin to write to flat-files
XX.    Quickstart guide to setup GeoIP lookups
XXI.   Using pmacct as traffic/event logger
XXII.  Miscellaneous notes and troubleshooting tips
I. Plugins included with pmacct distribution

Given its open and pluggable architecture, pmacct is easily extensible with new
plugins. Here is a list of plugins included in the official pmacct distribution:

'memory':  data is stored in a memory table and can be fetched via the pmacct
           command-line client tool, 'pmacct'. This plugin also makes it easy
           to inject data into 3rd party tools like GNUplot, RRDtool or a
           Net-SNMP server. The plugin is good for prototype solutions and
           smaller-scale environments. This plugin is compiled in by default.
'mysql':   a working MySQL installation can be used for data storage. This
           plugin can be compiled using the --enable-mysql switch.
'pgsql':   a working PostgreSQL installation can be used for data storage. This
           plugin can be compiled using the --enable-pgsql switch.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the SQLite
           API) installation can be used for data storage. This plugin can be
           compiled using the --enable-sqlite3 switch.
'print':   data is printed at regular intervals to flat-files or standard output
           in tab-spaced, CSV and JSON formats. This plugin is compiled in by
           default.
'amqp':    data is sent to a RabbitMQ message exchange, running the AMQP
           protocol, for delivery to consumer applications or tools. Popular
           consumers are ElasticSearch, InfluxDB and Cassandra. This plugin can
           be compiled using the --enable-rabbitmq switch.
'kafka':   data is sent to a Kafka broker for delivery to consumer applications
           or tools. Popular consumers are ElasticSearch, InfluxDB and
           Cassandra. This plugin can be compiled using the --enable-kafka
           switch.
'tee':     applies to the nfacctd and sfacctd daemons only. It's a featureful
           packet replicator for NetFlow/IPFIX/sFlow data. This plugin is
           compiled in by default.
'nfprobe': applies to the pmacctd and uacctd daemons only. Exports collected
           data via NetFlow v5/v9 or IPFIX. This plugin is compiled in by
           default.
'sfprobe': applies to the pmacctd and uacctd daemons only. Exports collected
           data via sFlow v5. This plugin is compiled in by default.
II. Configuring pmacct for compilation and installing

The simplest way to configure the package for compilation is to download the
latest stable released tarball and let the configure script probe default
headers and libraries for you. A first round of guessing is done via
pkg-config; then, for some libraries, "typical" default locations are checked,
ie. /usr/local/lib. Switches you are likely to want enabled are already set,
ie. 64 bits counters and multi-threading (pre-requisite for the BGP, BMP, and
IGP daemon codes); the full list of switches enabled by default is marked as
'default: yes' in the "./configure --help" output. SQL plugins, AMQP and Kafka
support are all disabled by default instead. A few examples will follow; to
get the list of available switches, you can use the following command-line:

shell> ./configure --help

Examples on how to enable the support for (1) MySQL, (2) PostgreSQL,
(3) SQLite, and (4) any mixed compilation:

(1) shell> ./configure --enable-mysql
(2) shell> ./configure --enable-pgsql
(3) shell> ./configure --enable-sqlite3
(4) shell> ./configure --enable-mysql --enable-pgsql

If cloning the GitHub repository instead, the configure script has to be
generated, adding one extra step to the process just described. Please refer
to the Building section of the document for instructions on cloning the repo
and generating the configure script, along with the required packages to
install.

Then compile and install by simply typing:

shell> make; make install

Should you want, for example, to compile pmacct with PostgreSQL support, have
installed PostgreSQL in /usr/local/postgresql, and pkg-config is unable to
help, you can supply this non-default location as follows (assuming you are
running the bash shell):

shell> export PGSQL_LIBS="-L/usr/local/postgresql/lib -lpq"
shell> export PGSQL_CFLAGS="-I/usr/local/postgresql/include"
shell> ./configure --enable-pgsql

By default all tools - flow, BGP, BMP and Streaming Telemetry - are compiled.
Specific tool sets can be disabled. For example, to compile only flow tools
(ie. no pmbgpd, pmbmpd, pmtelemetryd) the following command-line can be used:

shell> ./configure --disable-bgp-bins --disable-bmp-bins --disable-st-bins

Once daemons are installed you can check:

* how to instrument each daemon via its usage help page:
  shell> pmacctd -h
* the version and build details:
  shell> sfacctd -V
* the traffic aggregation primitives supported by the daemon, and their
  description:
  shell> nfacctd -a
IIa. Compiling pmacct with JSON support

JSON encoding is supported via the Jansson library; a library version >= 2.5
is required. To compile pmacct with JSON support simply do:

shell> ./configure --enable-jansson

However, should you have installed Jansson in the /usr/local/jansson directory
and pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):

shell> export JANSSON_LIBS="-L/usr/local/jansson/lib -ljansson"
shell> export JANSSON_CFLAGS="-I/usr/local/jansson/include"
shell> ./configure --enable-jansson
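As a quick sanity check before running ./configure, you can ask pkg-config
directly whether it can locate Jansson and which flags it would emit; this is
only a diagnostic sketch, not a required step:

```shell
# Print the compile/link flags pkg-config would hand to ./configure;
# a failure here means jansson.pc is not on the pkg-config search path.
pkg-config --cflags --libs jansson || echo "jansson not found by pkg-config"
```

The same probe works for the other optional libraries (libpq, librdkafka, and
so on), substituting the corresponding module name.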
IIb. Compiling pmacct with Apache Avro support

Apache Avro encoding is supported via the libavro library; to compile pmacct
with Apache Avro support simply do:

shell> ./configure --enable-avro

However, should you have installed libavro in the /usr/local/avro directory
and pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):

shell> export AVRO_LIBS="-L/usr/local/avro/lib -lavro"
shell> export AVRO_CFLAGS="-I/usr/local/avro/include"
shell> ./configure --enable-rabbitmq --enable-avro
III. Brief SQL and noSQL setup examples

RDBMS require a table schema to manage data. pmacct offers two options: use
one of the few pre-determined table schemas available (sections IIIa, b and c)
or compose a custom schema to fit your needs (section IIId). If you are blind
to SQL, the former approach is recommended, although it can pose scalability
issues in larger deployments; if you know some SQL, the latter is definitely
the way to go. Scripts for setting up RDBMS are located in the 'sql/' tree of
the pmacct distribution tarball. For further guidance read the relevant README
files in that directory. One of the crucial concepts to deal with, when using
default table schemas, is table versioning: please read more about this topic
in the FAQS document (Q16).
IIIa. MySQL examples

shell> cd sql/

- To create v1 tables:

shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct' table of the 'pmacct' DB.

- To create v2 tables:

shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct_v2' table of the 'pmacct' DB.

... And so on for the newer versions.
IIIb. PostgreSQL examples

Which user has to execute the following two scripts and how to authenticate
with the PostgreSQL server depend upon your current configuration. Keep in
mind that both scripts need postgres superuser permissions to execute some
commands successfully:

shell> cp -p *.pgsql /tmp
shell> su - postgres

To create v1 tables:

shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql

To create v2 tables:

shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql

... And so on for the newer versions.

A few tables will be created in the 'pmacct' DB. The 'acct' ('acct_v2' or
'acct_v3') table is the default table where data will be written when in
'typed' mode (see the 'sql_data' option in the CONFIG-KEYS document; the
default value is 'typed'); 'acct_uni' ('acct_uni_v2' or 'acct_uni_v3') is the
default table where data will be written when in 'unified' mode. Since v6,
PostgreSQL tables are greatly simplified: unified mode is no longer supported
and a unique table ('acct_v6', for example) is created instead.
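As a sketch of the above, the writing mode can be selected explicitly via the
'sql_data' directive together with the table version; the hypothetical
fragment below would make the PostgreSQL plugin write to 'acct_uni_v2' in
'unified' mode (see CONFIG-KEYS for the authoritative description):

```
! hypothetical fragment: pre-v6 PostgreSQL plugin writing in 'unified' mode
plugins: pgsql
sql_table_version: 2
sql_data: unified
```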
IIIc. SQLite examples

shell> cd sql/

- To create v1 tables:

shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3

Data will be available in the 'acct' table of the '/tmp/pmacct.db' DB. Of
course, you can change the database filename based on your preferences.

- To create v2 tables:

shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3

Data will be available in the 'acct_v2' table of the '/tmp/pmacct.db' DB.

... And so on for the newer versions.
IIId. Custom SQL tables

Custom tables can be built by creating your own SQL schema and indexes. This
allows you to mix and match the primitives relevant to your accounting
scenario. To flag the intention to build a custom table, the
sql_optimize_clauses directive must be set to true, ie.:

sql_optimize_clauses: true
sql_table: <table name>
aggregate: <aggregation primitives list>

How to build the custom schema? Let's say the aggregation method of choice
(aggregate directive) is "vlan, in_iface, out_iface, etype", the table name is
"acct" and the database of choice is MySQL. The SQL schema is composed of four
main parts, explained below:

1) A fixed skeleton needed by pmacct logics:

CREATE TABLE <table_name> (
	packets INT UNSIGNED NOT NULL,
	bytes BIGINT UNSIGNED NOT NULL,
	stamp_inserted DATETIME NOT NULL,
	stamp_updated DATETIME,
);

2) Indexing: primary key (of your choice, this is only an example) plus
any additional index you may find relevant.

3) Primitives enabled in pmacct, in this specific example the ones below;
should one need more/others, these can be looked up in the sql/README.mysql
file in the section named "Aggregation primitives to SQL schema mapping:":

vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,

4) Any additional fields, ignored by pmacct, that can be of use; these can be
for lookup purposes, auto-increment, etc. and can of course also be part of
the indexing you might choose.

Putting the pieces together, the resulting SQL schema is below, along with the
required statements to create the database:

CREATE DATABASE pmacct;
USE pmacct;

CREATE TABLE acct (
	vlan INT(2) UNSIGNED NOT NULL,
	iface_in INT(4) UNSIGNED NOT NULL,
	iface_out INT(4) UNSIGNED NOT NULL,
	etype INT(2) UNSIGNED NOT NULL,
	packets INT UNSIGNED NOT NULL,
	bytes BIGINT UNSIGNED NOT NULL,
	stamp_inserted DATETIME NOT NULL,
	stamp_updated DATETIME,
	PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);

To grant the default pmacct user permission to write into the database look at
the file sql/pmacct-grant-db.mysql.
IIIe. Historical accounting

Enabling historical accounting allows aggregating data over time (ie. 5 mins,
hourly, daily) in a flexible and fully configurable way. Timestamps are lodged
into two fields: 'stamp_inserted', which represents the basetime of the
timeslot, and 'stamp_updated', which says when a given timeslot was updated
for the last time. What follows is a pretty standard configuration fragment to
slice data into nicely aligned (or rounded-off) 5-minute timeslots:

sql_history: 5m
sql_history_roundoff: m
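With such a configuration, each row carries the timeslot basetime in
'stamp_inserted', so retrieving one specific 5-minute bin is a simple equality
filter. A hypothetical query against the default v1 MySQL 'acct' table (the
timestamp value is an example only):

```sql
-- Fetch all rows belonging to the timeslot starting at 10:05
SELECT ip_src, ip_dst, packets, bytes, stamp_inserted, stamp_updated
FROM acct
WHERE stamp_inserted = '2017-01-01 10:05:00';
```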
IIIf. INSERTs-only

UPDATE queries are demanding in terms of resources; this is why, even if they
are supported by pmacct, a savvy approach is to cache data for longer times in
memory and write it off once per timeslot (sql_history): this produces a much
lighter INSERTs-only environment. This is an example based on 5-minute
timeslots:

sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true

Note that sql_refresh_time is always expressed in seconds. An alternative
approach for cases where sql_refresh_time must be kept shorter than
sql_history (for example because a) of long sql_history periods, ie. hours or
days, and/or because b) a near real-time data feed is a requirement) is to set
up a synthetic auto-increment 'id' field: it successfully prevents duplicates
but comes at the expense of GROUP BY queries when retrieving data.
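In the auto-increment 'id' scenario just described, one timeslot may be spread
over multiple rows, so counters have to be re-aggregated at retrieval time; a
hypothetical example of such a GROUP BY query, assuming the default v1 MySQL
'acct' table:

```sql
-- Merge duplicate rows per aggregate and timeslot back into single entries
SELECT ip_src, ip_dst, stamp_inserted,
       SUM(packets) AS packets, SUM(bytes) AS bytes
FROM acct
GROUP BY ip_src, ip_dst, stamp_inserted;
```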
IV. Running the libpcap-based daemon (pmacctd)

All daemons, including pmacctd, can be run with command-line options, using a
config file, or a mix of the two. Sample configuration files are in the
examples/ tree. Note also that most of the new features are available only as
configuration directives. To be aware of the existing configuration
directives, please read the CONFIG-KEYS document.

Show all available pmacctd command-line switches:

shell> pmacctd -h

Run pmacctd reading configuration from a specified file (see the examples/
tree for a brief list of some commonly used keys; divert your eyes to
CONFIG-KEYS for the full list). This example applies to all daemons:

shell> pmacctd -f pmacctd.conf

Daemonize the process; listen on eth0; aggregate data by src_host/dst_host;
write to a MySQL server; account only traffic matching the given source IP
network. Note that filters work the same as in tcpdump, so refer to the
libpcap/tcpdump man pages for examples and further reading.

shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net
Or written the configuration way:

!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
interface: eth0
pcap_filter: src net
! ...

Print collected traffic data, aggregated by src_host/dst_host, to the screen;
refresh data every 30 seconds and listen on eth0:

shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host

Or written the configuration way:

!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
interface: eth0
! ...
Daemonize the process; let pmacct aggregate traffic in order to show in vs out
traffic for the given network; send data to a PostgreSQL server. This
configuration is not possible via command-line switches; the corresponding
configuration follows:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net
aggregate_filter[out]: src net
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...

The previous example looks nice! But how to make data historical? Simple
enough; let's suppose you want to split traffic by hour and write data into
the DB every 60 seconds:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net
aggregate_filter[out]: src net
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...
Let's now translate the same example into the memory plugin world. Its use is
valuable especially when it's required to feed bytes/packets/flows counters to
external programs. Examples about the client program will follow later in this
document. Now, note that each memory table needs its own pipe file in order to
be correctly contacted by the client:

!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net
aggregate_filter[out]: src net
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...

As a further note, check the CONFIG-KEYS document for more imt_* directives,
as they will assist in the task of fine-tuning the size and boundaries of
memory tables, if default values are not ok for your setup.
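Carrying on with the example above, each memory table is then queried through
its own pipe file; a sketch (the full client syntax is covered in section VII):

```
shell> pmacct -s -p /tmp/pmacct_in.pipe
shell> pmacct -s -p /tmp/pmacct_out.pipe
```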
Now, fire multiple instances of pmacctd, each on a different interface; again,
because each instance will have its own memory table, it will require its own
pipe file for client queries as well (as explained in the previous examples):

shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0
Run pmacctd logging to syslog and using the "local2" facility:

shell> pmacctd -c src_host,dst_host -S local2

NOTE: superuser privileges are needed to execute pmacctd correctly.
V. Running the NetFlow/IPFIX and sFlow daemons (nfacctd/sfacctd)

All examples about pmacctd are also valid for nfacctd and sfacctd, with the
exception of directives that apply exclusively to libpcap. If you've skipped
the examples in the previous section, please read them before continuing. All
configuration keys available are in the CONFIG-KEYS document. Some examples:

Run nfacctd reading configuration from a specified file:

shell> nfacctd -f nfacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound +
outbound traffic); write to a local MySQL server. Listen on port 5678 for
incoming NetFlow datagrams (from one or multiple NetFlow agents). Let's make
pmacct refresh data every two minutes and let's make data historical, divided
into timeslots of 10 minutes each. Finally, let's make use of a SQL table,
version 4:

shell> nfacctd -D -c sum_host -P mysql -l 5678

And now written the configuration way:

!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...
Va. NetFlow daemon & accounting NetFlow v9/IPFIX options

NetFlow v9/IPFIX can send option records other than flow ones, typically used
to send to a collector mappings among interface SNMP ifIndexes and interface
names, or VRF IDs and VRF names. nfacctd_account_options enables accounting of
option records; these should then be split from regular flow records. Below is
a sample config:

nfacctd_time_new: true
nfacctd_account_options: true
!
plugins: print[data], print[data_options]
!
pre_tag_filter[data]: 100
aggregate[data]: peer_src_ip, in_iface, out_iface, tos, vrf_id_ingress, vrf_id_egress
print_refresh_time[data]: 300
print_history[data]: 300
print_history_roundoff[data]: m
print_output_file_append[data]: true
print_output_file[data]: /path/to/flow_%s
print_output[data]: csv
!
pre_tag_filter[data_options]: 200
aggregate[data_options]: vrf_id_ingress, vrf_name
print_refresh_time[data_options]: 300
print_history[data_options]: 300
print_history_roundoff[data_options]: m
print_output_file_append[data_options]: true
print_output_file[data_options]: /path/to/options_%s
print_output[data_options]: event_csv
!
aggregate_primitives: /path/to/primitives.lst
pre_tag_map: /path/to/
maps_refresh: true

Below is the referenced pre_tag_map:

set_tag=100 ip= sample_type=flow
set_tag=200 ip= sample_type=option

Below is the referenced primitives.lst:

name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str
VI. Running the NFLOG-based daemon (uacctd)

All examples about pmacctd are also valid for uacctd, with the exception of
directives that apply exclusively to libpcap. If you've skipped the examples
in section 'IV', please read them before continuing. All configuration keys
available are in the CONFIG-KEYS document.

The daemon depends on the package libnetfilter-log-dev (in Debian/Ubuntu, or
its equivalent in your preferred Linux distribution). The Linux NFLOG
infrastructure requires a couple of parameters in order to work properly: the
NFLOG multicast group (uacctd_group) to which captured packets have to be
sent, and the Netlink buffer size (uacctd_nl_size). The default buffer setting
(128KB) typically works OK for small environments. The traffic is captured
with an iptables rule, for example in one of the following ways:

* iptables -t mangle -I POSTROUTING -j NFLOG --nflog-group 5
* iptables -t raw -I PREROUTING -j NFLOG --nflog-group 5

Apart from determining how and what traffic to capture with iptables, which is
a topic outside the scope of this document, the most relevant point is that
the iptables "--nflog-group" setting has to match the uacctd "uacctd_group"
one.

A couple of examples follow:

Run uacctd reading configuration from a specified file:

shell> uacctd -f uacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound +
outbound traffic); write to a local MySQL server. Listen on NFLOG multicast
group #5. Let's make pmacct divide data into historical time-bins of 5
minutes. Let's disable UPDATE queries and hence align the refresh time with
the timeslot length. Finally, let's make use of a SQL table, version 4:

!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...
VII. Running the pmacct client (pmacct)

The pmacct client is used to retrieve data from memory tables. Requests and
answers are exchanged via a pipe file: authorization is strictly connected to
permissions on the pipe file. Note: while writing queries on the command line,
it may happen that you write characters with a special meaning for the shell
itself (ie. ; or *). Be sure to either escape them ( \; or \* ) or put them in
quotes ( " ).

Show all available pmacct client command-line switches:

shell> pmacct -h

Fetch data stored in the memory table:

shell> pmacct -s

Match data between the given source and destination IPs and return a formatted
output; display all fields (-a), this way the output is easy to parse with
tools like awk/sed; each unused field will be zero-filled:

shell> pmacct -c src_host,dst_host -M, -a

Similar to the previous example; here it is requested to reset data for
matched entries; the server will return the actual counters to the client,
then will reset them:

shell> pmacct -c src_host,dst_host -M, -r

Fetch data for the given dst_host IP address; we also ask for a 'counter only'
output ('-N') suitable, this time, for injecting data into tools like MRTG or
RRDtool (sample scripts are in the examples/ tree). The bytes counter will be
returned (but the '-n' switch also allows selecting which counter to display).
If multiple entries match the request (ie. because the query is based on
dst_host but the daemon is actually aggregating traffic as "src_host,
dst_host"), their counters will be summed:

shell> pmacct -c dst_host -N

Another query; this time let's contact the server listening on the pipe file
/tmp/pipe.eth0:

shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0

Find all data matching the given host as either source or destination address.
In particular, this example shows how to use wildcards and how to spawn
multiple queries (each separated by the ';' symbol). Take care to follow the
same order when specifying the primitive name ('-c') and its actual value
('-M' or '-N'):

shell> pmacct -c src_host,dst_host -N ",*;*,"

Find all web and smtp traffic; we are interested in just the total of such
traffic (for example, to split legal network usage from the total); the output
will be a unique counter, the sum of the partial values coming from each
query:

shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S

Show traffic between the specified hosts; this aims to be a simple example of
a batch query; note that as the value of both the '-N' and '-M' switches a
value like 'file:/home/paolo/queries.list' can be supplied: actual values will
be read from the specified file (and they need to be written into it, one per
line) instead of the command line:

shell> pmacct -c src_host,dst_host -N ",;,;,"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"
VIII. Running the RabbitMQ/AMQP plugin

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing
business messages between applications. RabbitMQ is a messaging broker, an
intermediary for messaging, which implements AMQP. The pmacct RabbitMQ/AMQP
plugin is designed to send aggregated network traffic data, in JSON or Avro
format, through a RabbitMQ server to 3rd party applications. Requirements to
use the plugin are:

* A working RabbitMQ server
* The RabbitMQ C API, rabbitmq-c
* Libjansson, to cook JSON objects

Additionally, the Apache Avro C library needs to be installed to be able to
send messages packed using Avro (you will also need to pass --enable-avro to
the configure script).

Once these elements are installed, pmacct can be configured for compiling.
pmacct makes use of pkg-config for finding library and header locations and
checks some "typical" default locations, ie. /usr/local/lib and
/usr/local/include. So all you should do is just:

shell> ./configure --enable-rabbitmq --enable-jansson

But, for example, should you have installed RabbitMQ in /usr/local/rabbitmq
and pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):

shell> export RABBITMQ_LIBS="-L/usr/local/rabbitmq/lib -lrabbitmq"
shell> export RABBITMQ_CFLAGS="-I/usr/local/rabbitmq/include"
shell> ./configure --enable-rabbitmq --enable-jansson

You can find further information on how to compile pmacct with JSON/libjansson
support in the section "Compiling pmacct with JSON support" of this document.
You can find further information on how to compile pmacct with Avro support in
the section "Compiling pmacct with Apache Avro support" of this document.

Then "make; make install" as usual. Following is a configuration snippet
showing a basic RabbitMQ/AMQP plugin configuration (it assumes the RabbitMQ
server is available at localhost; look all configurable directives up in the
CONFIG-KEYS document):

! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_output: json
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
amqp_history: 5m
amqp_history_roundoff: m
! ..

pmacct will only declare a message exchange and provide a routing key, ie. it
will not get involved with queues at all. A basic consumer script, in Python,
is provided as a sample to: declare a queue, bind the queue to the exchange
and show consumed data on the screen. The script is located in the pmacct
default distribution tarball at examples/amqp/ and requires
the pika Python module installed. Should this not be available, you can read
how to get it installed on the pika project page.

Improvements to the basic Python script provided and/or examples in different
languages are very welcome at this stage.
  525. IX. Running the Kafka plugin
  526. Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
  527. Its qualities being: fast, scalable, durable and distributed by design. pmacct
  528. Kafka plugin is designed to send aggregated network traffic data, in JSON or
  529. Avro format, through a Kafka broker to 3rd party applications. Requirements to
  530. use the plugin are:
  531. * A working Kafka broker (and Zookeper server):
  532. * Librdkafka:
  533. * Libjansson to cook JSON objects:
  534. Additionally, the Apache Avro C library ( needs to be
  535. installed to be able to send messages packed using Avro (you will also need to
  536. pass --enable-avro to the configuration script).
Once these elements are installed, pmacct can be configured for compiling.
pmacct makes use of pkg-config to locate libraries and headers and checks
some "typical" default locations, ie. /usr/local/lib and
/usr/local/include. So all you should do is just:
./configure --enable-kafka --enable-jansson
But, for example, should you have installed Kafka in /usr/local/kafka and
pkg-config be unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):
export KAFKA_LIBS="-L/usr/local/kafka/lib -lrdkafka"
export KAFKA_CFLAGS="-I/usr/local/kafka/include"
./configure --enable-kafka --enable-jansson
Further information on how to compile pmacct with JSON/libjansson support
can be found in the section "Compiling pmacct with JSON support" of this
document; likewise, see the section "Compiling pmacct with Apache Avro
support" for Avro.
Then "make; make install" as usual. Following is a configuration snippet
showing a basic Kafka plugin configuration (it assumes a Kafka broker is
reachable on port 9092; look all configurable directives up in the
CONFIG-KEYS document):
! ..
plugins: kafka
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
kafka_output: json
kafka_topic: pmacct.acct
kafka_refresh_time: 300
kafka_history: 5m
kafka_history_roundoff: m
! ..
A basic consumer script, in Python, is provided as a sample to: declare a
group_id, bind it to the topic and show consumed data on the screen. The
script is located in the pmacct default distribution tarball under
examples/kafka/ and requires the python-kafka Python module to be
installed; should this not be available, refer to its documentation for
installation instructions.
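As an illustration of what such a consumer does, here is a minimal sketch
(not the bundled script) using the kafka-python package; the broker
address, group_id and the JSON field names accessed are assumptions made
for this example:

```python
import json

def decode_record(raw):
    """Decode one JSON-encoded pmacct record from raw message bytes."""
    return json.loads(raw.decode('utf-8'))

def consume(topic='pmacct.acct', servers='localhost:9092'):
    # Assumes the kafka-python package (pip install kafka-python);
    # broker address and group_id are illustrative.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(topic, bootstrap_servers=servers,
                             group_id='pmacct-example')
    for msg in consumer:
        rec = decode_record(msg.value)
        # Field names assume pmacct's JSON output for the aggregation
        # method in the snippet above.
        print(rec.get('ip_src'), rec.get('ip_dst'), rec.get('bytes'))

# consume()  # run against a live broker
```

The topic name matches the kafka_topic directive in the configuration
snippet above.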
A quick start guide to Kafka itself is available on the Apache Kafka web
site. When using Kafka over a dedicated node or VM, you will have to
update the default Kafka server configuration: edit, with your favorite
text editor, the server properties file under the config folder of your
Kafka installation and uncomment the following parameters:
* listeners
* advertised.listeners
and configure them according to your Kafka design. Taking a simple example
where one single node is used for both Zookeeper and Kafka, these
parameters will look like this:
listeners=PLAINTEXT://
advertised.listeners=PLAINTEXT://
Finally, when the amount of data published to Kafka is substantial, ie. in
the order of thousands of entries per second, some care is needed in order
to avoid every single entry originating a produce call to Kafka. Two
strategies are available for batching: 1) the kafka_multi_values feature
of pmacct; 2) as per the librdkafka documentation, "The two most important
configuration properties for performance tuning are:
* batch.num.messages : the minimum number of messages to wait for to
accumulate in the local queue before sending off a message set.
* : how long to wait for batch.num.messages to fill up
in the local queue."
Also, intuitively, queue.buffering.max.messages, the "Maximum number of
messages allowed on the producer queue", should be kept greater than
batch.num.messages. These knobs can all be passed from pmacct to Kafka via
a file pointed to by kafka_config_file, as global settings, ie.:
global, queue.buffering.max.messages, 8000000
global, batch.num.messages, 100000
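As a worked example of the guidance above, the sketch below (an
illustrative helper, not part of pmacct or librdkafka) checks that
queue.buffering.max.messages stays greater than batch.num.messages:

```python
# Illustrative sanity check on librdkafka batching properties: the
# producer queue must be able to hold more than one full batch.
def check_batching(settings):
    """Return a list of warnings for a dict of librdkafka properties."""
    warnings = []
    qmax = int(settings['queue.buffering.max.messages'])
    batch = int(settings['batch.num.messages'])
    if qmax <= batch:
        warnings.append('queue.buffering.max.messages (%d) should be '
                        'greater than batch.num.messages (%d)'
                        % (qmax, batch))
    return warnings

# The values from the kafka_config_file example above pass the check:
print(check_batching({'queue.buffering.max.messages': 8000000,
                      'batch.num.messages': 100000}))  # -> []
```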
X. Internal buffering and queueing
Two options are provided for internal buffering and queueing: 1) a
home-grown circular queue implementation, available since day one of
pmacct (configured via plugin_pipe_size and documented in docs/INTERNALS),
and 2) a ZeroMQ queue (configured via plugin_pipe_zmq and
plugin_pipe_zmq_* directives).
For a quick comparison: while relying on a ZeroMQ queue does introduce an
external dependency, ie. libzmq, it reduces the amount of trial and error
needed to fine tune the plugin_buffer_size and plugin_pipe_size directives
needed by the home-grown queue implementation.
The home-grown circular queue has no external dependencies and is
configured, for example, as:
plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
plugin_pipe_size[blabla]: 1024000
For more information about the home-grown circular queue, consult the
plugin_buffer_size and plugin_pipe_size entries in CONFIG-KEYS and the
"Communications between core process and plugins" chapter of
docs/INTERNALS.
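To make the relationship between the two directives concrete: with the
values above, the pipe holds 100 buffers. A tiny illustrative calculation
(plain arithmetic, not pmacct code), on the assumption that
plugin_pipe_size is kept a multiple of plugin_buffer_size:

```python
# How many plugin_buffer_size buffers fit into plugin_pipe_size
# (values taken from the snippet above).
def buffers_in_pipe(pipe_size, buffer_size):
    if buffer_size <= 0 or pipe_size % buffer_size != 0:
        raise ValueError('pipe size should be a multiple of buffer size')
    return pipe_size // buffer_size

print(buffers_in_pipe(1024000, 10240))  # -> 100
```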
ZeroMQ, from 0MQ The Guide, "looks like an embeddable networking library
but acts like a concurrency framework. It gives you sockets that carry
atomic messages across various transports like in-process, inter-process,
TCP, and multicast. You can connect sockets N-to-N with patterns like
fan-out, pub-sub, task distribution, and request-reply. It's fast enough
to be the fabric for clustered products. Its asynchronous I/O model gives
you scalable multicore applications, built as asynchronous
message-processing tasks. [ .. ]". pmacct integrates ZeroMQ using a
pub-sub queue architecture, using ephemeral TCP ports and implementing
plain authentication (username and password, auto-generated at runtime).
The only requirement to use a ZeroMQ queue is to have the latest available
stable release of libzmq installed on the system. Once this is installed,
pmacct can be configured for compiling. pmacct makes use of pkg-config to
locate libraries and headers and checks some "typical" default locations,
ie. /usr/local/lib and /usr/local/include. So all you should do is just:
./configure --enable-zmq
But, for example, should you have installed ZeroMQ in /usr/local/zeromq
and should also pkg-config be unable to help, the non-default location can
be supplied as follows (bash shell assumed):
export ZMQ_LIBS="-L/usr/local/zeromq/lib -lzmq"
export ZMQ_CFLAGS="-I/usr/local/zeromq/include"
./configure --enable-zmq
Then "make; make install" as usual. Following is a configuration snippet
showing how easy it is to leverage ZeroMQ for queueing (see CONFIG-KEYS
for all ZeroMQ-related options):
plugins: print[blabla]
plugin_pipe_zmq[blabla]: true
plugin_pipe_zmq_profile[blabla]: micro
Please review the standard buffer profiles, plugin_pipe_zmq_profile, in
CONFIG-KEYS; Q21 of FAQS describes how to estimate the amount of
flows/samples per second of your deployment.
XI. Quickstart guide to packet classification
Packet classification is a feature available for pmacctd (libpcap-based
daemon) and uacctd (NFLOG-based daemon); please get in touch if packet
classification against the sFlow raw header sample is desired. The current
approach is to leverage the popular free, open-source nDPI library. To
enable the feature please follow these steps:
1) Download pmacct from its webpage or from its GitHub repository.
2) Download nDPI from its GitHub repository. pmacct code is tested against
the latest stable version of the nDPI library and hence that is the
recommended download.
3) Configure for compiling, compile and install the downloaded nDPI
library, ie. inside the nDPI directory:
shell> ./; ./configure; make; make install
4) Configure for compiling pmacct with the --enable-ndpi switch, then
compile and install, ie.:
If building a release tarball, from inside the pmacct directory:
shell> ./configure --enable-ndpi; make; make install
If building code cloned from the GitHub repository, from inside the
pmacct directory:
shell> ./; ./configure --enable-ndpi; make; make install
If using a nDPI library that is not installed (or not installed in a
default location) on the system, then NDPI_LIBS and NDPI_CFLAGS should be
set to the location where the nDPI headers and dynamic library lie.
Additionally, the configure switch --with-ndpi-static-lib allows to
specify the location of the static version of the library:
shell> NDPI_LIBS=-L/path/to/nDPI/src/lib/.libs
shell> NDPI_CFLAGS=-I/path/to/nDPI/src/include
shell> export NDPI_LIBS NDPI_CFLAGS
shell> ./configure --enable-ndpi --with-ndpi-static-lib=/path/to/nDPI/src/lib/.libs
shell> make; make install
5) Configure pmacct. The following sample configuration is based on
pmacctd and the print plugin with formatted output to stdout:
daemonize: true
interface: eth0
snaplen: 700
!
plugins: print
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos, class
What enables packet classification is the use of the 'class' primitive as
part of the supplied aggregation method. Further classification-related
options, such as timers, attempts, etc., are documented in the CONFIG-KEYS
document (classifier_* directives).
6) Execute pmacct as:
shell> pmacctd -f /path/to/pmacctd.conf
XII. Quickstart guide to setup a NetFlow/IPFIX agent/probe
pmacct is able to export traffic data through both NetFlow and sFlow
protocols. This section covers NetFlow/IPFIX and the next one covers
sFlow. While NetFlow v5 is fixed by nature, v9 adds flexibility, allowing
to transport custom information (for example, classification information
or custom tags) to remote collectors. Below the guide:
a) usual initial steps: download pmacct, unpack it, compile it.
b) build a NetFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver:
nfprobe_version: 9
! nfprobe_engine: 1:1
! nfprobe_timeouts: tcp=120:maxlife=3600
!
! networks_file: /path/to/networks.lst
!...
This is a basic working configuration. Additional probe features include:
1) generate ASNs by using a networks_file pointing to a valid Networks
File (see examples/ directory) and adding src_as, dst_as primitives to the
'aggregate' directive; alternatively, it is possible to generate ASNs from
the pmacctd BGP thread. The following fragment can be added to the config
above:
pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip:
bgp_agent_map: /path/to/
bgp_daemon_port: 17917
The bgp_daemon_port can be changed from the standard BGP port (179/TCP) in
order to co-exist with other BGP routing software which might be running
on the same host. Furthermore, they can safely peer with each other by
using as bgp_daemon_ip.
In pmacctd, bgp_agent_map does the trick of mapping to the IP address of
the BGP peer (ie. 'set_tag= ip='); this setup, while generic, was tested
working in conjunction with Quagga 0.99.14. Following is a relevant
fragment of the Quagga configuration:
router bgp Y
bgp router-id X.X.X.X
neighbor remote-as Y
neighbor port 17917
neighbor update-source X.X.X.X
!
NOTE: if configuring a BGP neighbor over localhost via the Quagga CLI the
following message is returned: "% Can not configure the local system as
neighbor". This is not returned when configuring the neighborship directly
in the bgpd config file.
2) encode flow classification information in NetFlow v9 like Cisco does
with its NBAR/NetFlow v9 integration. This can be done by introducing the
'class' primitive into the aforementioned 'aggregate' directive and adding
the extra configuration directive:
aggregate: class, src_host, dst_host, src_port, dst_port, proto, tos
snaplen: 700
Further information on this topic can be found in the 'Quickstart guide to
packet classification' section of this document.
3) add direction (ingress, egress) awareness to measured IP traffic flows.
Direction can be defined statically (in, out) or inferred dynamically
(tag, tag2) via the use of the nfprobe_direction directive. Let's look at
a dynamic example using tag2; first, add the following lines to the daemon
configuration:
nfprobe_direction[plugin_name]: tag2
pre_tag_map: /path/to/
then edit the tag map as follows. A return value of '1' means ingress
while '2' is translated to egress. It is possible to define L2 and/or L3
addresses to recognize flow directions. The 'set_tag2' primitive (tag2)
will be used to carry the return value:
set_tag2=1 filter='dst host XXX.XXX.XXX.XXX'
set_tag2=2 filter='src host XXX.XXX.XXX.XXX'
set_tag2=1 filter='ether src XX:XX:XX:XX:XX:XX'
set_tag2=2 filter='ether dst XX:XX:XX:XX:XX:XX'
In such a case, the 'set_tag' primitive (tag) remains available for other
uses (ie. filtering a sub-set of the traffic for flow export);
4) add interface (input, output) awareness to measured IP traffic flows.
Interfaces can be defined only in addition to direction. An interface can
be either defined statically (<1-4294967295>) or inferred dynamically
(tag, tag2) with the use of the nfprobe_ifindex directive. Let's look at a
dynamic example using tag; first, add the following lines to the daemon
config:
nfprobe_direction[plugin_name]: tag
nfprobe_ifindex[plugin_name]: tag2
pre_tag_map: /path/to/
then edit the tag map as follows:
set_tag=1 filter='dst net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=2 filter='src net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=1 filter='dst net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=2 filter='src net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=1 filter='ether src YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=2 filter='ether dst YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=999 filter='net'
!
set_tag2=100 filter='dst host XXX.XXX.XXX.XXX' label=eval_ifindexes
set_tag2=100 filter='src host XXX.XXX.XXX.XXX'
set_tag2=200 filter='dst host YYY.YYY.YYY.YYY'
set_tag2=200 filter='src host YYY.YYY.YYY.YYY'
set_tag2=200 filter='ether src YY:YY:YY:YY:YY:YY'
set_tag2=200 filter='ether dst YY:YY:YY:YY:YY:YY'
The set_tag=999 entry works as a catch-all for undefined L2/L3 addresses,
so as to prevent searching further in the map. In the example above,
direction is set first; then, if found, interfaces are set, using the
jeq/label pre_tag_map construct.
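The jeq/label flow can be illustrated with a small, much-simplified
evaluator; this models only the jump logic described above (the entry and
packet structures are invented for the example and have nothing to do
with pmacct's actual map parser):

```python
# Simplified illustration of pre_tag_map evaluation with jeq/label:
# entries are tried in order; a match records the tag and, if 'jeq' is
# present, evaluation jumps forward to the entry carrying that label.
def eval_tag_map(entries, packet):
    """entries: list of dicts with 'tag', 'match' (a callable) and
    optionally 'jeq' or 'label'. Returns the list of tags set."""
    tags, i, pending_jump = [], 0, None
    while i < len(entries):
        e = entries[i]
        if pending_jump is not None:
            if e.get('label') != pending_jump:
                i += 1
                continue
            pending_jump = None
        if e['match'](packet):
            tags.append(e['tag'])
            if 'jeq' in e:
                pending_jump = e['jeq']
            else:
                break
        i += 1
    return tags

rules = [
    {'tag': 1, 'match': lambda p: p['dst'] == 'net-A',
     'jeq': 'eval_ifindexes'},
    {'tag': 2, 'match': lambda p: p['src'] == 'net-A',
     'jeq': 'eval_ifindexes'},
    {'tag': 100, 'match': lambda p: p['dst'] == 'host-X',
     'label': 'eval_ifindexes'},
    {'tag': 100, 'match': lambda p: p['src'] == 'host-X'},
]
# Direction (tag 1) is set first, then the jump lands on the labelled
# interface entries where tag2 (here: 100) is resolved.
print(eval_tag_map(rules, {'dst': 'net-A', 'src': 'host-X'}))  # -> [1, 100]
```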
c) build a NetFlow collector configuration, using nfacctd:
!
daemonize: true
nfacctd_ip:
nfacctd_port: 2100
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
! aggregate[display]: class, src_host, dst_host, src_port, dst_port, proto
d) Ok, we are done! Now fire both daemons:
shell a> pmacctd -f /path/to/configuration/pmacctd-nfprobe.conf
shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf
XIII. Quickstart guide to setup a sFlow agent/probe
pmacct can export traffic data via sFlow; this protocol is different from
NetFlow/IPFIX: in short, it works by exporting portions of sampled packets
rather than caching and building uni-directional flows as happens in
NetFlow; this stateless approach makes sFlow a light export protocol,
well-tailored for high-speed networks. Furthermore, sFlow v5 can be
extended much like NetFlow v9: meaning classification information (if nDPI
is compiled in, see the 'Quickstart guide to packet classification'
section of this document), tags or basic Extended Gateway information
(ie. src_as, dst_as) can be easily included in the record structure being
exported. Below a quickstart guide:
b) build a sFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
plugins: sfprobe
sampling_rate: 20
sfprobe_agentsubid: 1402
sfprobe_receiver:
!
! networks_file: /path/to/networks.lst
! snaplen: 700
!...
XIV. Quickstart guide to setup the BGP daemon
BGP can be run as a stand-alone collector daemon (pmbgpd, from 1.6.1) or
as a thread within one of the traffic accounting daemons (ie. nfacctd).
The stand-alone daemon is suitable for consuming BGP data only, real-time
or at regular intervals; the thread solution is suitable for correlation
of BGP with other data sources, ie. NetFlow, IPFIX, sFlow, etc. The idea
behind the thread implementation is to receive data-plane information, ie.
via NetFlow, sFlow, etc., and control-plane information, ie. full routing
tables via BGP, from edge routers. Per-peer BGP RIBs are maintained to
ensure local views of the network, a behaviour close to that of a BGP
route-server. In case of routers with default-only or partial BGP views,
the default route can be followed up (bgp_follow_default); also, it might
be desirable in certain situations, for example trading off resources for
accuracy, to entirely map one or a set of agents to a BGP peer
(bgp_agent_map).
A pre-requisite is that the pmacct package is configured for compiling
with support for threads. Nowadays this is the default setting, hence the
following line will do it:
shell> ./configure
The following configuration snippet shows how to set up a BGP thread (ie.
part of the NetFlow/IPFIX collector, nfacctd) which will bind to an IP
address and support up to a maximum of 100 peers. Once PE routers start
sending flow telemetry data and peer up, it should be possible to see the
BGP-related fields, ie. as_path, peer_as_dst, local_pref, med, etc.,
correctly populated while querying the memory table:
bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
! bgp_daemon_as: 65555
nfacctd_as: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as
Setting up the stand-alone BGP collector daemon, pmbgpd, is not very
different from the configuration above:
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
! bgp_daemon_as: 65555
bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 300
Essentially: the 'bgp_daemon: true' line is not required and there is no
need to instantiate plugins. On the other hand, the BGP daemon is
instructed to dump BGP tables to disk every 300 secs with file names
embedding the BGP peer info ($peer_src_ip) and a time reference (%H%M).
The BGP implementation, by default, reads the remote ASN upon receipt of a
BGP OPEN message and dynamically presents itself as part of the same ASN -
this is to ensure an iBGP relationship is established even in multi-ASN
scenarios. As of 1.6.2, it is possible to put pmacct in a specific ASN of
choice by using the bgp_daemon_as configuration directive, for example, to
establish an eBGP kind of relationship. Also, the daemon acts as a passive
BGP neighbor and hence will never try to re-establish a fallen peering
session. For debugging purposes related to the BGP feed(s), the
bgp_daemon_msglog_* configuration directives can be enabled in order to
log BGP messaging.
XIVa. Limiting AS-PATH and BGP community attributes length
AS-PATH and BGP communities can, by nature, easily get long when
represented as strings. Sometimes only a small portion of their content is
relevant to the accounting task, and hence a filtering layer was developed
to take special care of these attributes. The bgp_aspath_radius directive
cuts the AS-PATH down after a specified number of hops, whereas
bgp_stdcomm_pattern does a simple sub-string match against standard BGP
communities, filtering in only those that match (optionally, for better
precision, a pre-defined number of characters can be wildcarded by
employing the '.' symbol, like in regular expressions). See a typical
usage example below:
bgp_aspath_radius: 3
bgp_stdcomm_pattern: 12345:
A detailed description of these configuration directives is, as usual,
included in the CONFIG-KEYS document.
XIVb. The source peer AS case
The peer_src_as primitive adds useful insight in understanding where
traffic enters the observed routing domain; but asymmetric routing impacts
the accuracy delivered by devices configured with either NetFlow or sFlow
and the peer-as feature (as it only performs a reverse lookup, ie. a
lookup on the source IP address, in the BGP table, hence saying where the
device would route such traffic). pmacct offers a few ways to perform some
mapping to tackle this issue and easily model both private and public
peerings, either bi-lateral or multi-lateral. Find below how to use a map,
reloadable at runtime, and its contents (for full syntax guidelines,
please see the '' file within the examples section):
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/
[/path/to/]
set_tag=12345 ip=A.A.A.A in=10 bgp_nexthop=X.X.X.X
set_tag=34567 ip=A.A.A.A in=10
set_tag=45678 ip=B.B.B.B in=20 src_mac=00:11:22:33:44:55
set_tag=56789 ip=B.B.B.B in=20 src_mac=00:22:33:44:55:66
Even though all this mapping is static, it can be auto-provisioned to a
good degree by means of external scripts running at regular intervals and,
for example, querying relevant routers via SNMP. In this sense, the
bgpPeerTable MIB is a good starting point. Alternatively, pmacct also
offers the option to perform reverse BGP lookups.
NOTES:
* When mapping, the peer_src_as primitive doesn't really apply to egress
NetFlow (or egress sFlow) as it mainly relies on either the input
interface index (ifIndex), the source MAC address, a reverse BGP next-hop
lookup or a combination of these.
* "Source" MED, local preference, communities and AS-PATH have all been
allocated aggregation primitives. Each carries its own peculiarities but
the general concepts highlighted in this chapter apply to these as well.
Check CONFIG-KEYS out for the
src_[med|local_pref|as_path|std_comm|ext_comm|lrg_comm]_[type|map]
configuration directives.
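A much-simplified illustration of how such a map resolves peer_src_as
from flow attributes follows: the first entry whose keys all match wins.
The entries mirror the map example above; the lookup code itself is
illustrative only and is not how pmacct implements it:

```python
# Simplified first-match lookup over bgp_peer_src_as_map-style entries.
def lookup_peer_src_as(entries, flow):
    for e in entries:
        keys = {k: v for k, v in e.items() if k != 'set_tag'}
        if all(flow.get(k) == v for k, v in keys.items()):
            return e['set_tag']
    return None

entries = [
    {'set_tag': 12345, 'ip': 'A.A.A.A', 'in': 10,
     'bgp_nexthop': 'X.X.X.X'},
    {'set_tag': 34567, 'ip': 'A.A.A.A', 'in': 10},
    {'set_tag': 45678, 'ip': 'B.B.B.B', 'in': 20,
     'src_mac': '00:11:22:33:44:55'},
]
# Without a BGP next-hop the broader second entry matches:
print(lookup_peer_src_as(entries, {'ip': 'A.A.A.A', 'in': 10}))
```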
XIVc. Tracking entities on their own IP address space
It might happen that not all entities attached to the service provider
network are running BGP; rather, they get their IP prefixes redistributed
into iBGP (from different routing protocols, statics, directly connected,
etc.). These can be private IP addresses or segments of the SP public
address space. The common factor to all of them is that, while being
present in iBGP, these prefixes can't be tracked any further due to the
lack of attributes like AS-PATH or an ASN. To overcome this situation the
simplest approach is to employ a bgp_peer_src_as_map directive, described
previously (ie. making use of interface descriptions as a possible way to
automate the process). Alternatively, the bgp_stdcomm_pattern_to_asn
directive was developed to fit into this scenario: assuming the procedures
of a SP are (or can be changed) to label the IP prefixes of every relevant
non-BGP speaking entity uniquely with a BGP standard community, this
directive allows to map the community to a peer AS/origin AS couple as per
the following example: XXXXX:YYYYY => Peer-AS=XXXXX, Origin-AS=YYYYY.
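The community-to-ASN mapping just described is a straightforward split;
as a worked example (illustrative code, not pmacct's):

```python
# Illustrate bgp_stdcomm_pattern_to_asn: map a standard community
# XXXXX:YYYYY to a (peer AS, origin AS) couple.
def comm_to_asns(community):
    peer_as, origin_as = community.split(':')
    return int(peer_as), int(origin_as)

print(comm_to_asns('65001:65002'))  # -> (65001, 65002)
```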
XIVd. Preparing the router to BGP peer
Once the collector is configured and started up, the remaining step is to
let routers export traffic samples to the collector and BGP peer with it.
Configuring the same source IP address across both the NetFlow and BGP
features allows the pmacct collector to perform the required correlations.
Also, setting the BGP Router ID accordingly allows for clearer log
messages. It's advisable to configure the collector at the routers as a
Route-Reflector (RR) client.
A relevant configuration example for a Cisco router follows:
ip flow-export source Loopback12345
ip flow-export version 5
ip flow-export destination X.X.X.X 2100
!
router bgp 12345
neighbor X.X.X.X remote-as 12345
neighbor X.X.X.X update-source Loopback12345
neighbor X.X.X.X version 4
neighbor X.X.X.X send-community
neighbor X.X.X.X route-reflector-client
neighbor X.X.X.X description nfacctd
A relevant configuration example for a Juniper router follows:
forwarding-options {
    sampling {
        output {
            cflowd X.X.X.X {
                port 2100;
                source-address Y.Y.Y.Y;
                version 5;
            }
        }
    }
}
protocols bgp {
    group rr-netflow {
        type internal;
        local-address Y.Y.Y.Y;
        family inet {
            any;
        }
        cluster Y.Y.Y.Y;
        neighbor X.X.X.X {
            description "nfacctd";
        }
    }
}
XIVe. Example: writing flows augmented by BGP to a MySQL database
The following setup is a realistic example for collecting an external
traffic matrix to the ASN level (ie. no IP prefixes collected) for a
MPLS-enabled IP carrier network. Samples are aggregated in a way which is
suitable to get an overview of traffic trajectories, collecting much
information on where flows enter the AS and where they get out.
daemonize: true
nfacctd_port: 2100
nfacctd_time_new: true
plugins: mysql[5mins], mysql[hourly]
sql_optimize_clauses: true
sql_dont_try_update: true
sql_multi_values: 1024000
sql_history_roundoff[5mins]: m
sql_history[5mins]: 5m
sql_refresh_time[5mins]: 300
sql_table[5mins]: acct_bgp_5mins
sql_history_roundoff[hourly]: h
sql_history[hourly]: 1h
sql_refresh_time[hourly]: 3600
sql_table[hourly]: acct_bgp_1hr
bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
bgp_aspath_radius: 3
bgp_follow_default: 1
nfacctd_as: bgp
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/
plugin_buffer_size: 10240
plugin_pipe_size: 1024000
aggregate: tag, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip, peer_dst_ip, local_pref, as_path
pre_tag_map: /path/to/
maps_refresh: true
maps_entries: 3840
The content of the maps (bgp_peer_src_as_map, pre_tag_map) is meant to be
pretty standard and will not be shown. As can be grasped from the above
configuration, the SQL schema was customized. Below is a suggestion on how
this can be modified for more efficiency - with additional INDEXes, to
speed up the response time of specific queries, remaining to be worked
out:
create table acct_bgp_5mins (
    agent_id INT(4) UNSIGNED NOT NULL,
    as_src INT(4) UNSIGNED NOT NULL,
    as_dst INT(4) UNSIGNED NOT NULL,
    peer_as_src INT(4) UNSIGNED NOT NULL,
    peer_as_dst INT(4) UNSIGNED NOT NULL,
    peer_ip_src CHAR(15) NOT NULL,
    peer_ip_dst CHAR(15) NOT NULL,
    as_path CHAR(21) NOT NULL,
    local_pref INT(4) UNSIGNED NOT NULL,
    packets INT UNSIGNED NOT NULL,
    stamp_inserted DATETIME NOT NULL,
    stamp_updated DATETIME,
    PRIMARY KEY (id),
    INDEX ...
);
create table acct_bgp_1hr (
    agent_id INT(4) UNSIGNED NOT NULL,
    as_src INT(4) UNSIGNED NOT NULL,
    as_dst INT(4) UNSIGNED NOT NULL,
    peer_as_src INT(4) UNSIGNED NOT NULL,
    peer_as_dst INT(4) UNSIGNED NOT NULL,
    peer_ip_src CHAR(15) NOT NULL,
    peer_ip_dst CHAR(15) NOT NULL,
    as_path CHAR(21) NOT NULL,
    local_pref INT(4) UNSIGNED NOT NULL,
    packets INT UNSIGNED NOT NULL,
    stamp_inserted DATETIME NOT NULL,
    stamp_updated DATETIME,
    PRIMARY KEY (id),
    INDEX ...
);
Although table names are fixed in this example, ie. acct_bgp_5mins, in
real life it can be highly advisable to run dynamic SQL tables, ie. table
names that include time-related variables (see sql_table, sql_table_schema
in CONFIG-KEYS).
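For instance, a dynamic table name could embed strftime-style variables;
the fragment below is a hypothetical variation of the 5mins plugin above
(the schema file path is illustrative; see sql_table and sql_table_schema
in CONFIG-KEYS for the exact semantics):

```
sql_table[5mins]: acct_bgp_5mins_%Y%m%d
sql_table_schema[5mins]: /path/to/acct_bgp_5mins.schema
```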
XIVf. Example: exporting BGP tables or messaging to files or AMQP/Kafka
brokers
Both the stand-alone BGP collector daemon (pmbgpd) and the BGP thread
within one of the traffic accounting daemons can: a) export/dump routing
tables for all BGP peers at regular time intervals and b) log BGP
messaging, real-time, with each of the BGP peers. Both features produce
data useful for analytics, troubleshooting and debugging. The former is
beneficial to gain visibility into extra BGP data while providing event
compression; the latter enables BGP analytics and BGP event management,
for example spotting unstable routes, triggering alarms on route hijacks,
etc.
Both features export data formatted as JSON messages, hence compiling
pmacct against libjansson is a requirement. See how to compile pmacct with
JSON/libjansson support in the section "Compiling pmacct with JSON
support" of this document. If writing to AMQP or Kafka brokers, compiling
against the RabbitMQ or Kafka libraries is required; read more in,
respectively, the "Running the RabbitMQ/AMQP plugin" and "Running the
Kafka plugin" sections of this document.
A basic dump of BGP tables at regular intervals (60 secs) to plain-text
files, split by BGP peer and time of the day, is configured as follows:
bgp_table_dump_file: /path/to/spool/bgp/bgp-$peer_src_ip-%H%M.txt
bgp_table_dump_refresh_time: 60
A basic log of BGP messaging in near real-time to a plain-text file (which
can be rotated by an external tool/script) is configured as follows:
bgp_daemon_msglog_file: /path/to/spool/bgp/bgp-$peer_src_ip.log
A basic dump of BGP tables at regular intervals (60 secs) to a Kafka
broker, listening on the localhost and default port, is configured as
follows:
bgp_table_dump_kafka_topic: pmacct.bgp
bgp_table_dump_refresh_time: 60
The equivalent bgp_table_dump_amqp_routing_key config directive can be
used to make the above example work against a RabbitMQ broker.
A basic log of BGP messaging in near real-time to a Kafka broker,
listening on the localhost and default port, is configured as follows:
bgp_daemon_msglog_kafka_topic: pmacct.bgp
The equivalent bgp_daemon_msglog_amqp_routing_key config directive can be
used to make the above example work against a RabbitMQ broker.
A sample of both the BGP msglog and dump formats is captured in the
following document: docs/MSGLOG_DUMP_FORMATS
XIVg. BGP daemon implementation concluding notes
The implementation supports 4-byte ASNs, IPv4, IPv6, VPNv4 and VPNv6
(MP-BGP) address families and ADD-PATH (draft-ietf-idr-add-paths); both
IPv4 and IPv6 BGP sessions are supported. When storing data via SQL, BGP
primitives can be freely mixed and matched with other primitives (ie.
L2/L3/L4) when customizing the SQL table (sql_optimize_clauses: true).
Environments making use of BGP Multi-Path should make use of ADD-PATH to
advertise known paths, in which case the correct BGP info is linked to
traffic data using the BGP next-hop (or IP next-hop if use_ip_next_hop is
set to true) as selector among the available paths (on the assumption that
ADD-PATH is used for route diversity; all checked implementations seem to
tend not to advertise paths with the same next-hop). TCP MD5 signature for
BGP messages is also supported. For a review of all knobs and features see
the CONFIG-KEYS document.
XV. Quickstart guide to setup a NetFlow/IPFIX/sFlow replicator

The 'tee' plugin is meant to replicate NetFlow/sFlow data to remote collectors.
The plugin can act transparently, by preserving the original IP address of the
datagrams, or as a proxy. Basic configuration of a replicator is very easy: all
that is needed is where to listen for incoming packets, where to replicate them
to and, optionally, a filtering layer. Filtering is based on the standard
pre_tag_map infrastructure; presented here is only coarse-grained filtering
against the NetFlow/sFlow source IP address (see the next section for
finer-grained filtering).
nfacctd_port: 2100
nfacctd_ip: X.X.X.X
!
plugins: tee[a], tee[b]
tee_receivers[a]: /path/to/tee_receivers_a.lst
tee_receivers[b]: /path/to/tee_receivers_b.lst
! tee_transparent: true
!
! pre_tag_map: /path/to/
!
plugin_buffer_size: 10240
plugin_pipe_size: 1024000
nfacctd_pipe_size: 1024000
An example of the content of a tee_receivers map, ie. /path/to/tee_receivers_a.lst,
follows ('id' is the pool ID and 'ip' a comma-separated list of receivers for
that pool):

id=1 ip=W.W.W.W:2100
id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100
! id=1 ip=W.W.W.W:2100 tag=0
! id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100 tag=100

The number of tee_receivers map entries (384 by default) can be modified via
maps_entries. Content can be reloaded at runtime by sending the daemon a
SIGUSR2 signal (ie. "killall -USR2 nfacctd").
Selective teeing allows filtering which pool of receivers has to receive which
datagrams. Tags are applied via a pre_tag_map; the one illustrated below applies
tag 100 to packets exported from agents A.A.A.A, B.B.B.B and C.C.C.C; in case
there was also an agent D.D.D.D exporting towards the replicator, its packets
would intuitively remain untagged. Tags are matched by a tee_receivers map, see
above the two pool definitions commented out containing the 'tag' keyword: the
definition would cause untagged packets (tag=0) to be replicated only to pool
#1 whereas packets tagged as 100 (tag=100) would be replicated only to pool #2.
More examples are in the tee_receivers.lst.example and other example files in
the examples/ sub-tree:

set_tag=100 ip=A.A.A.A
set_tag=100 ip=B.B.B.B
set_tag=100 ip=C.C.C.C
To enable the transparent mode, the tee_transparent line should be uncommented
(set to true). It preserves the original IP address of the NetFlow/sFlow sender
while replicating by essentially spoofing it. This feature is not global and can
be freely enabled only on a subset of the active replicators. It requires
super-user permissions in order to run.

Concluding note: the 'tee' plugin is not compatible with different plugins
within the same daemon instance. So if in need of using pmacct for both
collecting and replicating data, two separate instances must be used
(intuitively with the replicator instance feeding the collector one).
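As a sketch of such a two-instance layout, the replicator can feed the collector
over the loopback interface. All paths, ports and addresses below are
illustrative assumptions, not taken from the original text; tee_transparent is
left disabled since spoofed source addresses would not be routable to loopback:

```
! replicator.conf: receive exports on port 2100, tee them to the local collector
nfacctd_port: 2100
plugins: tee[a]
tee_receivers[a]: /path/to/tee_receivers.lst

! /path/to/tee_receivers.lst: pool #1 is the co-located collector instance
id=1 ip=

! collector.conf: a second nfacctd instance accounting the replicated data
nfacctd_ip:
nfacctd_port: 2101
plugins: memory
```

The two daemons would then be started separately, ie. "nfacctd -f
replicator.conf" and "nfacctd -f collector.conf".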
XVa. Splitting and dissecting sFlow flow samples

Starting with pmacct 1.6.2, it is possible to perform finer-grained filtering,
ie. against flow-specific primitives, when replicating. For example: replicate
flows from or to MAC address X1, X2 .. Xn to receiver Y, or replicate flows in
VLAN W to receiver Z. The feature works by inspecting the original packet and
dissecting it as needed, the most popular use-case being IXPs replicating flows
back to the members originating and/or receiving them. Some of the supported
primitives are: source and destination MAC addresses, input/output interface
ifindex; the full list is available in examples/ (look for
"sfacctd, nfacctd when in 'tee' mode").
The feature is configured just like the selective teeing shown in the previous
section. Incoming packets are tagged with a pre_tag_map and then matched to a
receiver in tee_receivers. Also, by setting tee_dissect_send_full_pkt to true
(false by default) the original full frame is sent over to the tee plugin. For
example: replicate flows from/to MAC address XX:XX:XX:XX:XX:XX to receiver Y,
replicate flows from/to MAC address WW:WW:WW:WW:WW:WW to receiver Z, replicate
any remaining flows plus original frames to receiver J.

This is the pre_tag_map:

set_tag=100 ip= src_mac=XX:XX:XX:XX:XX:XX
set_tag=100 ip= dst_mac=XX:XX:XX:XX:XX:XX
set_tag=200 ip= src_mac=WW:WW:WW:WW:WW:WW
set_tag=200 ip= dst_mac=WW:WW:WW:WW:WW:WW
set_tag=999 ip=

This is the tee_receivers map:

id=100 ip=Y.Y.Y.Y:2100 tag=100
id=200 ip=Z.Z.Z.Z:2100 tag=200
id=999 ip=J.J.J.J:2100 tag=999
This is the relevant section from sfacctd.conf:

[ .. ]
!
tee_transparent: true
maps_index: true
!
plugins: tee[a]
!
tee_receivers[a]: /path/to/tee_receivers.lst
pre_tag_map[a]: /path/to/
tee_dissect_send_full_pkt[a]: true
There are a few restrictions to the feature: 1) only sFlow v5 is supported, ie.
no NetFlow/IPFIX and no sFlow v2-v4; 2) only sFlow flow samples are supported,
ie. no counter samples. There are also a few known limitations, all boiling
down to non-contextual replication: 1) once split, flows are not muxed back
together, ie. in case multiple samples part of the same packet are to be
replicated to the same receiver; 2) sequence numbers are untouched: the most
obvious consequence being that receivers may detect non-contiguous sequencing
progressions or false duplicates. If you are negatively affected by any of
these restrictions or limitations, or you need other primitives to be supported
by this feature, please do get in touch.
XVI. Quickstart guide to setup the IS-IS daemon

pmacct integrates an IS-IS daemon as part of the IP accounting collectors. The
daemon is run as a thread within the collector core process. The idea is to
receive data-plane information, ie. via NetFlow, sFlow, etc., and control-plane
information via IS-IS. Currently a single L2 P2P neighborship, ie. over a GRE
tunnel, is supported. The daemon is currently used for the purpose of route
resolution. A sample scenario could be that more specific internal routes might
be configured to get summarized in BGP while crossing cluster boundaries.

Pre-requisite for the use of the IS-IS daemon is that the pmacct package has to
be configured for compilation with threads; this line will do it:

shell> ./configure
XVIa. Preparing the collector for the L2 P2P IS-IS neighborship

It is assumed the collector sits on an Ethernet segment and has no direct link
(L2) connectivity to an IS-IS speaker, hence the need to establish a GRE
tunnel. While extensive literature and OS-specific examples exist on the topic,
a brief example for Linux, consistent with the rest of the chapter, is provided
below:

ip tunnel add gre2 mode gre remote local ttl 255
ip link set gre2 up
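For illustration only, a complete tunnel setup might look as follows; the
addresses below (, and are hypothetical
placeholders, not values from the original text:

```
# collector side: is the router's tunnel endpoint,
# the collector's own address, the tunnel
# interface address the IS-IS daemon will later bind to
ip tunnel add gre2 mode gre remote local ttl 255
ip link set gre2 up
ip addr add dev gre2
```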
The following configuration fragment is sufficient to set up an IS-IS daemon
which will bind to a network interface gre2 configured with IP address
in an IS-IS area 49.0001 and a CLNS MTU set to 1400:

isis_daemon: true
isis_daemon_ip:
isis_daemon_net: 49.0001.0100.0000.1001.00
isis_daemon_iface: gre2
isis_daemon_mtu: 1400
! isis_daemon_msglog: true
XVIb. Preparing the router for the L2 P2P IS-IS neighborship

Once the collector is ready, the remaining step is to configure a remote router
for the L2 P2P IS-IS neighborship. The following bit of configuration (based on
Cisco IOS) will match the above fragment of configuration for the IS-IS daemon:

interface Tunnel0
 ip address
 ip router isis
 tunnel source FastEthernet0
 tunnel destination XXX.XXX.XXX.XXX
 clns mtu 1400
 isis metric 1000
!
router isis
 net 49.0001.0100.0000.1002.00
 is-type level-2-only
 metric-style wide
 log-adjacency-changes
 passive-interface Loopback0
!
XVII. Quickstart guide to setup the BMP daemon

BMP can be run as a stand-alone collector daemon (pmbmpd, from 1.6.1) or as a
thread within one of the traffic accounting daemons (ie. nfacctd). The stand-
alone daemon is suitable for consuming BMP data only, real-time or at regular
intervals; the thread solution is suitable for correlation of BMP with other
data sources, ie. NetFlow, IPFIX, sFlow, etc. The implementation was originally
based on the draft-ietf-grow-bmp-07 IETF document (whereas the current review
is against draft-ietf-grow-bmp-17). If unfamiliar with BMP, to quote the IETF
document: "BMP is intended to provide a more convenient interface for obtaining
route views for research purpose than the screen-scraping approach in common
use today. The design goals are to keep BMP simple, useful, easily implemented,
and minimally service-affecting.". The BMP daemon currently supports BMP data,
events and stats, ie. initiation, termination, peer up, peer down, stats and
route monitoring messages. The daemon can write BMP messages to files, AMQP and
Kafka brokers, in real-time (msglog) or at regular time intervals (dump).
Also, route monitoring messages are saved in a RIB structure for IP prefix
lookup.
All features export data formatted as JSON messages, hence compiling pmacct
against libjansson is a requirement. See how to compile pmacct with JSON/
libjansson support in the section "Compiling pmacct with JSON support" of this
document. If writing to AMQP or Kafka brokers, compiling against RabbitMQ or
Kafka libraries is required; read more in, respectively, the "Running the
RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this document.

Following is a simple example of how to configure nfacctd to enable the BMP
thread to a) log, in real-time, BGP stats, events and routes received via BMP
to a text-file (bmp_daemon_msglog_file) and b) dump the same (ie. BGP stats and
events received via BMP) to a text-file at regular time intervals
(bmp_dump_refresh_time, bmp_dump_file):

bmp_daemon: true
!
bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log
!
bmp_dump_file: /path/to/bmp-$peer_src_ip-%H%M.dump
bmp_dump_refresh_time: 60
Following is a simple example of how to configure nfacctd to enable the BMP
thread to a) log, in real-time, BGP stats, events and routes received via BMP
to a Kafka broker (bmp_daemon_msglog_kafka_topic) and b) dump the same (ie. BGP
stats and events received via BMP) to a text-file at regular time intervals
(bmp_dump_refresh_time, bmp_dump_kafka_topic):

bmp_daemon: true
!
bmp_daemon_msglog_kafka_topic: pmacct.bmp-msglog
!
bmp_dump_kafka_topic: pmacct.bmp-dump
bmp_dump_refresh_time: 60

The equivalent bmp_daemon_msglog_amqp_routing_key and bmp_dump_amqp_routing_key
config directives can be used to make the above example work against a RabbitMQ
broker.

Samples of both the BMP msglog and dump formats are captured in the following
document: docs/MSGLOG_DUMP_FORMATS
Setting up the stand-alone BMP collector daemon, pmbmpd, is exactly the same as
the configuration above, except that the 'bmp_daemon: true' line can be
skipped.
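As a sketch, a minimal pmbmpd configuration could look as follows; the
directives mirror the nfacctd examples above, while the listening address and
the config file name are illustrative assumptions:

```
! pmbmpd.conf: stand-alone BMP collection, logging to a file in real-time
bmp_daemon_ip: X.X.X.X
bmp_daemon_port: 1790
!
bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log
```

The daemon would then be started with "pmbmpd -f pmbmpd.conf".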
Following is an example of how a Cisco router running IOS/IOS-XE should be
configured in order to export BMP data to a collector:

router bgp 64512
 bmp server 1
  address X.X.X.X port-number 1790
  initial-delay 60
  failure-retry-delay 60
  flapping-delay 60
  stats-reporting-period 300
  activate
  exit-bmp-server-mode
 !
 neighbor Y.Y.Y.Y remote-as 64513
 neighbor Y.Y.Y.Y bmp-activate all
 neighbor Z.Z.Z.Z remote-as 64514
 neighbor Z.Z.Z.Z bmp-activate all
Following is an example of how a Cisco router running IOS-XR should be
configured in order to export BMP data to a collector:

router bgp 64512
 neighbor Y.Y.Y.Y
  bmp-activate server 1
 neighbor Z.Z.Z.Z
  bmp-activate server 1
 !
!
bmp server 1
 host X.X.X.X port 1790
 initial-delay 60
 initial-refresh delay 60
 stats-reporting-period 300
!

Following is an example of how a Juniper router should be configured in order
to export BMP data to a collector:

routing-options {
    bmp {
        station FQDN {
            connection-mode active;
            monitor enable;
            route-monitoring {
                pre-policy;
                post-policy;
            }
            station-address X.X.X.X;
            station-port 1790;
        }
    }
}
Any equivalent examples for other vendors implementing BMP are welcome.
XVIII. Quickstart guide to setup Streaming Telemetry collection

Quoting the Cisco IOS-XR Telemetry Configuration Guide at the time of this
writing: "Streaming telemetry lets users direct data to a configured receiver.
This data can be used for analysis and troubleshooting purposes to maintain the
health of the network. This is achieved by leveraging the capabilities of
machine-to-machine communication. The data is used by development and
operations (DevOps) personnel who plan to optimize networks by collecting
analytics of the network in real-time, locate where problems occur, and
investigate issues in a collaborative manner.". Streaming telemetry support
comes in pmacct in two flavours: 1) a telemetry thread can be started in
existing daemons, ie. sFlow, NetFlow/IPFIX, etc., for the purpose of data
correlation and 2) a new daemon, pmtelemetryd, for standalone consumption of
data. Streaming telemetry data can be logged real-time and/or dumped at
regular time intervals to flat-files, RabbitMQ or Kafka brokers.

All features export data formatted as JSON messages, hence compiling pmacct
against libjansson is a requirement. See how to compile pmacct with JSON/
libjansson support in the section "Compiling pmacct with JSON support" of this
document. If writing to AMQP or Kafka brokers, compiling against RabbitMQ or
Kafka libraries is required; read more in, respectively, the "Running the
RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this document.
From a configuration standpoint both the thread (ie. telemetry configured as
part of nfacctd) and the daemon (pmtelemetryd) are configured the same way,
except the thread must be explicitly enabled with a 'telemetry_daemon: true'
config line. Hence the following examples hold for both the thread and the
daemon setups.

Following is a config example to receive telemetry data in JSON format over UDP
port 1620 and log it real-time to flat-files:

! Telemetry thread configuration
! telemetry_daemon: true
!
telemetry_daemon_port_udp: 1620
telemetry_daemon_decoder: json
!
telemetry_daemon_msglog_file: /path/to/spool/telemetry-msglog-$peer_src_ip.txt
! telemetry_daemon_msglog_amqp_routing_key: telemetry-msglog
! telemetry_daemon_msglog_kafka_topic: telemetry-msglog
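A setup like the above can be smoke-tested by pushing a JSON document at the
daemon with any UDP client; the netcat invocation and the payload below are
illustrative assumptions, not taken from the original text:

```
echo '{"ifName": "ge-0/0/0", "octets": 1234}' | nc -u -w 1 1620
```

If all is well, the document should shortly appear in the msglog file (or
topic/routing key) configured above, wrapped in the daemon's JSON envelope.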
Following is a config example to receive telemetry data with the Cisco
proprietary header (12 bytes), in compressed JSON format over TCP port 1620,
and dump it at 60 secs time intervals to flat-files:

! Telemetry thread configuration
! telemetry_daemon: true
!
telemetry_daemon_port_tcp: 1620
telemetry_daemon_decoder: cisco_zjson
!
telemetry_dump_file: /path/to/spool/telemetry-dump-$peer_src_ip-%Y%m%d-%H%M.txt
telemetry_dump_latest_file: /path/to/spool/telemetry-dump-$peer_src_ip.latest
! telemetry_dump_amqp_routing_key: telemetry-dump
! telemetry_dump_kafka_topic: telemetry-dump
!
telemetry_dump_refresh_time: 60

Samples of both the Streaming Telemetry msglog and dump formats are captured in
the following document: docs/MSGLOG_DUMP_FORMATS
XIX. Running the print plugin to write to flat-files

pmacct can also output to files via its 'print' plugin. Dynamic filenames are
supported. Output is either text-based using JSON, CSV or formatted outputs, or
binary-based using the Apache Avro file container ('print_output' directive).
The interval between writes can be configured via the 'print_refresh_time'
directive. An example follows on how to write to files on a 15 mins basis in
CSV format:

print_refresh_time: 900
print_history: 15m
print_output: csv
print_output_file: /path/to/file-%Y%m%d-%H%M.txt
print_history_roundoff: m

Which, over time, would produce a series of files as follows:

-rw------- 1 paolo paolo 2067 Nov 21 00:15 blabla-20111121-0000.txt
-rw------- 1 paolo paolo 2772 Nov 21 00:30 blabla-20111121-0015.txt
-rw------- 1 paolo paolo 1916 Nov 21 00:45 blabla-20111121-0030.txt
-rw------- 1 paolo paolo 2940 Nov 21 01:00 blabla-20111121-0045.txt

JSON output requires compiling pmacct against the Jansson library. See how to
compile pmacct with JSON/libjansson support in the section "Compiling pmacct
with JSON support" of this document.

Avro output requires compiling pmacct against the libavro library. See how to
compile pmacct with Avro support in the section "Compiling pmacct with Apache
Avro support" of this document.
Splitting data into time bins is supported via the print_history directive.
When enabled, time-related variable substitutions of dynamic print_output_file
names are determined using this value. It is supported to define
print_refresh_time values shorter than print_history ones by setting
print_output_file_append to true (which is generally also recommended, to
prevent unscheduled writes to disk, ie. due to caching issues, from overwriting
existing file content). A sample config follows:

print_refresh_time: 300
print_history: 15m
print_output: csv
print_output_file: /path/to/%Y/%Y-%m/%Y-%m-%d/file-%Y%m%d-%H%M.txt
print_history_roundoff: m
print_output_file_append: true
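To illustrate the binning arithmetic, the following is a minimal Python sketch
(not pmacct code) of how, with print_history: 15m, the time-related variables
in a dynamic filename round down to the current 15-minute bin, so three
consecutive 5-minute refreshes all append to the same file:

```python
from datetime import datetime, timedelta

def history_bin(ts: datetime, history_mins: int) -> str:
    """Round a timestamp down to its print_history bin, as used for
    time-related variable substitution in dynamic filenames."""
    binned = ts.replace(second=0, microsecond=0)
    binned -= timedelta(minutes=binned.minute % history_mins)
    return binned.strftime("file-%Y%m%d-%H%M.txt")

# With print_history: 15m, three 5-min refreshes land in the same file:
for minute in (0, 5, 10):
    print(history_bin(datetime(2011, 11, 21, 0, minute), 15))
# all three print: file-20111121-0000.txt
```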
XX. Quickstart guide to setup GeoIP lookups

pmacct can perform GeoIP country lookups against a Maxmind DB v1
(--enable-geoip) and against a Maxmind DB v2 (--enable-geoipv2). A v1 database
enables resolution of the src_host_country and dst_host_country primitives
only. A v2 database enables resolution of all presently supported
GeoIP-related primitives, ie. src_host_country, src_host_pocode,
dst_host_country, dst_host_pocode. Pre-requisites for the feature to work are:
a) a working installed Maxmind GeoIP library and headers and b) a Maxmind GeoIP
database (freely available). Two steps to quickly start with GeoIP lookups in
pmacct:
GeoIP v1 (libGeoIP):

* Have the libGeoIP library and headers available to compile against; have a
  GeoIP database also available.

* To compile the pmacct package with support for GeoIP lookups, the code must
  be configured for compilation as follows:

  ./configure --enable-geoip [ ... ]

  But, for example, should you have installed libGeoIP in /usr/local/geoip and
  pkg-config is unable to help, you can supply this non-default location as
  follows (assuming you are running the bash shell):

  export GEOIP_LIBS="-L/usr/local/geoip/lib -lgeoip"
  export GEOIP_CFLAGS="-I/usr/local/geoip/include"
  ./configure --enable-geoip [ ... ]

* Include as part of the pmacct configuration the following fragment:

  ...
  geoip_ipv4_file: /path/to/GeoIP/GeoIP.dat
  aggregate: src_host_country, dst_host_country, ...
  ...
GeoIP v2 (libmaxminddb):

* Have the libmaxminddb library and headers available to compile against; have
  also a database available. Only the database binary format is supported.

* To compile the pmacct package with support for GeoIP lookups, the code must
  be configured for compilation as follows:

  ./configure --enable-geoipv2 [ ... ]

  But, for example, should you have installed libmaxminddb in
  /usr/local/geoipv2 and pkg-config is unable to help, you can supply this
  non-default location as follows (assuming you are running the bash shell):

  export GEOIPV2_LIBS="-L/usr/local/geoipv2/lib -lmaxminddb"
  export GEOIPV2_CFLAGS="-I/usr/local/geoipv2/include"
  ./configure --enable-geoipv2 [ ... ]

* Include as part of the pmacct configuration the following fragment:

  ...
  geoipv2_file: /path/to/GeoIP/GeoLite2-Country.mmdb
  aggregate: src_host_country, dst_host_country, ...
  ...
Concluding notes: 1) the use of --enable-geoip is mutually exclusive with
--enable-geoipv2; 2) more fine-grained GeoIP lookup primitives (ie. cities,
states, counties, metro areas, zip codes, etc.) are not yet supported: should
you be interested in any of these, please get in touch.
XXI. Using pmacct as traffic/event logger

pmacct was originally conceived as a traffic aggregator. It is now possible to
use pmacct as a traffic/event logger as well, a development fostered
particularly by the use of NetFlow/IPFIX as a generic transport, see for
example Cisco NEL and Cisco NSEL. Key to logging are the time-stamping
primitives, timestamp_start and timestamp_end: the former records the likes of
the libpcap packet timestamp, sFlow sample arrival time, NetFlow observation
time and flow first switched time; timestamp_end currently only makes sense
when logging flows via NetFlow. Still, the exact boundary between aggregation
and logging can be defined via the aggregation method, ie. no assumptions are
made. An example to log traffic flows follows:

! ...
!
plugins: print[traffic]
!
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
! ...
An example to log specifically CGNAT (Carrier Grade NAT) events from a
Cisco ASR1K box follows:

! ...
!
plugins: print[nat]
!
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...
The two examples above can intuitively be merged into a single configuration so
as to log both traffic flows and events in parallel. To split flows accounting
from events, ie. to different files, a pre_tag_map and two print plugins can be
used as follows:

! ...
!
pre_tag_map: /path/to/
!
plugins: print[traffic], print[nat]
!
pre_tag_filter[traffic]: 10
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
pre_tag_filter[nat]: 20
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...
In the above configuration both plugins will log their data into 5-minute files
based on the 'print_history[<plugin name>]: 5m' configuration directive, ie.
traffic-20130802-1345.txt traffic-20130802-1350.txt traffic-20130802-1355.txt
etc. Provided appending to the output file is set to true, data can be
refreshed at shorter intervals than 300 secs. This is a snippet from the
/path/to/ map referred to above:

set_tag=10 ip=A.A.A.A sample_type=flow
set_tag=20 ip=A.A.A.A sample_type=event
set_tag=10 ip=B.B.B.B sample_type=flow
set_tag=20 ip=B.B.B.B sample_type=event
!
! ...
XXII. Miscellaneous notes and troubleshooting tips

This chapter will hopefully build up to the point of providing a taxonomy of
popular cases to troubleshoot, by daemon, and what to do about them. Although
that is the plan, the current format is sparse notes.

When reporting a bug: please report in all cases the pmacct version that you
are experiencing your issue against; the CLI option -V of the daemon you are
using returns all the info needed (daemon, version, specific release and
options compiled in). Do realise that if you are using a pre-packaged version
from your OS and/or old code (ie. not master code on GitHub or the latest
official release), you may very possibly be asked to try one of these first.
Finally, please refrain from opening issues on GitHub if not using master code
(use the pmacct-discussion mailing list or unicast email instead).
a) Here is a recap of some popular issues when compiling pmacct or linking it
at runtime against shared libraries:

1) /usr/local/sbin/pmacctd: error while loading shared libraries:
   cannot open shared object file: No such file or directory

   This can happen at runtime and, especially in case of freshly downloaded
   and compiled libraries, it is a symptom that, after installing the shared
   library, ldconfig was not called. Alternatively, the directory where the
   library is located is not listed in /etc/ or in any files it includes.
2) nfv9_template.c: In function ‘nfacctd_offline_read_json_template’:
   nfv9_template.c:572:53: error: expected ‘;’ before ‘{’ token
     json_array_foreach(json_list, key, value) {
                                               ^

   This can happen at compile time and is a bit tricky to diagnose. In this
   example the function json_array_foreach() is not being recognized; in
   other words, while the library could be located, it does not contain the
   specific function. This is a symptom that the library version in use is
   too old. A typical situation is using a packaged library rather than a
   freshly downloaded and compiled latest stable release.
3) /usr/local/lib/ undefined reference to `pcap_lex'
   collect2: error: ld returned 1 exit status
   make[2]: *** [pmacctd] Error 1

   This can happen at build (link) time and is a symptom that the needed
   library could not be located by the linker, likely because the library is
   in some non-standard location and the linker needs a hint. For libpcap the
   --with-pcap-libs knob is available at configure time; for all other
   libraries the library_LIBS and library_CFLAGS environment variables are
   available. See examples in the "Configuring pmacct for compilation and
   installing" section of this document.
b) In case of crashes of any process, regardless of whether predictable or not,
the advice is to run the daemon with "ulimit -c unlimited" so as to generate a
core dump. The file is placed in the directory where the daemon is started, so
it is good to take care of that. pmacct developers will then ask for one or
both of the following: 1) the core file, along with the crashing executable and
its configuration, be made available for further inspection and/or 2) a
backtrace in GDB obtained via the following two steps:

shell> gdb /path/to/executable /path/to/core

Then, once in the gdb console, the backtrace output can be obtained with the
following command:

gdb> bt
Optionally, especially if the issue can be easily reproduced, the daemon can
be re-configured for compilation with the --debug flag so as to produce extra
info suitable for troubleshooting.
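The same backtrace can also be captured non-interactively, which is handy for
attaching to a bug report; the paths below are placeholders:

```
shell> gdb -batch -ex "bt full" /path/to/executable /path/to/core > backtrace.txt
```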
c) In case of (suspected) memory leaks, the advice is to: 1) re-compile pmacct
with "./configure --debug <any other flags already in use>"; --debug sets
CFLAGS to -O0 -g -Wall, where especially -O0 is crucial since it disables any
code optimizations the compiler may introduce; 2) run the resulting daemon
under valgrind, ie. "valgrind --leak-check=yes <pmacct command-line>". A memory
leak is confirmed if the amount of "definitely lost" bytes keeps increasing
over time.
d) In the two cases of nfacctd/sfacctd or nfprobe/sfprobe not showing signs of
input/output data: 1) check with tcpdump, ie. "tcpdump -i <interface> -n port
<sfacctd/nfacctd listening port>", that packets are emitted/received.
Optionally Wireshark (or its command-line counterpart tshark) can be used, in
conjunction with decoders ('cflow' for NetFlow/IPFIX and 'sflow' for sFlow),
to validate that packets are consistent; this proves there is no filtering
taking place in between exporters and collector; 2) check firewall settings on
the collector box, ie. "iptables -L -n" on Linux (disable the firewall or open
the appropriate holes): tcpdump may still see packets hitting the listening
port because, in normal kernel operations, the filtering happens after the raw
socket (the one used by tcpdump) is served; you can additionally check with
3rd party equivalent applications or, say, 'netcat' that the same behaviour is
obtained as with the pmacct ones; 3) especially in case of copy/paste of
configs, or if using a config from a production system in a lab, disable or
double-check values for internal buffering: if set too high, they will likely
retain data internally to the daemon; 4) if multiple interfaces are configured
on a system, try to disable (at least for a test) RP filtering; see the Linux
kernel documentation for more info on RP filtering. To disable RP filtering,
the value in the rp_filter files in /proc must be set to zero; 5) in case
aggregate_filter is in use: the feature expects a libpcap-style filter as
value. BPF filters are sensitive to both VLAN tags and MPLS labels: if, for
example, the traffic is VLAN tagged and the value of aggregate_filter is 'src
net X.X.X.X/Y', there will be no match for VLAN-tagged traffic from src net
X.X.X.X/Y; the filter should be re-written as 'vlan and src net X.X.X.X/Y';
6) in case of NetFlow v9/IPFIX collection, two protocols that are
template-based, the issue may be with templates not being received by nfacctd
(in which case, by enabling debug, you may see "Discarded NetFlow v9/IPFIX
packet (R: unknown template [ .. ]" messages in your logs); you can confirm
whether templates are being exported/replicated/received with a touch of
"tshark -d udp.port==<NFv9/IPFIX port>,cflow -R cflow.template_id".
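With reference to point 4) above, a quick way to inspect reverse-path
filtering is to walk the per-interface rp_filter files. The snippet below is
only a sketch (the function name is made up; the proc root is parametrized
merely to keep the example self-contained):

```shell
# Hypothetical helper: print the rp_filter setting for every interface.
# A value of 1 or 2 means reverse-path filtering is active and may
# silently drop flow packets arriving on an "unexpected" interface;
# 0 disables it. The optional argument defaults to /proc.
check_rp_filter() {
  root="${1:-/proc}"
  for f in "$root"/sys/net/ipv4/conf/*/rp_filter; do
    [ -r "$f" ] && printf '%s = %s\n' "$f" "$(cat "$f")"
  done
}
```

Running "check_rp_filter" on the collector box lists all interfaces at once;
any non-zero entries are candidates to set to zero for the test.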
e) Replaying packets can be needed, for example, to troubleshoot the behaviour
of one of the pmacct daemons. A capture in libpcap format, suitable for
replay, can be produced with tcpdump, ie. for NetFlow/IPFIX/sFlow via the
"tcpdump -i <interface> -n -s 0 -w <output file> port <sfacctd/nfacctd
listening port>" command-line. The output file can be replayed by using the
pcap_savefile (-I) and, optionally, the pcap_savefile_wait (-W) directives,
ie.: "nfacctd -I <pcap savefile> <.. >". For more advanced use-cases, ie.
looping indefinitely through the pcap file and running it with a speed
multiplier in order to stress-test the daemon, the tcpreplay tool can be used
for the purpose. In this case, before replaying NetFlow/IPFIX/sFlow, the
L2/L3 headers of the captured packets must be adjusted to reflect the lab
environment; this can be done with the tcprewrite tool of the tcpreplay
package, ie.: "tcprewrite --enet-smac=<src MAC address> --enet-dmac=<dst MAC
address> -S <src IP address rewrite> -D <dst IP address rewrite> --fixcsum
--infile=<input file, ie. output from tcpdump> --outfile=<output file>". Then
the output file from tcprewrite can be supplied to tcpreplay for the actual
replay to the pmacct daemon, ie.: "tcpreplay -x <speed multiplier> -i <output
interface> <input file>".
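Before replaying, it can save time to verify that the capture file really is a
classic libpcap savefile (ie. it was not truncated, or written in a different
format by another tool). The check below is only a sketch (the function name
is made up), matching the well-known pcap magic numbers:

```shell
# Hypothetical helper: return success if the file starts with one of
# the classic libpcap magic numbers, in either byte order:
#   0xa1b2c3d4 / 0xd4c3b2a1 (microsecond timestamps)
#   0xa1b23c4d / 0x4d3cb2a1 (nanosecond timestamps)
is_pcap() {
  magic=$(od -An -N4 -tx1 "$1" 2>/dev/null | tr -d ' \n')
  case "$magic" in
    a1b2c3d4|d4c3b2a1|a1b23c4d|4d3cb2a1) return 0 ;;
    *) return 1 ;;
  esac
}
```

Usage, ie.: 'is_pcap flows.pcap || echo "not a libpcap savefile"'.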
f) Buffering is often an element to tune. While buffering internal to pmacct,
configured with plugin_buffer_size and plugin_pipe_size, returns warning
messages in case of data loss and offers solid queueing alternatives like
ZeroMQ (plugin_pipe_zmq), buffering between pmacct and the kernel, configured
with nfacctd_pipe_size and its equivalents, is more tricky, and issues with it
can only be inferred from symptoms like failing sequence number checks (and
only for protocols like NetFlow v9/IPFIX supporting this feature). Two
commands useful to check this kind of buffering on Linux systems are:
1) "cat /proc/net/udp" or "cat /proc/net/udp6", ensuring that the "drops"
value is not increasing, and 2) "netstat -s", ensuring, under the UDP section,
that errors are not increasing (since this command returns system-wide
counters, the counter-check would be: stop the running pmacct daemon and,
granted the counter was increasing, verify it does not increase anymore). As
suggested in the CONFIG-KEYS description for the nfacctd_pipe_size
configuration directive, any lift in the buffering must also be supported by
the kernel, adjusting /proc/sys/net/core/rmem_max.
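With reference to point 1) above, the "drops" counter is the last column of
each socket line in /proc/net/udp. A small sketch to sum it across all UDP
sockets (the function name is made up; it assumes the usual /proc/net/udp
layout, with a header line followed by one line per socket):

```shell
# Hypothetical helper: sum the "drops" column (last field) over all
# UDP sockets listed in the given file, skipping the header line.
sum_udp_drops() {
  awk 'NR > 1 { total += $NF } END { print total + 0 }' "$1"
}
```

Usage, ie.: run "sum_udp_drops /proc/net/udp" twice, a few seconds apart; a
growing figure confirms kernel-level drops and suggests lifting
nfacctd_pipe_size and /proc/sys/net/core/rmem_max.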
g) Packet classification using the nDPI library is among the new features of
pmacct 1.7. As with any major and complex feature, it is expected that not
everything may work great and smooth at the first round of implementation.
In this section you will find a few tips on how to provide meaningful reports
of issues you may be experiencing in this area. 1) Please follow the
guidelines in the section "Quickstart guide to packet classification" of this
document; 2) avoid generic reporting a-la "it doesn't work" or "there is too
much unknown traffic" or "I know protocol X is in my traffic mix but it's not
being classified properly"; 3) it is OK to contact the author directly, given
that sensitive data may be involved; 4) it is OK to compare classification
results achieved with a 3rd party tool also using nDPI for classification; in
case of different results, show the actual results when reporting the issue
and please elaborate as much as possible on how the comparison was done (ie.
say how it is being ensured that the two datasets are the same, or as similar
as possible); 5) remember that the most effective way to troubleshoot any
issue related to packet classification is for the author to be able to
reproduce the issue, or to verify the problem first-hand: whenever possible,
please share a traffic capture in pcap format or grant remote access to your
testbed; 6) excluded from these guidelines are problems related to nDPI but
unrelated to classification, ie. memory leaks, performance issues, crashes,
etc., for which you can follow the other guidelines in this "Miscellaneous
notes and troubleshooting tips" section.