Appenders
Decanter appenders receive the data from the collectors and store it in a storage backend.
Log
The Decanter Log Appender creates a log message for each event received from the collectors.
The decanter-appender-log
feature installs the log appender:
karaf@root()> feature:install decanter-appender-log
The log appender doesn’t require any configuration.
Elasticsearch Appender
The Decanter Elasticsearch Appender stores the data (coming from the collectors) into an Elasticsearch instance. It marshals the data as a JSON document before storing it in Elasticsearch.
The Decanter Elasticsearch appender uses the Elasticsearch REST client provided since Elasticsearch 5.x. It can be used with any Elasticsearch version.
The decanter-appender-elasticsearch
feature installs this appender:
karaf@root()> feature:install decanter-appender-elasticsearch
You can configure the appender (especially the Elasticsearch location) in the etc/org.apache.karaf.decanter.appender.elasticsearch.cfg configuration file.
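As an illustration, a minimal configuration pointing to a local Elasticsearch instance could look like the sketch below; the property names shown (address, username, password) are assumptions based on the other Decanter appender configurations, so check the installed file for the exact keys:
# etc/org.apache.karaf.decanter.appender.elasticsearch.cfg
# Location of the Elasticsearch instance (assumed property name)
address=http://localhost:9200
# Optional authentication (assumed property names)
#username=elastic
#password=changeme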
File
The Decanter File appender stores the collected data in a CSV file.
The decanter-appender-file
feature installs the file appender:
karaf@root()> feature:install decanter-appender-file
By default, the file appender stores the collected data in the ${karaf.data}/decanter/appender.csv file. You can change the target file using the filename property in the etc/org.apache.karaf.decanter.appender.file.cfg configuration file.
Note: The default file changed from ${karaf.data}/decanter to ${karaf.data}/decanter/appender.csv in version 2.7.0 due to a conflict with the Alerting Service.
You can also change the marshaller to use. By default, the CSV marshaller is used, but you can switch to the JSON one using the marshaller.target property in the same configuration file.
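For instance, to store the collected data as JSON in a custom file (the filename value is illustrative), you could set:
# etc/org.apache.karaf.decanter.appender.file.cfg
filename=${karaf.data}/decanter/appender.json
marshaller.target=(dataFormat=json)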
JDBC
The Decanter JDBC appender allows you to store the data (coming from the collectors) into a database.
The Decanter JDBC appender marshals the data as a JSON string, then stores the JSON string and the timestamp into the database.
The decanter-appender-jdbc
feature installs the jdbc appender:
karaf@root()> feature:install decanter-appender-jdbc
This feature also installs the etc/org.apache.karaf.decanter.appender.jdbc.cfg
configuration file:
#######################################
# Decanter JDBC Appender Configuration
#######################################

# Name of the JDBC datasource
datasource.name=jdbc/decanter

# Name of the table storing the collected data
table.name=decanter

# Dialect (type of the database)
# The dialect is used to create the table
# Supported dialects are: generic, derby, mysql
# Instead of letting Decanter create the table, you can create the table on your own
dialect=generic
This configuration file allows you to specify the connection to the database:
- the datasource.name property contains the name of the JDBC datasource to use to connect to the database. You can create this datasource using the Karaf jdbc:create command (provided by the jdbc feature).
- the table.name property contains the table name in the database. The Decanter JDBC appender automatically creates the table for you, but you can also create the table yourself. The table is simple and contains just two columns:
  - timestamp as INTEGER
  - content as VARCHAR or CLOB
- the dialect property allows you to specify the database type (generic, mysql, derby). This property is only used for the table creation.
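For example, to store the collected data into a MySQL database through a datasource named jdbc/decanter (the datasource itself must already exist), the configuration would be:
# etc/org.apache.karaf.decanter.appender.jdbc.cfg
datasource.name=jdbc/decanter
table.name=decanter
dialect=mysql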
JMS
The Decanter JMS appender "forwards" the data (collected by the collectors) to a JMS broker.
The appender sends a JMS Map message to the broker. The Map message contains the harvested data.
The decanter-appender-jms
feature installs the JMS appender:
karaf@root()> feature:install decanter-appender-jms
This feature also installs the etc/org.apache.karaf.decanter.appender.jms.cfg
configuration file containing:
#####################################
# Decanter JMS Appender Configuration
#####################################

# Name of the JMS connection factory
connection.factory.name=jms/decanter

# Name of the destination
destination.name=decanter

# Type of the destination (queue or topic)
destination.type=queue

# Connection username
# username=

# Connection password
# password=
This configuration file allows you to specify the connection properties to the JMS broker:
- the connection.factory.name property specifies the JMS connection factory to use. You can create this JMS connection factory using the jms:create command (provided by the jms feature); see the example after this list.
- the destination.name property specifies the JMS destination name where to send the data.
- the destination.type property specifies the JMS destination type (queue or topic).
- the username property is optional and specifies the username used to connect to the destination.
- the password property is optional and specifies the password used to connect to the destination.
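As an illustration, assuming an ActiveMQ broker running on localhost, the connection factory could be created like this; the jms:create options shown are illustrative and depend on your Karaf version (check jms:create --help):
karaf@root()> feature:install jms
karaf@root()> jms:create -t activemq --url tcp://localhost:61616 decanter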
Camel
The Decanter Camel appender sends the data (collected by the collectors) to a Camel endpoint.
It’s a very flexible appender, allowing you to use any Camel route to transform and forward the harvested data.
The Camel appender creates a Camel exchange and sets the "in" message body with a Map of the harvested data. The exchange is then sent to the configured Camel endpoint.
The decanter-appender-camel
feature installs the Camel appender:
karaf@root()> feature:install decanter-appender-camel
This feature also installs the etc/org.apache.karaf.decanter.appender.camel.cfg
configuration file containing:
#
# Decanter Camel appender configuration
#

# The destination.uri contains the URI of the Camel endpoint
# where Decanter sends the collected data
destination.uri=direct-vm:decanter
This file allows you to specify the Camel endpoint where to send the data:
- the destination.uri property specifies the URI of the Camel endpoint where to send the data.
The Camel appender sends an exchange. The "in" message body contains a Map of the harvested data.
For instance, in this configuration file, you can specify:
destination.uri=direct-vm:decanter
And you can deploy the following Camel route definition:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="decanter">
            <from uri="direct-vm:decanter"/>
            ... ANYTHING ...
        </route>
    </camelContext>
</blueprint>
This route will receive the Map of harvested data. Using the body of the "in" message, you can do what you want:
- transform and convert to another data format
- use any Camel EIPs (Enterprise Integration Patterns)
- send to any Camel endpoint
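As an illustration, here is a minimal sketch of an equivalent route written with the Camel Java DSL; it simply logs each collected Map (the class name and log message are illustrative):
import org.apache.camel.builder.RouteBuilder;

public class DecanterRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Receive the exchange sent by the Decanter Camel appender:
        // the "in" message body is a Map of the harvested data
        from("direct-vm:decanter")
            .log("Collected data: ${body}");
    }
}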
Kafka
The Decanter Kafka appender sends the data (collected by the collectors) to a Kafka topic.
The decanter-appender-kafka
feature installs the Kafka appender:
karaf@root()> feature:install decanter-appender-kafka
This feature installs a default etc/org.apache.karaf.decanter.appender.kafka.cfg
configuration file containing:
##################################
# Decanter Kafka Configuration
##################################

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster
#bootstrap.servers=localhost:9092

# An id string to pass to the server when making requests
# client.id

# The compression type for all data generated by the producer
# compression.type=none

# The number of acknowledgments the producer requires the leader to have received before considering a request complete
# - 0: the producer doesn't wait for ack
# - 1: the producer just waits for the leader
# - all: the producer waits for leader and all followers (replica), most secure
# acks=all

# Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error
# retries=0

# The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition
# batch.size=16384

# The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
# If records are sent faster than they can be delivered to the server the producer will either block or throw an exception
# buffer.memory=33554432

# Serializer class for key that implements the Serializer interface
# key.serializer=org.apache.kafka.common.serialization.StringSerializer

# Serializer class for value that implements the Serializer interface
# value.serializer=org.apache.kafka.common.serialization.StringSerializer

# Producer request timeout
# request.timeout.ms=5000

# Max size of the request
# max.request.size=2097152

# Name of the topic
# topic=decanter

# Security (SSL)
# security.protocol=SSL

# SSL truststore location (Kafka broker) and password
# ssl.truststore.location=${karaf.etc}/keystores/keystore.jks
# ssl.truststore.password=karaf

# SSL keystore (if client authentication is required)
# ssl.keystore.location=${karaf.etc}/keystores/clientstore.jks
# ssl.keystore.password=karaf
# ssl.key.password=karaf

# (Optional) SSL provider (default uses the JVM one)
# ssl.provider=

# (Optional) SSL Cipher suites
# ssl.cipher.suites=

# (Optional) SSL Protocols enabled (default is TLSv1.2,TLSv1.1,TLSv1)
# ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

# (Optional) SSL Truststore type (default is JKS)
# ssl.truststore.type=JKS

# (Optional) SSL Keystore type (default is JKS)
# ssl.keystore.type=JKS

# Security (SASL)
# For SASL, you have to configure Java System property as explained in http://kafka.apache.org/documentation.html#security_ssl
This file allows you to define how the messages are sent to the Kafka broker:
- the bootstrap.servers contains a list of host:port pairs of the Kafka brokers. Default value is localhost:9092.
- the client.id is optional. It identifies the client on the Kafka broker.
- the compression.type defines whether the messages have to be compressed on the Kafka broker. Default value is none, meaning no compression.
- the acks defines the acknowledgement policy. Default value is all. Possible values are:
  - 0 means the appender doesn’t wait for an acknowledgement from the Kafka broker. Basically, it means there’s no guarantee that messages have been completely received by the broker.
  - 1 means the appender waits for the acknowledgement only from the leader. If the leader goes down, it’s possible that messages are lost if the replicas have not yet been created on the followers.
  - all means the appender waits for the acknowledgement from the leader and all followers. This mode is the most reliable, as the appender receives the acknowledgement only when all replicas have been created. NB: this mode doesn’t make sense if you have a single-node Kafka broker or a replication factor set to 1.
- the retries defines the number of retries performed by the appender in case of error. The default value is 0, meaning no retry at all.
- the batch.size defines the size of the record batches. The appender will attempt to batch records together into fewer requests whenever multiple records are being sent to the same Kafka partition. The default value is 16384.
- the buffer.memory defines the size of the buffer the appender uses to send to the Kafka broker. The default value is 33554432.
- the key.serializer defines the fully qualified class name of the Serializer used to serialize the keys. The default is the String serializer (org.apache.kafka.common.serialization.StringSerializer).
- the value.serializer defines the fully qualified class name of the Serializer used to serialize the values. The default is the String serializer (org.apache.kafka.common.serialization.StringSerializer).
- the request.timeout.ms is the time the producer waits before considering the message production on the broker failed (default is 5s).
- the max.request.size is the max size of the request sent to the broker (default is 2097152 bytes).
- the topic defines the name of the topic where to send data on the Kafka broker.
It’s also possible to enable SSL security (with Kafka 0.9.x) using the SSL properties.
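For example, a minimal configuration pointing to a local Kafka broker (host and topic values are illustrative) only needs to uncomment:
# etc/org.apache.karaf.decanter.appender.kafka.cfg
bootstrap.servers=localhost:9092
topic=decanter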
Loki
Loki (https://grafana.com/oss/loki/) is a log aggregation system. The Decanter Loki appender is able to push collected data to Loki via the Push API.
The Decanter Loki appender converts any kind of collected data (coming from the dispatcher) into a log string that can be stored in Loki.
The decanter-appender-loki
feature installs the Loki appender:
karaf@root()> feature:install decanter-appender-loki
This feature also adds etc/org.apache.karaf.decanter.appender.loki.cfg
configuration file:
######################################
# Decanter Loki Appender Configuration
######################################

# Loki push API location
#loki.url=http://localhost:3100/loki/api/v1/push

# Loki tenant
#loki.tenant=my-tenant

# Loki basic authentication
#loki.username=
#loki.password=

# Marshaller
#marshaller.target=(dataFormat=raw)
- loki.url is the location of the Loki push API
- loki.tenant is optional and defines the tenant used to push data
- loki.username and loki.password are used for basic authentication
Redis
The Decanter Redis appender sends the data (collected by the collectors) to a Redis broker.
The decanter-appender-redis
feature installs the Redis appender:
karaf@root()> feature:install decanter-appender-redis
This feature also installs a default etc/org.apache.karaf.decanter.appender.redis.cfg
configuration file containing:
#######################################
# Decanter Redis Appender Configuration
#######################################

#
# Location of the Redis broker
# It's possible to use a list of brokers, for instance:
# host=localhost:6389,localhost:6332,localhost:6419
#
# Default is localhost:6379
#
address=localhost:6379

#
# Define the connection mode.
# Possible modes: Single (default), Master_Slave, Sentinel, Cluster
#
mode=Single

#
# Name of the Redis map
# Default is Decanter
#
map=Decanter

#
# For Master_Slave mode, we define the location of the master
# Default is localhost:6379
#
#masterAddress=localhost:6379

#
# For Sentinel mode, define the name of the master
# Default is myMaster
#
#masterName=myMaster

#
# For Cluster mode, define the scan interval of the nodes in the cluster
# Default value is 2000 (2 seconds).
#
#scanInterval=2000
This file allows you to configure the Redis broker to use:
- the address property contains the location of the Redis broker
- the mode property defines the Redis topology to use (Single, Master_Slave, Sentinel, Cluster)
- the map property contains the name of the Redis map to use
- the masterAddress is the location of the master when using the Master_Slave topology
- the masterName is the name of the master when using the Sentinel topology
- the scanInterval is the scan interval of the nodes when using the Cluster topology
MQTT
The Decanter MQTT appender sends the data (collected by the collectors) to an MQTT broker.
The decanter-appender-mqtt
feature installs the MQTT appender:
karaf@root()> feature:install decanter-appender-mqtt
This feature installs a default etc/org.apache.karaf.decanter.appender.mqtt.cfg
configuration file containing:
#server=tcp://localhost:9300
#clientId=decanter
#topic=decanter
This file allows you to configure the location of the MQTT broker and where to send the messages:
- the server contains the location of the MQTT broker
- the clientId identifies the appender on the MQTT broker
- the topic is the name of the topic where to send the messages
Cassandra
The Decanter Cassandra appender allows you to store the data (coming from the collectors) into an Apache Cassandra database.
The decanter-appender-cassandra
feature installs this appender:
karaf@root()> feature:install decanter-appender-cassandra
This feature installs the appender and a default etc/org.apache.karaf.decanter.appender.cassandra.cfg
configuration file
containing:
###########################################
# Decanter Cassandra Appender Configuration
###########################################

# Name of Keyspace
keyspace.name=decanter

# Name of table to write to
table.name=decanter

# Cassandra host name
cassandra.host=

# Cassandra port
cassandra.port=9042
- the keyspace.name property identifies the keyspace used for Decanter data
- the table.name property defines the name of the table where to store the data
- the cassandra.host property contains the hostname or IP address where the Cassandra instance is running (default is localhost)
- the cassandra.port property contains the port number of the Cassandra instance (default is 9042)
InfluxDB
The Decanter InfluxDB appender allows you to store the data (coming from the collectors) as a time series into an InfluxDB database.
The decanter-appender-influxdb
feature installs this appender:
karaf@root()> feature:install decanter-appender-influxdb
This feature installs the appender and a default etc/org.apache.karaf.decanter.appender.influxdb.cfg
configuration file containing:
##########################################
# Decanter InfluxDB Appender Configuration
##########################################

# URL of the InfluxDB database
url=

# InfluxDB server username
#username=

# InfluxDB server password
#password=

# InfluxDB database name
database=decanter
- url property is mandatory and defines the location of the InfluxDB server
- database property contains the name of the InfluxDB database. Default is decanter.
- username and password are optional and define the authentication to the InfluxDB server.
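For instance, for an InfluxDB server running locally on its default port (the URL is illustrative):
# etc/org.apache.karaf.decanter.appender.influxdb.cfg
url=http://localhost:8086
database=decanter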
MongoDB
The Decanter MongoDB appender allows you to store the data (coming from the collectors) into a MongoDB database.
The decanter-appender-mongodb
feature installs this appender:
karaf@root()> feature:install decanter-appender-mongodb
This feature installs the appender and a default etc/org.apache.karaf.decanter.appender.mongodb.cfg
configuration file
containing:
################################
# Decanter MongoDB Configuration
################################

# MongoDB connection URI
#uri=mongodb://localhost

# MongoDB database name
#database=decanter

# MongoDB collection name
#collection=decanter
- the uri property contains the location of the MongoDB instance
- the database property contains the name of the MongoDB database
- the collection property contains the name of the MongoDB collection
Network socket
The Decanter network socket appender sends the collected data to a remote Decanter network socket collector.
The use case could be to dedicate a Karaf instance as a central monitoring platform, receiving collected data from the other nodes.
The decanter-appender-socket
feature installs this appender:
karaf@root()> feature:install decanter-appender-socket
This feature installs the appender and a default etc/org.apache.karaf.decanter.appender.socket.cfg
configuration file
containing:
# Decanter Socket Appender

# Hostname (or IP address) where to send the collected data
#host=localhost

# Port number where to send the collected data
#port=34343

# If connected is true, the socket connection is created when the appender starts and
# collected data are "streamed" to the socket.
# If connected is false (default), a new socket connection is created for each data
# to send to the socket.
#connected=false

# Marshaller to use
marshaller.target=(dataFormat=json)
- the host property contains the hostname or IP address of the remote network socket collector
- the port property contains the port number of the remote network socket collector
- the connected property defines whether the socket connection is created when the appender starts, or for each data event.
- the marshaller.target property defines the data format to use.
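For example, to centralize monitoring, you would install the corresponding decanter-collector-socket feature on the central node, and point the appender of each monitored node to it (the hostname below is illustrative):
# On each monitored node: etc/org.apache.karaf.decanter.appender.socket.cfg
host=monitoring.example.com
port=34343
marshaller.target=(dataFormat=json)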
OrientDB
The Decanter OrientDB appender stores the collected data into an OrientDB Document database.
You can use an external OrientDB instance or you can use an embedded instance provided by Decanter.
OrientDB appender
The decanter-appender-orientdb
feature installs the OrientDB appender.
This feature installs the etc/org.apache.karaf.decanter.appender.orientdb.cfg
configuration file allowing you to setup the location
of the OrientDB database to use:
#################################
# Decanter OrientDB Configuration
#################################

# OrientDB connection URL
#url=remote:localhost/decanter

# OrientDB database username
#username=root

# OrientDB database password
#password=decanter
where:
- url is the location of the OrientDB Document database. By default, it uses remote:localhost/decanter, corresponding to the OrientDB embedded instance.
- username is the username to connect to the remote OrientDB Document database.
- password is the password to connect to the remote OrientDB Document database.
OrientDB embedded instance
Warning: For production, we recommend using a dedicated OrientDB instance. The following feature is not recommended for production.
The orientdb
feature starts an OrientDB embedded database. It also installs the etc/orientdb-server-config.xml
configuration file allowing you to configure the OrientDB instance:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
    <handlers>
        <handler class="com.orientechnologies.orient.graph.handler.OGraphServerHandler">
            <parameters>
                <parameter value="true" name="enabled"/>
                <parameter value="50" name="graph.pool.max"/>
            </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.handler.OJMXPlugin">
            <parameters>
                <parameter value="false" name="enabled"/>
                <parameter value="true" name="profilerManaged"/>
            </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
            <parameters>
                <parameter value="true" name="enabled"/>
                <parameter value="SQL" name="allowedLanguages"/>
            </parameters>
        </handler>
    </handlers>
    <network>
        <protocols>
            <protocol implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" name="binary"/>
            <protocol implementation="com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpDb" name="http"/>
        </protocols>
        <listeners>
            <listener protocol="binary" socket="default" port-range="2424-2430" ip-address="0.0.0.0"/>
            <listener protocol="http" socket="default" port-range="2480-2490" ip-address="0.0.0.0">
                <commands>
                    <command implementation="com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetStaticContent" pattern="GET|www GET|studio/ GET| GET|*.htm GET|*.html GET|*.xml GET|*.jpeg GET|*.jpg GET|*.png GET|*.gif GET|*.js GET|*.css GET|*.swf GET|*.ico GET|*.txt GET|*.otf GET|*.pjs GET|*.svg GET|*.json GET|*.woff GET|*.woff2 GET|*.ttf GET|*.svgz" stateful="false">
                        <parameters>
                            <entry value="Cache-Control: no-cache, no-store, max-age=0, must-revalidate\r\nPragma: no-cache" name="http.cache:*.htm *.html"/>
                            <entry value="Cache-Control: max-age=120" name="http.cache:default"/>
                        </parameters>
                    </command>
                    <command implementation="com.orientechnologies.orient.graph.server.command.OServerCommandGetGephi" pattern="GET|gephi/*" stateful="false"/>
                </commands>
                <parameters>
                    <parameter value="utf-8" name="network.http.charset"/>
                    <parameter value="true" name="network.http.jsonResponseError"/>
                </parameters>
            </listener>
        </listeners>
    </network>
    <storages/>
    <users>
    </users>
    <properties>
        <entry value="1" name="db.pool.min"/>
        <entry value="50" name="db.pool.max"/>
        <entry value="false" name="profiler.enabled"/>
    </properties>
    <isAfterFirstTime>true</isAfterFirstTime>
</orient-server>
Most of the values can be left as they are; however, you can tweak some:
- <listener/> allows you to configure the protocol and port numbers used by the OrientDB instance. You can define the IP address on which the instance is bound (ip-address) and the port range to use (port-range) for each protocol (binary or http).
- the db.pool.min and db.pool.max values can be increased if you have a large number of connections on the instance.
Dropwizard Metrics
The Dropwizard Metrics appender receives the harvested data from the dispatcher and pushes it to a Dropwizard Metrics MetricRegistry. You can register this MetricRegistry in your own application or use a Dropwizard Metrics Reporter to "push" these metrics to some backend.
The decanter-appender-dropwizard feature provides the Decanter event handler registering the harvested data into the MetricRegistry:
karaf@root()> feature:install decanter-appender-dropwizard
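As an illustration, here is a minimal sketch showing how your own bundle could consume that registry, assuming the MetricRegistry is obtained as an OSGi service (the injection style and the choice of ConsoleReporter are illustrative):
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;

public class MetricsDumper {

    // Typically injected, e.g. with Declarative Services: @Reference
    private MetricRegistry registry;

    public void start() {
        // Periodically dump all Decanter-fed metrics to the console
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(1, TimeUnit.MINUTES);
    }
}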
TimescaleDB
The Decanter TimescaleDB appender stores the collected data into a TimescaleDB database.
You have to install TimescaleDB before using the appender.
For development, you can install a test database with Docker:
docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=decanter -e POSTGRES_USER=decanter -e POSTGRES_DATABASE=decanter timescale/timescaledb
The decanter-appender-timescaledb
feature installs the TimescaleDB appender.
As TimescaleDB is a PostgreSQL database extension, the timescaledb feature will install all required features to configure your datasource (jdbc, jndi, PostgreSQL driver, pool datasource).
This feature installs the etc/org.apache.karaf.decanter.appender.timescaledb.cfg
configuration file allowing you to setup the location
of the TimescaleDB database to use:
#####################################
# Decanter TimescaleDB Configuration
#####################################

# DataSource to use
dataSource.target=(osgi.jndi.service.name=jdbc/decanter-timescaledb)

# Name of the table storing the collected data
table.name=decanter

# Marshaller to use (json is recommended)
marshaller.target=(dataFormat=json)
where:
- the dataSource.target property contains the name of the JDBC datasource used to connect to the database. You can create this datasource using the Karaf jdbc:create command (provided by the jdbc feature); see the sketch after this list.
- the table.name property contains the table name in the database. The Decanter TimescaleDB appender automatically activates the TimescaleDB extension, creates the table for you, and migrates the table to a TimescaleDB hypertable. The table is simple and contains just two columns:
  - timestamp as BIGINT
  - content as TEXT
- the marshaller.target is the marshaller used to serialize data into the table.
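For instance, a datasource matching the default dataSource.target filter could be created as sketched below; the jdbc:ds-create options and credentials are illustrative and depend on your Karaf version (check jdbc:ds-create --help), the goal being a datasource registered under the JNDI name jdbc/decanter-timescaledb expected by the default configuration:
karaf@root()> jdbc:ds-create -dn PostgreSQL -url jdbc:postgresql://localhost:5432/decanter -u decanter -p decanter decanter-timescaledb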
WebSocket Servlet
The decanter-appender-websocket-servlet
feature exposes a websocket on which clients can register. Then, Decanter will send the collected data to the connected clients.
It’s very easy to use. First install the feature:
karaf@root()> feature:install decanter-appender-websocket-servlet
The feature registers the WebSocket endpoint on http://localhost:8181/decanter-websocket
by default:
karaf@root()> http:list
ID │ Servlet │ Servlet-Name │ State │ Alias │ Url
───┼──────────────────────────┼────────────────┼─────────────┼─────────────────────┼────────────────────────
55 │ DecanterWebSocketServlet │ ServletModel-2 │ Deployed │ /decanter-websocket │ [/decanter-websocket/*]
The alias can be configured via the etc/org.apache.karaf.decanter.appender.websocket.servlet.cfg
configuration file installed by the feature.
You can now register your websocket client on this URL. You can use curl as a client to test:
curl --include \
--no-buffer \
--header "Connection: Upgrade" \
--header "Upgrade: websocket" \
--header "Host: localhost:8181" \
--header "Origin: http://localhost:8181/decanter-websocket" \
--header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
--header "Sec-WebSocket-Version: 13" \
http://localhost:8181/decanter-websocket
Prometheus
The decanter-appender-prometheus
feature exposes the collected metrics for Prometheus to scrape:
karaf@root()> feature:install decanter-appender-prometheus
The feature registers the Prometheus HTTP servlet on http://localhost:8181/decanter/prometheus
by default:
karaf@root()> http:list
ID │ Servlet │ Servlet-Name │ State │ Alias │ Url
───┼────────────────┼────────────────┼─────────────┼──────────────────────┼─────────────────────────
51 │ MetricsServlet │ ServletModel-2 │ Deployed │ /decanter/prometheus │ [/decanter/prometheus/*]
You can change the servlet alias in etc/org.apache.karaf.decanter.appender.prometheus.cfg
configuration file:
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
################################################
# Decanter Prometheus Appender Configuration
################################################
# Prometheus HTTP servlet alias
#alias=/decanter/prometheus
The Decanter Prometheus appender exports the io.prometheus* packages, meaning that you can simply add your own metrics to the Decanter Prometheus servlet.
You just have to import the io.prometheus* packages and use the regular Prometheus code:
class YourClass {
static final Gauge inprogressRequests = Gauge.build()
.name("inprogress_requests").help("Inprogress requests.").register();
void processRequest() {
inprogressRequests.inc();
// Your code here.
inprogressRequests.dec();
}
}
Don’t forget to import io.prometheus*
packages in your bundle MANIFEST.MF
:
Import-Package: io.prometheus.client;version="[0.8,1)"
That’s the only thing you need: your metrics will be available on the Decanter Prometheus servlet (again on http://localhost:8181/decanter/prometheus
by default).
Rest
The Decanter REST appender sends collected data to a remote REST service.
The decanter-appender-rest
feature installs the Rest appender:
karaf@root()> feature:install decanter-appender-rest
The feature also installs etc/org.apache.karaf.decanter.appender.rest.cfg
configuration file:
###############################
# Decanter Appender REST Configuration
###############################

# Mandatory URI where the REST appender connects to
uri=

#request.method=POST (the REST verb)
#user= (for basic authentication)
#password= (for basic authentication)
#content.type=application/json (the message content type sent)
#charset=utf-8 (the message charset)
#header.foo= (HTTP header prefixed with header.)
#payload.header= (if set the Decanter collected data is sent as HTTP header instead of body)

# Marshaller to use (json is recommended)
marshaller.target=(dataFormat=json)
- uri is mandatory and contains the location of the REST service to call
- user and password are used if the REST service uses basic authentication
- content.type is the message type sent to the REST service (default is application/json)
- charset is the message encoding (default is utf-8)
- header. allows you to add any custom HTTP headers (parameters) to the request (prefixed by header.)
- payload.header allows you to use an HTTP header to send the collected data instead of the body.
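For example, to POST the collected data as JSON to a hypothetical REST endpoint with a custom API key header:
# etc/org.apache.karaf.decanter.appender.rest.cfg
uri=http://localhost:8080/decanter
content.type=application/json
header.X-Api-Key=my-secret-key
marshaller.target=(dataFormat=json)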
HDFS
The Decanter HDFS appender stores collected data into an HDFS file.
The decanter-appender-hdfs
feature installs the HDFS appender:
karaf@root()> feature:install decanter-appender-hdfs
The feature also installs etc/org.apache.karaf.decanter.appender.hdfs.cfg
configuration file:
######################################
# Decanter HDFS Appender Configuration
######################################

# Optional HDFS configuration
#hdfs.configuration=

# File mode (create or append)
#hdfs.mode=create|append|overwrite

# Path location
#hdfs.path=

# Marshaller
marshaller.target=(dataFormat=csv)
- hdfs.configuration is the location of the HDFS configuration file (core or site)
- hdfs.mode defines the way of populating the file on HDFS (creating a new one, appending to an existing one, or overwriting an existing one)
- hdfs.path defines the location and name of the file on HDFS
Amazon S3
The Decanter Amazon S3 appender stores collected data as objects in an S3 bucket.
The decanter-appender-s3
feature installs the S3 appender:
karaf@root()> feature:install decanter-appender-s3
The feature also installs the etc/org.apache.karaf.decanter.appender.s3.cfg
configuration file:
###############################
# Decanter Appender S3 Configuration
###############################

# AWS credentials
accessKeyId=
secretKeyId=

# AWS Region (optional)
#region=

# S3 bucket name
bucket=

# Marshaller to use
marshaller.target=(dataFormat=json)
- the accessKeyId property is required, containing your AWS access key
- the secretKeyId property is required, containing your AWS secret key
- the region property is optional and allows you to define the Amazon region to use
- the bucket property is required, containing the name of the S3 bucket where to add objects