"
scopes:
- user:email
- read:user
authorization:
url: https://github.com/login/oauth/authorize
token:
url: https://github.com/login/oauth/access_token
auth-method: client-secret-post
```
You can also override the GitHub API URL if needed. The default value is `https://api.github.com`:
```yaml
github.api.url: https://override.api.github.com
```
To further tell AKHQ to display GitHub SSO options on the login page and customize claim mapping, configure OAuth2 in the AKHQ config:
```yaml
akhq:
security:
default-group: no-roles
oauth2:
enabled: true
providers:
github:
label: "Login with GitHub"
username-field: login
users:
- username: franz
groups:
                # the corresponding AKHQ groups (e.g. topic-reader/writer or AKHQ default groups like admin/reader/no-roles)
- topic-reader
- topic-writer
```
The username field can be any string field; the roles field has to be a JSON array.
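For reference, `username-field: login` refers to the `login` field of the GitHub user profile fetched during login; a trimmed, illustrative payload:
```json
{
  "login": "franz",
  "id": 1234567,
  "name": "Franz"
}
```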
## References
[https://micronaut-projects.github.io/micronaut-security/latest/guide/#oauth2-configuration](https://micronaut-projects.github.io/micronaut-security/latest/guide/#oauth2-configuration)
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/groups.md
# Groups
Groups allow you to set users granular permissions to various resources.
::: warning
With PR #1472, AKHQ introduced a new, better group management system in 0.25.0. It's a breaking change, so you need to rewrite your ACLs.
:::
Define groups with specific roles for your users:
* `akhq.security.default-group`: default group for all users, even unauthenticated ones
* `akhq.security.groups`: groups map definition
  * `key:` a unique key used as the group name if none is specified
  * A list of role/patterns/clusters associations:
    * `role`: name of an existing role
    * `patterns`: list of regular expressions that resources covered by the role must match at least once to get access
    * `clusters`: list of regular expressions that the cluster must match at least once to get access
::: warning
Please also set the `micronaut.security.token.jwt.signatures.secret.generator.secret` if you set a group.
If the secret is not set, the API will not enforce the group role, and the restriction is in the UI only.
:::
Three default groups are available:
* `admin` with all rights and no patterns/clusters restrictions
* `reader` with read access only on all AKHQ resources and no patterns/clusters restrictions
* `no-roles` without any roles, forcing the user to log in
Here is an example of a `reader` group definition based on the default reader role, with access to all the resources prefixed with `pub` and located on the `public` cluster:
```yaml
groups:
reader:
- role: reader
patterns: [ "pub.*" ]
clusters: [ "public" ]
```
## Roles
Roles are based on resource and action associations. A role can target one or several resources and allow one or several actions.
The available resources and actions, and the supported associations between them, are detailed in the table below.
You can still associate a resource with an action the table does not support, but it will be ignored.
| | TOPIC | TOPIC_DATA | CONSUMER_GROUP | CONNECT_CLUSTER | CONNECTOR | SCHEMA | NODE | ACL | KSQLDB |
|----------------|-------|------------|----------------|-----------------|-----------|--------|------|-----|--------|
| READ | X | X | X | X | X | X | X | X | X |
| CREATE | X | X | | | X | X | | | |
| UPDATE | X | X | | | X | X | | | |
| DELETE | X | X | X | | X | X | | | |
| UPDATE_OFFSET | | | X | | | | | | |
| DELETE_OFFSET | | | X | | | | | | |
| READ_CONFIG | X | | | | | | X | | |
| ALTER_CONFIG | X | | | | | | X | | |
| DELETE_VERSION | | | | | | X | | | |
| UPDATE_STATE | | | | | X | | | | |
| EXECUTE | | | | | | | | | X |
A default roles list is predefined in `akhq.security.roles` but you can override it.
A role contains:
* `key:` a unique key used as the role name
* A list of resources/actions associations:
  * `resources:` list of resources (ex: `[ "TOPIC", "TOPIC_DATA" ]`)
  * `actions:` actions allowed on the previous resources (ex: `[ "READ", "CREATE" ]`)
The default configuration provides a topic-admin role defined as follows:
```yaml
topic-admin:
- resources: [ "TOPIC", "TOPIC_DATA" ]
actions: [ "READ", "CREATE", "DELETE" ]
- resources: [ "TOPIC" ]
actions: [ "UPDATE", "READ_CONFIG", "ALTER_CONFIG" ]
```
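As an illustration, a custom role (the `connect-operator` name is hypothetical) that may view connectors and restart them, but not delete them, could be defined like this, per the table above:
```yaml
akhq:
  security:
    roles:
      # hypothetical custom role: view connectors and change their state only
      connect-operator:
        - resources: [ "CONNECTOR" ]
          actions: [ "READ", "UPDATE_STATE" ]
```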
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/header.md
# Header configuration (reverse proxy)
To enable Header authentication in the application, you'll have to configure the header that will resolve users & groups:
```yaml
akhq:
security:
# Header configuration (reverse proxy)
header-auth:
user-header: x-akhq-user # mandatory (the header name that will contain username)
groups-header: x-akhq-group # optional (the header name that will contain groups separated by groups-header-separator)
groups-header-separator: , # optional (separator, defaults to ',')
ip-patterns: [0.0.0.0] # optional (Java regular expressions for matching trusted IP addresses, '0.0.0.0' matches all addresses)
default-group: topic-reader
groups: # optional
# the name of the user group read from header
- name: header-admin-group
groups:
            # the corresponding AKHQ groups (e.g. topic-reader/writer or AKHQ default groups like admin/reader/no-roles)
- admin
users: # optional
- username: header-user # username matching the `user-header` value
groups: # list of groups / additional groups
- topic-writer
- username: header-admin
groups:
- admin
```
* `user-header` is mandatory in order to map the user to the `users` list, or to display the user on the UI if no `users` is provided (see the example request below).
* `groups-header` is optional and can be used to inject a list of groups for all users. This list is merged with `groups` for the current user.
* `groups-header-separator` is optional and can be used to customize the group separator used when parsing `groups-header`; defaults to `,`.
* `ip-patterns` limits the IP addresses that header authentication will accept, given as a list of Java regular expressions; omit or set to `[0.0.0.0]` to allow all addresses.
* `default-group` is the default AKHQ group, used when no groups were read from `groups-header`.
* `groups` maps external group names read from headers to AKHQ groups.
* `users` assigns additional AKHQ groups to users.
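Assuming the configuration above and AKHQ served behind the proxy at `http://akhq.example.com` (a hypothetical host), a trusted reverse proxy would forward requests along these lines:
```bash
# the proxy injects the identity headers; AKHQ resolves the user and groups from them
curl http://akhq.example.com/api/me \
  -H "x-akhq-user: header-admin" \
  -H "x-akhq-group: external-group-1,external-group-2"
```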
---
# Source:
# JWT
AKHQ uses signed JWT tokens to perform authentication.
Please generate a secret that is at least 256 bits.
You can use one of the following methods to provide the generated secret to AKHQ.
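For example, a suitable 256-bit secret can be generated with OpenSSL (32 random bytes, Base64-encoded):
```bash
openssl rand -base64 32
```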
## Configuration File
Provide the generated secret via the AKHQ `application.yml` via the following directive:
```yaml
micronaut:
security:
enabled: true
token:
jwt:
signatures:
secret:
generator:
secret:
```
## Environment Variable
Provide the generated secret via [Micronaut Property Value Binding](https://docs.micronaut.io/latest/guide/index.html#_property_value_binding) using the following environment variable for the execution environment of AKHQ:
```bash
MICRONAUT_SECURITY_TOKEN_JWT_SIGNATURES_SECRET_GENERATOR_SECRET=""
```
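For example, with Docker the variable might be passed like this (a sketch; in practice use a stable secret, since regenerating it on each start invalidates existing tokens):
```bash
docker run -d \
  -p 8080:8080 \
  -v /tmp/application.yml:/app/application.yml \
  -e MICRONAUT_SECURITY_TOKEN_JWT_SIGNATURES_SECRET_GENERATOR_SECRET="$(openssl rand -base64 32)" \
  tchiotludo/akhq
```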
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/ldap.md
# LDAP
Configure how LDAP groups are mapped to AKHQ groups:
* `akhq.security.ldap.groups`: LDAP groups list
  * `- name: ldap-group-name`: LDAP group name (same name as in LDAP)
  * `groups`: list of AKHQ groups to apply to the current LDAP group
Example using the [online LDAP test server](https://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/).
First, configure the LDAP connection in Micronaut:
```yaml
micronaut:
security:
enabled: true
ldap:
default:
enabled: true
context:
server: 'ldap://ldap.forumsys.com:389'
managerDn: 'cn=read-only-admin,dc=example,dc=com'
managerPassword: 'password'
search:
base: "dc=example,dc=com"
groups:
enabled: true
base: "dc=example,dc=com"
```
If you want to enable anonymous auth to your LDAP server, you can pass:
```yaml
managerDn: ''
managerPassword: ''
```
In case your LDAP groups do not use the default UID for group membership, you can solve this using:
```yaml
micronaut:
security:
enabled: true
ldap:
default:
search:
base: "OU=UserOU,dc=example,dc=com"
attributes:
- "cn"
groups:
enabled: true
base: "OU=GroupsOU,dc=example,dc=com"
filter: "member={0}"
```
Replace
```yaml
attributes:
- "cn"
```
with your group membership attribute.
Then configure AKHQ groups, LDAP groups, and users:
```yaml
micronaut:
security:
enabled: true
akhq:
security:
roles:
topic-reader:
- resources: [ "TOPIC", "TOPIC_DATA" ]
actions: [ "READ" ]
- resources: [ "TOPIC" ]
actions: [ "READ_CONFIG" ]
topic-writer:
- resources: [ "TOPIC", "TOPIC_DATA" ]
actions: [ "CREATE", "UPDATE" ]
- resources: [ "TOPIC" ]
actions: [ "ALTER_CONFIG" ]
groups:
topic-reader-pub:
- role: topic-reader
patterns: [ "pub.*" ]
topic-writer-clusterA-projectA:
- role: topic-reader
patterns: [ "projectA.*" ]
- role: topic-writer
patterns: [ "projectA.*" ]
clusters: [ "clusterA.*" ]
acl-reader-clusterA:
- role: acl-reader
clusters: [ "clusterA.*" ]
ldap:
groups:
- name: mathematicians
groups:
- topic-reader-pub
- name: scientists
groups:
- topic-writer-clusterA-projectA
- acl-reader-clusterA
users:
- username: franz
groups:
- topic-writer-clusterA-projectA
- acl-reader-clusterA
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/oidc.md
# OIDC
To enable OIDC in the application, you'll first have to enable OIDC in Micronaut:
```yaml
micronaut:
security:
oauth2:
enabled: true
clients:
google:
client-id: ""
client-secret: ""
openid:
issuer: ""
caches:
local-security-claim-provider:
expire-after-write: 600s # Default. May be overridden.
```
OIDC responses will be cached according to the settings under `micronaut.caches.local-security-claim-provider`.
To further tell AKHQ to display OIDC options on the login page and customize claim mapping, configure OIDC in the AKHQ config:
```yaml
akhq:
security:
roles:
topic-reader:
- resources: [ "TOPIC", "TOPIC_DATA" ]
actions: [ "READ" ]
- resources: [ "TOPIC" ]
actions: [ "READ_CONFIG" ]
topic-writer:
- resources: [ "TOPIC", "TOPIC_DATA" ]
actions: [ "CREATE", "UPDATE" ]
- resources: [ "TOPIC" ]
actions: [ "ALTER_CONFIG" ]
groups:
topic-reader-pub:
- role: topic-reader
patterns: [ "pub.*" ]
topic-writer-clusterA-projectA:
- role: topic-reader
patterns: [ "projectA.*" ]
- role: topic-writer
patterns: [ "projectA.*" ]
clusters: [ "clusterA.*" ]
acl-reader-clusterA:
- role: acl-reader
clusters: [ "clusterA.*" ]
oidc:
enabled: true
providers:
google:
label: "Login with Google"
username-field: preferred_username
          # specifies the field name in the OIDC claim containing the user's assigned roles (e.g. in Keycloak this would be the Token Claim Name you set in your Client Role Mapper)
groups-field: roles
default-group: topic-reader
groups:
            # the name of the user role set in your OIDC provider and associated with your user (e.g. in Keycloak this would be a client role)
- name: mathematicians
groups:
                # the corresponding AKHQ groups (e.g. topic-reader/writer or AKHQ default groups like admin/reader/no-roles)
- topic-reader-pub
- name: scientists
groups:
- topic-writer-clusterA-projectA
- acl-reader-clusterA
users:
- username: franz
groups:
- topic-writer-clusterA-projectA
- acl-reader-clusterA
```
The username field can be any string field; the roles field has to be a JSON array. The mapping is performed on the OIDC _ID token_.
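For illustration, the configuration above would map an ID token whose relevant claims look like this (claim names depend on your provider):
```json
{
  "preferred_username": "franz",
  "roles": [ "mathematicians", "scientists" ]
}
```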
## Direct OIDC mapping
If you want to manage AKHQ roles and attributes directly with the OIDC provider, you can use the following configuration:
```yaml
akhq:
security:
oidc:
enabled: true
providers:
google:
label: "Login with Google"
username-field: preferred_username
use-oidc-claim: true
```
In this scenario, you need to make the OIDC provider return a JWT that has the following fields:
```json
{
// Standard claims
"exp": 1635868816,
"iat": 1635868516,
"preferred_username": "json",
...
"scope": "openid email profile",
// Mandatory AKHQ claims
"groups": {
"topic-writer-clusterA-projectA": [
{
"role": "topic-reader",
"patterns": [
"pub.*"
]
}, {
"role": "topic-writer",
"patterns": [
"projectA.*"
],
"clusters": [
"clusterA.*"
]
}
],
"acl-reader-clusterA": [
{
"role": "acl-reader",
"clusters": [
"clusterA.*"
]
}
]
}
}
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/avro.md
# Avro deserialization
Avro messages using Schema registry are automatically decoded if the registry is configured (see [Kafka cluster](../configuration/brokers.md)).
You can also decode raw binary Avro messages, that is, messages encoded directly with [DatumWriter](https://avro.apache.org/docs/current/api/java/org/apache/avro/io/DatumWriter.html) without any header.
You must provide a `schemas-folder` and mappings that associate a `topic-regex` with a schema file name. The schema can be
specified for message keys with `key-schema-file` and/or for values with `value-schema-file`.
Here is an example of configuration:
```yaml
akhq:
connections:
kafka:
properties:
# standard kafka properties
deserialization:
avro-raw:
schemas-folder: "/app/avro_schemas"
topics-mapping:
- topic-regex: "album.*"
value-schema-file: "Album.avsc"
- topic-regex: "film.*"
value-schema-file: "Film.avsc"
- topic-regex: "test.*"
key-schema-file: "Key.avsc"
value-schema-file: "Value.avsc"
```
Examples can be found in [tests](https://github.com/tchiotludo/akhq/tree/dev/src/main/java/org/akhq/utils).
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/brokers.md
# Cluster configuration
* `akhq.connections` is a key/value configuration with:
* `key`: must be a URL-friendly string (letters, numbers, `_`, `-`; dots are not allowed here) that identifies your cluster (e.g. `my-cluster-1`, `my-cluster-2`)
* `properties`: all the configurations found in the [Kafka consumer documentation](https://kafka.apache.org/documentation/#consumerconfigs). The most important is `bootstrap.servers`, a list of host:port pairs of your Kafka brokers.
* `schema-registry`: *(optional)*
* `url`: the schema registry url
* `type`: the type of schema registry used, either 'confluent' or 'tibco'
* `basic-auth-username`: schema registry basic auth username
* `basic-auth-password`: schema registry basic auth password
* `properties`: all the configurations for registry client, especially ssl configuration
* `connect`: *(optional list, define each connector as an element of a list)*
* `name`: connect name
* `url`: connect url
* `basic-auth-username`: connect basic auth username
* `basic-auth-password`: connect basic auth password
* `ssl-trust-store`: trust store path (ex: `/app/truststore.jks`)
* `ssl-trust-store-password`: trust store password
* `ssl-key-store`: key store path (ex: `/app/keystore.jks`)
* `ssl-key-store-password`: key store password
* `ksqldb`: *(optional list, define each ksqlDB instance as an element of a list)*
* `name`: ksqlDB name
* `url`: ksqlDB url
* `basic-auth-username`: ksqlDB basic auth username
* `basic-auth-password`: ksqlDB basic auth password
## Basic cluster with plain auth
```yaml
akhq:
connections:
local:
properties:
bootstrap.servers: "local:9092"
schema-registry:
url: "http://schema-registry:8085"
connect:
- name: "connect"
url: "http://connect:8083"
ksqldb:
- name: "ksqldb"
url: "http://connect:8088"
```
## Example for Confluent Cloud
```yaml
akhq:
connections:
ccloud:
properties:
bootstrap.servers: "{{ cluster }}.{{ region }}.{{ cloud }}.confluent.cloud:9092"
security.protocol: SASL_SSL
sasl.mechanism: PLAIN
sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="{{ kafkaUsername }}" password="{{ kafkaPassword }}";
schema-registry:
url: "https://{{ cluster }}.{{ region }}.{{ cloud }}.confluent.cloud"
basic-auth-username: "{{ schemaRegistryUsername }}"
basic-auth-password: "{{ schemaRegistryPaswword }}"
```
## SSL Kafka Cluster
Configuration example for a Kafka cluster secured by SSL by a SaaS provider like Aiven (full HTTPS & basic auth).
You need to generate a JKS & P12 file from the PEM and cert files given by the SaaS provider:
```bash
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
```
Configurations will look like this example:
```yaml
akhq:
connections:
ssl-dev:
properties:
bootstrap.servers: "{{host}}.aivencloud.com:12835"
security.protocol: SSL
ssl.truststore.location: {{path}}/avnadmin.truststore.jks
ssl.truststore.password: {{password}}
ssl.keystore.type: "PKCS12"
ssl.keystore.location: {{path}}/avnadmin.keystore.p12
ssl.keystore.password: {{password}}
ssl.key.password: {{password}}
schema-registry:
url: "https://{{host}}.aivencloud.com:12838"
type: "confluent"
basic-auth-username: avnadmin
basic-auth-password: {{password}}
properties:
schema.registry.ssl.truststore.location: {{path}}/avnadmin.truststore.jks
schema.registry.ssl.truststore.password: {{password}}
connect:
- name: connect-1
url: "https://{{host}}.aivencloud.com:{{port}}"
basic-auth-username: avnadmin
basic-auth-password: {{password}}
```
## OAuth2 authentication for brokers
Strimzi library requirement:
> The Kafka brokers must be configured with the Strimzi library and an OAuth2 provider (Keycloak example).
> This [repository](https://github.com/strimzi/strimzi-kafka-oauth) contains documentation and examples.
Bootstrap configuration:
> It's not necessary to recompile AKHQ to integrate the Strimzi libraries, since the libs are included in the final image!
You must configure AKHQ through the application.yml file.
```yaml
akhq:
connections:
my-kafka-cluster:
properties:
bootstrap.servers: ":9094,:9094"
sasl.jaas.config: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required auth.valid.issuer.uri="https:///auth/realms/sandbox_kafka" oauth.jwks.endpoint.uri="https:///auth/realms/sandbox_kafka/protocol/openid-connect/certs" oauth.username.claim="preferred_username" oauth.client.id="kafka-producer-client" oauth.client.secret="" oauth.ssl.truststore.location="kafka.server.truststore.jks" oauth.ssl.truststore.password="xxxxx" oauth.ssl.truststore.type="jks" oauth.ssl.endpoint_identification_algorithm="" oauth.token.endpoint.uri="https:///auth/realms/sandbox_kafka/protocol/openid-connect/token";
sasl.login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
security.protocol: SASL_PLAINTEXT
sasl.mechanism: OAUTHBEARER
```
I set `oauth.ssl.endpoint_identification_algorithm=""` for testing because my certificates did not match the FQDN. In production, you have to remove it.
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/docker.md
# Docker
## Pass custom Java opts
By default, the Docker container allows custom JVM options through the `JAVA_OPTS` environment variable.
For example, if you want to change the default timezone, just add `-e "JAVA_OPTS=-Duser.timezone=Europe/Paris"`, as in the sketch below.
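A full command might then look like this (volume and port as in the installation docs):
```bash
docker run -d \
  -p 8080:8080 \
  -v /tmp/application.yml:/app/application.yml \
  -e "JAVA_OPTS=-Duser.timezone=Europe/Paris" \
  tchiotludo/akhq
```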
## Run with another jvm.options file
By default, the Docker container runs with a [jvm.options](https://github.com/tchiotludo/akhq/blob/dev/docker/app/jvm.options) file. You can override it with
your own by setting the `JVM_OPTS_FILE` environment variable to the path of your file.
Override the `JVM_OPTS_FILE` with docker run:
```sh
docker run -d \
  --env JVM_OPTS_FILE={{path-of-your-jvm.options-file}} \
-p 8080:8080 \
-v /tmp/application.yml:/app/application.yml \
tchiotludo/akhq
```
Override the `JVM_OPTS_FILE` with docker-compose:
```yaml
services:
akhq:
image: tchiotludo/akhq-jvm:dev
environment:
JVM_OPTS_FILE: /app/jvm.options
ports:
- "8080:8080"
volumes:
- /tmp/application.yml:/app/application.yml
```
If you do not override `JVM_OPTS_FILE`, the Docker container uses the default one instead.
The AKHQ Docker image supports 4 environment variables to handle configuration:
* `AKHQ_CONFIGURATION`: a string that contains the full configuration in YAML that will be written to
/app/configuration.yml in the container (see the compose sketch after this list).
* `MICRONAUT_APPLICATION_JSON`: a string that contains the full configuration in JSON format
* `MICRONAUT_CONFIG_FILES`: a path to a configuration file in the container. Default path is `/app/application.yml`
* `CLASSPATH`: additional Java classpath entries. Must be used to specify the location of the TIBCO Avro client library
jar if a 'tibco' schema registry type is used
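For instance, a minimal docker-compose sketch passing the whole configuration through `AKHQ_CONFIGURATION` (broker address is illustrative):
```yaml
services:
  akhq:
    image: tchiotludo/akhq
    ports:
      - "8080:8080"
    environment:
      # written to /app/configuration.yml inside the container
      AKHQ_CONFIGURATION: |
        akhq:
          connections:
            local:
              properties:
                bootstrap.servers: "kafka:9092"
```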
## How to mount configuration file
Take care when you mount configuration files not to remove the AKHQ files located in /app.
You need to explicitly mount `/app/application.yml` and not mount the `/app` directory.
Mounting the whole directory would remove the AKHQ binaries and give you this error:
`/usr/local/bin/docker-entrypoint.sh: 9: exec: ./akhq: not found`
```yaml
volumeMounts:
- mountPath: /app/application.yml
subPath: application.yml
name: config
readOnly: true
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/helm.md
# Helm
To create your Helm values, you can take a look at the default values to see how yours can be defined:
https://github.com/tchiotludo/akhq/blob/dev/helm/akhq/values.yaml
Next, we present some example Helm chart values used with AWS MSK that show how to use and define these settings.
## Examples
### AWS MSK with Basic Authentication and ALB controller ingress
The following Helm values are an example for AWS MSK with basic authentication, using the AWS Load Balancer Controller.
Mixing the default values.yaml linked above with the basics of AKHQ authentication (more info here: https://akhq.io/docs/configuration/authentifications/basic-auth.html) and the documentation about how to connect to AWS MSK (https://akhq.io/docs/configuration/authentifications/aws-iam-auth.html), we created the following example.
The `ingress` and `service` sections use Helm configurations similar to those of other open-source Helm charts.
If you need to add more, such as ACL definitions or LDAP integrations, the main documentation provides many examples: https://akhq.io/docs/
```yaml
# This is an example with basic auth, an AWS MSK cluster, and an AWS load balancer controller ingress
configuration:
micronaut:
security:
enabled: true
default-group: no-roles
token:
jwt:
signatures:
secret:
generator:
secret: changeme
akhq:
security:
enabled: true
default-group: no-roles
basic-auth:
- username: changeme
password: changeme
groups:
- admin
- username: changeme
password: changeme
groups:
- reader
server:
access-log:
enabled: true
name: org.akhq.log.access
connections:
my-cluster-sasl:
properties:
bootstrap.servers:
security.protocol: SASL_SSL
sasl.mechanism: SCRAM-SHA-512
sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
ingress:
enabled: true
portnumber: 8080
apiVersion: networking.k8s.io/v1
annotations:
kubernetes.io/ingress.class: 'alb'
alb.ingress.kubernetes.io/group.name: "akhq"
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443},{"HTTPS":80}]'
alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60'
alb.ingress.kubernetes.io/healthcheck-path: "/api/me"
alb.ingress.kubernetes.io/subnets:
external-dns.alpha.kubernetes.io/hostname: "akhq.domain"
alb.ingress.kubernetes.io/certificate-arn: "your_acm_here"
alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tls"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,80"
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
labels:
app: akhq
service:
port: 443
annotations:
service.beta.kubernetes.io/target-type: "ip"
hosts: [ 'akhq.domain' ]
paths: [ "/*" ]
tls:
- secretName: tls-credential
hosts:
- 'akhq.domain'
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/others.md
# Others
## Server
* `micronaut.server.context-path`: if behind a reverse proxy, path to AKHQ without trailing slash (optional).
Example: if AKHQ is behind a reverse proxy serving it under the `/akhq` path, set `context-path: "/akhq"`.
Not needed if you're behind a reverse proxy with a subdomain.
## Kafka admin / producer / consumer default properties
* `akhq.clients-defaults.{{admin|producer|consumer}}.properties`: default configuration for admin, producer or
consumer. All properties from the [Kafka documentation](https://kafka.apache.org/documentation/) are available.
## Micronaut configuration
> Since AKHQ is based on [Micronaut](https://micronaut.io/), you can customize configurations (server port, ssl, ...) with [Micronaut configuration](https://docs.micronaut.io/snapshot/guide/configurationreference.html#io.micronaut.http.server.HttpServerConfiguration).
> More information can be found on [Micronaut documentation](https://docs.micronaut.io/snapshot/guide/index.html#config)
### Activating SSL
When using HTTPS for communication, Micronaut needs to get the certificate within Netty. This uses classes of the `java.base` package which are no longer exported by the JDK we use, so the configuration below needs to be extended with this environment variable:
```bash
JDK_JAVA_OPTIONS: --add-exports\=java.base/sun.security.x509\=ALL-UNNAMED
```
```yaml
micronaut:
server:
ssl:
enabled: true
build-self-signed: true
```
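Instead of a self-signed certificate, you can point Micronaut at your own keystore; a sketch with placeholder path and password:
```yaml
micronaut:
  server:
    ssl:
      enabled: true
      key-store:
        path: "file:/app/keystore.p12"  # placeholder path
        type: "PKCS12"
        password: "changeit"            # placeholder password
```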
## JSON Logging
In order to configure AKHQ to output logs in JSON format, a logback configuration needs to be provided, e.g. a `logback.xml` along these lines (a minimal sketch using the Logstash Logback encoder):
```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```
This file then needs to be mounted to `/app/logback.xml` and referenced in `JAVA_OPTS` via `-Dlogback.configurationFile=/app/logback.xml` (see [docker](docker.md) for more information).
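Putting it together, a sketch of the corresponding Docker invocation:
```bash
docker run -d \
  -p 8080:8080 \
  -v /tmp/application.yml:/app/application.yml \
  -v /tmp/logback.xml:/app/logback.xml \
  -e "JAVA_OPTS=-Dlogback.configurationFile=/app/logback.xml" \
  tchiotludo/akhq
```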
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/protobuf.md
# Protobuf deserialization
To deserialize topics containing data in Protobuf format, you can set a topics mapping:
for each `topic-regex` you can specify a `descriptor-file-base64` (a descriptor file encoded in Base64),
or you can put descriptor files in `descriptors-folder` and specify the `descriptor-file` name,
and also specify the corresponding message types for keys and values.
If, for example, keys are not in Protobuf format, `key-message-type` can be omitted; the same goes for
`value-message-type`. It's important to keep in mind that both `key-message-type` and `value-message-type`
require a fully-qualified name.
This configuration can be specified for each Kafka cluster.
An example configuration looks as follows:
```yaml
akhq:
connections:
kafka:
properties:
# standard kafka properties
deserialization:
protobuf:
descriptors-folder: "/app/protobuf_desc"
topics-mapping:
- topic-regex: "album.*"
descriptor-file-base64: "Cs4BCgthbGJ1bS5wcm90bxIXY29tLm5ldGNyYWNrZXIucHJvdG9idWYidwoFQWxidW0SFAoFdGl0bGUYASABKAlSBXRpdGxlEhYKBmFydGlzdBgCIAMoCVIGYXJ0aXN0EiEKDHJlbGVhc2VfeWVhchgDIAEoBVILcmVsZWFzZVllYXISHQoKc29uZ190aXRsZRgEIAMoCVIJc29uZ1RpdGxlQiUKF2NvbS5uZXRjcmFja2VyLnByb3RvYnVmQgpBbGJ1bVByb3RvYgZwcm90bzM="
value-message-type: "org.akhq.utils.Album"
- topic-regex: "film.*"
descriptor-file-base64: "CuEBCgpmaWxtLnByb3RvEhRjb20uY29tcGFueS5wcm90b2J1ZiKRAQoERmlsbRISCgRuYW1lGAEgASgJUgRuYW1lEhoKCHByb2R1Y2VyGAIgASgJUghwcm9kdWNlchIhCgxyZWxlYXNlX3llYXIYAyABKAVSC3JlbGVhc2VZZWFyEhoKCGR1cmF0aW9uGAQgASgFUghkdXJhdGlvbhIaCghzdGFycmluZxgFIAMoCVIIc3RhcnJpbmdCIQoUY29tLmNvbXBhbnkucHJvdG9idWZCCUZpbG1Qcm90b2IGcHJvdG8z"
value-message-type: "org.akhq.utils.Film"
- topic-regex: "test.*"
descriptor-file: "other.desc"
key-message-type: "org.akhq.utils.Row"
value-message-type: "org.akhq.utils.Envelope"
```
More examples about Protobuf deserialization can be found in [tests](https://github.com/tchiotludo/akhq/tree/dev/src/test/java/org/akhq/utils).
Info about the descriptor files generation can be found in [test resources](https://github.com/tchiotludo/akhq/tree/dev/src/test/resources/protobuf_proto).
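Descriptor files themselves can be generated from `.proto` sources with `protoc`; for example (file names are illustrative):
```bash
# produce a self-contained descriptor set for use as descriptor-file
protoc --include_imports --descriptor_set_out=album.desc album.proto
# Base64-encode it for descriptor-file-base64 (GNU coreutils base64)
base64 -w0 album.desc
```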
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/glue.md
# Glue schema registry
Currently, Glue schema registry support is limited to deserialization of Avro/Protobuf/JSON serialized messages.
It can be configured as below.
```yaml
akhq:
environment:
AKHQ_CONFIGURATION: |
akhq:
connections:
docker-kafka-server:
properties:
bootstrap.servers: "kafka:9092"
schema-registry:
url: "http://schema-registry:8085"
type: "glue"
glueSchemaRegistryName: Name of schema Registry
awsRegion: aws region
connect:
- name: "connect"
url: "http://connect:8083"
ports:
- 8080:8080
links:
- kafka
- repo
```
Please note that authentication is done using the AWS default credentials provider.
The `url` key is still required so the configuration flow is not broken.
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/schema-references.md
# Schema references
Since Confluent 5.5.0, Avro schemas can be reused by other schemas through schema references. This feature allows a schema to be defined once and used as a record type inside one or more other schemas.
When registering new Avro schemas with the AKHQ UI, it is now possible to pass a slightly more complex object with a `schema` and a `references` field.
To register a new schema without references, no need to change anything:
```json
{
"name": "Schema1",
"namespace": "org.akhq",
"type": "record",
"fields": [
{
"name": "description",
"type": "string"
}
]
}
```
To register a new schema with a reference to an already registered schema:
```json
{
"schema": {
"name": "Schema2",
"namespace": "org.akhq",
"type": "record",
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "schema1",
"type": "Schema1"
}
]
},
"references": [
{
"name": "Schema1",
"subject": "SCHEMA_1",
"version": 1
}
]
}
```
Documentation on Confluent 5.5 and schema references can be found [here](https://docs.confluent.io/5.5.0/schema-registry/serdes-develop/index.html).
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/tibco.md
# TIBCO schema registry
If you are using the TIBCO schema registry, you will also need to mount and use the TIBCO Avro client library and its
dependencies. The akhq service in a docker compose file might look something like:
```yaml
akhq:
# build:
# context: .
image: tchiotludo/akhq
volumes:
- /opt/tibco/akd/repo/1.2/lib/tibftl-kafka-avro-1.2.0-thin.jar:/app/tibftl-kafka-avro-1.2.0-thin.jar
- /opt/tibco/akd/repo/1.2/lib/deps:/app/deps
environment:
AKHQ_CONFIGURATION: |
akhq:
connections:
docker-kafka-server:
properties:
bootstrap.servers: "kafka:9092"
schema-registry:
type: "tibco"
url: "http://repo:8081"
connect:
- name: "connect"
url: "http://connect:8083"
CLASSPATH: "/app/tibftl-kafka-avro-1.2.0-thin.jar:/app/deps/*"
ports:
- 8080:8080
links:
- kafka
- repo
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/debug.md
# Debug & Monitoring
## Monitoring endpoint
Several monitoring endpoints are enabled by default and are available on port `28081` only.
You can disable them, change the port, or restrict access to authenticated users via the Micronaut configuration.
* `/info` [Info Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#infoEndpoint) with git status information.
* `/health` [Health Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#healthEndpoint)
* `/loggers` [Loggers Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#loggersEndpoint)
* `/metrics` [Metrics Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#metricsEndpoint)
* `/prometheus` [Prometheus Endpoint](https://micronaut-projects.github.io/micronaut-micrometer/latest/guide/)
## Debugging AKHQ performance issues
You can trace the duration of every query issued by AKHQ with this command:
```bash
curl -i -X POST -H "Content-Type: application/json" \
-d '{ "configuredLevel": "TRACE" }' \
http://localhost:28081/loggers/org.akhq
```
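Once you are done, you can restore a quieter level the same way:
```bash
curl -i -X POST -H "Content-Type: application/json" \
  -d '{ "configuredLevel": "INFO" }' \
  http://localhost:28081/loggers/org.akhq
```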
## Debugging authentication
Debugging authentication can be done by increasing the log level on Micronaut, which handles most of the authentication:
```bash
curl -i -X POST -H "Content-Type: application/json" \
-d '{ "configuredLevel": "TRACE" }' \
http://localhost:28081/loggers/io.micronaut.security
curl -i -X POST -H "Content-Type: application/json" \
-d '{ "configuredLevel": "TRACE" }' \
http://localhost:28081/loggers/org.akhq.configs
```
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/dev.md
# Development Environment
## Early dev image
You can get access to the latest features / bug fixes with the Docker dev image, automatically built on the `dev` tag:
```bash
docker pull tchiotludo/akhq:dev
```
The dev jar is not published on GitHub; you have 2 ways to get the `dev` jar.
Get it from the Docker image:
```bash
docker pull tchiotludo/akhq:dev
docker run --rm --name=akhq -v /tmp/akhq/application-dev.yml:/app/application.yml -it tchiotludo/akhq:dev
docker cp akhq:/app/akhq.jar .
```
Or build it with `./gradlew shadowJar`; the jar will be located at `build/libs/akhq-*.jar`.
## Development Server
A docker-compose file is provided to start a development environment.
Just install Docker & the Docker Compose plugin, clone the repository and issue a simple `docker compose -f docker-compose-dev.yml up` to start a dev server.
The dev server is a Java server & webpack-dev-server with live reload.
The configuration for the dev server is in `application.dev.yml`.
Once it has started, you can visit http://localhost:4000. Additionally, you can run `npm start` inside the `client` folder, which provides hot reload.
## Setup local dev environment on Windows
In case you want to develop for AKHQ on Windows with IntelliJ IDEA without Docker (for any reason) you can follow this
brief guide. For the following steps, please make sure you meet these requirements:
* OS: Windows (10)
* Kafka (2.6.0) is downloaded and extracted; the installation directory is referred to as $KAFKA_HOME below
* Git is installed and configured
* IntelliJ IDEA (Community Edition 2020.2) with the following plugins installed:
* Gradle (bundled with IDEA)
* [Lombok](https://plugins.jetbrains.com/plugin/6317-lombok)
First, run a Kafka server locally: start Zookeeper first, then the Kafka broker, by opening a CMD and doing:
```bash
$KAFKA_HOME\bin\windows\zookeeper-server-start.bat config\zookeeper.properties
$KAFKA_HOME\bin\windows\kafka-server-start.bat config\server.properties
```
A zero-config Kafka server should be up and running locally on your machine now. For further details or troubleshooting
see [Kafka Getting started guide](https://kafka.apache.org/quickstart). In the next step we're going to checkout AKHQ from GitHub:
```bash
git clone https://github.com/tchiotludo/akhq.git
```
Open the checked-out directory in IntelliJ IDEA. The current version of AKHQ is built with Java 17. If you
don't have OpenJDK 17 installed already, do the following in IntelliJ IDEA:
* _File > Project Structure... > Platform Settings > SDKs > + > Download JDK..._ > select a vendor of your choice (but make sure it's version 17), then download + install
* Make sure that JDK 17 is set under _Project Settings > Project SDK_ and that the language level is Java 17
* Now tell Gradle to use Java 17 as well: _File > Settings > Plugins > Build, Execution, Deployment > Build Tools > Gradle > Gradle JVM_: any JDK 17
To configure AKHQ for using the Kafka server you set up before, edit `application.yml` by adding the following under `akhq`:
```yaml
akhq:
connections:
kafka:
properties:
bootstrap.servers: "localhost:9092"
```
::: warning
Do not commit this part of `application.yml`. A more secure way to configure your local development Kafka server is
described in the Micronaut doc, chapter ["Application Configuration"](https://docs.micronaut.io/2.5.13/guide/index.html#config).
:::
Now you should be able to build the project with Gradle: go to the Gradle view in IDEA and select _Tasks > build >
build_. If an error occurs saying that a filename is too long, move your project directory closer to the root of your
filesystem, or, as a fix only for testing purposes, pass the argument `-x test` to skip tests temporarily.
To debug a running AKHQ instance, go to the Gradle tab in IntelliJ IDEA, _Tasks > application_ > right click `run` and click
"_Debug(...)_". AKHQ should start up and hit the breakpoints you set in your IDE. Happy developing/debugging!
---
# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/installation.md
# Installation
First you need a [configuration file](./configuration/README.md) in order to configure AKHQ connections to Kafka brokers.
The default configuration file path is `/app/application.yml` (a YAML file), i.e. it is expected in the same folder as the AKHQ application files. The configuration file can be read from any path through the `MICRONAUT_CONFIG_FILES` environment variable, for example: `MICRONAUT_CONFIG_FILES=/somepath/application.yml`.
### Docker
```sh
docker run -d \
-p 8080:8080 \
-v /tmp/application.yml:/app/application.yml \
tchiotludo/akhq
```
* The `/tmp/application.yml` passed to `-v` must be an absolute path to the configuration file
* Go to [http://localhost:8080](http://localhost:8080)
### Stand Alone
* Install Java 17
* Download the latest jar on [release page](https://github.com/tchiotludo/akhq/releases)
* Create a [configuration file](./configuration/README.md)
* Launch the application with `java -Dmicronaut.config.files=/path/to/application.yml -jar akhq.jar`
* Go to [http://localhost:8080](http://localhost:8080)
### Running in Kubernetes (using a Helm Chart)
### Using Helm repository
* Add the AKHQ helm charts repository:
```sh
helm repo add akhq https://akhq.io/
```
* Install or upgrade
```sh
helm upgrade --install akhq akhq/akhq
```
#### Requirements
* Chart version >=0.1.1 requires Kubernetes version >=1.14
* Chart version 0.1.0 works on previous Kubernetes versions
```sh
helm install akhq akhq/akhq --version 0.1.0
```
### Using git
* Clone the repository:
```sh
git clone https://github.com/tchiotludo/akhq && cd akhq/helm/akhq
```
* Update the Helm values located in [values.yaml](https://github.com/tchiotludo/akhq/blob/dev/helm/akhq/values.yaml)
  * `configuration` values will contain all related configuration that you can find in [application.example.yml](https://github.com/tchiotludo/akhq/blob/dev/application.example.yml) and will be stored in a `ConfigMap`
  * `secrets` values will contain all sensitive configuration (with credentials) that you can find in [application.example.yml](https://github.com/tchiotludo/akhq/blob/dev/application.example.yml) and will be stored in a `Secret`
  * Both values will be merged at startup
* Apply the chart:
```sh
helm install akhq-release-name .
```