# Akhq

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/api.md

# Api

An **experimental** API is available that allows you to fetch all the data exposed by AKHQ through an API.

Take care that this API is **experimental** and **will** change in a future release. Some endpoints expose too much data and are slow to fetch; we will remove some properties in the future in order to make them faster. Example: the list topics endpoint exposes log dirs, consumer groups, and offsets. Fetching all of these is slow for now, and we will remove them in a future release.

You can discover the API endpoints here:

* `/api`: a [RapiDoc](https://mrin9.github.io/RapiDoc/) webpage that documents all the endpoints.
* `/swagger/akhq.yml`: the full [OpenApi](https://www.openapis.org/) specification file

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/akhq.md

# AKHQ configuration

## Pagination

* `akhq.pagination.page-size`: number of topics per page (default: 25)

## Avro Serializer

* `akhq.avro-serializer.json.serialization.inclusions`: a list of ObjectMapper serialization inclusions used for converting Avro messages to a more readable JSON format in the UI. Supports the `JsonInclude.Include` enum values from the Jackson library

## Topic List

* `akhq.topic.internal-regexps`: a list of regexps for topics to be considered internal (an internal topic can't be deleted or updated)
* `akhq.topic.stream-regexps`: a list of regexps for topics to be considered internal stream topics

## Topic creation default values

These parameters are the default values used in the topic creation page.

* `akhq.topic.replication`: default number of replicas to use
* `akhq.topic.partition`: default number of partitions

## Topic Data

* `akhq.topic-data.size`: max records per page (default: 50)
* `akhq.topic-data.poll-timeout`: the time, in milliseconds, spent waiting in poll if data is not available in the buffer (default: 1000).
* `akhq.topic-data.kafka-max-message-length`: max message length allowed to send to the UI when retrieving a list of records (default: 1000000 bytes).

## Ui Settings

### Topics

* `akhq.ui-options.topic.default-view`: default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM) (default: HIDE_INTERNAL)
* `akhq.ui-options.topic.skip-consumer-groups`: hide the consumer groups column on the topic list
* `akhq.ui-options.topic.skip-last-record`: hide the last record on the topic list
* `akhq.ui-options.topic.show-all-consumer-groups`: expand the lists of consumer groups on the topic list
* `akhq.ui-options.topic.groups-default-view`: default consumer groups list view on the topic screen / consumer groups tab (ALL, HIDE_EMPTY) (default: ALL). HIDE_EMPTY increases performance, especially on clusters with a lot of consumer groups

### Topic Data Display

* `akhq.ui-options.topic-data.sort`: default sort order (OLDEST, NEWEST) (default: OLDEST)

### Inject some css or javascript

* `akhq.html-head`: append some head tags on the webserver application

Mostly useful in order to inject some css or javascript to customize the web application.

Example, adding environment information to the left menu:

```yaml
akhq:
  html-head: |
```

## Custom HTTP response headers

To add headers to every response, add the headers as in the following example:

```yaml
akhq:
  server:
    customHttpResponseHeaders:
      - name: "Content-Security-Policy"
        value: "default-src 'none'; frame-src 'self'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self'; frame-ancestors 'self'; form-action 'self'; upgrade-insecure-requests"
      - name: "X-Permitted-Cross-Domain-Policies"
        value: "none"
```

## Data Masking

If you want to hide some data in your records, there are two approaches.

### Regex Masking

You can use regex masking - configure this with the following filters. These will be applied to all record values and keys.
```yaml
akhq:
  security:
    data-masking:
      mode: regex # Note - this is not explicitly required as regex is the 'default' masker that gets applied for backwards compatibility
      filters:
        - description: "Masks value for secret-key fields"
          search-regex: '"(secret-key)":".*"'
          replacement: '"$1":"xxxx"'
        - description: "Masks last digits of phone numbers"
          search-regex: '"([\+]?[(]?[0-9]{3}[)]?[-\s\.]?[0-9]{3}[-\s\.]?)[0-9]{4,6}"'
          replacement: '"$1xxxx"'
```

### JSON Masking

This is useful for records which are interpreted as JSON on deserialization to strings - for example, Avro records, or normal JSON payloads. These can be configured per-topic, and you can select distinct fields to mask/unmask.

There are two JSON masking modes: `json_show_by_default` and `json_mask_by_default`.

#### Show by default config

This means, by default, nothing is masked. If you wish to mask data this way, you can:

* Set a value in `akhq.security.data-masking.jsonMaskReplacement` (this defaults to `xxxx`)
* Set `akhq.security.data-masking.mode` to `json_show_by_default`
* Add as many filters as desired under `akhq.security.data-masking.json-filters` (see below for an example) to select fields you want to *mask*

NOTES:

Only one filter per topic is currently supported. If you are using `RecordNameStrategy` on a topic with multiple record types, there is (currently) no way to distinguish between different records, so any records which have the JSON field at the selected path(s) will be masked. If you have a misconfiguration and have defined multiple filters per topic, only the first will actually be selected.
```yaml
akhq:
  security:
    data-masking:
      mode: json_show_by_default
      jsonMaskReplacement: xxxx
      json-filters:
        - description: Mask sensitive values
          topic: users
          keys:
            - name
            - dateOfBirth
            - address.firstLine
            - address.town
            - metadata.notes
```

Given a record on `users` that looks like:

```json
{
  "specialId": 123,
  "status": "ACTIVE",
  "name": "John Smith",
  "dateOfBirth": "01-01-1991",
  "address": [
    {
      "firstLine": "123 Example Avenue",
      "town": "Faketown",
      "country": "United Kingdom"
    },
    {
      "firstLine": "123 Previous Avenue",
      "town": "Previoustown",
      "country": "United Kingdom"
    }
  ],
  "metadata": {
    "trusted": true,
    "rating": "10",
    "notes": "All in good order"
  }
}
```

With the above configuration, it will appear as:

```json
{
  "specialId": 123,
  "status": "ACTIVE",
  "name": "xxxx",
  "dateOfBirth": "xxxx",
  "address": [
    {
      "firstLine": "xxxx",
      "town": "xxxx",
      "country": "United Kingdom"
    },
    {
      "firstLine": "xxxx",
      "town": "xxxx",
      "country": "United Kingdom"
    }
  ],
  "metadata": {
    "trusted": true,
    "rating": "10",
    "notes": "xxxx"
  }
}
```

Note how arrays are automatically understood where relevant. In other words, `address.firstLine` will apply to both of the following:

```json
{
  "address": {
    "firstLine": "This field!"
  }
}
```

and

```json
{
  "address": [
    {
      "firstLine": "This field!"
    },
    {
      "firstLine": "And this one!"
    }
  ]
}
```

### Mask by default config

This means, by default, everything is masked. This is useful in production scenarios where data must be carefully selected and made available to users of AKHQ - usually this is for regulatory compliance of personal/sensitive information.

PLEASE NOTE: This has the side effect of being unable to show unstructured data at all. This is because if the schema registry is down, the binary data would otherwise be unfilterable. Instead, a placeholder message is shown.
If you wish to mask data this way, you can:

* Set a value in `akhq.security.data-masking.jsonMaskReplacement` (this defaults to `xxxx`)
* Set `akhq.security.data-masking.mode` to `json_mask_by_default`
* Add as many filters as desired under `akhq.security.data-masking.json-filters` (see below for an example) to select fields you want to *show*

NOTES:

Only one filter per topic is currently supported. If you are using `RecordNameStrategy` on a topic with multiple record types, there is (currently) no way to distinguish between different records, so any records which have the JSON field at the selected path(s) will be shown. If you have a misconfiguration and have defined multiple filters per topic, only the first will actually be selected.

```yaml
akhq:
  security:
    data-masking:
      mode: json_mask_by_default
      jsonMaskReplacement: xxxx
      json-filters:
        - description: Unmask non-sensitive values
          topic: users
          keys:
            - specialId
            - status
            - address.country
            - metadata.trusted
            - metadata.rating
```

Given a record on `users` that looks like:

```json
{
  "specialId": 123,
  "status": "ACTIVE",
  "name": "John Smith",
  "dateOfBirth": "01-01-1991",
  "address": [
    {
      "firstLine": "123 Example Avenue",
      "town": "Faketown",
      "country": "United Kingdom"
    },
    {
      "firstLine": "123 Previous Avenue",
      "town": "Previoustown",
      "country": "United Kingdom"
    }
  ],
  "metadata": {
    "trusted": true,
    "rating": "10",
    "notes": "All in good order"
  }
}
```

With the above configuration, it will appear as:

```json
{
  "specialId": 123,
  "status": "ACTIVE",
  "name": "xxxx",
  "dateOfBirth": "xxxx",
  "address": [
    {
      "firstLine": "xxxx",
      "town": "xxxx",
      "country": "United Kingdom"
    },
    {
      "firstLine": "xxxx",
      "town": "xxxx",
      "country": "United Kingdom"
    }
  ],
  "metadata": {
    "trusted": true,
    "rating": "10",
    "notes": "xxxx"
  }
}
```

Note how arrays are automatically understood where relevant. In other words, `address.firstLine` will apply to both of the following:

```json
{
  "address": {
    "firstLine": "This field!"
  }
}
```

and

```json
{
  "address": [
    {
      "firstLine": "This field!"
    },
    {
      "firstLine": "And this one!"
    }
  ]
}
```

### No masking required

You can set `akhq.security.data-masking.mode` to `none` to disable masking altogether.

## Audit

If you want to audit user actions that modify topics or consumer group state, you can configure AKHQ to send audit events to a pre-configured cluster:

```yaml
akhq:
  audit:
    enabled: true
    cluster-id: my-audit-cluster-plain-text
    topic-name: audit
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/audit.md

# Audit configuration

AKHQ can be configured to emit audit events to a Kafka cluster for the following user actions:

- Topic level
  - Topic creation
  - Topic configuration change
  - Topic partition increase
  - Topic deletion
- Consumer group level
  - Update offsets
  - Delete offsets
  - Delete consumer group
- Schema registry
  - Create new schema for a subject
  - Update existing schema for a subject
  - Change compatibility level of a subject
  - Delete a subject
- Kafka connect
  - Create new connector
  - Update existing connector
  - Pause and resume connector
  - Restart connector or task
  - Delete connector

The following configuration is an example of AKHQ with audit turned ON. All events mentioned above will be sent to the `my-audit-cluster-plain-text` cluster in the topic `audit`.

```yaml
akhq:
  connections:
    my-cluster-plain-text:
      properties:
        bootstrap.servers: "kafka:9092"
    my-audit-cluster-plain-text:
      properties:
        bootstrap.servers: "audit:9092"
  audit:
    enabled: true
    cluster-id: my-audit-cluster-plain-text
    topic-name: audit
```

To be able to identify the user who performed these actions, security must be turned ON (otherwise the userName field is left empty).

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/aws-iam-auth.md

# AWS MSK IAM Auth

* The libraries required for IAM authentication have already been loaded.
Configure the aws-msk-iam-auth connection in AKHQ:

```yaml
akhq:
  connections:
    docker-kafka-server:
      properties:
        bootstrap.servers: msk-broker:9098
        security.protocol: SASL_SSL
        sasl.mechanism: AWS_MSK_IAM
        sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required awsDebugCreds=true;
        sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
        ssl.truststore.location: ${JAVA_HOME}/lib/security/cacerts
        ssl.truststore.password: changeit
```

## References

[https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html)

[https://github.com/aws/aws-msk-iam-auth/blob/main/README.md](https://github.com/aws/aws-msk-iam-auth/blob/main/README.md)

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/basic-auth.md

# Basic Auth

* `akhq.security.basic-auth`: list of users & passwords with their assigned groups
  * `- username: actual-username`: login of the current user (may be anything: email, login, ...)
  * `password`: password in SHA256 (default) or BCrypt.
    The password can be generated:
    * for the default SHA256, with the command `echo -n "password" | sha256sum` or the Ansible filter `{{ 'password' | hash('sha256') }}`
    * for BCrypt, with the Ansible filter `{{ 'password' | password_hash('blowfish') }}`
  * `passwordHash`: password hashing algorithm, either `SHA256` or `BCRYPT`
  * `groups`: groups for the current user

Configure the basic-auth connection in AKHQ:

```yaml
micronaut:
  security:
    enabled: true
akhq.security:
  basic-auth:
    - username: admin
      password: "$2a$"
      passwordHash: BCRYPT
      groups:
        - admin
    - username: reader
      password: ""
      groups:
        - reader
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/external.md

# External roles and attributes mapping

If you manage topic (or any other resource) permissions in an external system, you have access to 2 more implementation mechanisms to map your authenticated user (from either Local, Header, LDAP or OIDC) into AKHQ roles and attributes.

If you use this approach, keep in mind it will take the local user's groups for local Auth, and the external groups for Header/LDAP/OIDC (ie. this will NOT do the mapping between Header/LDAP/OIDC and local groups).

**Default configuration-based**

This is the current implementation and the default one (doesn't break compatibility):

````yaml
akhq:
  security:
    default-group: admin
    groups:
      reader:
        - role: reader
        # patterns: [ ".*" ]
        # clusters: [ ".*" ]
    ldap: # LDAP users/groups to AKHQ groups mapping
    oidc: # OIDC users/groups to AKHQ groups mapping
    header-auth: # header authentication users/groups to AKHQ groups mapping
````

## REST API

````yaml
akhq:
  security:
    default-group: no-roles
    rest:
      enabled: true
      url: https://external.service/get-roles-and-attributes
    groups: # anything set here will not be used

micronaut:
  caches:
    rest-api-claim-provider:
      expire-after-write: 600s # Default. May be overridden.
````

In this mode, AKHQ will send to the ``akhq.security.rest.url`` endpoint a POST request with the following JSON:

````json
{
  "providerType": "LDAP or OIDC or BASIC_AUTH or HEADER",
  "providerName": "OIDC provider name (OIDC only)",
  "username": "user",
  "groups": ["LDAP-GROUP-1", "LDAP-GROUP-2", "LDAP-GROUP-3"]
}
````

and expects the following JSON as a response:

````json
{
  "groups": {
    "topic-writer-clusterA-projectA": [
      {
        "role": "topic-reader",
        "patterns": [ "pub.*" ]
      },
      {
        "role": "topic-writer",
        "patterns": [ "projectA.*" ],
        "clusters": [ "clusterA.*" ]
      }
    ],
    "acl-reader-clusterA": [
      {
        "role": "acl-reader",
        "clusters": [ "clusterA.*" ]
      }
    ]
  }
}
````

The response will be cached according to the settings under `micronaut.caches.rest-api-claim-provider`, as may be seen in the example above.

If you want to send a static authentication token to the external service (for example, where the service might be publicly reachable), you can extend the configuration for the rest interface as follows:

````yaml
akhq:
  security:
    rest:
      enabled: true
      url: https://external.service/get-roles-and-attributes
      headers:
        - name: Authorization
          value: Bearer your-token
````

::: warning
The response must contain the `Content-Type: application/json` header to prevent any issue when reading the response.
:::

## Groovy API

````yaml
akhq:
  security:
    default-group: no-roles
    groovy:
      enabled: true
      file: |
        package org.akhq.models.security;
        class GroovyCustomClaimProvider implements ClaimProvider {
            @Override
            ClaimResponse generateClaim(ClaimRequest request) {
                String filterRegexp = request.groups.collect { '^' + it + '\\..*' }.join('|')
                def groups = [
                        "reader": [
                                new org.akhq.configs.security.Group(role: "reader", patterns: [filterRegexp]),
                        ]
                ]
                return ClaimResponse.builder().groups(groups).build();
            }
        }
    groups: # anything set here will not be used
````

``akhq.security.groovy.file`` must be a Groovy class that implements the interface ClaimProvider:

````java
package org.akhq.models.security;

public interface ClaimProvider {
    ClaimResponse generateClaim(ClaimRequest request);
}

enum ClaimProviderType {
    BASIC_AUTH,
    LDAP,
    OIDC
}

public class ClaimRequest {
    ClaimProvider.ProviderType providerType;
    String providerName;
    String username;
    List<String> groups;
}

public class ClaimResponse {
    private Map<String, List<Group>> groups;
}
````

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/github.md

# GitHub SSO / OAuth2

To enable GitHub SSO in the application, you'll first have to enable OAuth2 in micronaut:

```yaml
micronaut:
  security:
    enabled: true
    oauth2:
      enabled: true
      clients:
        github:
          client-id: ""
          client-secret: ""
          scopes:
            - user:email
            - read:user
          authorization:
            url: https://github.com/login/oauth/authorize
          token:
            url: https://github.com/login/oauth/access_token
            auth-method: client-secret-post
```

You can also override the GitHub API url if needed.
Default value is https://api.github.com

```yaml
github.api.url: https://override.api.github.com
```

To further tell AKHQ to display GitHub SSO options on the login page and customize claim mapping, configure OAuth in the AKHQ config:

```yaml
akhq:
  security:
    default-group: no-roles
    oauth2:
      enabled: true
      providers:
        github:
          label: "Login with GitHub"
          username-field: login
          users:
            - username: franz
              groups: # the corresponding akhq groups (eg. topic-reader/writer or akhq default groups like admin/reader/no-role)
                - topic-reader
                - topic-writer
```

The username field can be any string field; the roles field has to be a JSON array.

## References

[https://micronaut-projects.github.io/micronaut-security/latest/guide/#oauth2-configuration](https://micronaut-projects.github.io/micronaut-security/latest/guide/#oauth2-configuration)

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/groups.md

# Groups

Groups allow you to set granular user permissions on various resources.

::: warning
With PR #1472, AKHQ introduced a new, better group management system in 0.25.0. It's a breaking change, so you need to rewrite your ACLs
:::

Define groups with specific roles for your users:

* `akhq.security.default-group`: default group for every user, even unauthenticated users
* `akhq.security.groups`: groups map definition
  * `key:` a unique key used as the name if not specified
  * A list of role/patterns/clusters associations
    * `role`: name of an existing role
    * `patterns`: list of regular expressions that resources from the given role must match at least once to get access
    * `clusters`: list of regular expressions that the cluster must match at least once to get access

::: warning
Please also set the `micronaut.security.token.jwt.signatures.secret.generator.secret` if you set a group. If the secret is not set, the API will not enforce the group role, and the restriction is in the UI only.
:::

3 default groups are available:

* `admin` with all rights and no patterns/clusters restrictions
* `reader` with read access only on all AKHQ and no patterns/clusters restrictions
* `no-roles` without any roles, forces the user to login

Here is an example of a `reader` group definition based on the default reader role, with access to all the resources prefixed with `pub` and located on the `public` cluster:

```yaml
groups:
  reader:
    - role: reader
      patterns: [ "pub.*" ]
      clusters: [ "public" ]
```

## Roles

Roles are based on Resource and Action associations. A role can target one or several Resources and allow one or several Actions.

The resources and actions list and the possible associations between them are detailed in the table below. You can still associate a resource with a non-supported action from the table; however, it will be ignored.
|                | TOPIC | TOPIC_DATA | CONSUMER_GROUP | CONNECT_CLUSTER | CONNECTOR | SCHEMA | NODE | ACL | KSQLDB |
|----------------|-------|------------|----------------|-----------------|-----------|--------|------|-----|--------|
| READ           | X     | X          | X              | X               | X         | X      | X    | X   | X      |
| CREATE         | X     | X          |                |                 | X         | X      |      |     |        |
| UPDATE         | X     | X          |                |                 | X         | X      |      |     |        |
| DELETE         | X     | X          | X              |                 | X         | X      |      |     |        |
| UPDATE_OFFSET  |       |            | X              |                 |           |        |      |     |        |
| DELETE_OFFSET  |       |            | X              |                 |           |        |      |     |        |
| READ_CONFIG    | X     |            |                |                 |           |        | X    |     |        |
| ALTER_CONFIG   | X     |            |                |                 |           |        | X    |     |        |
| DELETE_VERSION |       |            |                |                 |           | X      |      |     |        |
| UPDATE_STATE   |       |            |                |                 | X         |        |      |     |        |
| EXECUTE        |       |            |                |                 |           |        |      |     | X      |
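For instance, reading the CONSUMER_GROUP column of the table above, a custom role could combine all the actions that resource supports (the role name `consumer-group-admin` below is chosen for illustration only; it is not one of the predefined roles):

```yaml
# Illustrative custom role: every action the table marks as supported for CONSUMER_GROUP
consumer-group-admin:
  - resources: [ "CONSUMER_GROUP" ]
    actions: [ "READ", "DELETE", "UPDATE_OFFSET", "DELETE_OFFSET" ]
```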
A default roles list is predefined in `akhq.security.roles` but you can override it.

A role contains:

* `key:` a unique key used as the name
* A list of resources/actions associations
  * `resources:` list of resources (ex: `[ "TOPIC", "TOPIC_DATA"]`)
  * `actions:` actions allowed on the previous resources (ex: `[ "READ", "CREATE"]`)

The default configuration provides a topic-admin role defined as follows:

```yaml
topic-admin:
  - resources: [ "TOPIC", "TOPIC_DATA" ]
    actions: [ "READ", "CREATE", "DELETE" ]
  - resources: [ "TOPIC" ]
    actions: [ "UPDATE", "READ_CONFIG", "ALTER_CONFIG" ]
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/header.md

# Header configuration (reverse proxy)

To enable Header authentication in the application, you'll have to configure the headers that will resolve users & groups:

```yaml
akhq:
  security:
    # Header configuration (reverse proxy)
    header-auth:
      user-header: x-akhq-user # mandatory (the header name that will contain username)
      groups-header: x-akhq-group # optional (the header name that will contain groups separated by groups-header-separator)
      groups-header-separator: , # optional (separator, defaults to ',')
      ip-patterns: [0.0.0.0] # optional (Java regular expressions for matching trusted IP addresses, '0.0.0.0' matches all addresses)
      default-group: topic-reader
      groups: # optional
        # the name of the user group read from header
        - name: header-admin-group
          groups: # the corresponding akhq groups (eg. topic-reader/writer or akhq default groups like admin/reader/no-role)
            - admin
      users: # optional
        - username: header-user # username matching the `user-header` value
          groups: # list of groups / additional groups
            - topic-writer
        - username: header-admin
          groups:
            - admin
```

* `user-header` is mandatory in order to map the user with the `users` list or to display the user on the UI if no `users` is provided.
* `groups-header` is optional and can be used in order to inject a list of groups for all the users.
  This list will be merged with `groups` for the current user.
* `groups-header-separator` is optional and can be used to customize the group separator used when parsing the `groups-header` header; defaults to `,`.
* `ip-patterns` limits the IP addresses that header authentication will accept, given as a list of Java regular expressions; omit or set to `[0.0.0.0]` to allow all addresses
* `default-group`: default AKHQ group, used when no groups were read from `groups-header`
* `groups` maps external group names read from headers to AKHQ groups.
* `users` assigns additional AKHQ groups to users.

---

# Source:

# JWT

AKHQ uses signed JWT tokens to perform authentication. Please generate a secret that is at least 256 bits. You can use one of the following methods to provide the generated secret to AKHQ.

## Configuration File

Provide the generated secret in the AKHQ `application.yml` with the following directive:

```yaml
micronaut:
  security:
    enabled: true
    token:
      jwt:
        signatures:
          secret:
            generator:
              secret:
```

## Environment Variable

Provide the generated secret via [Micronaut Property Value Binding](https://docs.micronaut.io/latest/guide/index.html#_property_value_binding) using the following environment variable for the execution environment of AKHQ:

```bash
MICRONAUT_SECURITY_TOKEN_JWT_SIGNATURES_SECRET_GENERATOR_SECRET=""
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/ldap.md

# LDAP

Configure how the LDAP groups will be matched to AKHQ groups:

* `akhq.security.ldap.groups`: LDAP groups list
  * `- name: ldap-group-name`: LDAP group name (same name as in LDAP)
  * `groups`: AKHQ group list to be used for the current LDAP group

Example using the [online ldap test server](https://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/)

Configure the LDAP connection in micronaut:

```yaml
micronaut:
  security:
    enabled: true
    ldap:
      default:
        enabled: true
        context:
          server: 'ldap://ldap.forumsys.com:389'
          managerDn:
            'cn=read-only-admin,dc=example,dc=com'
          managerPassword: 'password'
        search:
          base: "dc=example,dc=com"
        groups:
          enabled: true
          base: "dc=example,dc=com"
```

If you want to enable anonymous auth to your LDAP server, you can pass:

```yaml
managerDn: ''
managerPassword: ''
```

In case your LDAP groups do not use the default UID for group membership, you can solve this using:

```yaml
micronaut:
  security:
    enabled: true
    ldap:
      default:
        search:
          base: "OU=UserOU,dc=example,dc=com"
          attributes:
            - "cn"
        groups:
          enabled: true
          base: "OU=GroupsOU,dc=example,dc=com"
          filter: "member={0}"
```

Replace

```yaml
attributes:
  - "cn"
```

with your group membership attribute.

Configure AKHQ groups and LDAP groups and users:

```yaml
micronaut:
  security:
    enabled: true
akhq:
  security:
    roles:
      topic-reader:
        - resources: [ "TOPIC", "TOPIC_DATA" ]
          actions: [ "READ" ]
        - resources: [ "TOPIC" ]
          actions: [ "READ_CONFIG" ]
      topic-writer:
        - resources: [ "TOPIC", "TOPIC_DATA" ]
          actions: [ "CREATE", "UPDATE" ]
        - resources: [ "TOPIC" ]
          actions: [ "ALTER_CONFIG" ]
    groups:
      topic-reader-pub:
        - role: topic-reader
          patterns: [ "pub.*" ]
      topic-writer-clusterA-projectA:
        - role: topic-reader
          patterns: [ "projectA.*" ]
        - role: topic-writer
          patterns: [ "projectA.*" ]
          clusters: [ "clusterA.*" ]
      acl-reader-clusterA:
        - role: acl-reader
          clusters: [ "clusterA.*" ]
    ldap:
      groups:
        - name: mathematicians
          groups:
            - topic-reader-pub
        - name: scientists
          groups:
            - topic-writer-clusterA-projectA
            - acl-reader-clusterA
      users:
        - username: franz
          groups:
            - topic-writer-clusterA-projectA
            - acl-reader-clusterA
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/authentifications/oidc.md

# OIDC

To enable OIDC in the application, you'll first have to enable OIDC in micronaut:

```yaml
micronaut:
  security:
    oauth2:
      enabled: true
      clients:
        google:
          client-id: ""
          client-secret: ""
          openid:
            issuer: ""
  caches:
    local-security-claim-provider:
      expire-after-write: 600s # Default. May be overridden.
```

OIDC responses will be cached according to the settings under `micronaut.caches.local-security-claim-provider`.

To further tell AKHQ to display OIDC options on the login page and customize claim mapping, configure OIDC in the AKHQ config:

```yaml
akhq:
  security:
    roles:
      topic-reader:
        - resources: [ "TOPIC", "TOPIC_DATA" ]
          actions: [ "READ" ]
        - resources: [ "TOPIC" ]
          actions: [ "READ_CONFIG" ]
      topic-writer:
        - resources: [ "TOPIC", "TOPIC_DATA" ]
          actions: [ "CREATE", "UPDATE" ]
        - resources: [ "TOPIC" ]
          actions: [ "ALTER_CONFIG" ]
    groups:
      topic-reader-pub:
        - role: topic-reader
          patterns: [ "pub.*" ]
      topic-writer-clusterA-projectA:
        - role: topic-reader
          patterns: [ "projectA.*" ]
        - role: topic-writer
          patterns: [ "projectA.*" ]
          clusters: [ "clusterA.*" ]
      acl-reader-clusterA:
        - role: acl-reader
          clusters: [ "clusterA.*" ]
    oidc:
      enabled: true
      providers:
        google:
          label: "Login with Google"
          username-field: preferred_username
          # specifies the field name in the oidc claim containing the user-assigned role (eg. in keycloak this would be the Token Claim Name you set in your Client Role Mapper)
          groups-field: roles
          default-group: topic-reader
          groups:
            # the name of the user role set in your oidc provider and associated with your user (eg. in keycloak this would be a client role)
            - name: mathematicians
              groups: # the corresponding akhq groups (eg. topic-reader/writer or akhq default groups like admin/reader/no-role)
                - topic-reader-pub
            - name: scientists
              groups:
                - topic-writer-clusterA-projectA
                - acl-reader-clusterA
          users:
            - username: franz
              groups:
                - topic-writer-clusterA-projectA
                - acl-reader-clusterA
```

The username field can be any string field; the roles field has to be a JSON array. The mapping is performed on the OIDC _ID token_.
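For illustration only (the claim names are assumptions that simply mirror the `username-field` and `groups-field` settings shown earlier, e.g. a Keycloak client-role mapper), an ID token for `franz` would need to carry claims such as:

```json
{
  "preferred_username": "franz",
  "roles": ["mathematicians", "scientists"]
}
```

AKHQ then resolves `mathematicians` and `scientists` to AKHQ groups through the `oidc.providers.*.groups` mapping.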
## Direct OIDC mapping

If you want to manage AKHQ roles and attributes directly with the OIDC provider, you can use the following configuration:

```yaml
akhq:
  security:
    oidc:
      enabled: true
      providers:
        google:
          label: "Login with Google"
          username-field: preferred_username
          use-oidc-claim: true
```

In this scenario, you need to make the OIDC provider return a JWT which has the following fields:

```json
{
  // Standard claims
  "exp": 1635868816,
  "iat": 1635868516,
  "preferred_username": "json",
  ...
  "scope": "openid email profile",
  // Mandatory AKHQ claims
  "groups": {
    "topic-writer-clusterA-projectA": [
      {
        "role": "topic-reader",
        "patterns": [ "pub.*" ]
      },
      {
        "role": "topic-writer",
        "patterns": [ "projectA.*" ],
        "clusters": [ "clusterA.*" ]
      }
    ],
    "acl-reader-clusterA": [
      {
        "role": "acl-reader",
        "clusters": [ "clusterA.*" ]
      }
    ]
  }
}
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/avro.md

# Avro deserialization

Avro messages using the Schema Registry are automatically decoded if the registry is configured (see [Kafka cluster](../configuration/brokers.md)).

You can also decode raw binary Avro messages, that is, messages encoded directly with [DatumWriter](https://avro.apache.org/docs/current/api/java/org/apache/avro/io/DatumWriter.html) without any header. You must provide a `schemas-folder` and mappings which associate a `topic-regex` with a schema file name. The schema can be specified either for message keys with `key-schema-file` and/or for values with `value-schema-file`.
Here is an example of configuration:

```yaml
akhq:
  connections:
    kafka:
      properties:
        # standard kafka properties
      deserialization:
        avro-raw:
          schemas-folder: "/app/avro_schemas"
          topics-mapping:
            - topic-regex: "album.*"
              value-schema-file: "Album.avsc"
            - topic-regex: "film.*"
              value-schema-file: "Film.avsc"
            - topic-regex: "test.*"
              key-schema-file: "Key.avsc"
              value-schema-file: "Value.avsc"
```

Examples can be found in the [tests](https://github.com/tchiotludo/akhq/tree/dev/src/main/java/org/akhq/utils).

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/brokers.md

# Cluster configuration

* `akhq.connections` is a key-value configuration with:
  * `key`: must be a URL-friendly string (letters, numbers, `_`, `-`; dots are not allowed here) to identify your cluster (`my-cluster-1` and `my-cluster-2` in the example above)
  * `properties`: all the configurations found in the [Kafka consumer documentation](https://kafka.apache.org/documentation/#consumerconfigs). The most important is `bootstrap.servers`, a list of host:port of your Kafka brokers.
  * `schema-registry`: *(optional)*
    * `url`: the schema registry url
    * `type`: the type of schema registry used, either 'confluent' or 'tibco'
    * `basic-auth-username`: schema registry basic auth username
    * `basic-auth-password`: schema registry basic auth password
    * `properties`: all the configurations for the registry client, especially ssl configuration
  * `connect`: *(optional list, define each connector as an element of a list)*
    * `name`: connect name
    * `url`: connect url
    * `basic-auth-username`: connect basic auth username
    * `basic-auth-password`: connect basic auth password
    * `ssl-trust-store`: /app/truststore.jks
    * `ssl-trust-store-password`: trust-store-password
    * `ssl-key-store`: /app/truststore.jks
    * `ssl-key-store-password`: key-store-password
  * `ksqldb`: *(optional list, define each ksqlDB instance as an element of a list)*
    * `name`: ksqlDB name
    * `url`: ksqlDB url
    * `basic-auth-username`: ksqlDB basic auth username
    * `basic-auth-password`: ksqlDB basic auth password

## Basic cluster with plain auth

```yaml
akhq:
  connections:
    local:
      properties:
        bootstrap.servers: "local:9092"
      schema-registry:
        url: "http://schema-registry:8085"
      connect:
        - name: "connect"
          url: "http://connect:8083"
      ksqldb:
        - name: "ksqldb"
          url: "http://connect:8088"
```

## Example for Confluent Cloud

```yaml
akhq:
  connections:
    ccloud:
      properties:
        bootstrap.servers: "{{ cluster }}.{{ region }}.{{ cloud }}.confluent.cloud:9092"
        security.protocol: SASL_SSL
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="{{ kafkaUsername }}" password="{{ kafkaPassword }}";
      schema-registry:
        url: "https://{{ cluster }}.{{ region }}.{{ cloud }}.confluent.cloud"
        basic-auth-username: "{{ schemaRegistryUsername }}"
        basic-auth-password: "{{ schemaRegistryPassword }}"
```

## SSL Kafka Cluster

Configuration example for a Kafka cluster secured by SSL for a SaaS provider like Aiven (full https & basic auth): you need to generate JKS & P12 files from the PEM and cert
files given by the SaaS provider.

```bash
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
```

Configurations will look like this example:

```yaml
akhq:
  connections:
    ssl-dev:
      properties:
        bootstrap.servers: "{{host}}.aivencloud.com:12835"
        security.protocol: SSL
        ssl.truststore.location: {{path}}/avnadmin.truststore.jks
        ssl.truststore.password: {{password}}
        ssl.keystore.type: "PKCS12"
        ssl.keystore.location: {{path}}/avnadmin.keystore.p12
        ssl.keystore.password: {{password}}
        ssl.key.password: {{password}}
      schema-registry:
        url: "https://{{host}}.aivencloud.com:12838"
        type: "confluent"
        basic-auth-username: avnadmin
        basic-auth-password: {{password}}
        properties:
          schema.registry.ssl.truststore.location: {{path}}/avnadmin.truststore.jks
          schema.registry.ssl.truststore.password: {{password}}
      connect:
        - name: connect-1
          url: "https://{{host}}.aivencloud.com:{{port}}"
          basic-auth-username: avnadmin
          basic-auth-password: {{password}}
```

## OAuth2 authentication for brokers

Requirement Library Strimzi:

> The Kafka brokers must be configured with the Strimzi library and an OAuth2 provider (Keycloak, for example).
> This [repository](https://github.com/strimzi/strimzi-kafka-oauth) contains documentation and examples.

Configuration Bootstrap:

> It's not necessary to compile AKHQ to integrate the Strimzi libraries, since the libs are included in the final image!

You must configure AKHQ through the application.yml file.
```yaml
akhq:
  connections:
    my-kafka-cluster:
      properties:
        bootstrap.servers: ":9094,:9094"
        sasl.jaas.config: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required auth.valid.issuer.uri="https:///auth/realms/sandbox_kafka" oauth.jwks.endpoint.uri="https:///auth/realms/sandbox_kafka/protocol/openid-connect/certs" oauth.username.claim="preferred_username" oauth.client.id="kafka-producer-client" oauth.client.secret="" oauth.ssl.truststore.location="kafka.server.truststore.jks" oauth.ssl.truststore.password="xxxxx" oauth.ssl.truststore.type="jks" oauth.ssl.endpoint_identification_algorithm="" oauth.token.endpoint.uri="https:///auth/realms/sandbox_kafka/protocol/openid-connect/token";
        sasl.login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: OAUTHBEARER
```

`oauth.ssl.endpoint_identification_algorithm=""` is set here for testing, because the certificates did not match the FQDN. In production, you should remove it.

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/docker.md

# Docker

## Pass custom Java opts

By default, the docker container allows custom JVM options via the `JAVA_OPTS` environment variable. For example, if you want to change the default timezone, just add `-e "JAVA_OPTS=-Duser.timezone=Europe/Paris"`.

## Run with another jvm.options file

By default, the docker container runs with a [jvm.options](https://github.com/tchiotludo/akhq/blob/dev/docker/app/jvm.options) file; with the `JVM_OPTS_FILE` environment variable, you can override it by passing the path of your own file instead.
Override the `JVM_OPTS_FILE` with docker run:

```sh
docker run -d \
  --env JVM_OPTS_FILE={{path-of-your-jvm.options-file}} \
  -p 8080:8080 \
  -v /tmp/application.yml:/app/application.yml \
  tchiotludo/akhq
```

Override the `JVM_OPTS_FILE` with docker-compose:

```yaml
services:
  akhq:
    image: tchiotludo/akhq-jvm:dev
    environment:
      JVM_OPTS_FILE: /app/jvm.options
    ports:
      - "8080:8080"
    volumes:
      - /tmp/application.yml:/app/application.yml
```

If you do not override `JVM_OPTS_FILE`, the docker container takes the default one instead.

The AKHQ docker image supports 4 environment variables to handle configuration:

* `AKHQ_CONFIGURATION`: a string containing the full configuration in YAML that will be written to `/app/configuration.yml` in the container.
* `MICRONAUT_APPLICATION_JSON`: a string containing the full configuration in JSON format.
* `MICRONAUT_CONFIG_FILES`: a path to a configuration file in the container. The default path is `/app/application.yml`.
* `CLASSPATH`: additional Java classpath entries. Must be used to specify the location of the TIBCO Avro client library jar if a `tibco` schema registry type is used.

## How to mount configuration file

Take care, when you mount configuration files, not to remove the AKHQ files located in `/app`. You need to explicitly mount the `/app/application.yml` file and not mount the `/app` directory.
Mounting the whole `/app` directory would remove the AKHQ binaries and give you this error:
`/usr/local/bin/docker-entrypoint.sh: 9: exec: ./akhq: not found`

```yaml
volumeMounts:
  - mountPath: /app/application.yml
    subPath: application.yml
    name: config
    readOnly: true
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/helm.md

# Helm

To create your Helm values, take a look at the default values and see how yours could be defined: https://github.com/tchiotludo/akhq/blob/dev/helm/akhq/values.yaml

Next we present an example of Helm chart values used with AWS MSK, which may show how to use and define things in the chart.

## Examples

### AWS MSK with Basic Authentication and ALB controller ingress

The following Helm values are an example of AWS MSK with basic authentication, using the AWS load balancer controller. Combining the default values.yaml linked above with basic AKHQ authentication (more info here: https://akhq.io/docs/configuration/authentifications/basic-auth.html) and the documentation on connecting to AWS MSK (https://akhq.io/docs/configuration/authentifications/aws-iam-auth.html), we created the following example. The `ingress` and `service` sections use Helm configurations similar to those of other open-source Helm charts. If you need to add more, such as ACL definitions or LDAP integrations, the main documentation contains many examples: https://akhq.io/docs/
```yaml
# This is an example with basic auth, an AWS MSK and an AWS load balancer controller ingress
configuration:
  micronaut:
    security:
      enabled: true
      default-group: no-roles
      token:
        jwt:
          signatures:
            secret:
              generator:
                secret: changeme
  akhq:
    security:
      enabled: true
      default-group: no-roles
      basic-auth:
        - username: changeme
          password: changeme
          groups:
            - admin
        - username: changeme
          password: changeme
          groups:
            - reader
    server:
      access-log:
        enabled: true
        name: org.akhq.log.access
    connections:
      my-cluster-sasl:
        properties:
          bootstrap.servers:
          security.protocol: SASL_SSL
          sasl.mechanism: SCRAM-SHA-512
          sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";

ingress:
  enabled: true
  portnumber: 8080
  apiVersion: networking.k8s.io/v1
  annotations:
    kubernetes.io/ingress.class: 'alb'
    alb.ingress.kubernetes.io/group.name: "akhq"
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443},{"HTTPS":80}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60'
    alb.ingress.kubernetes.io/healthcheck-path: "/api/me"
    alb.ingress.kubernetes.io/subnets:
    external-dns.alpha.kubernetes.io/hostname: "akhq.domain"
    alb.ingress.kubernetes.io/certificate-arn: "your_acm_here"
    alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tls"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,80"
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
  labels:
    app: akhq

service:
  port: 443
  annotations:
    service.beta.kubernetes.io/target-type: "ip"

hosts: [ 'akhq.domain' ]
paths: [ "/*" ]

tls:
  - secretName: tls-credential
    hosts:
      - 'akhq.domain'
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/others.md

# Others

## Server
* `micronaut.server.context-path`: if AKHQ is behind a reverse proxy, the path to AKHQ without a trailing slash (optional). Example: if AKHQ is served behind a reverse proxy under `/akhq`, set `context-path: "/akhq"`. Not needed if you're behind a reverse proxy with a subdomain.

## Kafka admin / producer / consumer default properties

* `akhq.clients-defaults.{{admin|producer|consumer}}.properties`: default configuration for admin, producer or consumer. All properties from the [Kafka documentation](https://kafka.apache.org/documentation/) are available.

## Micronaut configuration

> Since AKHQ is based on [Micronaut](https://micronaut.io/), you can customize configurations (server port, ssl, ...) with the [Micronaut configuration](https://docs.micronaut.io/snapshot/guide/configurationreference.html#io.micronaut.http.server.HttpServerConfiguration).
> More information can be found in the [Micronaut documentation](https://docs.micronaut.io/snapshot/guide/index.html#config).

### Activating SSL

When using HTTPS for communication, Micronaut needs to get the certificate within Netty. This uses classes of the `java.base` package which are no longer exported by the JDK we use, so the configuration at the bottom needs to be extended with this environment variable:

```bash
JDK_JAVA_OPTIONS: --add-exports\=java.base/sun.security.x509\=ALL-UNNAMED
```

```yaml
micronaut:
  server:
    ssl:
      enabled: true
      build-self-signed: true
```

## JSON Logging

In order to configure AKHQ to output logs in JSON format, a logback configuration needs to be provided, e.g. `logback.xml`:

```xml
```

This file then needs to be mounted to `/app/logback.xml` and referenced in `JAVA_OPTS` via `-Dlogback.configurationFile=/app/logback.xml` (see [docker](docker.md) for more information).
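For a docker-compose setup, the mount and the `JAVA_OPTS` flag can be combined in one service definition. A minimal sketch, assuming the `/tmp/...` host paths (adjust them to wherever your files live):

```yaml
services:
  akhq:
    image: tchiotludo/akhq
    ports:
      - "8080:8080"
    volumes:
      # assumed host paths; adjust to your own files
      - /tmp/application.yml:/app/application.yml
      - /tmp/logback.xml:/app/logback.xml
    environment:
      # point logback at the mounted configuration file
      JAVA_OPTS: "-Dlogback.configurationFile=/app/logback.xml"
```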
---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/protobuf.md

# Protobuf deserialization

To deserialize topics containing data in Protobuf format, you can set topics mapping: for each `topic-regex` you can specify `descriptor-file-base64` (a descriptor file encoded in Base64), or you can put descriptor files in `descriptors-folder` and specify the `descriptor-file` name; also specify the corresponding message types for keys and values. If, for example, keys are not in Protobuf format, `key-message-type` can be omitted; the same goes for `value-message-type`. Keep in mind that both `key-message-type` and `value-message-type` require a fully-qualified name. This configuration can be specified for each Kafka cluster.

An example configuration may look as follows:

```yaml
akhq:
  connections:
    kafka:
      properties:
        # standard kafka properties
      deserialization:
        protobuf:
          descriptors-folder: "/app/protobuf_desc"
          topics-mapping:
            - topic-regex: "album.*"
              descriptor-file-base64: "Cs4BCgthbGJ1bS5wcm90bxIXY29tLm5ldGNyYWNrZXIucHJvdG9idWYidwoFQWxidW0SFAoFdGl0bGUYASABKAlSBXRpdGxlEhYKBmFydGlzdBgCIAMoCVIGYXJ0aXN0EiEKDHJlbGVhc2VfeWVhchgDIAEoBVILcmVsZWFzZVllYXISHQoKc29uZ190aXRsZRgEIAMoCVIJc29uZ1RpdGxlQiUKF2NvbS5uZXRjcmFja2VyLnByb3RvYnVmQgpBbGJ1bVByb3RvYgZwcm90bzM="
              value-message-type: "org.akhq.utils.Album"
            - topic-regex: "film.*"
              descriptor-file-base64: "CuEBCgpmaWxtLnByb3RvEhRjb20uY29tcGFueS5wcm90b2J1ZiKRAQoERmlsbRISCgRuYW1lGAEgASgJUgRuYW1lEhoKCHByb2R1Y2VyGAIgASgJUghwcm9kdWNlchIhCgxyZWxlYXNlX3llYXIYAyABKAVSC3JlbGVhc2VZZWFyEhoKCGR1cmF0aW9uGAQgASgFUghkdXJhdGlvbhIaCghzdGFycmluZxgFIAMoCVIIc3RhcnJpbmdCIQoUY29tLmNvbXBhbnkucHJvdG9idWZCCUZpbG1Qcm90b2IGcHJvdG8z"
              value-message-type: "org.akhq.utils.Film"
            - topic-regex: "test.*"
              descriptor-file: "other.desc"
              key-message-type: "org.akhq.utils.Row"
              value-message-type: "org.akhq.utils.Envelope"
```

More examples about Protobuf deserialization can be found in
[tests](https://github.com/tchiotludo/akhq/tree/dev/src/test/java/org/akhq/utils). Info about descriptor file generation can be found in [test resources](https://github.com/tchiotludo/akhq/tree/dev/src/test/resources/protobuf_proto).

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/glue.md

# Glue schema registry

Currently, Glue schema registry support is limited to deserialization of Avro/Protobuf/JSON serialized messages. It can be configured as below.

```yaml
akhq:
  environment:
    AKHQ_CONFIGURATION: |
      akhq:
        connections:
          docker-kafka-server:
            properties:
              bootstrap.servers: "kafka:9092"
            schema-registry:
              url: "http://schema-registry:8085"
              type: "glue"
              glueSchemaRegistryName: Name of schema Registry
              awsRegion: aws region
            connect:
              - name: "connect"
                url: "http://connect:8083"
  ports:
    - 8080:8080
  links:
    - kafka
    - repo
```

Please note that authentication is done using the AWS default credentials provider. The `url` key is required so as not to break the flow.

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/schema-references.md

# Schema references

Since Confluent 5.5.0, Avro schemas can be reused by other schemas through schema references. This feature allows you to define a schema once and use it as a record type inside one or more other schemas. When registering new Avro schemas with the AKHQ UI, it is now possible to pass a slightly more complex object with a `schema` and a `references` field.
To register a new schema without references, no need to change anything:

```json
{
  "name": "Schema1",
  "namespace": "org.akhq",
  "type": "record",
  "fields": [
    {
      "name": "description",
      "type": "string"
    }
  ]
}
```

To register a new schema with a reference to an already registered schema:

```json
{
  "schema": {
    "name": "Schema2",
    "namespace": "org.akhq",
    "type": "record",
    "fields": [
      {
        "name": "name",
        "type": "string"
      },
      {
        "name": "schema1",
        "type": "Schema1"
      }
    ]
  },
  "references": [
    {
      "name": "Schema1",
      "subject": "SCHEMA_1",
      "version": 1
    }
  ]
}
```

Documentation on Confluent 5.5 and schema references can be found [here](https://docs.confluent.io/5.5.0/schema-registry/serdes-develop/index.html).

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/configuration/schema-registry/tibco.md

# TIBCO schema registry

If you are using the TIBCO schema registry, you will also need to mount and use the TIBCO Avro client library and its dependencies. The akhq service in a docker compose file might look something like:

```yaml
akhq:
  # build:
  #   context: .
  image: tchiotludo/akhq
  volumes:
    - /opt/tibco/akd/repo/1.2/lib/tibftl-kafka-avro-1.2.0-thin.jar:/app/tibftl-kafka-avro-1.2.0-thin.jar
    - /opt/tibco/akd/repo/1.2/lib/deps:/app/deps
  environment:
    AKHQ_CONFIGURATION: |
      akhq:
        connections:
          docker-kafka-server:
            properties:
              bootstrap.servers: "kafka:9092"
            schema-registry:
              type: "tibco"
              url: "http://repo:8081"
            connect:
              - name: "connect"
                url: "http://connect:8083"
    CLASSPATH: "/app/tibftl-kafka-avro-1.2.0-thin.jar:/app/deps/*"
  ports:
    - 8080:8080
  links:
    - kafka
    - repo
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/debug.md

# Debug & Monitoring

## Monitoring endpoint

Several monitoring endpoints are enabled by default and available on port `28081` only. You can disable them, change the port, or restrict access to authenticated users only, following the Micronaut configuration below.
* `/info`: [Info Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#infoEndpoint) with git status information.
* `/health`: [Health Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#healthEndpoint)
* `/loggers`: [Loggers Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#loggersEndpoint)
* `/metrics`: [Metrics Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#metricsEndpoint)
* `/prometheus`: [Prometheus Endpoint](https://micronaut-projects.github.io/micronaut-micrometer/latest/guide/)

## Debugging AKHQ performance issues

You can trace all query durations from AKHQ with this command:

```bash
curl -i -X POST -H "Content-Type: application/json" \
     -d '{ "configuredLevel": "TRACE" }' \
     http://localhost:28081/loggers/org.akhq
```

## Debugging authentication

Debugging auth can be done by increasing the log level on Micronaut, which handles most of the authentication part:

```bash
curl -i -X POST -H "Content-Type: application/json" \
     -d '{ "configuredLevel": "TRACE" }' \
     http://localhost:28081/loggers/io.micronaut.security

curl -i -X POST -H "Content-Type: application/json" \
     -d '{ "configuredLevel": "TRACE" }' \
     http://localhost:28081/loggers/org.akhq.configs
```

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/dev.md

# Development Environment

## Early dev image

You can get access to the latest features / bug fixes with the docker dev image, automatically built on the `dev` tag:

```bash
docker pull tchiotludo/akhq:dev
```

The dev jar is not published on GitHub; you have 2 ways to get the `dev` jar.

Get it from the docker image:

```bash
docker pull tchiotludo/akhq:dev
docker run --rm --name=akhq -v /tmp/akhq/application-dev.yml:/app/application.yml -it tchiotludo/akhq:dev
docker cp akhq:/app/akhq.jar .
```

Or build it with `./gradlew shadowJar`; the jar will be located at `build/libs/akhq-*.jar`.

## Development Server

A docker-compose is provided to start a development environment.
Just install docker & the docker compose plugin, clone the repository, and issue a simple `docker compose -f docker-compose-dev.yml up` to start a dev server. The dev server is a Java server & webpack-dev-server with live reload.

The configuration for the dev server is in `application.dev.yml`.

Once it has started, you can visit http://localhost:4000; additionally, it is possible to run `npm start` inside the client folder, which provides hot reload.

## Setup local dev environment on Windows

In case you want to develop AKHQ on Windows with IntelliJ IDEA without Docker (for any reason), you can follow this brief guide. For the following steps, please make sure you meet these requirements:

* OS: Windows (10)
* Kafka (2.6.0) is downloaded and extracted; the installation directory is referred to as $KAFKA_HOME below
* Git is installed and configured
* IntelliJ IDEA (Community Edition 2020.2) with the following plugins installed:
  * Gradle (bundled with IDEA)
  * [Lombok](https://plugins.jetbrains.com/plugin/6317-lombok)

First, run a Kafka server locally. To do so, start Zookeeper first by opening a CMD and doing:

```bash
$KAFKA_HOME\bin\windows\zookeeper-server-start.bat config\zookeeper.properties
$KAFKA_HOME\bin\windows\kafka-server-start.bat config\server.properties
```

A zero-config Kafka server should be up and running locally on your machine now. For further details or troubleshooting, see the [Kafka Getting started guide](https://kafka.apache.org/quickstart).

In the next step we're going to check out AKHQ from GitHub:

```bash
git clone https://github.com/tchiotludo/akhq.git
```

Open the checked-out directory in IntelliJ IDEA. The current version of AKHQ is built with Java 17. If you don't have OpenJDK 17 installed already, do the following in IntelliJ IDEA:

* _File > Project Structure... > Platform Settings > SDKs > + > Download JDK... >_ select a vendor of your choice (but make sure it's version 17)
* download + install.
* Make sure that JDK 17 is set under _Project Settings > Project SDK_ and the language level is Java 17.
* Now tell Gradle to use Java 17 as well: _File > Settings > Build, Execution, Deployment > Build Tools > Gradle > Gradle JVM_: any JDK 17.

To configure AKHQ to use the Kafka server you set up before, edit `application.yml` by adding the following under `akhq`:

```yaml
akhq:
  connections:
    kafka:
      properties:
        bootstrap.servers: "localhost:9092"
```

::: warning
Do not commit this part of `application.yml`. A more secure way to configure your local development Kafka server is described in the Micronaut doc, chapter ["Application Configuration"](https://docs.micronaut.io/2.5.13/guide/index.html#config).
:::

Now you should be able to build the project with Gradle. To do so, go to the Gradle view in IDEA and select _Tasks > build > build_. If an error occurs saying that a filename is too long, move your project directory to a root directory of your filesystem, or as a workaround (only for testing purposes) pass the argument `-x test` to skip tests temporarily.

To debug a running AKHQ instance, go to the Gradle tab in IntelliJ IDEA, _Tasks > application_ > right-click `run` and click "_Debug(...)_". AKHQ should start up and hit the breakpoints you set in your IDE. Happy developing/debugging!

---

# Source: https://github.com/tchiotludo/akhq/blob/master/docs/docs/installation.md

# Installation

First you need a [configuration file](./configuration/README.md) in order to configure AKHQ connections to Kafka brokers. The default configuration file path is `/app/application.yml` (a YML file), expected to be in the same folder as the AKHQ application files. The configuration file path can target any path through the `MICRONAUT_CONFIG_FILES` environment variable, for example: `MICRONAUT_CONFIG_FILES=/somepath/application.yml`.
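As a quick sketch, pointing AKHQ at that example path when running the jar directly (the jar location is an assumption; adjust to where your `akhq.jar` lives):

```shell
# assumes akhq.jar is in the current directory; /somepath/... is the illustrative path from above
MICRONAUT_CONFIG_FILES=/somepath/application.yml java -jar akhq.jar
```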
### Docker

```sh
docker run -d \
  -p 8080:8080 \
  -v /tmp/application.yml:/app/application.yml \
  tchiotludo/akhq
```

* The path passed to `-v` (here `/tmp/application.yml`) must be an absolute path to the configuration file
* Go to

### Stand Alone

* Install Java 17
* Download the latest jar from the [release page](https://github.com/tchiotludo/akhq/releases)
* Create a [configuration file](./configuration/README.md)
* Launch the application with `java -Dmicronaut.config.files=/path/to/application.yml -jar akhq.jar`
* Go to

### Running in Kubernetes (using a Helm Chart)

#### Using Helm repository

* Add the AKHQ helm charts repository:

```sh
helm repo add akhq https://akhq.io/
```

* Install or upgrade:

```sh
helm upgrade --install akhq akhq/akhq
```

#### Requirements

* Chart version >= 0.1.1 requires Kubernetes version >= 1.14
* Chart version 0.1.0 works on previous Kubernetes versions

```sh
helm install akhq akhq/akhq --version 0.1.0
```

#### Using git

* Clone the repository:

```sh
git clone https://github.com/tchiotludo/akhq && cd akhq/helm/akhq
```

* Update the helm values located in [values.yaml](https://github.com/tchiotludo/akhq/blob/dev/helm/akhq/values.yaml)
  * `configuration` values will contain all related configuration that you can find in [application.example.yml](https://github.com/tchiotludo/akhq/blob/dev/application.example.yml) and will be stored in a `ConfigMap`
  * `secrets` values will contain all sensitive configuration (with credentials) that you can find in [application.example.yml](https://github.com/tchiotludo/akhq/blob/dev/application.example.yml) and will be stored in a `Secret`
  * Both values will be merged at startup
* Apply the chart:

```sh
helm install --name=akhq-release-name .
```
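Putting the `configuration` / `secrets` split together, a minimal values file might look like this sketch (the cluster name `my-cluster`, the broker address, and the credentials are placeholders, not chart defaults):

```yaml
# my-values.yaml — a sketch following the configuration/secrets split described above
configuration:
  akhq:
    connections:
      my-cluster:                          # placeholder cluster name
        properties:
          bootstrap.servers: "kafka:9092"  # placeholder broker address

secrets:
  akhq:
    security:
      basic-auth:
        - username: admin        # placeholder credentials
          password: changeme
          groups:
            - admin
```

Install with `helm upgrade --install akhq akhq/akhq -f my-values.yaml`; the non-sensitive part lands in the `ConfigMap` and the credentials in the `Secret`, merged at startup.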