GitBook: [#4] No subject

Roman Zabaluev 2 years ago
parent
commit
bb2dcc7beb

+ 11 - 0
README.md

@@ -0,0 +1,11 @@
+# About
+
+## **About Kafka-UI**
+
+**Versatile, fast and lightweight web UI for managing Apache Kafka® clusters. Built by developers, for developers.**
+
+**UI for Apache Kafka is a free, open-source web UI to monitor and manage Apache Kafka clusters.**
+
+UI for Apache Kafka is a simple tool that makes your data flows observable, helps you find and troubleshoot issues faster, and delivers optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.

+ 41 - 0
SUMMARY.md

@@ -0,0 +1,41 @@
+# Table of contents
+
+## 🎓 Overview
+
+* [About](README.md)
+* [Features](overview/features.md)
+* [Getting started](overview/getting-started.md)
+
+## 🛣 Project
+
+* [Code of Conduct](project/code-of-conduct.md)
+* [Roadmap](project/roadmap.md)
+
+## 🛠 Development
+
+* [Contributing](development/contributing.md)
+* [Building](development/building/README.md)
+  * [Prerequisites](development/building/prerequisites.md)
+  * [WIP: Setting up git](development/building/wip-setting-up-git.md)
+  * [With Docker](development/building/with-docker.md)
+  * [Without Docker](development/building/without-docker.md)
+  * [WIP: Testing](development/building/wip-testing.md)
+
+## 👷‍♂️ Configuration
+
+* [Configuration](configuration/configuration.md)
+* [SSL](configuration/ssl.md)
+* [Authentication](configuration/authentication/README.md)
+  * [OAuth2](configuration/authentication/oauth2.md)
+  * [AWS IAM](configuration/authentication/aws-iam.md)
+  * [SSO Guide](configuration/authentication/sso-guide.md)
+  * [SASL\_SCRAM](configuration/authentication/sasl\_scram.md)
+* [RBAC (Role based access control)](configuration/rbac-role-based-access-control.md)
+* [Data masking](configuration/data-masking.md)
+* [Serialization / SerDe](configuration/serialization-serde.md)
+* [Protobuf setup](configuration/protobuf-setup.md)
+
+## ❓ FAQ
+
+* [Common problems](faq/common-problems.md)
+* [FAQ](faq/faq.md)

+ 2 - 0
configuration/authentication/README.md

@@ -0,0 +1,2 @@
+# Authentication
+

+ 45 - 0
configuration/authentication/aws-iam.md

@@ -0,0 +1,45 @@
+---
+description: How to configure AWS IAM Authentication
+---
+
+# AWS IAM
+
+UI for Apache Kafka comes with the built-in [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth) library.
+
+You can pass SASL configs in the `properties` section for each cluster.
+
+More details can be found here: [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth)
+
+### Examples:
+
+Please replace
+
+* \<KAFKA\_URL> with your broker list
+* \<PROFILE\_NAME> with your AWS profile
+
+#### Running From Docker Image
+
+```
+docker run -p 8080:8080 \
+    -e KAFKA_CLUSTERS_0_NAME=local \
+    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";' \
+    -d provectuslabs/kafka-ui:latest 
+```
+
+#### Configuring by application.yaml
+
+```yaml
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: <KAFKA_URL>
+      properties:
+        security.protocol: SASL_SSL
+        sasl.mechanism: AWS_MSK_IAM
+        sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
+        sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";
+```

+ 80 - 0
configuration/authentication/oauth2.md

@@ -0,0 +1,80 @@
+# OAuth2
+
+## Examples of setting up different OAuth providers
+
+### Cognito
+
+```
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: localhost:9092
+    # ...
+
+auth:
+  type: OAUTH2
+  oauth2:
+    client:
+      cognito:
+        clientId: xxx
+        clientSecret: yyy
+        scope: openid
+        client-name: cognito
+        provider: cognito
+        redirect-uri: http://localhost:8080/login/oauth2/code/cognito
+        authorization-grant-type: authorization_code
+        issuer-uri: https://cognito-idp.eu-central-1.amazonaws.com/eu-central-1_xxx
+        jwk-set-uri: https://cognito-idp.eu-central-1.amazonaws.com/eu-central-1_xxx/.well-known/jwks.json
+        user-name-attribute: username
+        custom-params:
+          type: cognito
+          logoutUrl: https://<XXX>.eu-central-1.amazoncognito.com/logout
+```
+
+### Google
+
+```
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: localhost:9092
+    # ...
+
+auth:
+  type: OAUTH2
+  oauth2:
+    client:
+      google:
+        provider: google
+        clientId: xxx.apps.googleusercontent.com
+        clientSecret: GOCSPX-xxx
+        user-name-attribute: email
+        custom-params:
+          type: google
+          allowedDomain: provectus.com
+
+```
+
+### GitHub
+
+```
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: localhost:9092
+    # ...
+
+auth:
+  type: OAUTH2
+  oauth2:
+    client:
+      github:
+        provider: github
+        clientId: xxx
+        clientSecret: yyy
+        scope:
+          - read:org
+        user-name-attribute: login
+        custom-params:
+          type: github
+```

+ 63 - 0
configuration/authentication/sasl_scram.md

@@ -0,0 +1,63 @@
+---
+description: How to configure SASL SCRAM Authentication
+---
+
+# SASL\_SCRAM
+
+You can pass SASL configs in the `properties` section for each cluster.
+
+### Examples:
+
+Please replace
+
+* \<KAFKA\_NAME> with your cluster name
+* \<KAFKA\_URL> with your broker list
+* \<KAFKA\_USERNAME> with your username
+* \<KAFKA\_PASSWORD> with your password
+
+#### Running From Docker Image
+
+```
+docker run -p 8080:8080 \
+    -e KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME> \
+    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512 \
+    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";' \
+    -d provectuslabs/kafka-ui:latest 
+```
+
+#### Running From Docker-compose file
+
+```yaml
+
+version: '3.4'
+services:
+  
+  kafka-ui:
+    image: provectuslabs/kafka-ui
+    container_name: kafka-ui
+    ports:
+      - "888:8080"
+    restart: always
+    environment:
+      - KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME>
+      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL>
+      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
+      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512
+      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
+```
+
+#### Configuring by application.yaml
+
+```yaml
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: <KAFKA_URL>
+      properties:
+        security.protocol: SASL_SSL
+        sasl.mechanism: SCRAM-SHA-512        
+        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
+```

+ 85 - 0
configuration/authentication/sso-guide.md

@@ -0,0 +1,85 @@
+# SSO Guide
+
+## How to configure SSO
+
+SSO additionally requires TLS to be configured for the application. In this example we will use a self-signed certificate; if you use a CA-signed certificate, please skip step 1.
+
+### Step 1
+
+At this step we will generate a self-signed PKCS12 keypair.
+
+```bash
+mkdir cert
+keytool -genkeypair -alias ui-for-apache-kafka -keyalg RSA -keysize 2048 \
+  -storetype PKCS12 -keystore cert/ui-for-apache-kafka.p12 -validity 3650
+```
+
+### Step 2
+
+Create a new application in any SSO provider; we will continue with [Auth0](https://auth0.com).
+
+![](https://user-images.githubusercontent.com/1494347/172255269-94cb9e3a-042b-49bb-925e-a06344840662.png)
+
+After that, you need to provide callback URLs; in our case we will use `https://127.0.0.1:8080/login/oauth2/code/auth0`
+
+![](https://user-images.githubusercontent.com/1494347/172255294-86af29b9-642b-4fb5-9ba8-212185e3fdfc.png)
+
+These are the main parameters required for enabling SSO:
+
+![](https://user-images.githubusercontent.com/1494347/172255315-4f12ac92-ca13-4206-ab68-48092e562092.png)
+
+### Step 3
+
+To launch UI for Apache Kafka with TLS and SSO enabled, run the following:
+
+```bash
+docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=OAUTH2 \
+  -e SERVER_SSL_KEY_STORE_TYPE=PKCS12 \
+  -e SERVER_SSL_KEY_STORE=/opt/cert/ui-for-apache-kafka.p12 \
+  -e SERVER_SSL_KEY_STORE_PASSWORD=123456 \
+  -e SERVER_SSL_KEY_ALIAS=ui-for-apache-kafka \
+  -e SERVER_SSL_ENABLED=true \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
+  -e TRUST_STORE=/opt/cert/ui-for-apache-kafka.p12 \
+  -e TRUST_STORE_PASSWORD=123456 \
+provectuslabs/kafka-ui:latest
+```
+
+In the case of a trusted CA-signed SSL certificate and SSL termination somewhere outside the application, we can pass only SSO-related environment variables:
+
+```bash
+docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=OAUTH2 \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
+  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
+provectuslabs/kafka-ui:latest
+```
+
+### Step 4 (Load Balancer HTTP) (optional)
+
+If you're using a load balancer/proxy and HTTP between the proxy and the app, you might want to set `server.forward-headers-strategy` to `native` as well (`SERVER_FORWARDHEADERSSTRATEGY=native`); for more info refer to [this issue](https://github.com/provectus/kafka-ui/issues/1017).
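+
+A minimal `docker-compose` excerpt showing just this setting (the service definition details are placeholders):
+
+```yaml
+services:
+  kafka-ui:
+    image: provectuslabs/kafka-ui:latest
+    environment:
+      # honor X-Forwarded-* headers set by the proxy
+      - SERVER_FORWARDHEADERSSTRATEGY=native
+```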
+
+### Step 5 (Azure) (optional)
+
+For Azure AD (Office365) OAUTH2 you'll want to add additional environment variables:
+
+```bash
+docker run -p 8080:8080 \
+        -e KAFKA_CLUSTERS_0_NAME="${cluster_name}"\
+        -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="${kafka_listeners}" \
+        -e KAFKA_CLUSTERS_0_ZOOKEEPER="${zookeeper_servers}" \
+        -e KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS="${kafka_connect_servers}" \
+        -e AUTH_TYPE=OAUTH2 \
+        -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
+        -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
+        -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE="https://graph.microsoft.com/User.Read" \
+        -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI="https://login.microsoftonline.com/{tenant-id}/v2.0" \
+        -d provectuslabs/kafka-ui:latest
+```
+
+Note that the scope is created by default when the application registration is done in the Azure portal. You'll need to update the application registration manifest to include `"accessTokenAcceptedVersion": 2`.
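+
+A sketch of the relevant manifest excerpt (all other manifest fields omitted):
+
+```json
+{
+  "accessTokenAcceptedVersion": 2
+}
+```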

+ 2 - 0
configuration/configuration.md

@@ -0,0 +1,2 @@
+# Configuration
+

+ 136 - 0
configuration/data-masking.md

@@ -0,0 +1,136 @@
+# Data masking
+
+## Topics data masking
+
+You can configure kafka-ui to mask sensitive data shown on the Messages page.
+
+Several masking policies are supported:
+
+#### REMOVE
+
+For JSON objects, the target fields are removed; for other payloads, the "null" string is returned.
+
+```yaml
+- type: REMOVE
+  fields: [ "id", "name" ]
+  ...
+```
+
+Apply examples:
+
+```
+{ "id": 1234, "name": { "first": "James" }, "age": 30 } 
+ ->
+{ "age": 30 } 
+```
+
+```
+non-json string -> null
+```
+
+#### REPLACE
+
+For JSON objects, the target fields' values are replaced with the specified replacement string (`***DATA_MASKED***` by default). Note: if a target field's value is an object, the replacement is applied to all its fields recursively (see example).
+
+```yaml
+- type: REPLACE
+  fields: [ "id", "name" ]
+  replacement: "***"  #optional, "***DATA_MASKED***" by default
+  ...
+```
+
+Apply examples:
+
+```
+{ "id": 1234, "name": { "first": "James", "last": "Bond" }, "age": 30 } 
+ ->
+{ "id": "***", "name": { "first": "***", "last": "***" }, "age": 30 } 
+```
+
+```
+non-json string -> ***
+```
+
+#### MASK
+
+Masks the target fields' values with the specified masking characters, recursively (spaces and line separators are kept as-is). The `pattern` array specifies which symbols will be used to replace upper-case chars, lower-case chars, digits, and other symbols, respectively.
+
+```yaml
+- type: MASK
+  fields: [ "id", "name" ]
+  pattern: ["A", "a", "N", "_"]   # optional, default is ["X", "x", "n", "-"]
+  ...
+```
+
+Apply examples:
+
+```
+{ "id": 1234, "name": { "first": "James", "last": "Bond!" }, "age": 30 } 
+ ->
+{ "id": "NNNN", "name": { "first": "Aaaaa", "last": "Aaaa_" }, "age": 30 } 
+```
+
+```
+Some string! -> Aaaa aaaaaa_
+```
+
+***
+
+For each policy, if `fields` is not specified, the policy is applied to all of the object's fields, or to the whole string if it is not a JSON object.
+
+You can specify which masks will be applied to a topic's keys/values. Multiple policies are applied if a topic matches several policies' patterns.
+
+Yaml configuration example:
+
+```yaml
+kafka:
+  clusters:
+    - name: ClusterName
+      # Other Cluster configuration omitted ... 
+      masking:
+        - type: REMOVE
+          fields: [ "id" ]
+          topicKeysPattern: "events-with-ids-.*"
+          topicValuesPattern: "events-with-ids-.*"
+          
+        - type: REPLACE
+          fields: [ "companyName", "organizationName" ]
+          replacement: "***MASKED_ORG_NAME***"   #optional
+          topicValuesPattern: "org-events-.*"
+        
+        - type: MASK
+          fields: [ "name", "surname" ]
+          pattern: ["A", "a", "N", "_"]  #optional
+          topicValuesPattern: "user-states"
+
+        - type: MASK
+          topicValuesPattern: "very-secured-topic"
+```
+
+Same configuration in env-vars fashion:
+
+```
+...
+KAFKA_CLUSTERS_0_MASKING_0_TYPE: REMOVE
+KAFKA_CLUSTERS_0_MASKING_0_FIELDS_0: "id"
+KAFKA_CLUSTERS_0_MASKING_0_TOPICKEYSPATTERN: "events-with-ids-.*"
+KAFKA_CLUSTERS_0_MASKING_0_TOPICVALUESPATTERN: "events-with-ids-.*"
+
+KAFKA_CLUSTERS_0_MASKING_1_TYPE: REPLACE
+KAFKA_CLUSTERS_0_MASKING_1_FIELDS_0: "companyName"
+KAFKA_CLUSTERS_0_MASKING_1_FIELDS_1: "organizationName"
+KAFKA_CLUSTERS_0_MASKING_1_REPLACEMENT: "***MASKED_ORG_NAME***"
+KAFKA_CLUSTERS_0_MASKING_1_TOPICVALUESPATTERN: "org-events-.*"
+
+KAFKA_CLUSTERS_0_MASKING_2_TYPE: MASK
+KAFKA_CLUSTERS_0_MASKING_2_FIELDS_0: "name"
+KAFKA_CLUSTERS_0_MASKING_2_FIELDS_1: "surname"
+KAFKA_CLUSTERS_0_MASKING_2_PATTERN_0: 'A'
+KAFKA_CLUSTERS_0_MASKING_2_PATTERN_1: 'a'
+KAFKA_CLUSTERS_0_MASKING_2_PATTERN_2: 'N'
+KAFKA_CLUSTERS_0_MASKING_2_PATTERN_3: '_'
+KAFKA_CLUSTERS_0_MASKING_2_TOPICVALUESPATTERN: "user-states"
+
+KAFKA_CLUSTERS_0_MASKING_3_TYPE: MASK
+KAFKA_CLUSTERS_0_MASKING_3_TOPICVALUESPATTERN: "very-secured-topic"
+```

+ 55 - 0
configuration/protobuf-setup.md

@@ -0,0 +1,55 @@
+# Protobuf setup
+
+## Kafkaui Protobuf Support
+
+#### This document is deprecated; please see the examples in the Serialization document.
+
+Kafkaui supports deserializing protobuf messages in two ways:
+
+1. Using Confluent Schema Registry's [protobuf support](https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-protobuf.html).
+2. Supplying a protobuf file as well as a configuration that maps topic names to protobuf types.
+
+### Configuring Kafkaui with a Protobuf File
+
+To configure Kafkaui to deserialize protobuf messages using a supplied protobuf schema, add the following to the config:
+
+```yaml
+kafka:
+  clusters:
+    - # Cluster configuration omitted.
+      # protobufFile is the path to the protobuf schema. (deprecated: please use "protobufFiles")
+      protobufFile: path/to/my.proto
+      # protobufFiles is the path to one or more protobuf schemas.
+      protobufFiles: 
+        - /path/to/my.proto
+        - /path/to/another.proto
+      # protobufMessageName is the default protobuf type that is used to deserialize
+      # the message's value if the topic is not found in protobufMessageNameByTopic.
+      protobufMessageName: my.DefaultValType
+      # protobufMessageNameByTopic is a mapping of topic names to protobuf types.
+      # This mapping is required and is used to deserialize the Kafka message's value.
+      protobufMessageNameByTopic:
+        topic1: my.Type1
+        topic2: my.Type2
+      # protobufMessageNameForKey is the default protobuf type that is used to deserialize
+      # the message's key if the topic is not found in protobufMessageNameForKeyByTopic.
+      protobufMessageNameForKey: my.DefaultKeyType
+      # protobufMessageNameForKeyByTopic is a mapping of topic names to protobuf types.
+      # This mapping is optional and is used to deserialize the Kafka message's key.
+      # If a protobuf type is not found for a topic's key, the key is deserialized as a string,
+      # unless protobufMessageNameForKey is specified.
+      protobufMessageNameForKeyByTopic:
+        topic1: my.KeyType1
+```
+
+The same config in flattened form (for docker-compose):
+
+```
+kafka.clusters.0.protobufFiles.0: /path/to/my.proto
+kafka.clusters.0.protobufFiles.1: /path/to/another.proto
+kafka.clusters.0.protobufMessageName: my.DefaultValType
+kafka.clusters.0.protobufMessageNameByTopic.topic1: my.Type1
+kafka.clusters.0.protobufMessageNameByTopic.topic2: my.Type2
+kafka.clusters.0.protobufMessageNameForKey: my.DefaultKeyType
+kafka.clusters.0.protobufMessageNameForKeyByTopic.topic1: my.KeyType1
+```

+ 255 - 0
configuration/rbac-role-based-access-control.md

@@ -0,0 +1,255 @@
+# RBAC (Role based access control)
+
+## Role based access control
+
+In this article we'll guide you through setting up Kafka-UI with role-based access control.
+
+### Authentication methods
+
+First of all, you need to set up authentication method(s). Refer to [this](https://github.com/provectus/kafka-ui/wiki/OAuth-Configuration) article for OAuth2 setup.\
+LDAP: TBD
+
+### Config placement
+
+First of all, you have to decide whether you wish to:
+
+1. Store all roles in a separate config file, or
+2. Keep them within the main config file
+
+This is how you include an additional config file, using docker-compose as an example:
+
+```
+services:
+  kafka-ui:
+    container_name: kafka-ui
+    image: provectuslabs/kafka-ui:latest
+    environment:
+      KAFKA_CLUSTERS_0_NAME: local
+      # other properties, omitted
+      spring.config.additional-location: /roles.yml
+    volumes:
+      - /tmp/roles.yml:/roles.yml
+```
+
+Alternatively, you can append the roles file contents to your main config file.
+
+### Roles file structure
+
+#### Clusters
+
+The roles file is where we define roles. Each role has access to defined clusters:
+
+```
+rbac:
+  roles:
+    - name: "memelords"
+      clusters:
+        - local
+        - dev
+        - staging
+        - prod
+```
+
+#### Subjects
+
+A role also has a list of _subjects_, which are the entities we will use to assign roles to. They are provider-dependent; in general, they can be users, groups, or some other entities (GitHub orgs, Google domains, LDAP queries, etc.). In this example we define a role `memelords` which will contain all the users within the Google domain `memelord.lol` and, additionally, the GitHub user `Haarolean`. You can combine as many subjects as you want within a role.
+
+```
+    - name: "memelords"
+      subjects:
+        - provider: oauth_google
+          type: domain
+          value: "memelord.lol"
+        - provider: oauth_github
+          type: user
+          value: "Haarolean"
+```
+
+#### Providers
+
+A list of supported providers and their corresponding subject fetch mechanisms:
+
+* oauth\_google: `user`, `domain`
+* oauth\_github: `user`, `organization`
+* oauth\_cognito: `user`, `group`
+* ldap: (unsupported yet, will do in 0.6 release)
+* ldap\_ad: (unsupported yet, will do in 0.6 release)
+
+More detailed examples can be found in the full example file below.
+
+#### Permissions
+
+The next thing present in your roles file is, surprisingly, permissions. They consist of:
+
+1. **Resource.** Can be one of: `CLUSTERCONFIG`, `TOPIC`, `CONSUMER`, `SCHEMA`, `CONNECT`, `KSQL`.
+2. **Resource value.** Either a fixed string or a regular expression identifying the resource. The value is not applicable for `clusterconfig` and `ksql` resources; please do not fill it.
+3. **Actions.** A list of actions (the possible values depend on the resource; see the lists below) which will be applied to the certain permission. Also note there's a special action for any of the resources called "all"; it will virtually grant all the actions within the corresponding resource. An example of enabling viewing and creating topics whose names start with "derp":
+
+```
+      permissions:
+        - resource: topic
+          value: "derp.*"
+          actions: [ VIEW, CREATE ]
+```
+
+**Actions**
+
+A list of all the actions for the corresponding resources (please note neither resource nor action names are case-sensitive):
+
+* `clusterconfig`: `view`, `edit`
+* `topic`: `view`, `create`, `edit`, `delete`, `messages_read`, `messages_produce`, `messages_delete`
+* `consumer`: `view`, `delete`, `reset_offsets`
+* `schema`: `view`, `create`, `delete`, `edit`, `modify_global_compatibility`
+* `connect`: `view`, `edit`, `create`
+* `ksql`: `execute`
+
+## Example file
+
+**A complete file example:**
+
+```
+rbac:
+  roles:
+    - name: "memelords"
+      clusters:
+        - local
+        - dev
+        - staging
+        - prod
+      subjects:
+        - provider: oauth_google
+          type: domain
+          value: "memelord.lol"
+        - provider: oauth_google
+          type: user
+          value: "kek@memelord.lol"
+
+        - provider: oauth_github
+          type: organization
+          value: "memelords_team"
+        - provider: oauth_github
+          type: user
+          value: "memelord"
+
+        - provider: oauth_cognito
+          type: user
+          value: "username"
+        - provider: oauth_cognito
+          type: group
+          value: "memelords"
+
+        # LDAP NOT IMPLEMENTED YET
+        - provider: ldap
+          type: group
+          value: "ou=devs,dc=planetexpress,dc=com"
+        - provider: ldap_ad
+          type: user
+          value: "cn=germanosin,dc=planetexpress,dc=com"
+
+      permissions:
+        - resource: clusterconfig
+          # value not applicable for clusterconfig
+          actions: [ "view", "edit" ] # can be with or without quotes
+
+        - resource: topic
+          value: "ololo.*"
+          actions: # can be a multiline list
+            - VIEW # can be upper or lower case
+            - CREATE
+            - EDIT
+            - DELETE
+            - MESSAGES_READ
+            - MESSAGES_PRODUCE
+            - MESSAGES_DELETE
+
+        - resource: consumer
+          value: "\_confluent-ksql.*"
+          actions: [ VIEW, DELETE, RESET_OFFSETS ]
+
+        - resource: schema
+          value: "blah.*"
+          actions: [ VIEW, CREATE, DELETE, EDIT, MODIFY_GLOBAL_COMPATIBILITY ]
+
+        - resource: connect
+          value: "local"
+          actions: [ view, edit, create ]
+        # connectors selector not implemented yet, use connects
+        #      selector:
+        #        connector:
+        #          name: ".*"
+        #          class: 'com.provectus.connectorName'
+
+        - resource: ksql
+          # value not applicable for ksql
+          actions: [ execute ]
+
+```
+
+**A read-only setup:**
+
+```
+rbac:
+  roles:
+    - name: "readonly"
+      clusters:
+        # FILL THIS
+      subjects:
+        # FILL THIS
+      permissions:
+        - resource: clusterconfig
+          actions: [ "view" ]
+
+        - resource: topic
+          value: ".*"
+          actions: 
+            - VIEW
+            - MESSAGES_READ
+
+        - resource: consumer
+          value: ".*"
+          actions: [ view ]
+
+        - resource: schema
+          value: ".*"
+          actions: [ view ]
+
+        - resource: connect
+          value: ".*"
+          actions: [ view ]
+
+```
+
+**An admin-group setup example:**
+
+```
+rbac:
+  roles:
+    - name: "admins"
+      clusters:
+        # FILL THIS
+      subjects:
+        # FILL THIS
+      permissions:
+        - resource: clusterconfig
+          actions: all
+
+        - resource: topic
+          value: ".*"
+          actions: all
+
+        - resource: consumer
+          value: ".*"
+          actions: all
+
+        - resource: schema
+          value: ".*"
+          actions: all
+
+        - resource: connect
+          value: ".*"
+          actions: all
+
+        - resource: ksql
+          actions: all
+
+```

+ 181 - 0
configuration/serialization-serde.md

@@ -0,0 +1,181 @@
+---
+description: Serialization, deserialization and custom plugins
+---
+
+# Serialization / SerDe
+
+Kafka-ui supports multiple ways to serialize/deserialize data.
+
+#### Int32, Int64, UInt32, UInt64
+
+Big-endian 4/8-byte representation of signed/unsigned integers.
+
+#### Base64
+
+Base64 (RFC 4648) binary data representation. Can be useful in case the actual data is not important, but exactly the same (byte-wise) key/value should be sent.
+
+#### String
+
+Treats binary data as a string in the specified encoding. The default encoding is UTF-8.
+
+Class name: `com.provectus.kafka.ui.serdes.builtin.StringSerde`
+
+Sample configuration (if you want to override the default configuration):
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      # Other Cluster configuration omitted ... 
+      serdes:
+          # registering String serde with custom config
+        - name: AsciiString
+          className: com.provectus.kafka.ui.serdes.builtin.StringSerde
+          properties:
+            encoding: "ASCII"
+        
+          # overriding built-in String serde config
+        - name: String 
+          properties:
+            encoding: "UTF-16"
+```
+
+#### Protobuf
+
+Class name: `com.provectus.kafka.ui.serdes.builtin.ProtobufFileSerde`
+
+Sample configuration:
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      # Other Cluster configuration omitted ... 
+      serdes:
+        - name: ProtobufFile
+          properties:
+            # path to the protobuf schema files
+            protobufFiles:
+              - path/to/my.proto
+              - path/to/another.proto
+            # default protobuf type that is used for KEY serialization/deserialization
+            # optional
+            protobufMessageNameForKey: my.Type1
+            # mapping of topic names to protobuf types, that will be used for KEYS  serialization/deserialization
+            # optional
+            protobufMessageNameForKeyByTopic:
+              topic1: my.KeyType1
+              topic2: my.KeyType2
+            # default protobuf type that is used for VALUE serialization/deserialization
+            # optional, if not set - first type in file will be used as default
+            protobufMessageName: my.Type1
+            # mapping of topic names to protobuf types, that will be used for VALUES  serialization/deserialization
+            # optional
+            protobufMessageNameByTopic:
+              topic1: my.Type1
+              "topic.2": my.Type2
+```
+
+A docker-compose sample for Protobuf serialization is here.
+
+The legacy configuration for protobuf is described on the [Protobuf setup](protobuf-setup.md) page.
+
+#### SchemaRegistry
+
+The SchemaRegistry serde is automatically configured if schema registry properties are set at the cluster level. But you can add new SchemaRegistry-typed serdes that will connect to another schema registry instance.
+
+Class name: `com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde`
+
+Sample configuration:
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      # this url will be used by "SchemaRegistry" by default
+      schemaRegistry: http://main-schema-registry:8081
+      serdes:
+        - name: AnotherSchemaRegistry
+          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
+          properties:
+            url:  http://another-schema-registry:8081
+            # auth properties, optional
+            username: nameForAuth
+            password: P@ssW0RdForAuth
+        
+          # and also add another SchemaRegistry serde
+        - name: ThirdSchemaRegistry
+          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
+          properties:
+            url:  http://another-yet-schema-registry:8081
+```
+
+### Setting serdes for specific topics
+
+You can specify a preferred serde for topic keys/values. This serde will be chosen by default in the UI on the topic's view/produce pages. To do so, set the `topicKeysPattern`/`topicValuesPattern` properties for the selected serde. Kafka-ui will choose the first serde that matches the specified pattern.
+
+Sample configuration:
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      serdes:
+        - name: String
+          topicKeysPattern: click-events|imp-events
+        
+        - name: Int64
+          topicKeysPattern: ".*-events"
+        
+        - name: SchemaRegistry
+          topicValuesPattern: click-events|imp-events
+```
+
+### Default serdes
+
+You can specify which serde will be chosen in the UI by default if no other serde is selected via the `topicKeysPattern`/`topicValuesPattern` settings.
+
+Sample configuration:
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      defaultKeySerde: Int32
+      defaultValueSerde: String
+      serdes:
+        - name: Int32
+          topicKeysPattern: click-events|imp-events
+```
+
+### Fallback
+
+If the selected serde couldn't be applied (an exception was thrown), then the fallback serde (String serde with UTF-8 encoding) will be applied. Such messages will be specially highlighted in the UI.
+
+### Custom pluggable serde registration
+
+You can implement your own serde and register it in the kafka-ui application. To do so:
+
+1. Add the `kafka-ui-serde-api` dependency (it should be downloadable via Maven Central); see the sketch below
+2. Implement the `com.provectus.kafka.ui.serde.api.Serde` interface. See the javadoc for implementation requirements.
+3. Pack your serde into an uber jar, or provide a directory with a no-dependency jar and its dependencies' jars
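+
+A hypothetical Maven dependency declaration (the coordinates shown are assumptions; verify them on Maven Central):
+
+```xml
+<dependency>
+    <!-- coordinates are illustrative; check Maven Central for the actual ones -->
+    <groupId>com.provectus</groupId>
+    <artifactId>kafka-ui-serde-api</artifactId>
+    <version>1.0.0</version>
+</dependency>
+```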
+
+Example pluggable serdes:
+
+* https://github.com/provectus/kafkaui-smile-serde
+* https://github.com/provectus/kafkaui-glue-sr-serde
+
+Sample configuration:
+
+```yaml
+kafka:
+  clusters:
+    - name: Cluster1
+      serdes:
+        - name: MyCustomSerde
+          className: my.lovely.org.KafkaUiSerde
+          filePath: /var/lib/kui-serde/my-kui-serde.jar
+          
+        - name: MyCustomSerde2
+          className: my.lovely.org.KafkaUiSerde2
+          filePath: /var/lib/kui-serde2
+          properties:
+            prop1: v1
+```

+ 10 - 0
configuration/ssl.md

@@ -0,0 +1,10 @@
+# SSL
+
+### Connecting to a Secure Broker
+
+The app supports TLS (SSL) and SASL connections for [encryption and authentication](http://kafka.apache.org/090/documentation.html#security).
+#### Running From Docker-compose file
+
+See this docker-compose file reference for SSL-enabled Kafka. A minimal configuration sketch follows.
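+
+A hedged sketch using standard Kafka client SSL properties passed through the cluster `properties` section, as in the SASL examples (the paths and passwords are placeholders):
+
+```yaml
+kafka:
+  clusters:
+    - name: local
+      bootstrapServers: <KAFKA_URL>
+      properties:
+        security.protocol: SSL
+        ssl.truststore.location: /path/to/kafka.truststore.jks
+        ssl.truststore.password: <TRUSTSTORE_PASSWORD>
+```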

+ 2 - 0
development/building/README.md

@@ -0,0 +1,2 @@
+# Building
+

+ 43 - 0
development/building/prerequisites.md

@@ -0,0 +1,43 @@
+# Prerequisites
+
+This page explains how to get the software you need to use a Linux or macOS machine for local development.
+
+Before you begin contributing you must have:
+
+* A GitHub account
+* `Java` 17 or newer
+* `Git`
+* `Docker`
+
+#### Installing prerequisites on macOS
+
+1. Install [brew](https://brew.sh/).
+2. Install brew cask:
+
+```
+brew cask
+```
+
+3. Install Eclipse Temurin 17 via Homebrew cask:
+
+```
+brew tap homebrew/cask-versions
+brew install temurin17
+```
+
+4. Verify the installation:
+
+```
+java -version
+```
+
+Note: in case OpenJDK 17 is not set as your default Java, consider including it in your `$PATH` after installation:
+
+```
+export PATH="$(/usr/libexec/java_home -v 17)/bin:$PATH"
+export JAVA_HOME="$(/usr/libexec/java_home -v 17)"
+```
+
+### Tips
+
+Consider allocating no less than 4GB of memory to Docker. Otherwise, some apps within a stack (e.g. `kafka-ui.yaml`) might crash.

+ 7 - 0
development/building/wip-setting-up-git.md

@@ -0,0 +1,7 @@
+# WIP: Setting up git
+
+TODO :)
+
+
+
+1. credentials?

+ 3 - 0
development/building/wip-testing.md

@@ -0,0 +1,3 @@
+# WIP: Testing
+
+TODO :)

+ 80 - 0
development/building/with-docker.md

@@ -0,0 +1,80 @@
+# With Docker
+
+## Build & Run
+
+Once you have installed the prerequisites and cloned the repository, run the following steps in your project directory:
+
+### Step 1 : Build
+
+> _**NOTE:**_ If you are a macOS M1 user, please keep the following in mind:
+
+> Make sure you have an ARM-supported Java installed
+
+> Skip the Maven tests, as they might not be successful
+
+* Build a docker image with the app
+
+```
+./mvnw clean install -Pprod
+```
+
+* If you need to build the frontend `kafka-ui-react-app`, refer to the kafka-ui-react-app build documentation
+* In case you want to build `kafka-ui-api` while skipping the tests:
+
+```
+./mvnw clean install -Dmaven.test.skip=true -Pprod
+```
+
+* To build only the `kafka-ui-api` you can use this command:
+
+```
+./mvnw -f kafka-ui-api/pom.xml clean install -Pprod -DskipUIBuild=true
+```
+
+If this step is successful, it should create a Docker image named `provectuslabs/kafka-ui` with the `latest` tag on your local machine (except on macOS M1).
+
+### Step 2 : Run
+
+**Using Docker Compose**
+
+> _**NOTE:**_ If you are a macOS M1 user, you can use the arm64-supported docker-compose script `./documentation/compose/kafka-ui-arm64.yaml`
+
+* Start the `kafka-ui` app using the docker image built in step 1, along with Kafka clusters:
+
+```
+docker-compose -f ./documentation/compose/kafka-ui.yaml up -d
+```
+
+**Using Spring Boot Run**
+
+* If you want to start only kafka clusters (to run the `kafka-ui` app via `spring-boot:run`):
+
+```
+docker-compose -f ./documentation/compose/kafka-clusters-only.yaml up -d
+```
+
+* Then start the app.
+
+```
+./mvnw spring-boot:run -Pprod
+
+# or
+
+./mvnw spring-boot:run -Pprod -Dspring.config.location=file:///path/to/conf.yaml
+```
+
+**Running in kubernetes**
+
+* Using Helm Charts
+
+```
+helm repo add kafka-ui https://provectus.github.io/kafka-ui
+helm install kafka-ui kafka-ui/kafka-ui
+```
+
+To read more, please refer to the chart documentation.
+
+### Step 3 : Access Kafka-UI
+
+* To see the `kafka-ui` app running, navigate to http://localhost:8080.

+ 31 - 0
development/building/without-docker.md

@@ -0,0 +1,31 @@
+---
+description: Build & Run Without Docker
+---
+
+# Without Docker
+
+Once you have installed the prerequisites and cloned the repository, run the following steps in your project directory:
+
+### Running Without Docker Quickly <a href="#run_without_docker_quickly" id="run_without_docker_quickly"></a>
+
+* [Download the latest kafka-ui jar file](https://github.com/provectus/kafka-ui/releases)
+
+**Execute the jar**
+
+```
+java -Dspring.config.additional-location=<path-to-application-local.yml> -jar <path-to-kafka-ui-jar>
+```
+
+* Example of how to configure clusters in the [application-local.yml](https://github.com/provectus/kafka-ui/blob/master/kafka-ui-api/src/main/resources/application-local.yml) configuration file.
+
+### Building And Running Without Docker <a href="#build_and_run_without_docker" id="build_and_run_without_docker"></a>
+
+> _**NOTE:**_ If you want to get kafka-ui up and running locally quickly without building the jar file manually, just follow the Running Without Docker Quickly section above.
+
+> Comment out the `docker-maven-plugin` plugin in the `kafka-ui-api` pom.xml
+
+* Command to build the jar; a sketch follows:
+
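+A hedged sketch, assuming the same `-Pprod` Maven profile used in the Docker-based build instructions:
+
+```
+./mvnw clean install -Pprod
+```
+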
+> Once your build is successful, the jar file named `kafka-ui-api-0.0.1-SNAPSHOT.jar` is generated inside `kafka-ui-api/target`.
+
+* Execute the jar (see the command in the Running Without Docker Quickly section above)

+ 98 - 0
development/contributing.md

@@ -0,0 +1,98 @@
+# Contributing
+
+This guide aims to walk you through the process of working on issues and Pull Requests (PRs).
+
+Bear in mind that you will not be able to complete some steps on your own if you do not have a “write” permission. Feel free to reach out to the maintainers to help you unlock these activities.
+
+## General recommendations
+
+Please note that we have a code of conduct (`CODE-OF-CONDUCT.md`). Make sure that you follow it in all of your interactions with the project.
+
+## Issues
+
+### Choosing an issue
+
+There are two options to look for the issues to contribute to.\
+The first is our ["Up for grabs"](https://github.com/provectus/kafka-ui/projects/11) board. There the issues are sorted by a required experience level (beginner, intermediate, expert).
+
+The second option is to search for ["good first issue"](https://github.com/provectus/kafka-ui/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)-labeled issues. Some of them might not be displayed on the aforementioned board, or vice versa.
+
+You also need to consider labels. You can sort the issues by scope labels, such as `scope/backend`, `scope/frontend` or even `scope/k8s`. If any issue covers several specific areas, and you do not have the required expertise for one of them, just do your part of the work; others will do the rest.
+
+### Grabbing the issue
+
+There are a number of criteria that make an issue feasible for development.\
+The implementation of any feature and/or its enhancements should be reasonable and must be backed by justified requirements (demanded by the community, roadmap plans, etc.). The final decision is left to the maintainers' discretion.
+
+All bugs should be confirmed as such (i.e. the behavior is unintended).
+
+Any issue should be properly triaged by the maintainers beforehand, which includes:
+
+1. Having a proper milestone set
+2. Having required labels assigned: accepted label, scope labels, etc.
+
+Formally, if these triage conditions are met, you can start to work on the issue.
+
+With all these requirements met, feel free to pick the issue you want. Reach out to the maintainers if you have any questions.
+
+### Working on the issue
+
+Every issue “in-progress” needs to be assigned to a corresponding person. To keep the status of the issue clear to everyone, please keep the card's status updated ("project" card to the right of the issue should match the milestone’s name).
+
+### Setting up a local development environment
+
+Please refer to this guide.
+
+## Pull Requests
+
+### Branch naming
+
+In order to keep branch names uniform and easy-to-understand, please use the following conventions for branch naming.
+
+Generally speaking, it is a good idea to add a group/type prefix to a branch; e.g., if you are working on a specific issue, you could name the branch `issues/xxx`.
+
+Here is a list of good examples:\
+`issues/123`\
+`feature/feature_name`\
+`bugfix/fix_thing`\
+
+
+### Code style
+
+Java: There is a file called `checkstyle.xml` in project root under `etc` directory.\
+You can import it into IntelliJ IDEA via Checkstyle plugin.
+
+### Naming conventions
+
+REST paths should be written in **lowercase** and consist of **plural** nouns only.\
+Also, multiple words that are placed in a single path segment should be divided by a hyphen (`-`).\
+
+
+Query variable names should be formatted in `camelCase`.
+
+Model names should consist of **plural** nouns only and should be formatted in `camelCase` as well.
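+
+A couple of hypothetical paths illustrating these conventions (the endpoints shown are illustrative, not taken from the actual API):
+
+```
+GET /api/clusters/local/consumer-groups?orderBy=name
+POST /api/clusters/local/topics
+```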
+
+### Creating a PR
+
+When creating a PR please do the following:
+
+1. In commit messages use these [closing keywords](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword).
+2. Link an issue(-s) via "linked issues" block.
+3. Set the PR labels. Ensure that you set only the same set of labels that is present in the issue, and ignore yellow `status/` labels.
+4. If the PR does not close any of the issues, the PR itself might need to have a milestone set. Reach out to the maintainers to consult.
+5. Assign the PR to yourself. A PR assignee is someone whose goal is to get the PR merged.
+6. Add reviewers. As a rule, reviewers' suggestions are pretty good; please use them.
+7. Upon merging the PR, please use a meaningful commit message; the task name should be fine in this case.
+
+#### Pull Request checklist
+
+1. When composing a build, ensure that any install or build dependencies have been removed before the end of the layer.
+2. Update the `README.md` with the details of changes made to the interface. This includes new environment variables, exposed ports, useful file locations, and container parameters.
+
+### Reviewing a PR
+
+WIP
+
+#### Pull Request reviewer checklist
+
+WIP

+ 25 - 0
faq/common-problems.md

@@ -0,0 +1,25 @@
+# Common problems
+
+## Login module control flag not specified in JAAS config
+
+If you are running against Confluent Cloud, you have specified the JAAS config correctly, and you still keep getting these errors, check whether you are passing `confluent.license` in the connector; the absence of a license produces a number of bogus errors like "Login module control flag not specified in JAAS config".
+
+https://docs.confluent.io/platform/current/connect/license.html
+
+A good resource for which properties are needed: https://gist.github.com/rmoff/49526672990f1b4f7935b62609f6f567
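+
+A hypothetical connector-config excerpt (the property name comes from the Confluent licensing docs linked above; the value is a placeholder):
+
+```
+"confluent.license": "<your license key, or empty for a trial>"
+```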
+
+## Cluster authorization failed
+
+Check [required permissions](https://github.com/provectus/kafka-ui/wiki/FAQ#required-aclmsk-permissions).
+
+## Confluent cloud errors
+
+Set this property to `true`: `KAFKA_CLUSTERS_<id>_DISABLELOGDIRSCOLLECTION`. For example:
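+
+In env-vars fashion for the first cluster:
+
+```
+KAFKA_CLUSTERS_0_DISABLELOGDIRSCOLLECTION: 'true'
+```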
+
+## AWS MSK w/ IAM: Access denied
+
+* https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-1656843
+* https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-2963449
+* https://github.com/provectus/kafka-ui/issues/2184#issuecomment-1198506124
+
+## DataBufferLimitException: Exceeded limit on max bytes to buffer
+
+Increase the `webclient.max-in-memory-buffer-size` property value. The default value is `20MB`.
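+
+For example, in env-vars fashion (the exact size is up to you):
+
+```
+WEBCLIENT_MAX_IN_MEMORY_BUFFER_SIZE: 60MB
+```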

+ 71 - 0
faq/faq.md

@@ -0,0 +1,71 @@
+# FAQ
+
+#### Basic (username password) authentication
+
+Add these env. properties:
+
+```
+      AUTH_TYPE: "LOGIN_FORM"
+      SPRING_SECURITY_USER_NAME: admin
+      SPRING_SECURITY_USER_PASSWORD: pass
+```
+
+#### Role based access control (authorization)
+
+[#753](https://github.com/provectus/kafka-ui/issues/753) (WIP)
+
+#### OAuth 2
+
+See [this](https://github.com/provectus/kafka-ui/wiki/Set-up-OAuth2---SSO) guide.
+
+#### LDAP
+
+See [this](https://github.com/provectus/kafka-ui/blob/master/documentation/compose/auth-ldap.yaml#L29) example.
+
+#### Active Directory (LDAP)
+
+See [this](https://github.com/provectus/kafka-ui/blob/master/documentation/compose/auth-ldap.yaml#L29) example.
+
+#### SAML
+
+Planned, see [#478](https://github.com/provectus/kafka-ui/issues/478)
+
+#### Required ACL/MSK permissions
+
+ACL: todo
+
+MSK:
+
+```
+      "kafka-cluster:Connect",
+      "kafka-cluster:Describe*",
+      "kafka-cluster:CreateTopic",
+      "kafka-cluster:AlterGroup",
+      "kafka-cluster:ReadData"
+```
+
+#### Smart filters syntax
+
+**Variables bound to groovy context**: partition, timestampMs, keyAsText, valueAsText, headers, key (json if possible), value (json if possible).
+
+**JSON parsing logic**:
+
+If the key and value can be parsed as JSON, they are bound as JSON objects; otherwise, they are bound as nulls.
+
+**Sample filters**:
+
+1. `keyAsText != null && keyAsText ~"([Gg])roovy"` - regex for the key as a string
+2. `value.name == "iS.ListItemax" && value.age > 30` - in case the value is JSON
+3. `value == null && valueAsText != null` - search for values that are not null but are not JSON
+4. `headers.sentBy == "some system" && headers["sentAt"] == "2020-01-01"`
+5. Multiline filters are also allowed:
+
+```
+def name = value.name
+def age = value.age
+name == "iliax" && age == 30
+```
+
+#### Can I use the app as an API?
+
+Yes, you can. The Swagger declaration is located [here](https://github.com/provectus/kafka-ui/blob/master/kafka-ui-contract/src/main/resources/swagger/kafka-ui-api.yaml).
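+
+A hypothetical example call (the endpoint is an assumption; consult the Swagger declaration for the actual paths):
+
+```
+curl http://localhost:8080/api/clusters
+```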

+ 13 - 0
overview/features.md

@@ -0,0 +1,13 @@
+# Features
+
+* **Multi-Cluster Management** — monitor and manage all your clusters in one place
+* **Performance Monitoring with Metrics Dashboard** — track key Kafka metrics with a lightweight dashboard
+* **View Kafka Brokers** — view topic and partition assignments, controller status
+* **View Kafka Topics** — view partition count, replication status, and custom configuration
+* **View Consumer Groups** — view per-partition parked offsets, combined and per-partition lag
+* **Browse Messages** — browse messages with JSON, plain text, and Avro encoding
+* **Dynamic Topic Configuration** — create and configure new topics with dynamic configuration
+* **Configurable Authentication** — secure your installation with optional Github/Gitlab/Google OAuth 2.0
+* **Custom serialization/deserialization plugins** - use a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
+* **Role based access control** - [manage permissions](https://github.com/provectus/kafka-ui/wiki/RBAC-\(role-based-access-control\)) to access the UI with granular precision
+* **Data masking** - [obfuscate](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/DataMasking.md) sensitive data in topic messages

+ 93 - 0
overview/getting-started.md

@@ -0,0 +1,93 @@
+# Getting started
+
+We have plenty of [docker-compose files](https://github.com/provectus/kafka-ui/blob/master/documentation/compose/DOCKER\_COMPOSE.md) as examples. They're built for various configuration stacks.
+
+## Guides
+
+* [SSO configuration](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/SSO.md)
+* [AWS IAM configuration](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/AWS\_IAM.md)
+* [Docker-compose files](https://github.com/provectus/kafka-ui/blob/master/documentation/compose/DOCKER\_COMPOSE.md)
+* [Connection to a secure broker](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/SECURE\_BROKER.md)
+* [Configure serialization/deserialization plugins or code your own](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/Serialization.md)
+
+#### Configuration File
+
+Example of how to configure clusters in the [application-local.yml](https://github.com/provectus/kafka-ui/blob/master/kafka-ui-api/src/main/resources/application-local.yml) configuration file:
+
+```
+kafka:
+  clusters:
+    -
+      name: local
+      bootstrapServers: localhost:29091
+      schemaRegistry: http://localhost:8085
+      schemaRegistryAuth:
+        username: username
+        password: password
+#     schemaNameTemplate: "%s-value"
+      metrics:
+        port: 9997
+        type: JMX
+    -
+```
+
+* `name`: cluster name
+* `bootstrapServers`: where to connect
+* `schemaRegistry`: schemaRegistry's address
+* `schemaRegistryAuth.username`: schemaRegistry's basic authentication username
+* `schemaRegistryAuth.password`: schemaRegistry's basic authentication password
+* `schemaNameTemplate`: how keys are saved to schemaRegistry
+* `metrics.port`: open JMX port of a broker
+* `metrics.type`: type of metrics, either JMX or PROMETHEUS. Defaults to JMX.
+* `readOnly`: enable read only mode
+
+Configure as many clusters as you need by adding their configs below separated with `-`.
+
+### Running a Docker Image
+
+The official Docker image for UI for Apache Kafka is hosted here: [hub.docker.com/r/provectuslabs/kafka-ui](https://hub.docker.com/r/provectuslabs/kafka-ui).
+
+Launch Docker container in the background:
+
+```
+docker run -p 8080:8080 \
+	-e KAFKA_CLUSTERS_0_NAME=local \
+	-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
+	-d provectuslabs/kafka-ui:latest
+```
+
+Then access the web UI at [http://localhost:8080](http://localhost:8080/). For further configuration with environment variables, see [environment variables](https://github.com/provectus/kafka-ui#env\_variables).
+
+#### Docker Compose
+
+If you prefer to use `docker-compose`, please refer to the [documentation](https://github.com/provectus/kafka-ui/blob/master/docker-compose.md). A minimal sketch is shown below.
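+
+A minimal sketch of a `docker-compose` service (the cluster name and bootstrap address are placeholders):
+
+```
+version: '2'
+services:
+  kafka-ui:
+    image: provectuslabs/kafka-ui:latest
+    ports:
+      - "8080:8080"
+    environment:
+      - KAFKA_CLUSTERS_0_NAME=local
+      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
+```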
+
+#### Helm chart
+
+The Helm chart can be found under the [charts/kafka-ui](https://github.com/provectus/kafka-ui/tree/master/charts/kafka-ui) directory.
+
+Quick-start instructions are [here](https://github.com/provectus/kafka-ui/blob/master/helm\_chart.md).
+
+### Building With Docker
+
+#### Prerequisites
+
+Check [prerequisites.md](https://github.com/provectus/kafka-ui/blob/master/documentation/project/contributing/prerequisites.md)
+
+#### Building and Running
+
+Check [building.md](https://github.com/provectus/kafka-ui/blob/master/documentation/project/contributing/building.md)
+
+### Building Without Docker
+
+#### Prerequisites
+
+[Prerequisites](https://github.com/provectus/kafka-ui/blob/master/documentation/project/contributing/prerequisites.md) will mostly remain the same with the exception of docker.
+
+#### Running without Building
+
+[How to run quickly without building](https://github.com/provectus/kafka-ui/blob/master/documentation/project/contributing/building-and-running-without-docker.md#run\_without\_docker\_quickly)
+
+#### Building and Running
+
+[How to build and run](https://github.com/provectus/kafka-ui/blob/master/documentation/project/contributing/building-and-running-without-docker.md#build\_and\_run\_without\_docker)

+ 79 - 0
project/code-of-conduct.md

@@ -0,0 +1,79 @@
+# Code of Conduct
+
+## Contributor Covenant Code of Conduct
+
+### Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
+
+### Our Standards
+
+Examples of behavior that contributes to a positive environment for our community include:
+
+* Demonstrating empathy and kindness toward other people
+* Being respectful of differing opinions, viewpoints, and experiences
+* Giving and gracefully accepting constructive feedback
+* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
+* Focusing on what is best not just for us as individuals, but for the overall community
+
+Examples of unacceptable behavior include:
+
+* The use of sexualized language or imagery, and sexual attention or advances of any kind
+* Trolling, insulting or derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or email address, without their explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+### Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
+
+### Scope
+
+This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
+
+### Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at kafkaui@provectus.com. All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the reporter of any incident.
+
+### Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
+
+#### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
+
+#### 2. Warning
+
+**Community Impact**: A violation through a single incident or series of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
+
+#### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
+
+#### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within the community.
+
+### Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code\_of\_conduct.html](https://www.contributor-covenant.org/version/2/0/code\_of\_conduct.html).
+
+Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
+
+For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq). Translations are available at [https://www.contributor-covenant.org/translations](https://www.contributor-covenant.org/translations).

+ 24 - 0
project/roadmap.md

@@ -0,0 +1,24 @@
+---
+description: Kafka-UI Project Roadmap
+---
+
+# Roadmap
+
+The roadmap exists in the form of a GitHub project board and is located [here](https://github.com/provectus/kafka-ui/projects/8).
+
+#### How to use this document
+
+The roadmap provides a list of features we decided to prioritize in project development. It should serve as a reference point to understand the project's goals.
+
+We prioritize them based on feedback from the community, our own vision, and other conditions and circumstances.
+
+The roadmap sets the general way of development. The roadmap is mostly about long-term features. All the features could be re-prioritized, rescheduled or canceled.
+
+If there's no feature `X`, that **doesn't** mean we're **not** going to implement it. Feel free to raise an issue for consideration.\
+If a feature you want to see live is not present on the roadmap, but there's an issue for it, feel free to vote for it using reactions in the issue.
+
+#### How to contribute
+
+Since the roadmap consists mostly of big long-term features, implementing them might not be easy for a beginner outside collaborator.
+
+A good starting point is checking the [CONTRIBUTING.md](https://github.com/provectus/kafka-ui/blob/master/CONTRIBUTING.md) document.