Compare commits

..

12 commits

847 changed files with 20673 additions and 33819 deletions


@ -1,36 +0,0 @@
{
  "name": "Java",
  "image": "mcr.microsoft.com/devcontainers/java:0-17",
  "features": {
    "ghcr.io/devcontainers/features/java:1": {
      "version": "none",
      "installMaven": "true",
      "installGradle": "false"
    },
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "java -version",
  "customizations": {
    "vscode": {
      "extensions": [
        "vscjava.vscode-java-pack",
        "vscjava.vscode-maven",
        "vscjava.vscode-java-debug",
        "EditorConfig.EditorConfig",
        "ms-azuretools.vscode-docker",
        "antfu.vite",
        "ms-kubernetes-tools.vscode-kubernetes-tools",
        "github.vscode-pull-request-github"
      ]
    }
  }
}

.github/CODEOWNERS

@ -14,5 +14,5 @@
# TESTS
/kafka-ui-e2e-checks/ @provectus/kafka-qa
# INFRA
/.github/workflows/ @provectus/kafka-devops
# HELM CHARTS
/charts/ @provectus/kafka-devops


@ -1,92 +0,0 @@
name: "\U0001F41E Bug report"
description: File a bug report
labels: ["status/triage", "type/bug"]
assignees: []
body:
- type: markdown
attributes:
value: |
Hi, thanks for raising the issue(-s), all contributions really matter!
Please, note that we'll close the issue without further explanation if you don't follow
this template and don't provide the information requested within this template.
- type: checkboxes
id: terms
attributes:
label: Issue submitter TODO list
description: Checking these boxes lets us be sure you've done the essential things.
options:
- label: I've looked up my issue in [FAQ](https://docs.kafka-ui.provectus.io/faq/common-problems)
required: true
- label: I've searched for already existing issues [here](https://github.com/provectus/kafka-ui/issues)
required: true
- label: I've tried running `master`-labeled docker image and the issue still persists there
required: true
- label: I'm running a supported version of the application which is listed [here](https://github.com/provectus/kafka-ui/blob/master/SECURITY.md)
required: true
- type: textarea
attributes:
label: Describe the bug (actual behavior)
description: A clear and concise description of what the bug is. Use a list, if there is more than one problem
validations:
required: true
- type: textarea
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen
validations:
required: false
- type: textarea
attributes:
label: Your installation details
description: |
How do you run the app? Please provide as much info as possible:
1. App version (commit hash in the top left corner of the UI)
2. Helm chart version, if you use one
3. Your application config. Please remove the sensitive info like passwords or API keys.
4. Any IAAC configs
validations:
required: true
- type: textarea
attributes:
label: Steps to reproduce
description: |
Please write down the order of the actions required to reproduce the issue.
For the advanced setups/complicated issue, we might need you to provide
a minimal [reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).
validations:
required: true
- type: textarea
attributes:
label: Screenshots
description: |
If applicable, add screenshots to help explain your problem
validations:
required: false
- type: textarea
attributes:
label: Logs
description: |
If applicable, *upload* screenshots to help explain your problem
validations:
required: false
- type: textarea
attributes:
label: Additional context
description: |
Add any other context about the problem here. E.g.:
1. Are there any alternative scenarios (different data/methods/configuration/setup) you have tried?
Were they successful, or did the same issue occur? Please provide steps as well.
2. Related issues (if there are any).
3. Logs (if available)
4. Is there any serious impact or behaviour on the end-user because of this issue, that can be overlooked?
validations:
required: false

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@ -0,0 +1,62 @@
---
name: "\U0001F41E Bug report"
about: Create a bug report
title: ''
labels: status/triage, type/bug
assignees: ''
---
<!--
Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
https://github.com/provectus/kafka-ui/issues
https://github.com/provectus/kafka-ui/discussions
-->
<!--
Please follow the naming conventions for bugs:
<Feature/Area/Scope> : <Compact, but specific problem summary>
Avoid generic titles, like “Topics: incorrect layout of message sorting drop-down list”. Better use something like: “Topics: Message sorting drop-down list overlaps the "Submit" button”.
-->
**Describe the bug** (Actual behavior)
<!--(A clear and concise description of what the bug is. Use a list, if there is more than one problem)-->
**Expected behavior**
<!--(A clear and concise description of what you expected to happen.)-->
**Set up**
<!--
WE MIGHT CLOSE THE ISSUE without further explanation IF YOU DON'T PROVIDE THIS INFORMATION.
How do you run the app? Please provide as much info as possible:
1. App version (docker image version or check commit hash in the top left corner in UI)
2. Helm chart version, if you use one
3. Any IAAC configs
-->
**Steps to Reproduce**
<!-- We'd like you to provide an example setup (via docker-compose, helm, etc.)
to reproduce the problem, especially with a complex setups. -->
1.
**Screenshots**
<!--
(If applicable, add screenshots to help explain your problem)
-->
**Additional context**
<!--
Add any other context about the problem here. E.g.:
1. Are there any alternative scenarios (different data/methods/configuration/setup) you have tried?
Were they successful, or did the same issue occur? Please provide steps as well.
2. Related issues (if there are any).
3. Logs (if available)
4. Is there any serious impact or behaviour on the end-user because of this issue, that can be overlooked?
-->


@ -1,14 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Report helm issue
url: https://github.com/provectus/kafka-ui-charts
about: Our helm charts are located in another repo. Please raise issues/PRs regarding charts in that repo.
- name: Official documentation
url: https://docs.kafka-ui.provectus.io/
about: Before reaching out for support, please refer to our documentation. Read "FAQ" and "Common problems", also try using search there.
- name: Community Discord
url: https://discord.gg/4DWzD7pGE5
about: Chat with other users, get some support or ask questions.
- name: GitHub Discussions
url: https://github.com/provectus/kafka-ui/discussions
about: An alternative place to ask questions or to get some support.


@ -1,66 +0,0 @@
name: "\U0001F680 Feature request"
description: Propose a new feature
labels: ["status/triage", "type/feature"]
assignees: []
body:
- type: markdown
attributes:
value: |
Hi, thanks for raising the issue(-s), all contributions really matter!
Please, note that we'll close the issue without further explanation if you don't follow
this template and don't provide the information requested within this template.
- type: checkboxes
id: terms
attributes:
label: Issue submitter TODO list
description: Checking these boxes lets us be sure you've done the essential things.
options:
- label: I've searched for already existing issues [here](https://github.com/provectus/kafka-ui/issues)
required: true
- label: I'm running a supported version of the application which is listed [here](https://github.com/provectus/kafka-ui/blob/master/SECURITY.md) and the feature is not present there
required: true
- type: textarea
attributes:
label: Is your proposal related to a problem?
description: |
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
validations:
required: false
- type: textarea
attributes:
label: Describe the feature you're interested in
description: |
Provide a clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Describe alternatives you've considered
description: |
Let us know about other solutions you've tried or researched.
validations:
required: false
- type: input
attributes:
label: Version you're running
description: |
Please provide the app version you're currently running:
1. App version (commit hash in the top left corner of the UI)
validations:
required: true
- type: textarea
attributes:
label: Additional context
description: |
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
validations:
required: false


@ -0,0 +1,46 @@
---
name: "\U0001F680 Feature request"
about: Propose a new feature
title: ''
labels: status/triage, type/feature
assignees: ''
---
<!--
Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
https://github.com/provectus/kafka-ui/issues
https://github.com/provectus/kafka-ui/discussions
-->
### Which version of the app are you running?
<!-- Please provide the docker image version or check the commit hash in the top left corner of the UI -->
### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->

.github/ISSUE_TEMPLATE/k8s.md (new file)

@ -0,0 +1,52 @@
---
name: "⎈ K8s/Helm problem report"
about: Report a problem with k8s/helm charts/etc
title: ''
labels: scope/k8s, status/triage
assignees: azatsafin
---
<!--
Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
https://github.com/provectus/kafka-ui/issues
https://github.com/provectus/kafka-ui/discussions
-->
**Describe the bug**
<!--(A clear and concise description of what the bug is.)-->
**Set up**
<!--
How do you run the app? Please provide as much info as possible:
1. App version (docker image version or check commit hash in the top left corner in UI)
2. Helm chart version, if you use one
3. Any IAAC configs
We might close the issue without further explanation if you don't provide such information.
-->
**Steps to Reproduce**
Steps to reproduce the behavior:
1.
**Expected behavior**
<!--
(A clear and concise description of what you expected to happen)
-->
**Screenshots**
<!--
(If applicable, add screenshots to help explain your problem)
-->
**Additional context**
<!--
(Add any other context about the problem here)
-->

.github/ISSUE_TEMPLATE/question.md (new file)

@ -0,0 +1,16 @@
---
name: "❓ Question"
about: Ask a question
title: ''
---
<!--
To ask a question, please either:
1. Open up a discussion (https://github.com/provectus/kafka-ui/discussions)
2. Join us on discord (https://discord.gg/4DWzD7pGE5) and ask there.
Don't forget to check/search for existing issues/discussions.
-->


@ -8,6 +8,8 @@ updates:
timezone: Europe/Moscow
reviewers:
- "Haarolean"
assignees:
- "Haarolean"
labels:
- "scope/backend"
- "type/dependencies"
@ -97,6 +99,8 @@ updates:
timezone: Europe/Moscow
reviewers:
- "Haarolean"
assignees:
- "Haarolean"
labels:
- "scope/infrastructure"
- "type/dependencies"


@ -16,26 +16,18 @@ exclude-labels:
- 'type/refactoring'
categories:
- title: '🚩 Breaking Changes'
labels:
- 'impact/changelog'
- title: '⚙Features'
labels:
- 'type/feature'
- title: '🪛Enhancements'
labels:
- 'type/enhancement'
- title: '🔨Bug Fixes'
labels:
- 'type/bug'
- title: 'Security'
labels:
- 'type/security'
- title: '⎈ Helm/K8S Changes'
labels:
- 'scope/k8s'


@ -1,4 +1,4 @@
name: "Infra: Release: AWS Marketplace Publisher"
name: AWS Marketplace Publisher
on:
workflow_dispatch:
inputs:
@ -24,14 +24,14 @@ jobs:
- name: Clone infra repo
run: |
echo "Cloning repo..."
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch ${{ github.event.inputs.KafkaUIInfraBranch }}
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git --branch ${{ github.event.inputs.KafkaUIInfraBranch }}
echo "Cd to packer DIR..."
cd kafka-ui-infra/ami
echo "WORK_DIR=$(pwd)" >> $GITHUB_ENV
echo "Packer will be triggered in this dir $WORK_DIR"
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_AMI_PUBLISH_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_AMI_PUBLISH_KEY_SECRET }}
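For context on the `WORK_DIR=$(pwd)` line earlier in this hunk: values appended to the file behind `$GITHUB_ENV` become environment variables only for subsequent steps of the same job. A minimal sketch of that pattern (the `packer build` command is illustrative only, not part of the workflow):

```sh
# inside one step: export a value for the rest of the job
echo "WORK_DIR=$(pwd)" >> "$GITHUB_ENV"

# inside a later step of the same job: the value is a normal environment variable
cd "$WORK_DIR" && packer build .   # illustrative command
```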


@ -1,4 +1,4 @@
name: "Backend: PR/master build & test"
name: backend
on:
push:
branches:
@ -8,9 +8,6 @@ on:
paths:
- "kafka-ui-api/**"
- "pom.xml"
permissions:
checks: write
pull-requests: write
jobs:
build-and-test:
runs-on: ubuntu-latest
@ -32,7 +29,7 @@ jobs:
key: ${{ runner.os }}-sonar
restore-keys: ${{ runner.os }}-sonar
- name: Build and analyze pull request target
if: ${{ github.event_name == 'pull_request' }}
if: ${{ github.event_name == 'pull_request_target' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN_BACKEND }}


@ -1,4 +1,4 @@
name: "Infra: PR block merge"
name: Pull Request Labels
on:
pull_request:
types: [opened, labeled, unlabeled, synchronize]
@ -6,7 +6,7 @@ jobs:
block_merge:
runs-on: ubuntu-latest
steps:
- uses: mheap/github-action-required-labels@v5
- uses: mheap/github-action-required-labels@v2
with:
mode: exactly
count: 0


@ -1,4 +1,4 @@
name: "Infra: Feature Testing: Init env"
name: DeployFromBranch
on:
workflow_dispatch:
@ -10,8 +10,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: get branch name
id: extract_branch
run: |
@ -45,7 +43,7 @@ jobs:
restore-keys: |
${{ runner.os }}-buildx-
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@ -55,7 +53,7 @@ jobs:
uses: aws-actions/amazon-ecr-login@v1
- name: Build and push
id: docker_build_and_push
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
@ -73,33 +71,29 @@ jobs:
steps:
- name: clone
run: |
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
- name: create deployment
run: |
cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
echo "Branch:${{ needs.build.outputs.tag }}"
./kafka-ui-deployment-from-branch.sh ${{ needs.build.outputs.tag }} ${{ github.event.label.name }} ${{ secrets.FEATURE_TESTING_UI_PASSWORD }}
git config --global user.email "infra-tech@provectus.com"
git config --global user.name "infra-tech"
git config --global user.email "kafka-ui-infra@provectus.com"
git config --global user.name "kafka-ui-infra"
git add ../kafka-ui-from-branch/
git commit -m "added env:${{ needs.build.outputs.deploy }}" && git push || true
- name: update status check for private deployment
- name: make comment with private deployment link
if: ${{ github.event.label.name == 'status/feature_testing' }}
uses: Sibz/github-status-action@v1.1.6
uses: peter-evans/create-or-update-comment@v2
with:
authToken: ${{secrets.GITHUB_TOKEN}}
context: "Click Details button to open custom deployment page"
state: "success"
sha: ${{ github.event.pull_request.head.sha || github.sha }}
target_url: "http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io"
issue-number: ${{ github.event.pull_request.number }}
body: |
Custom deployment will be available at http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io
- name: update status check for public deployment
- name: make comment with public deployment link
if: ${{ github.event.label.name == 'status/feature_testing_public' }}
uses: Sibz/github-status-action@v1.1.6
uses: peter-evans/create-or-update-comment@v2
with:
authToken: ${{secrets.GITHUB_TOKEN}}
context: "Click Details button to open custom deployment page"
state: "success"
sha: ${{ github.event.pull_request.head.sha || github.sha }}
target_url: "http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io"
issue-number: ${{ github.event.pull_request.number }}
body: |
Custom deployment will be available at http://${{ needs.build.outputs.tag }}.kafka-ui.provectus.io in 5 minutes


@ -1,4 +1,4 @@
name: "Infra: Feature Testing: Destroy env"
name: RemoveCustomDeployment
on:
workflow_dispatch:
pull_request:
@ -11,12 +11,18 @@ jobs:
- uses: actions/checkout@v3
- name: clone
run: |
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
- name: remove env
run: |
cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
./delete-env.sh pr${{ github.event.pull_request.number }} || true
git config --global user.email "infra-tech@provectus.com"
git config --global user.name "infra-tech"
git config --global user.email "kafka-ui-infra@provectus.com"
git config --global user.name "kafka-ui-infra"
git add ../kafka-ui-from-branch/
git commit -m "removed env:${{ needs.build.outputs.deploy }}" && git push || true
- name: make comment with deployment link
uses: peter-evans/create-or-update-comment@v2
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
Custom deployment removed


@ -1,4 +1,4 @@
name: "Infra: Image Testing: Deploy"
name: Build Docker image and push
on:
workflow_dispatch:
pull_request:
@ -9,8 +9,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: get branch name
id: extract_branch
run: |
@ -42,7 +40,7 @@ jobs:
restore-keys: |
${{ runner.os }}-buildx-
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@ -54,7 +52,7 @@ jobs:
registry-type: 'public'
- name: Build and push
id: docker_build_and_push
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
@ -65,10 +63,11 @@ jobs:
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: make comment with private deployment link
uses: peter-evans/create-or-update-comment@v3
uses: peter-evans/create-or-update-comment@v2
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
Image published at public.ecr.aws/provectus/kafka-ui-custom-build:${{ steps.extract_branch.outputs.tag }}
outputs:
tag: ${{ steps.extract_branch.outputs.tag }}


@ -0,0 +1,28 @@
name: prepare-helm-release
on:
repository_dispatch:
types: [prepare-helm-release]
jobs:
change-app-version:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: |
git config user.name github-actions
git config user.email github-actions@github.com
- name: Change versions
run: |
git checkout -b release-${{ github.event.client_payload.appversion}}
version=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
version=${version%.*}.$((${version##*.}+1))
sed -i "s/version:.*/version: ${version}/" charts/kafka-ui/Chart.yaml
sed -i "s/appVersion:.*/appVersion: ${{ github.event.client_payload.appversion}}/" charts/kafka-ui/Chart.yaml
git add charts/kafka-ui/Chart.yaml
git commit -m "release ${version}"
git push --set-upstream origin release-${{ github.event.client_payload.appversion}}
- name: Slack Notification
uses: rtCamp/action-slack-notify@v2
env:
SLACK_TITLE: "release-${{ github.event.client_payload.appversion}}"
SLACK_MESSAGE: "A new release of the helm chart has been prepared. Branch name: release-${{ github.event.client_payload.appversion}}"
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
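The patch-bump one-liner in the "Change versions" step above relies on two shell parameter expansions; a standalone sketch with an illustrative starting value:

```sh
version="0.5.3"                               # illustrative value read from Chart.yaml
echo "${version%.*}"                          # -> 0.5   (strip the shortest trailing ".X")
echo "${version##*.}"                         # -> 3     (keep only the part after the last dot)
version=${version%.*}.$((${version##*.}+1))   # reassemble with the patch number incremented
echo "$version"                               # -> 0.5.4
```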


@ -40,7 +40,7 @@ jobs:
${{ runner.os }}-buildx-
- name: Build docker image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
@ -55,7 +55,7 @@ jobs:
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Run CVE checks
uses: aquasecurity/trivy-action@0.12.0
uses: aquasecurity/trivy-action@0.8.0
with:
image-ref: "provectuslabs/kafka-ui:${{ steps.build.outputs.version }}"
format: "table"


@ -1,4 +1,4 @@
name: "Infra: Image Testing: Delete"
name: Delete Public ECR Image
on:
workflow_dispatch:
pull_request:
@ -15,7 +15,7 @@ jobs:
tag='${{ github.event.pull_request.number }}'
echo "tag=${tag}" >> $GITHUB_OUTPUT
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@ -32,3 +32,9 @@ jobs:
--repository-name kafka-ui-custom-build \
--image-ids imageTag=${{ steps.extract_branch.outputs.tag }} \
--region us-east-1
- name: make comment with private deployment link
uses: peter-evans/create-or-update-comment@v2
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
Image tag public.ecr.aws/provectus/kafka-ui-custom-build:${{ steps.extract_branch.outputs.tag }} has been removed


@ -1,4 +1,4 @@
name: "Infra: Docs: URL linter"
name: Documentation
on:
pull_request:
types:


@ -1,88 +0,0 @@
name: "E2E: Automation suite"
on:
workflow_dispatch:
inputs:
test_suite:
description: 'Select test suite to run'
default: 'regression'
required: true
type: choice
options:
- regression
- sanity
- smoke
qase_token:
description: 'Set Qase token to enable integration'
required: false
type: string
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.sha }}
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v3
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-central-1
- name: Set up environment
id: set_env_values
run: |
cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
- name: Pull with Docker
id: pull_chrome
run: |
docker pull selenoid/vnc_chrome:103.0
- name: Set up JDK
uses: actions/setup-java@v3
with:
java-version: '17'
distribution: 'zulu'
cache: 'maven'
- name: Build with Maven
id: build_app
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
- name: Compose with Docker
id: compose_app
# use the following command until #819 is fixed
run: |
docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
- name: Run test suite
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ github.event.inputs.qase_token }} -Dsurefire.suiteXmlFiles='src/test/resources/${{ github.event.inputs.test_suite }}.xml' -Dsuite=${{ github.event.inputs.test_suite }} -f 'kafka-ui-e2e-checks' test -Pprod
- name: Generate Allure report
uses: simple-elf/allure-report-action@master
if: always()
id: allure-report
with:
allure_results: ./kafka-ui-e2e-checks/allure-results
gh_pages: allure-results
allure_report: allure-report
subfolder: allure-results
report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
- uses: jakejarvis/s3-sync-action@master
if: always()
env:
AWS_S3_BUCKET: 'kafkaui-allure-reports'
AWS_REGION: 'eu-central-1'
SOURCE_DIR: 'allure-history/allure-results'
- name: Deploy report to Amazon S3
if: always()
uses: Sibz/github-status-action@v1.1.6
with:
authToken: ${{secrets.GITHUB_TOKEN}}
context: "Click Details button to open Allure report"
state: "success"
sha: ${{ github.sha }}
target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}
- name: Dump Docker logs on failure
if: failure()
uses: jwalton/gh-docker-logs@v2.2.1
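For reference, the build-and-run steps of the workflow removed in this hunk boil down to roughly the following local sequence (a sketch only; it assumes Docker, docker-compose and a JDK 17 toolchain are available, and the suite name is a placeholder for regression/sanity/smoke):

```sh
# start the Selenoid browser grid and the application under test
docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
docker-compose -f ./documentation/compose/e2e-tests.yaml up -d

# build once, skipping unit tests
./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true

# run one of the suites defined under kafka-ui-e2e-checks/src/test/resources
SUITE=smoke   # placeholder: regression | sanity | smoke
./mvnw -B -V -ntp -Dsurefire.suiteXmlFiles="src/test/resources/${SUITE}.xml" \
  -Dsuite="${SUITE}" -f kafka-ui-e2e-checks test -Pprod
```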


@ -1,15 +1,13 @@
name: "E2E: PR healthcheck"
name: e2e-checks
on:
pull_request_target:
types: [ "opened", "edited", "reopened", "synchronize" ]
types: ["opened", "edited", "reopened", "synchronize"]
paths:
- "kafka-ui-api/**"
- "kafka-ui-contract/**"
- "kafka-ui-react-app/**"
- "kafka-ui-e2e-checks/**"
- "pom.xml"
permissions:
statuses: write
jobs:
build-and-test:
runs-on: ubuntu-latest
@ -17,20 +15,14 @@ jobs:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v3
with:
aws-access-key-id: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
aws-region: eu-central-1
- name: Set up environment
- name: Set the values
id: set_env_values
run: |
cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
- name: Pull with Docker
- name: pull docker
id: pull_chrome
run: |
docker pull selenoid/vnc_chrome:103.0
docker pull selenium/standalone-chrome:103.0
- name: Set up JDK
uses: actions/setup-java@v3
with:
@ -41,17 +33,16 @@ jobs:
id: build_app
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.event.pull_request.head.sha }}
./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
- name: Compose with Docker
./mvnw -B -V -ntp clean package -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
- name: compose app
id: compose_app
# use the following command until #819 is fixed
run: |
docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
docker-compose -f ./documentation/compose/e2e-tests.yaml up -d && until [ "$(docker exec kafka-ui wget --spider --server-response http://localhost:8080/actuator/health 2>&1 | grep -c 'HTTP/1.1 200 OK')" == "1" ]; do echo "Waiting for kafka-ui ..." && sleep 1; done
- name: Run test suite
docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
- name: e2e run
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.event.pull_request.head.sha }}
./mvnw -B -V -ntp -Dsurefire.suiteXmlFiles='src/test/resources/smoke.xml' -f 'kafka-ui-e2e-checks' test -Pprod
./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ secrets.QASEIO_API_TOKEN }} -pl '!kafka-ui-api' test -Pprod
- name: Generate allure report
uses: simple-elf/allure-report-action@master
if: always()
@ -61,19 +52,20 @@ jobs:
gh_pages: allure-results
allure_report: allure-report
subfolder: allure-results
report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
- uses: jakejarvis/s3-sync-action@master
if: always()
env:
AWS_S3_BUCKET: 'kafkaui-allure-reports'
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: 'eu-central-1'
SOURCE_DIR: 'allure-history/allure-results'
- name: Deploy report to Amazon S3
- name: Post the link to allure report
if: always()
uses: Sibz/github-status-action@v1.1.6
with:
authToken: ${{secrets.GITHUB_TOKEN}}
context: "Click Details button to open Allure report"
context: "Test report"
state: "success"
sha: ${{ github.event.pull_request.head.sha || github.sha }}
target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}


@ -1,43 +0,0 @@
name: "E2E: Manual suite"
on:
workflow_dispatch:
inputs:
test_suite:
description: 'Select test suite to run'
default: 'manual'
required: true
type: choice
options:
- manual
- qase
qase_token:
description: 'Set Qase token to enable integration'
required: true
type: string
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.sha }}
- name: Set up environment
id: set_env_values
run: |
cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
- name: Set up JDK
uses: actions/setup-java@v3
with:
java-version: '17'
distribution: 'zulu'
cache: 'maven'
- name: Build with Maven
id: build_app
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
- name: Run test suite
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ github.event.inputs.qase_token }} -Dsurefire.suiteXmlFiles='src/test/resources/${{ github.event.inputs.test_suite }}.xml' -Dsuite=${{ github.event.inputs.test_suite }} -f 'kafka-ui-e2e-checks' test -Pprod


@ -1,75 +0,0 @@
name: "E2E: Weekly suite"
on:
schedule:
- cron: '0 1 * * 1'
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.sha }}
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v3
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-central-1
- name: Set up environment
id: set_env_values
run: |
cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
- name: Pull with Docker
id: pull_chrome
run: |
docker pull selenoid/vnc_chrome:103.0
- name: Set up JDK
uses: actions/setup-java@v3
with:
java-version: '17'
distribution: 'zulu'
cache: 'maven'
- name: Build with Maven
id: build_app
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
- name: Compose with Docker
id: compose_app
# use the following command until #819 is fixed
run: |
docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
- name: Run test suite
run: |
./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ secrets.QASEIO_API_TOKEN }} -Dsurefire.suiteXmlFiles='src/test/resources/sanity.xml' -Dsuite=weekly -f 'kafka-ui-e2e-checks' test -Pprod
- name: Generate Allure report
uses: simple-elf/allure-report-action@master
if: always()
id: allure-report
with:
allure_results: ./kafka-ui-e2e-checks/allure-results
gh_pages: allure-results
allure_report: allure-report
subfolder: allure-results
report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
- uses: jakejarvis/s3-sync-action@master
if: always()
env:
AWS_S3_BUCKET: 'kafkaui-allure-reports'
AWS_REGION: 'eu-central-1'
SOURCE_DIR: 'allure-history/allure-results'
- name: Deploy report to Amazon S3
if: always()
uses: Sibz/github-status-action@v1.1.6
with:
authToken: ${{secrets.GITHUB_TOKEN}}
context: "Click Details button to open Allure report"
state: "success"
sha: ${{ github.sha }}
target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}
- name: Dump Docker logs on failure
if: failure()
uses: jwalton/gh-docker-logs@v2.2.1


@ -1,4 +1,4 @@
name: "Frontend: PR/master build & test"
name: frontend
on:
push:
branches:
@ -8,9 +8,6 @@ on:
paths:
- "kafka-ui-contract/**"
- "kafka-ui-react-app/**"
permissions:
checks: write
pull-requests: write
jobs:
build-and-test:
env:
@ -23,13 +20,13 @@ jobs:
# Disabling shallow clone is recommended for improving relevancy of reporting
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- uses: pnpm/action-setup@v2.4.0
- uses: pnpm/action-setup@v2.2.4
with:
version: 8.6.12
version: 7.4.0
- name: Install node
uses: actions/setup-node@v3.8.1
uses: actions/setup-node@v3.5.1
with:
node-version: "18.17.1"
node-version: "16.15.0"
cache: "pnpm"
cache-dependency-path: "./kafka-ui-react-app/pnpm-lock.yaml"
- name: Install Node dependencies
@ -49,7 +46,7 @@ jobs:
cd kafka-ui-react-app/
pnpm test:CI
- name: SonarCloud Scan
uses: sonarsource/sonarcloud-github-action@master
uses: workshur/sonarcloud-github-action@improved_basedir
with:
projectBaseDir: ./kafka-ui-react-app
args: -Dsonar.pullrequest.key=${{ github.event.pull_request.number }} -Dsonar.pullrequest.branch=${{ github.head_ref }} -Dsonar.pullrequest.base=${{ github.base_ref }}

.github/workflows/helm.yaml (new file)

@ -0,0 +1,38 @@
name: Helm
on:
pull_request:
types: ["opened", "edited", "reopened", "synchronize"]
branches:
- 'master'
paths:
- "charts/**"
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Helm tool installer
uses: Azure/setup-helm@v3
- name: Setup Kubeval
uses: lra/setup-kubeval@v1.0.1
# check whether the helm chart version in Chart.yaml was increased
- name: Check version
shell: bash
run: |
helm_version_new=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
helm_version_old=$(curl -s https://raw.githubusercontent.com/provectus/kafka-ui/master/charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}' )
echo $helm_version_old
echo $helm_version_new
if [[ "$helm_version_new" > "$helm_version_old" ]]; then exit 0 ; else exit 1 ; fi
- name: Run kubeval
shell: bash
run: |
sed -i "s@enabled: false@enabled: true@g" charts/kafka-ui/values.yaml
K8S_VERSIONS=$(git ls-remote --refs --tags https://github.com/kubernetes/kubernetes.git | cut -d/ -f3 | grep -e '^v1\.[0-9]\{2\}\.[0]\{1,2\}$' | grep -v -e '^v1\.1[0-7]\{1\}' | cut -c2-)
echo "NEXT K8S VERSIONS ARE GOING TO BE TESTED: $K8S_VERSIONS"
echo ""
for version in $K8S_VERSIONS
do
echo $version;
helm template --kube-version $version --set ingress.enabled=true charts/kafka-ui -f charts/kafka-ui/values.yaml | kubeval --additional-schema-locations https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master --strict -v $version;
done
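One caveat worth noting about the version check above: `[[ "$helm_version_new" > "$helm_version_old" ]]` compares the two values as strings, so a bump such as `0.9.0` to `0.10.0` would be rejected. A version-aware comparison (a sketch, not part of the workflow) could lean on `sort -V`:

```sh
helm_version_old="0.9.0"    # illustrative values
helm_version_new="0.10.0"

# sort -V understands version ordering; the new version must sort last and differ from the old one
if [ "$(printf '%s\n' "$helm_version_old" "$helm_version_new" | sort -V | tail -n1)" = "$helm_version_new" ] \
   && [ "$helm_version_new" != "$helm_version_old" ]; then
  echo "chart version was increased"
else
  echo "chart version was NOT increased"; exit 1
fi
```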


@ -1,4 +1,4 @@
name: "Master: Build & deploy"
name: Master
on:
workflow_dispatch:
push:
@ -9,8 +9,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up JDK
uses: actions/setup-java@v3
@ -53,12 +51,11 @@ jobs:
- name: Build and push
id: docker_build_and_push
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
platforms: linux/amd64,linux/arm64
provenance: false
push: true
tags: |
provectuslabs/kafka-ui:${{ steps.build.outputs.version }}
@ -74,11 +71,11 @@ jobs:
#################################
- name: update-master-deployment
run: |
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch master
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git --branch master
cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
echo "Image digest is:${{ steps.docker_build_and_push.outputs.digest }}"
./kafka-ui-update-master-digest.sh ${{ steps.docker_build_and_push.outputs.digest }}
git config --global user.email "infra-tech@provectus.com"
git config --global user.name "infra-tech"
git config --global user.email "kafka-ui-infra@provectus.com"
git config --global user.name "kafka-ui-infra"
git add ../kafka-ui/*
git commit -m "updated master image digest: ${{ steps.docker_build_and_push.outputs.digest }}" && git push


@ -1,14 +1,13 @@
name: "PR: Checklist linter"
name: "PR Checklist checked"
on:
pull_request_target:
types: [opened, edited, synchronize, reopened]
permissions:
checks: write
jobs:
task-check:
runs-on: ubuntu-latest
steps:
- uses: kentaro-m/task-completed-checker-action@v0.1.2
- uses: kentaro-m/task-completed-checker-action@v0.1.0
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
- uses: dekinderfiets/pr-description-enforcer@0.0.1

.github/workflows/release-helm.yaml (new file)

@ -0,0 +1,39 @@
name: Release helm
on:
push:
branches:
- master
paths:
- "charts/**"
jobs:
release-helm:
runs-on:
ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 1
- run: |
git config user.name github-actions
git config user.email github-actions@github.com
- uses: azure/setup-helm@v3
- name: add chart # release helm chart with new version
run: |
VERSION=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
echo "HELM_VERSION=$(echo ${VERSION})" >> $GITHUB_ENV
MSG=$(helm package charts/kafka-ui)
git fetch origin
git stash
git checkout -b gh-pages origin/gh-pages
git pull
helm repo index .
git add -f ${MSG##*/} index.yaml
git commit -m "release ${VERSION}"
git push
- uses: rickstaa/action-create-tag@v1 #create new tag
with:
tag: "charts/kafka-ui-${{ env.HELM_VERSION }}"


@ -1,4 +1,4 @@
name: "Infra: Release: Serde API"
name: Release-serde-api
on: workflow_dispatch
jobs:


@ -1,4 +1,4 @@
name: "Infra: Release"
name: Release
on:
release:
types: [published]
@ -12,7 +12,6 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- run: |
git config user.name github-actions
@ -34,7 +33,7 @@ jobs:
echo "version=${VERSION}" >> $GITHUB_OUTPUT
- name: Upload files to a GitHub release
uses: svenstaro/upload-release-action@2.7.0
uses: svenstaro/upload-release-action@2.3.0
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: kafka-ui-api/target/kafka-ui-api-${{ steps.build.outputs.version }}.jar
@ -72,12 +71,11 @@ jobs:
- name: Build and push
id: docker_build_and_push
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
platforms: linux/amd64,linux/arm64
provenance: false
push: true
tags: |
provectuslabs/kafka-ui:${{ steps.build.outputs.version }}
@ -89,12 +87,14 @@ jobs:
charts:
runs-on: ubuntu-latest
permissions:
contents: write
needs: release
steps:
- name: Repository Dispatch
uses: peter-evans/repository-dispatch@v2
with:
token: ${{ secrets.CHARTS_ACTIONS_TOKEN }}
repository: provectus/kafka-ui-charts
token: ${{ secrets.GITHUB_TOKEN }}
repository: provectus/kafka-ui
event-type: prepare-helm-release
client-payload: '{"appversion": "${{ needs.release.outputs.version }}"}'
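This `repository-dispatch` step is what fires the `prepare-helm-release` event consumed by the workflow shown earlier in this diff. For reference, the same event could be triggered by hand with the GitHub CLI (target repository and version value are illustrative):

```sh
# send a repository_dispatch event; client_payload.appversion becomes
# github.event.client_payload.appversion in the receiving workflow
gh api repos/provectus/kafka-ui/dispatches \
  --method POST \
  -f event_type=prepare-helm-release \
  -f 'client_payload[appversion]=v0.7.0'
```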


@ -1,34 +1,19 @@
name: "Infra: Release Drafter run"
name: Release Drafter
on:
push:
# branches to consider in the event; optional, defaults to all
branches:
- master
workflow_dispatch:
inputs:
version:
description: 'Release version'
required: false
branch:
description: 'Target branch'
required: false
default: 'master'
permissions:
contents: read
jobs:
update_release_draft:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- uses: release-drafter/release-drafter@v5
with:
config-name: release_drafter.yaml
disable-autolabeler: true
version: ${{ github.event.inputs.version }}
commitish: ${{ github.event.inputs.branch }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@ -1,4 +1,4 @@
name: "Infra: Feature Testing Public: Init env"
name: Separate environment create
on:
workflow_dispatch:
inputs:
@ -12,8 +12,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: get branch name
id: extract_branch
run: |
@ -47,7 +45,7 @@ jobs:
restore-keys: |
${{ runner.os }}-buildx-
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@ -57,7 +55,7 @@ jobs:
uses: aws-actions/amazon-ecr-login@v1
- name: Build and push
id: docker_build_and_push
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: kafka-ui-api
@ -76,14 +74,14 @@ jobs:
steps:
- name: clone
run: |
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
- name: separate env create
run: |
cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
bash separate_env_create.sh ${{ github.event.inputs.ENV_NAME }} ${{ secrets.FEATURE_TESTING_UI_PASSWORD }} ${{ needs.build.outputs.tag }}
git config --global user.email "infra-tech@provectus.com"
git config --global user.name "infra-tech"
git config --global user.email "kafka-ui-infra@provectus.com"
git config --global user.name "kafka-ui-infra"
git add -A
git commit -m "separate env added: ${{ github.event.inputs.ENV_NAME }}" && git push || true


@ -1,4 +1,4 @@
name: "Infra: Feature Testing Public: Destroy env"
name: Separate environment remove
on:
workflow_dispatch:
inputs:
@ -13,12 +13,12 @@ jobs:
steps:
- name: clone
run: |
git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
- name: separate environment remove
run: |
cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
bash separate_env_remove.sh ${{ github.event.inputs.ENV_NAME }}
git config --global user.email "infra-tech@provectus.com"
git config --global user.name "infra-tech"
git config --global user.email "kafka-ui-infra@provectus.com"
git config --global user.name "kafka-ui-infra"
git add -A
git commit -m "separate env removed: ${{ github.event.inputs.ENV_NAME }}" && git push || true


@ -1,4 +1,4 @@
name: 'Infra: Close stale issues'
name: 'Close stale issues'
on:
schedule:
- cron: '30 1 * * *'
@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
- uses: actions/stale@v6
with:
days-before-issue-stale: 7
days-before-issue-close: 3


@ -1,4 +1,4 @@
name: "Infra: Terraform deploy"
name: terraform_deploy
on:
workflow_dispatch:
inputs:
@ -26,7 +26,7 @@ jobs:
echo "Terraform will be triggered in this dir $TF_DIR"
- name: Configure AWS credentials for Kafka-UI account
uses: aws-actions/configure-aws-credentials@v3
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}


@ -1,4 +1,4 @@
name: "Infra: Triage: Apply triage label for issues"
name: Add triage label to new issues
on:
issues:
types:


@ -1,4 +1,4 @@
name: "Infra: Triage: Apply triage label for PRs"
name: Add triage label to new PRs
on:
pull_request:
types:


@ -7,9 +7,7 @@ on:
issues:
types:
- opened
permissions:
issues: write
pull-requests: write
jobs:
welcome:
runs-on: ubuntu-latest


@ -1,4 +1,4 @@
name: "Infra: Workflow linter"
name: "Workflow linter"
on:
pull_request:
types:

.gitignore

@ -31,9 +31,6 @@ build/
.vscode/
/kafka-ui-api/app/node
### SDKMAN ###
.sdkmanrc
.DS_Store
*.code-workspace


@ -1,5 +1,3 @@
This guide is an exact copy of the same document located [in our official docs](https://docs.kafka-ui.provectus.io/development/contributing). If there are any differences between the documents, the one located in our official docs should prevail.
This guide aims to walk you through the process of working on issues and Pull Requests (PRs).
Bear in mind that you will not be able to complete some steps on your own if you do not have a “write” permission. Feel free to reach out to the maintainers to help you unlock these activities.
@ -22,7 +20,7 @@ You also need to consider labels. You can sort the issues by scope labels, such
## Grabbing the issue
There is a bunch of criteria that make an issue feasible for development. <br/>
The implementation of any features and/or their enhancements should be reasonable, must be backed by justified requirements (demanded by the community, [roadmap](https://docs.kafka-ui.provectus.io/project/roadmap) plans, etc.). The final decision is left for the maintainers' discretion.
The implementation of any features and/or their enhancements should be reasonable, must be backed by justified requirements (demanded by the community, [roadmap](documentation/project/ROADMAP.md) plans, etc.). The final decision is left for the maintainers' discretion.
All bugs should be confirmed as such (i.e. the behavior is unintended).
@ -41,7 +39,7 @@ To keep the status of the issue clear to everyone, please keep the card's status
## Setting up a local development environment
Please refer to [this guide](https://docs.kafka-ui.provectus.io/development/contributing).
Please refer to [this guide](documentation/project/contributing/README.md).
# Pull Requests

README.md

@ -1,35 +1,21 @@
![UI for Apache Kafka logo](documentation/images/kafka-ui-logo.png) UI for Apache Kafka&nbsp;
------------------
#### Versatile, fast and lightweight web UI for managing Apache Kafka® clusters. Built by developers, for developers.
<br/>
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/provectus/kafka-ui/blob/master/LICENSE)
![UI for Apache Kafka Price Free](documentation/images/free-open-source.svg)
[![Release version](https://img.shields.io/github/v/release/provectus/kafka-ui)](https://github.com/provectus/kafka-ui/releases)
[![Chat with us](https://img.shields.io/discord/897805035122077716)](https://discord.gg/4DWzD7pGE5)
[![Docker pulls](https://img.shields.io/docker/pulls/provectuslabs/kafka-ui)](https://hub.docker.com/r/provectuslabs/kafka-ui)
<p align="center">
<a href="https://docs.kafka-ui.provectus.io/">DOCS</a>
<a href="https://docs.kafka-ui.provectus.io/configuration/quick-start">QUICK START</a>
<a href="https://discord.gg/4DWzD7pGE5">COMMUNITY DISCORD</a>
<br/>
<a href="https://aws.amazon.com/marketplace/pp/prodview-ogtt5hfhzkq6a">AWS Marketplace</a>
<a href="https://www.producthunt.com/products/ui-for-apache-kafka/reviews/new">ProductHunt</a>
</p>
<p align="center">
<img src="https://repobeats.axiom.co/api/embed/2e8a7c2d711af9daddd34f9791143e7554c35d0f.svg" />
</p>
#### UI for Apache Kafka is a free, open-source web UI to monitor and manage Apache Kafka clusters.
UI for Apache Kafka is a simple tool that makes your data flows observable, helps find and troubleshoot issues faster and deliver optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.
### DISCLAIMER
<em>UI for Apache Kafka is a free tool built and supported by the open-source community. Curated by Provectus, it will remain free and open-source, without any paid features or subscription plans to be added in the future.
Looking for the help of Kafka experts? Provectus can help you design, build, deploy, and manage Apache Kafka clusters and streaming applications. Discover [Professional Services for Apache Kafka](https://provectus.com/professional-services-apache-kafka/), to unlock the full potential of Kafka in your enterprise! </em>
#### UI for Apache Kafka is a free, open-source web UI to monitor and manage Apache Kafka clusters.
UI for Apache Kafka is a simple tool that makes your data flows observable, helps find and troubleshoot issues faster and deliver optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.
Set up UI for Apache Kafka with just a couple of easy commands to visualize your Kafka data in a comprehensible way. You can run the tool locally or in
the cloud.
@ -43,10 +29,10 @@ the cloud.
* **View Consumer Groups** — view per-partition parked offsets, combined and per-partition lag
* **Browse Messages** — browse messages with JSON, plain text, and Avro encoding
* **Dynamic Topic Configuration** — create and configure new topics with dynamic configuration
* **Configurable Authentication** — [secure](https://docs.kafka-ui.provectus.io/configuration/authentication) your installation with optional Github/Gitlab/Google OAuth 2.0
* **Custom serialization/deserialization plugins** - [use](https://docs.kafka-ui.provectus.io/configuration/serialization-serde) a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
* **Role based access control** - [manage permissions](https://docs.kafka-ui.provectus.io/configuration/rbac-role-based-access-control) to access the UI with granular precision
* **Data masking** - [obfuscate](https://docs.kafka-ui.provectus.io/configuration/data-masking) sensitive data in topic messages
* **Configurable Authentication** — secure your installation with optional Github/Gitlab/Google OAuth 2.0
* **Custom serialization/deserialization plugins** - use a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
* **Role based access control** - [manage permissions](https://github.com/provectus/kafka-ui/wiki/RBAC-(role-based-access-control)) to access the UI with granular precision
* **Data masking** - [obfuscate](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/DataMasking.md) sensitive data in topic messages
# The Interface
UI for Apache Kafka wraps major functions of Apache Kafka with an intuitive user interface.
@ -74,68 +60,157 @@ There are 3 supported types of schemas: Avro®, JSON Schema, and Protobuf schema
![Create Schema Registry](documentation/images/Create_schema.gif)
Before producing avro/protobuf encoded messages, you have to add a schema for the topic in Schema Registry. Now all these steps are easy to do
Before producing avro-encoded messages, you have to add an avro schema for the topic in Schema Registry. Now all these steps are easy to do
with a few clicks in a user-friendly interface.
![Avro Schema Topic](documentation/images/Schema_Topic.gif)
# Getting Started
To run UI for Apache Kafka, you can use either a pre-built Docker image or build it (or a jar file) yourself.
To run UI for Apache Kafka, you can use a pre-built Docker image or build it locally.
## Quick start (Demo run)
## Configuration
```
docker run -it -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui
We have plenty of [docker-compose files](documentation/compose/DOCKER_COMPOSE.md) as examples. They're built for various configuration stacks.
# Guides
- [SSO configuration](documentation/guides/SSO.md)
- [AWS IAM configuration](documentation/guides/AWS_IAM.md)
- [Docker-compose files](documentation/compose/DOCKER_COMPOSE.md)
- [Connection to a secure broker](documentation/guides/SECURE_BROKER.md)
- [Configure serialization/deserialization plugins or code your own](documentation/guides/Serialization.md)
### Configuration File
Example of how to configure clusters in the [application-local.yml](https://github.com/provectus/kafka-ui/blob/master/kafka-ui-api/src/main/resources/application-local.yml) configuration file:
```sh
kafka:
  clusters:
    -
      name: local
      bootstrapServers: localhost:29091
      schemaRegistry: http://localhost:8085
      schemaRegistryAuth:
        username: username
        password: password
      # schemaNameTemplate: "%s-value"
      metrics:
        port: 9997
        type: JMX
    -
```
Then access the web UI at [http://localhost:8080](http://localhost:8080)
* `name`: cluster name
* `bootstrapServers`: where to connect
* `schemaRegistry`: schemaRegistry's address
* `schemaRegistryAuth.username`: schemaRegistry's basic authentication username
* `schemaRegistryAuth.password`: schemaRegistry's basic authentication password
* `schemaNameTemplate`: how keys are saved to schemaRegistry
* `metrics.port`: open JMX port of a broker
* `metrics.type`: Type of metrics, either JMX or PROMETHEUS. Defaulted to JMX.
* `readOnly`: enable read only mode
The command is sufficient to try things out. When you're done trying things out, you can proceed with a [persistent installation](https://docs.kafka-ui.provectus.io/quick-start/persistent-start)
Configure as many clusters as you need by adding their configs below separated with `-`.
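A minimal sketch of what that looks like in practice, with two clusters whose names and addresses are purely illustrative:

```sh
cat > config.yml <<'EOF'
kafka:
  clusters:
    -
      name: local
      bootstrapServers: localhost:29091
    -
      name: staging
      bootstrapServers: staging-kafka:9092
EOF
```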
## Persistent installation
## Running a Docker Image
The official Docker image for UI for Apache Kafka is hosted here: [hub.docker.com/r/provectuslabs/kafka-ui](https://hub.docker.com/r/provectuslabs/kafka-ui).
Launch Docker container in the background:
```sh
docker run -p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
-d provectuslabs/kafka-ui:latest
```
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
    volumes:
      - ~/kui/config.yml:/etc/kafkaui/dynamic_config.yaml
```
Then access the web UI at [http://localhost:8080](http://localhost:8080).
Further configuration with environment variables - [see environment variables](#env_variables)
Please refer to our [configuration](https://docs.kafka-ui.provectus.io/configuration/quick-start) page to proceed with further app configuration.
### Docker Compose
## Some useful configuration related links
If you prefer to use `docker-compose` please refer to the [documentation](docker-compose.md).
[Web UI Cluster Configuration Wizard](https://docs.kafka-ui.provectus.io/configuration/configuration-wizard)
### Helm chart
Helm chart could be found under [charts/kafka-ui](https://github.com/provectus/kafka-ui/tree/master/charts/kafka-ui) directory
[Configuration file explanation](https://docs.kafka-ui.provectus.io/configuration/configuration-file)
Quick-start instruction [here](helm_chart.md)
[Docker Compose examples](https://docs.kafka-ui.provectus.io/configuration/compose-examples)
## Building With Docker
[Misc configuration properties](https://docs.kafka-ui.provectus.io/configuration/misc-configuration-properties)
### Prerequisites
## Helm charts
Check [prerequisites.md](documentation/project/contributing/prerequisites.md)
[Quick start](https://docs.kafka-ui.provectus.io/configuration/helm-charts/quick-start)
### Building and Running
## Building from sources
Check [building.md](documentation/project/contributing/building.md)
[Quick start](https://docs.kafka-ui.provectus.io/development/building/prerequisites) with building
## Building Without Docker
### Prerequisites
[Prerequisites](documentation/project/contributing/prerequisites.md) will mostly remain the same with the exception of docker.
### Running without Building
[How to run quickly without building](documentation/project/contributing/building-and-running-without-docker.md#run_without_docker_quickly)
### Building and Running
[How to build and run](documentation/project/contributing/building-and-running-without-docker.md#build_and_run_without_docker)
## Liveliness and readiness probes
Liveliness and readiness endpoint is at `/actuator/health`.<br/>
Liveliness and readiness endpoint is at `/actuator/health`.
Info endpoint (build info) is located at `/actuator/info`.
# Configuration options
## <a name="env_variables"></a> Environment Variables
All of the environment variables/config properties could be found [here](https://docs.kafka-ui.provectus.io/configuration/misc-configuration-properties).
Alternatively, each variable of the .yml file can be set with an environment variable.
For example, if you want to use an environment variable to set the `name` parameter, you can write it like this: `KAFKA_CLUSTERS_2_NAME`
# Contributing
Please refer to [contributing guide](https://docs.kafka-ui.provectus.io/development/contributing), we'll guide you from there.
|Name |Description
|-----------------------|-------------------------------
|`SERVER_SERVLET_CONTEXT_PATH` | URI basePath
|`LOGGING_LEVEL_ROOT` | Setting log level (trace, debug, info, warn, error). Default: info
|`LOGGING_LEVEL_COM_PROVECTUS` |Setting log level (trace, debug, info, warn, error). Default: debug
|`SERVER_PORT` |Port for the embedded server. Default: `8080`
|`KAFKA_ADMIN-CLIENT-TIMEOUT` | Kafka API timeout in ms. Default: `30000`
|`KAFKA_CLUSTERS_0_NAME` | Cluster name
|`KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS` |Address where to connect
|`KAFKA_CLUSTERS_0_KSQLDBSERVER` | KSQL DB server address
|`KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_USERNAME` | KSQL DB server's basic authentication username
|`KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_PASSWORD` | KSQL DB server's basic authentication password
|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION` |Path to the JKS keystore to communicate to KSQL DB
|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD` |Password of the JKS keystore for KSQL DB
|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTORELOCATION` |Path to the JKS truststore to communicate to KSQL DB
|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTOREPASSWORD` |Password of the JKS truststore for KSQL DB
|`KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL` |Security protocol to connect to the brokers. For SSL connection use "SSL", for plaintext connection don't set this environment variable
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRY` |SchemaRegistry's address
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME` |SchemaRegistry's basic authentication username
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD` |SchemaRegistry's basic authentication password
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION` |Path to the JKS keystore to communicate to SchemaRegistry
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD` |Password of the JKS keystore for SchemaRegistry
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTORELOCATION` |Path to the JKS truststore to communicate to SchemaRegistry
|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTOREPASSWORD` |Password of the JKS truststore for SchemaRegistry
|`KAFKA_CLUSTERS_0_SCHEMANAMETEMPLATE` |How keys are saved to schemaRegistry
|`KAFKA_CLUSTERS_0_METRICS_PORT` |Open metrics port of a broker
|`KAFKA_CLUSTERS_0_METRICS_TYPE` |Type of metrics retriever to use. Valid values are JMX (default) or PROMETHEUS. If Prometheus, then metrics are read from prometheus-jmx-exporter instead of jmx
|`KAFKA_CLUSTERS_0_READONLY` |Enable read-only mode. Default: false
|`KAFKA_CLUSTERS_0_DISABLELOGDIRSCOLLECTION` |Disable collecting segments information. It should be true for confluent cloud. Default: false
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME` |Given name for the Kafka Connect cluster
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS` |Address of the Kafka Connect service endpoint
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_USERNAME`| Kafka Connect cluster's basic authentication username
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_PASSWORD`| Kafka Connect cluster's basic authentication password
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION`| Path to the JKS keystore to communicate to Kafka Connect
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD`| Password of the JKS keystore for Kafka Connect
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTORELOCATION`| Path to the JKS truststore to communicate to Kafka Connect
|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTOREPASSWORD`| Password of the JKS truststore for Kafka Connect
|`KAFKA_CLUSTERS_0_METRICS_SSL` |Enable SSL for Metrics? `true` or `false`. For advanced setup, see `kafka-ui-jmx-secured.yml`
|`KAFKA_CLUSTERS_0_METRICS_USERNAME` |Username for Metrics authentication
|`KAFKA_CLUSTERS_0_METRICS_PASSWORD` |Password for Metrics authentication
|`KAFKA_CLUSTERS_0_POLLING_THROTTLE_RATE` |Max traffic rate (bytes/sec) that kafka-ui allowed to reach when polling messages from the cluster. Default: 0 (not limited)
|`TOPIC_RECREATE_DELAY_SECONDS` |Time delay between topic deletion and topic creation attempts for topic recreate functionality. Default: 1
|`TOPIC_RECREATE_MAXRETRIES` |Number of attempts of topic creation after topic deletion for topic recreate functionality. Default: 15
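As an illustration of how the indexed variables above compose for a single cluster with a Schema Registry and one Kafka Connect cluster (all addresses and credentials below are placeholder values):

```yaml
environment:
  KAFKA_CLUSTERS_0_NAME: local
  KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
  KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schemaregistry:8085
  KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME: sr-user
  KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD: sr-password
  KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
  KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect:8083
```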

View file

@ -6,10 +6,8 @@ Following versions of the project are currently being supported with security up
| Version | Supported |
| ------- | ------------------ |
| 0.7.x   | :white_check_mark: |
| 0.6.x   | :x:                |
| 0.5.x   | :x:                |
| 0.4.x   | :x:                |
| 0.3.x | :x: |
| 0.2.x | :x: |
| 0.1.x | :x: |

View file

@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
example/
README.md

View file

@ -0,0 +1,7 @@
apiVersion: v2
name: kafka-ui
description: A Helm chart for kafka-UI
type: application
version: 0.5.1
appVersion: v0.5.0
icon: https://github.com/provectus/kafka-ui/raw/master/documentation/images/kafka-ui-logo.png

34
charts/kafka-ui/README.md Normal file
View file

@ -0,0 +1,34 @@
# Kafka-UI Helm Chart
## Configuration
Most of the Helm chart parameters are common; the following table describes the unique parameters related to application configuration.
### Kafka-UI parameters
| Parameter | Description | Default |
| ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `existingConfigMap` | Name of the existing ConfigMap with Kafka-UI environment variables | `nil` |
| `existingSecret` | Name of the existing Secret with Kafka-UI environment variables | `nil` |
| `envs.secret` | Set of the sensitive environment variables to pass to Kafka-UI | `{}` |
| `envs.config` | Set of the environment variables to pass to Kafka-UI | `{}` |
| `yamlApplicationConfigConfigMap` | Map with name and keyName keys, name refers to the existing ConfigMap, keyName refers to the ConfigMap key with Kafka-UI config in Yaml format | `{}` |
| `yamlApplicationConfig` | Kafka-UI config in Yaml format | `{}` |
| `networkPolicy.enabled` | Enable network policies | `false` |
| `networkPolicy.egressRules.customRules` | Custom network egress policy rules | `[]` |
| `networkPolicy.ingressRules.customRules` | Custom network ingress policy rules | `[]` |
| `podLabels` | Extra labels for Kafka-UI pod | `{}` |
## Example
To install Kafka-UI, execute the following:
``` bash
helm repo add kafka-ui https://provectus.github.io/kafka-ui
helm install kafka-ui kafka-ui/kafka-ui --set envs.config.KAFKA_CLUSTERS_0_NAME=local --set envs.config.KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
```
To connect to the Kafka-UI web application, execute:
``` bash
kubectl port-forward svc/kafka-ui 8080:80
```
Open `http://127.0.0.1:8080` in your browser to access Kafka-UI.
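If you prefer to pass the whole application config instead of individual environment variables, a minimal sketch using the `yamlApplicationConfig` parameter from the table above might look like this (the cluster name and bootstrap address are placeholders):

```yaml
# custom-values.yaml, applied with `helm install kafka-ui kafka-ui/kafka-ui -f custom-values.yaml`
yamlApplicationConfig:
  kafka:
    clusters:
      - name: local
        bootstrapServers: kafka:9092
```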

View file

@ -0,0 +1,3 @@
apiVersion: v1
entries: {}
generated: "2021-11-11T12:26:08.479581+03:00"

View file

@ -0,0 +1,21 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "kafka-ui.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "kafka-ui.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "kafka-ui.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "kafka-ui.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:8080
{{- end }}

View file

@ -0,0 +1,79 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kafka-ui.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kafka-ui.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kafka-ui.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "kafka-ui.labels" -}}
helm.sh/chart: {{ include "kafka-ui.chart" . }}
{{ include "kafka-ui.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "kafka-ui.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kafka-ui.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "kafka-ui.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "kafka-ui.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
This allows us to check if the registry of the image is specified or not.
*/}}
{{- define "kafka-ui.imageName" -}}
{{- $registryName := .Values.image.registry -}}
{{- $repository := .Values.image.repository -}}
{{- $tag := .Values.image.tag | default .Chart.AppVersion -}}
{{- if $registryName }}
{{- printf "%s/%s:%s" $registryName $repository $tag -}}
{{- else }}
{{- printf "%s:%s" $repository $tag -}}
{{- end }}
{{- end -}}

View file

@ -0,0 +1,10 @@
{{- if .Values.envs.config -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka-ui.fullname" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
data:
{{- toYaml .Values.envs.config | nindent 2 }}
{{- end -}}

View file

@ -0,0 +1,11 @@
{{- if .Values.yamlApplicationConfig -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka-ui.fullname" . }}-fromvalues
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
data:
config.yml: |-
{{- toYaml .Values.yamlApplicationConfig | nindent 4}}
{{ end }}

View file

@ -0,0 +1,150 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kafka-ui.fullname" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "kafka-ui.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/configFromValues: {{ include (print $.Template.BasePath "/configmap_fromValues.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
labels:
{{- include "kafka-ui.selectorLabels" . | nindent 8 }}
{{- if .Values.podLabels }}
{{- toYaml .Values.podLabels | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "kafka-ui.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ include "kafka-ui.imageName" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if or .Values.env .Values.yamlApplicationConfig .Values.yamlApplicationConfigConfigMap}}
env:
{{- with .Values.env }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if or .Values.yamlApplicationConfig .Values.yamlApplicationConfigConfigMap}}
- name: SPRING_CONFIG_ADDITIONAL-LOCATION
{{- if .Values.yamlApplicationConfig }}
value: /kafka-ui/config.yml
{{- else if .Values.yamlApplicationConfigConfigMap }}
value: /kafka-ui/{{ .Values.yamlApplicationConfigConfigMap.keyName | default "config.yml" }}
{{- end }}
{{- end }}
{{- end }}
envFrom:
{{- if .Values.existingConfigMap }}
- configMapRef:
name: {{ .Values.existingConfigMap }}
{{- end }}
{{- if .Values.envs.config }}
- configMapRef:
name: {{ include "kafka-ui.fullname" . }}
{{- end }}
{{- if .Values.existingSecret }}
- secretRef:
name: {{ .Values.existingSecret }}
{{- end }}
{{- if .Values.envs.secret}}
- secretRef:
name: {{ include "kafka-ui.fullname" . }}
{{- end}}
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
{{- $contextPath := .Values.envs.config.SERVER_SERVLET_CONTEXT_PATH | default "" | printf "%s/actuator/health" | urlParse }}
path: {{ get $contextPath "path" }}
port: http
{{- if .Values.probes.useHttpsScheme }}
scheme: HTTPS
{{- end }}
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
readinessProbe:
httpGet:
{{- $contextPath := .Values.envs.config.SERVER_SERVLET_CONTEXT_PATH | default "" | printf "%s/actuator/health" | urlParse }}
path: {{ get $contextPath "path" }}
port: http
{{- if .Values.probes.useHttpsScheme }}
scheme: HTTPS
{{- end }}
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if or .Values.yamlApplicationConfig .Values.volumeMounts .Values.yamlApplicationConfigConfigMap}}
volumeMounts:
{{- with .Values.volumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if .Values.yamlApplicationConfig }}
- name: kafka-ui-yaml-conf
mountPath: /kafka-ui/
{{- end }}
{{- if .Values.yamlApplicationConfigConfigMap}}
- name: kafka-ui-yaml-conf-configmap
mountPath: /kafka-ui/
{{- end }}
{{- end }}
{{- if or .Values.yamlApplicationConfig .Values.volumes .Values.yamlApplicationConfigConfigMap}}
volumes:
{{- with .Values.volumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.yamlApplicationConfig }}
- name: kafka-ui-yaml-conf
configMap:
name: {{ include "kafka-ui.fullname" . }}-fromvalues
{{- end }}
{{- if .Values.yamlApplicationConfigConfigMap}}
- name: kafka-ui-yaml-conf-configmap
configMap:
name: {{ .Values.yamlApplicationConfigConfigMap.name }}
{{- end }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View file

@ -0,0 +1,46 @@
{{- if .Values.autoscaling.enabled }}
{{- $kubeCapabilityVersion := semver .Capabilities.KubeVersion.Version -}}
{{- $isHigher1p25 := ge (semver "1.25" | $kubeCapabilityVersion.Compare) 0 -}}
{{- if and ($.Capabilities.APIVersions.Has "autoscaling/v2") $isHigher1p25 -}}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v2beta1
{{- end }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "kafka-ui.fullname" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "kafka-ui.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
{{- if $isHigher1p25 }}
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- else }}
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
{{- if $isHigher1p25 }}
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- else }}
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
{{- end }}

View file

@ -0,0 +1,89 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "kafka-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $kubeCapabilityVersion := semver .Capabilities.KubeVersion.Version -}}
{{- $isHigher1p19 := ge (semver "1.19" | $kubeCapabilityVersion.Compare) 0 -}}
{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") $isHigher1p19 -}}
apiVersion: networking.k8s.io/v1
{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
- {{ tpl .Values.ingress.host . }}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- if .Values.ingress.ingressClassName }}
ingressClassName: {{ .Values.ingress.ingressClassName }}
{{- end }}
rules:
- http:
paths:
{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") $isHigher1p19 -}}
{{- range .Values.ingress.precedingPaths }}
- path: {{ .path }}
pathType: Prefix
backend:
service:
name: {{ .serviceName }}
port:
number: {{ .servicePort }}
{{- end }}
- backend:
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
pathType: Prefix
{{- if .Values.ingress.path }}
path: {{ .Values.ingress.path }}
{{- end }}
{{- range .Values.ingress.succeedingPaths }}
- path: {{ .path }}
pathType: Prefix
backend:
service:
name: {{ .serviceName }}
port:
number: {{ .servicePort }}
{{- end }}
{{- if tpl .Values.ingress.host . }}
host: {{tpl .Values.ingress.host . }}
{{- end }}
{{- else -}}
{{- range .Values.ingress.precedingPaths }}
- path: {{ .path }}
backend:
serviceName: {{ .serviceName }}
servicePort: {{ .servicePort }}
{{- end }}
- backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- if .Values.ingress.path }}
path: {{ .Values.ingress.path }}
{{- end }}
{{- range .Values.ingress.succeedingPaths }}
- path: {{ .path }}
backend:
serviceName: {{ .serviceName }}
servicePort: {{ .servicePort }}
{{- end }}
{{- if tpl .Values.ingress.host . }}
host: {{ tpl .Values.ingress.host . }}
{{- end }}
{{- end }}
{{- end }}

View file

@ -0,0 +1,18 @@
{{- if and .Values.networkPolicy.enabled .Values.networkPolicy.egressRules.customRules }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ printf "%s-egress" (include "kafka-ui.fullname" .) }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "kafka-ui.selectorLabels" . | nindent 6 }}
policyTypes:
- Egress
egress:
{{- if .Values.networkPolicy.egressRules.customRules }}
{{- toYaml .Values.networkPolicy.egressRules.customRules | nindent 4 }}
{{- end }}
{{- end }}

View file

@ -0,0 +1,18 @@
{{- if and .Values.networkPolicy.enabled .Values.networkPolicy.ingressRules.customRules }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ printf "%s-ingress" (include "kafka-ui.fullname" .) }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
spec:
podSelector:
matchLabels:
{{- include "kafka-ui.selectorLabels" . | nindent 6 }}
policyTypes:
- Ingress
ingress:
{{- if .Values.networkPolicy.ingressRules.customRules }}
{{- toYaml .Values.networkPolicy.ingressRules.customRules | nindent 4 }}
{{- end }}
{{- end }}

View file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "kafka-ui.fullname" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
type: Opaque
data:
{{- range $key, $val := .Values.envs.secret }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end -}}

View file

@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka-ui.fullname" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
{{- if (and (eq .Values.service.type "NodePort") .Values.service.nodePort) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
{{- include "kafka-ui.selectorLabels" . | nindent 4 }}

View file

@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "kafka-ui.serviceAccountName" . }}
labels:
{{- include "kafka-ui.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

158
charts/kafka-ui/values.yaml Normal file
View file

@ -0,0 +1,158 @@
replicaCount: 1
image:
registry: docker.io
repository: provectuslabs/kafka-ui
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
existingConfigMap: ""
yamlApplicationConfig:
{}
# kafka:
# clusters:
# - name: yaml
# bootstrapServers: kafka-service:9092
# spring:
# security:
# oauth2:
# auth:
# type: disabled
# management:
# health:
# ldap:
# enabled: false
yamlApplicationConfigConfigMap:
{}
# keyName: config.yml
# name: configMapName
existingSecret: ""
envs:
secret: {}
config: {}
networkPolicy:
enabled: false
egressRules:
## Additional custom egress rules
## e.g:
## customRules:
## - to:
## - namespaceSelector:
## matchLabels:
## label: example
customRules: []
ingressRules:
## Additional custom ingress rules
## e.g:
## customRules:
## - from:
## - namespaceSelector:
## matchLabels:
## label: example
customRules: []
podAnnotations: {}
podLabels: {}
## Annotations to be added to kafka-ui Deployment
##
annotations: {}
## Set field schema as HTTPS for readines and liveness probe
##
probes:
useHttpsScheme: false
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
# if you want to force a specific nodePort. Must be use with service.type=NodePort
# nodePort:
# Ingress configuration
ingress:
# Enable ingress resource
enabled: false
# Annotations for the Ingress
annotations: {}
# ingressClassName for the Ingress
ingressClassName: ""
# The path for the Ingress
path: "/"
# The hostname for the Ingress
host: ""
# configs for Ingress TLS
tls:
# Enable TLS termination for the Ingress
enabled: false
# the name of a pre-created Secret containing a TLS private key and certificate
secretName: ""
# HTTP paths to add to the Ingress before the default path
precedingPaths: []
# Http paths to add to the Ingress after the default path
succeedingPaths: []
resources:
{}
# limits:
# cpu: 200m
# memory: 512Mi
# requests:
# cpu: 200m
# memory: 256Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
env: {}
initContainers: {}
volumeMounts: {}
volumes: {}

43
docker-compose.md Normal file
View file

@ -0,0 +1,43 @@
# Quick Start with docker-compose
Environment variables documentation - [see usage](README.md#env_variables).<br/>
We have plenty of example files with more complex configurations. Please check them out in the ``docker`` directory.
* Add a new service in docker-compose.yml
```yaml
version: '2'
services:
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "8080:8080"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
```
* If you prefer UI for Apache Kafka in read-only mode
```yaml
version: '2'
services:
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "8080:8080"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- KAFKA_CLUSTERS_0_READONLY=true
```
* Start UI for Apache Kafka process
```bash
docker-compose up -d kafka-ui
```

View file

@ -8,9 +8,9 @@
6. [kafka-ui-auth-context.yaml](./kafka-ui-auth-context.yaml) - Basic (username/password) authentication with custom path (URL) (issue 861).
7. [e2e-tests.yaml](./e2e-tests.yaml) - Configuration with different connectors (github-source, s3, sink-activities, source-activities) and Ksql functionality.
8. [kafka-ui-jmx-secured.yml](./kafka-ui-jmx-secured.yml) - Kafkas JMX with SSL and authentication.
9. [kafka-ui-reverse-proxy.yaml](./nginx-proxy.yaml) - An example for using the app behind a proxy (like nginx).
9. [kafka-ui-reverse-proxy.yaml](./kafka-ui-reverse-proxy.yaml) - An example for using the app behind a proxy (like nginx).
10. [kafka-ui-sasl.yaml](./kafka-ui-sasl.yaml) - SASL auth for Kafka.
11. [kafka-ui-traefik-proxy.yaml](./traefik-proxy.yaml) - Traefik specific proxy configuration.
11. [kafka-ui-traefik-proxy.yaml](./kafka-ui-traefik-proxy.yaml) - Traefik specific proxy configuration.
12. [oauth-cognito.yaml](./oauth-cognito.yaml) - OAuth2 with Cognito
13. [kafka-ui-with-jmx-exporter.yaml](./kafka-ui-with-jmx-exporter.yaml) - A configuration with 2 kafka clusters with enabled prometheus jmx exporters instead of jmx.
14. [kafka-with-zookeeper.yaml](./kafka-with-zookeeper.yaml) - An example for using kafka with zookeeper

View file

@ -15,23 +15,26 @@ services:
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
KAFKA_CLUSTERS_0_METRICS_PORT: 9997
KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schemaregistry0:8085
AUTH_TYPE: "LDAP"
SPRING_LDAP_URLS: "ldap://ldap:10389"
SPRING_LDAP_BASE: "cn={0},ou=people,dc=planetexpress,dc=com"
SPRING_LDAP_ADMIN_USER: "cn=admin,dc=planetexpress,dc=com"
SPRING_LDAP_ADMIN_PASSWORD: "GoodNewsEveryone"
SPRING_LDAP_USER_FILTER_SEARCH_BASE: "dc=planetexpress,dc=com"
SPRING_LDAP_USER_FILTER_SEARCH_FILTER: "(&(uid={0})(objectClass=inetOrgPerson))"
SPRING_LDAP_GROUP_FILTER_SEARCH_BASE: "ou=people,dc=planetexpress,dc=com"
# OAUTH2.LDAP.ACTIVEDIRECTORY: true
# OAUTH2.LDAP.AСTIVEDIRECTORY.DOMAIN: "memelord.lol"
SPRING_LDAP_DN_PATTERN: "cn={0},ou=people,dc=planetexpress,dc=com"
# ===== USER SEARCH FILTER INSTEAD OF DN =====
# SPRING_LDAP_USERFILTER_SEARCHBASE: "dc=planetexpress,dc=com"
# SPRING_LDAP_USERFILTER_SEARCHFILTER: "(&(uid={0})(objectClass=inetOrgPerson))"
# LDAP ADMIN USER
# SPRING_LDAP_ADMINUSER: "cn=admin,dc=planetexpress,dc=com"
# SPRING_LDAP_ADMINPASSWORD: "GoodNewsEveryone"
# ===== ACTIVE DIRECTORY =====
# OAUTH2.LDAP.ACTIVEDIRECTORY: true
# OAUTH2.LDAP.AСTIVEDIRECTORY.DOMAIN: "memelord.lol"
ldap:
image: rroemhild/test-openldap:latest
hostname: "ldap"
ports:
- 10389:10389
kafka0:
image: confluentinc/cp-kafka:7.2.1
@ -76,4 +79,4 @@ services:
SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas

View file

@ -11,14 +11,14 @@ services:
test: wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health
interval: 30s
timeout: 10s
retries: 10
retries: 10
depends_on:
kafka0:
condition: service_healthy
schemaregistry0:
condition: service_healthy
kafka-connect0:
condition: service_healthy
kafka0:
condition: service_healthy
schemaregistry0:
condition: service_healthy
kafka-connect0:
condition: service_healthy
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
@ -33,10 +33,10 @@ services:
hostname: kafka0
container_name: kafka0
healthcheck:
test: unset JMX_PORT && KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka0 -Dcom.sun.management.jmxremote.rmi.port=9999" && kafka-broker-api-versions --bootstrap-server=localhost:9092
interval: 30s
timeout: 10s
retries: 10
test: unset JMX_PORT && KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka0 -Dcom.sun.management.jmxremote.rmi.port=9999" && kafka-broker-api-versions --bootstrap-server=localhost:9092
interval: 30s
timeout: 10s
retries: 10
ports:
- "9092:9092"
- "9997:9997"
@ -68,12 +68,12 @@ services:
- 8085:8085
depends_on:
kafka0:
condition: service_healthy
condition: service_healthy
healthcheck:
test: [ "CMD", "timeout", "1", "curl", "--silent", "--fail", "http://schemaregistry0:8085/subjects" ]
interval: 30s
timeout: 10s
retries: 10
test: ["CMD", "timeout", "1", "curl", "--silent", "--fail", "http://schemaregistry0:8085/subjects"]
interval: 30s
timeout: 10s
retries: 10
environment:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
@ -93,11 +93,11 @@ services:
- 8083:8083
depends_on:
kafka0:
condition: service_healthy
condition: service_healthy
schemaregistry0:
condition: service_healthy
condition: service_healthy
healthcheck:
test: [ "CMD", "nc", "127.0.0.1", "8083" ]
test: ["CMD", "nc", "127.0.0.1", "8083"]
interval: 30s
timeout: 10s
retries: 10
@ -118,16 +118,16 @@ services:
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
# AWS_ACCESS_KEY_ID: ""
# AWS_SECRET_ACCESS_KEY: ""
# AWS_ACCESS_KEY_ID: ""
# AWS_SECRET_ACCESS_KEY: ""
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
kafka0:
condition: service_healthy
condition: service_healthy
command: "bash -c 'echo Waiting for Kafka to be ready... && \
cub kafka-ready -b kafka0:29092 1 30 && \
kafka-topics --create --topic users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
@ -142,10 +142,10 @@ services:
ports:
- 5432:5432
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U dev_user" ]
test: ["CMD-SHELL", "pg_isready -U dev_user"]
interval: 10s
timeout: 5s
retries: 5
retries: 5
environment:
POSTGRES_USER: 'dev_user'
POSTGRES_PASSWORD: '12345'
@ -154,7 +154,7 @@ services:
image: ellerbrock/alpine-bash-curl-ssl
depends_on:
postgres-db:
condition: service_healthy
condition: service_healthy
kafka-connect0:
condition: service_healthy
volumes:
@ -164,7 +164,7 @@ services:
ksqldb:
image: confluentinc/ksqldb-server:0.18.0
healthcheck:
test: [ "CMD", "timeout", "1", "curl", "--silent", "--fail", "http://localhost:8088/info" ]
test: ["CMD", "timeout", "1", "curl", "--silent", "--fail", "http://localhost:8088/info"]
interval: 30s
timeout: 10s
retries: 10
@ -174,7 +174,7 @@ services:
kafka-connect0:
condition: service_healthy
schemaregistry0:
condition: service_healthy
condition: service_healthy
ports:
- 8088:8088
environment:
@ -187,4 +187,4 @@ services:
KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
KSQL_KSQL_SERVICE_ID: my_ksql_1
KSQL_KSQL_HIDDEN_TOPICS: '^_.*'
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_CACHE_MAX_BYTES_BUFFERING: 0

0
documentation/compose/jaas/client.properties Executable file → Normal file
View file

0
documentation/compose/jaas/kafka_connect.jaas Executable file → Normal file
View file

0
documentation/compose/jaas/kafka_connect.password Executable file → Normal file
View file

View file

@ -11,8 +11,4 @@ KafkaClient {
user_admin="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zkuser"
password="zkuserpassword";
};
Client {};

0
documentation/compose/jaas/schema_registry.jaas Executable file → Normal file
View file

0
documentation/compose/jaas/schema_registry.password Executable file → Normal file
View file

View file

@ -1,4 +0,0 @@
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_zkuser="zkuserpassword";
};

View file

@ -1,2 +1,2 @@
rules:
- pattern: ".*"
- pattern: ".*"

View file

@ -57,7 +57,7 @@ services:
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
- kafka1
command: "bash -c 'echo Waiting for Kafka to be ready... && \
@ -80,4 +80,4 @@ services:
KAFKA_CLUSTERS_0_METRICS_PORT: 9997
KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schemaregistry1:8085
KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME: admin
KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD: letmein
KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD: letmein

View file

@ -0,0 +1,84 @@
---
version: "2"
services:
kafka0:
image: confluentinc/cp-kafka:7.2.1
hostname: kafka0
container_name: kafka0
ports:
- "9092:9092"
- "9997:9997"
environment:
KAFKA_BROKER_ID: 1
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka0:29092,PLAINTEXT_HOST://localhost:9092"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9997
KAFKA_JMX_HOSTNAME: localhost
KAFKA_PROCESS_ROLES: "broker,controller"
KAFKA_NODE_ID: 1
KAFKA_CONTROLLER_QUORUM_VOTERS: "1@kafka0:29093"
KAFKA_LISTENERS: "PLAINTEXT://kafka0:29092,CONTROLLER://kafka0:29093,PLAINTEXT_HOST://0.0.0.0:9092"
KAFKA_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
KAFKA_LOG_DIRS: "/tmp/kraft-combined-logs"
volumes:
- ./scripts/update_run_cluster.sh:/tmp/update_run.sh
- ./scripts/clusterID:/tmp/clusterID
command: 'bash -c ''if [ ! -f /tmp/update_run.sh ]; then echo "ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'''
schemaregistry0:
image: confluentinc/cp-schema-registry:7.2.1
depends_on:
- kafka0
environment:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
SCHEMA_REGISTRY_HOST_NAME: schemaregistry0
SCHEMA_REGISTRY_LISTENERS: http://schemaregistry0:8085
SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
ports:
- 8085:8085
kafka-connect0:
image: confluentinc/cp-kafka-connect:7.2.1
ports:
- 8083:8083
depends_on:
- kafka0
- schemaregistry0
environment:
CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: _connect_status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./message.json:/data/message.json
depends_on:
- kafka0
command: "bash -c 'echo Waiting for Kafka to be ready... && \
cub kafka-ready -b kafka0:29092 1 30 && \
kafka-topics --create --topic users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
kafka-topics --create --topic messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
kafka-console-producer --bootstrap-server kafka0:29092 --topic users < /data/message.json'"

View file

@ -15,25 +15,27 @@ services:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092 # SSL LISTENER!
KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: secret
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: secret
KAFKA_CLUSTERS_0_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION
KAFKA_CLUSTERS_0_SCHEMAREGISTRY: https://schemaregistry0:8085
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTORELOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_KSQLDBSERVER: https://ksqldb0:8088
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTORELOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: local
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: https://kafka-connect0:8083
KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD: "secret"
DYNAMIC_CONFIG_ENABLED: 'true' # not necessary for ssl, added for tests
KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTORELOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTOREPASSWORD: "secret"
volumes:
- ./ssl/kafka.truststore.jks:/kafka.truststore.jks
- ./ssl/kafka.keystore.jks:/kafka.keystore.jks

View file

@ -11,11 +11,11 @@ services:
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: "secret"
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092 # SSL LISTENER!
KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD: "secret"
KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION: /kafka.truststore.jks
KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: secret
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION: /kafka.keystore.jks
KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: secret
KAFKA_CLUSTERS_0_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION
volumes:
- ./ssl/kafka.truststore.jks:/kafka.truststore.jks
@ -60,4 +60,4 @@ services:
- ./ssl/creds:/etc/kafka/secrets/creds
- ./ssl/kafka.truststore.jks:/etc/kafka/secrets/kafka.truststore.jks
- ./ssl/kafka.keystore.jks:/etc/kafka/secrets/kafka.keystore.jks
command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

View file

@ -1,59 +0,0 @@
---
version: '2'
services:
kafka-ui:
container_name: kafka-ui
image: provectuslabs/kafka-ui:latest
ports:
- 8080:8080
depends_on:
- zookeeper
- kafka
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_PLAINTEXT
KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";'
zookeeper:
image: wurstmeister/zookeeper:3.4.6
environment:
JVMFLAGS: "-Djava.security.auth.login.config=/etc/zookeeper/zookeeper_jaas.conf"
volumes:
- ./jaas/zookeeper_jaas.conf:/etc/zookeeper/zookeeper_jaas.conf
ports:
- 2181:2181
kafka:
image: confluentinc/cp-kafka:7.2.1
hostname: kafka
container_name: kafka
ports:
- "9092:9092"
- "9997:9997"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
KAFKA_ADVERTISED_LISTENERS: 'SASL_PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092'
KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/jaas/kafka_server.conf"
KAFKA_AUTHORIZER_CLASS_NAME: "kafka.security.authorizer.AclAuthorizer"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9997
KAFKA_JMX_HOSTNAME: localhost
KAFKA_NODE_ID: 1
KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka:29093'
KAFKA_LISTENERS: 'SASL_PLAINTEXT://kafka:29092,CONTROLLER://kafka:29093,PLAINTEXT_HOST://0.0.0.0:9092'
KAFKA_INTER_BROKER_LISTENER_NAME: 'SASL_PLAINTEXT'
KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN'
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: 'PLAIN'
KAFKA_SECURITY_PROTOCOL: 'SASL_PLAINTEXT'
KAFKA_SUPER_USERS: 'User:admin'
volumes:
- ./scripts/update_run.sh:/tmp/update_run.sh
- ./jaas:/etc/kafka/jaas

View file

@ -19,9 +19,6 @@ services:
KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry0:8085
KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect0:8083
DYNAMIC_CONFIG_ENABLED: 'true' # not necessary, added for tests
KAFKA_CLUSTERS_0_AUDIT_TOPICAUDITENABLED: 'true'
KAFKA_CLUSTERS_0_AUDIT_CONSOLEAUDITENABLED: 'true'
kafka0:
image: confluentinc/cp-kafka:7.2.1.arm64
@ -95,7 +92,7 @@ services:
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1.arm64
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
- kafka0
command: "bash -c 'echo Waiting for Kafka to be ready... && \

View file

@ -69,7 +69,7 @@ services:
build:
context: ./kafka-connect
args:
image: confluentinc/cp-kafka-connect:7.2.1
image: confluentinc/cp-kafka-connect:6.0.1
ports:
- 8083:8083
depends_on:
@ -104,7 +104,7 @@ services:
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
- kafka0
command: "bash -c 'echo Waiting for Kafka to be ready... && \

View file

@ -7,8 +7,11 @@ services:
image: provectuslabs/kafka-ui:latest
ports:
- 8080:8080
- 5005:5005
depends_on:
- kafka0
- schemaregistry0
- kafka-connect0
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
@ -16,12 +19,15 @@ services:
KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect0:8083
KAFKA_CLUSTERS_0_METRICS_PORT: 9997
KAFKA_CLUSTERS_0_METRICS_SSL: 'true'
KAFKA_CLUSTERS_0_METRICS_USERNAME: root
KAFKA_CLUSTERS_0_METRICS_PASSWORD: password
KAFKA_CLUSTERS_0_METRICS_KEYSTORE_LOCATION: /jmx/clientkeystore
KAFKA_CLUSTERS_0_METRICS_KEYSTORE_PASSWORD: '12345678'
KAFKA_CLUSTERS_0_SSL_TRUSTSTORE_LOCATION: /jmx/clienttruststore
KAFKA_CLUSTERS_0_SSL_TRUSTSTORE_PASSWORD: '12345678'
JAVA_OPTS: >-
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
-Djavax.net.ssl.trustStore=/jmx/clienttruststore
-Djavax.net.ssl.trustStorePassword=12345678
-Djavax.net.ssl.keyStore=/jmx/clientkeystore
-Djavax.net.ssl.keyStorePassword=12345678
volumes:
- ./jmx/clienttruststore:/jmx/clienttruststore
- ./jmx/clientkeystore:/jmx/clientkeystore
@ -64,6 +70,8 @@ services:
-Dcom.sun.management.jmxremote.access.file=/jmx/jmxremote.access
-Dcom.sun.management.jmxremote.rmi.port=9997
-Djava.rmi.server.hostname=kafka0
-Djava.rmi.server.logCalls=true
# -Djavax.net.debug=ssl:handshake
volumes:
- ./jmx/serverkeystore:/jmx/serverkeystore
- ./jmx/servertruststore:/jmx/servertruststore
@ -71,3 +79,56 @@ services:
- ./jmx/jmxremote.access:/jmx/jmxremote.access
- ./scripts/update_run.sh:/tmp/update_run.sh
command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
schemaregistry0:
image: confluentinc/cp-schema-registry:7.2.1
ports:
- 8085:8085
depends_on:
- kafka0
environment:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
SCHEMA_REGISTRY_HOST_NAME: schemaregistry0
SCHEMA_REGISTRY_LISTENERS: http://schemaregistry0:8085
SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
kafka-connect0:
image: confluentinc/cp-kafka-connect:7.2.1
ports:
- 8083:8083
depends_on:
- kafka0
- schemaregistry0
environment:
CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: _connect_status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./message.json:/data/message.json
depends_on:
- kafka0
command: "bash -c 'echo Waiting for Kafka to be ready... && \
cub kafka-ready -b kafka0:29092 1 30 && \
kafka-topics --create --topic second.users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
kafka-topics --create --topic first.messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
kafka-console-producer --bootstrap-server kafka0:29092 --topic second.users < /data/message.json'"

View file

@ -4,7 +4,7 @@ services:
nginx:
image: nginx:latest
volumes:
- ./data/proxy.conf:/etc/nginx/conf.d/default.conf
- ./proxy.conf:/etc/nginx/conf.d/default.conf
ports:
- 8080:80

View file

@ -15,7 +15,6 @@ services:
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_PLAINTEXT
KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";'
DYNAMIC_CONFIG_ENABLED: true # not necessary for sasl auth, added for tests
kafka:
image: confluentinc/cp-kafka:7.2.1
@ -49,4 +48,4 @@ services:
volumes:
- ./scripts/update_run.sh:/tmp/update_run.sh
- ./jaas:/etc/kafka/jaas
command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

View file

@ -14,16 +14,13 @@ services:
kafka.clusters.0.name: SerdeExampleCluster
kafka.clusters.0.bootstrapServers: kafka0:29092
kafka.clusters.0.schemaRegistry: http://schemaregistry0:8085
# optional SSL settings for cluster (will be used by SchemaRegistry serde, if set)
#kafka.clusters.0.ssl.keystoreLocation: /kafka.keystore.jks
#kafka.clusters.0.ssl.keystorePassword: "secret"
#kafka.clusters.0.ssl.truststoreLocation: /kafka.truststore.jks
#kafka.clusters.0.ssl.truststorePassword: "secret"
# optional auth properties for SR
# optional auth and ssl properties for SR
#kafka.clusters.0.schemaRegistryAuth.username: "use"
#kafka.clusters.0.schemaRegistryAuth.password: "pswrd"
#kafka.clusters.0.schemaRegistrySSL.keystoreLocation: /kafka.keystore.jks
#kafka.clusters.0.schemaRegistrySSL.keystorePassword: "secret"
#kafka.clusters.0.schemaRegistrySSL.truststoreLocation: /kafka.truststore.jks
#kafka.clusters.0.schemaRegistrySSL.truststorePassword: "secret"
kafka.clusters.0.defaultKeySerde: Int32 #optional
kafka.clusters.0.defaultValueSerde: String #optional
@ -31,7 +28,8 @@ services:
kafka.clusters.0.serde.0.name: ProtobufFile
kafka.clusters.0.serde.0.topicKeysPattern: "topic1"
kafka.clusters.0.serde.0.topicValuesPattern: "topic1"
kafka.clusters.0.serde.0.properties.protobufFilesDir: /protofiles/
kafka.clusters.0.serde.0.properties.protobufFiles.0: /protofiles/key-types.proto
kafka.clusters.0.serde.0.properties.protobufFiles.1: /protofiles/values.proto
kafka.clusters.0.serde.0.properties.protobufMessageNameForKey: test.MyKey # default type for keys
kafka.clusters.0.serde.0.properties.protobufMessageName: test.MyValue # default type for values
kafka.clusters.0.serde.0.properties.protobufMessageNameForKeyByTopic.topic1: test.MySpecificTopicKey # keys type for topic "topic1"
@ -54,7 +52,7 @@ services:
kafka.clusters.0.serde.4.properties.keySchemaNameTemplate: "%s-key"
kafka.clusters.0.serde.4.properties.schemaNameTemplate: "%s-value"
#kafka.clusters.0.serde.4.topicValuesPattern: "sr2-topic.*"
# optional auth and ssl properties for SR (overrides cluster-level):
# optional auth and ssl properties for SR:
#kafka.clusters.0.serde.4.properties.username: "user"
#kafka.clusters.0.serde.4.properties.password: "passw"
#kafka.clusters.0.serde.4.properties.keystoreLocation: /kafka.keystore.jks

View file

@ -24,7 +24,6 @@ services:
KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: kafka1:29092
KAFKA_CLUSTERS_1_METRICS_PORT: 9998
KAFKA_CLUSTERS_1_SCHEMAREGISTRY: http://schemaregistry1:8085
DYNAMIC_CONFIG_ENABLED: 'true'
kafka0:
image: confluentinc/cp-kafka:7.2.1
@ -115,7 +114,7 @@ services:
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
kafka-connect0:
image: confluentinc/cp-kafka-connect:7.2.1
image: confluentinc/cp-kafka-connect:6.0.1
ports:
- 8083:8083
depends_on:
@ -142,7 +141,7 @@ services:
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
- kafka1
command: "bash -c 'echo Waiting for Kafka to be ready... && \

View file

@ -38,7 +38,7 @@ services:
kafka-init-topics:
image: confluentinc/cp-kafka:7.2.1
volumes:
- ./data/message.json:/data/message.json
- ./message.json:/data/message.json
depends_on:
- kafka
command: "bash -c 'echo Waiting for Kafka to be ready... && \

View file

@ -0,0 +1,22 @@
---
version: '3.4'
services:
kafka-ui:
container_name: kafka-ui
image: provectuslabs/kafka-ui:local
ports:
- 8080:8080
depends_on:
- kafka0 # OMITTED, TAKE UP AN EXAMPLE FROM OTHER COMPOSE FILES
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
AUTH_TYPE: OAUTH2_COGNITO
AUTH_COGNITO_ISSUER_URI: "https://cognito-idp.eu-central-1.amazonaws.com/eu-central-xxxxxx"
AUTH_COGNITO_CLIENT_ID: ""
AUTH_COGNITO_CLIENT_SECRET: ""
AUTH_COGNITO_SCOPE: "openid"
AUTH_COGNITO_USER_NAME_ATTRIBUTE: "username"
AUTH_COGNITO_LOGOUT_URI: "https://<domain>.auth.eu-central-1.amazoncognito.com/logout"

View file

@ -1,15 +1,11 @@
syntax = "proto3";
package test;
import "google/protobuf/wrappers.proto";
message MyKey {
string myKeyF1 = 1;
google.protobuf.UInt64Value uint_64_wrapper = 2;
}
message MySpecificTopicKey {
string special_field1 = 1;
string special_field2 = 2;
google.protobuf.FloatValue float_wrapper = 3;
}

View file

@ -9,6 +9,4 @@ message MySpecificTopicValue {
message MyValue {
int32 version = 1;
string payload = 2;
map<int32, string> intToStringMap = 3;
map<string, MyValue> strToObjMap = 4;
}

View file

@ -0,0 +1,41 @@
# How to configure AWS IAM Authentication
UI for Apache Kafka comes with built-in [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth) library.
You can pass SASL configs in the `properties` section for each cluster.
More details can be found here: [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth)
## Examples:
Please replace
* <KAFKA_URL> with the broker list
* <PROFILE_NAME> with your AWS profile
### Running From Docker Image
```sh
docker run -p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
-e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
-e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM \
-e KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler \
-e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>"; \
-d provectuslabs/kafka-ui:latest
```
### Configuring by application.yaml
```yaml
kafka:
clusters:
- name: local
bootstrapServers: <KAFKA_URL>
properties:
security.protocol: SASL_SSL
sasl.mechanism: AWS_MSK_IAM
sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";
```

View file

@ -0,0 +1,123 @@
# Topics data masking
You can configure kafka-ui to mask sensitive data shown on the Messages page.
Several masking policies are supported:
### REMOVE
For JSON objects, the target fields are removed; for everything else, the "null" string is returned.
```yaml
- type: REMOVE
fields: [ "id", "name" ]
...
```
Apply examples:
```
{ "id": 1234, "name": { "first": "James" }, "age": 30 }
->
{ "age": 30 }
```
```
non-json string -> null
```
### REPLACE
For JSON objects, the target fields' values are replaced with the specified replacement string (by default `***DATA_MASKED***`). Note: if a target field's value is itself an object, the replacement is applied to all of its fields recursively (see example).
```yaml
- type: REPLACE
fields: [ "id", "name" ]
replacement: "***" #optional, "***DATA_MASKED***" by default
...
```
Apply examples:
```
{ "id": 1234, "name": { "first": "James", "last": "Bond" }, "age": 30 }
->
{ "id": "***", "name": { "first": "***", "last": "***" }, "age": 30 }
```
```
non-json string -> ***
```
### MASK
Masks the target fields' values with the specified masking characters, recursively (spaces and line separators are kept as-is).
The `pattern` array specifies which symbols are used to replace upper-case chars, lower-case chars, digits, and other symbols respectively.
```yaml
- type: MASK
fields: [ "id", "name" ]
pattern: ["A", "a", "N", "_"] # optional, default is ["X", "x", "n", "-"]
...
```
Apply examples:
```
{ "id": 1234, "name": { "first": "James", "last": "Bond!" }, "age": 30 }
->
{ "id": "NNNN", "name": { "first": "Aaaaa", "last": "Aaaa_" }, "age": 30 }
```
```
Some string! -> Aaaa aaaaaa_
```
----
For each policy, if `fields` is not specified, the policy is applied to all of the object's fields, or to the whole string if it is not a JSON object.
You can specify which masks are applied to a topic's keys/values. Multiple policies are applied if the topic matches more than one policy's pattern.
Yaml configuration example:
```yaml
kafka:
clusters:
- name: ClusterName
# Other Cluster configuration omitted ...
masking:
- type: REMOVE
fields: [ "id" ]
topicKeysPattern: "events-with-ids-.*"
topicValuesPattern: "events-with-ids-.*"
- type: REPLACE
fields: [ "companyName", "organizationName" ]
replacement: "***MASKED_ORG_NAME***" #optional
topicValuesPattern: "org-events-.*"
- type: MASK
fields: [ "name", "surname" ]
pattern: ["A", "a", "N", "_"] #optional
topicValuesPattern: "user-states"
- type: MASK
topicValuesPattern: "very-secured-topic"
```
Same configuration in env-vars fashion:
```
...
KAFKA_CLUSTERS_0_MASKING_0_TYPE: REMOVE
KAFKA_CLUSTERS_0_MASKING_0_FIELDS_0: "id"
KAFKA_CLUSTERS_0_MASKING_0_TOPICKEYSPATTERN: "events-with-ids-.*"
KAFKA_CLUSTERS_0_MASKING_0_TOPICVALUESPATTERN: "events-with-ids-.*"
KAFKA_CLUSTERS_0_MASKING_1_TYPE: REPLACE
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_0: "companyName"
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_1: "organizationName"
KAFKA_CLUSTERS_0_MASKING_1_REPLACEMENT: "***MASKED_ORG_NAME***"
KAFKA_CLUSTERS_0_MASKING_1_TOPICVALUESPATTERN: "org-events-.*"
KAFKA_CLUSTERS_0_MASKING_2_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_0: "name"
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_1: "surname"
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_0: 'A'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_1: 'a'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_2: 'N'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_3: '_'
KAFKA_CLUSTERS_0_MASKING_2_TOPICVALUESPATTERN: "user-states"
KAFKA_CLUSTERS_0_MASKING_3_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_3_TOPICVALUESPATTERN: "very-secured-topic"
```

View file

@ -0,0 +1,51 @@
# Kafkaui Protobuf Support
### This document is deprecated, please see examples in [Serialization document](Serialization.md).
Kafkaui supports deserializing protobuf messages in two ways:
1. Using Confluent Schema Registry's [protobuf support](https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-protobuf.html).
2. Supplying a protobuf file as well as a configuration that maps topic names to protobuf types.
## Configuring Kafkaui with a Protobuf File
To configure Kafkaui to deserialize protobuf messages using a supplied protobuf schema, add the following to the config:
```yaml
kafka:
clusters:
- # Cluster configuration omitted.
# protobufFile is the path to the protobuf schema. (deprecated: please use "protobufFiles")
protobufFile: path/to/my.proto
# protobufFiles is the path to one or more protobuf schemas.
protobufFiles:
- /path/to/my.proto
- /path/to/another.proto
      # protobufMessageName is the default protobuf type that is used to deserialize
# the message's value if the topic is not found in protobufMessageNameByTopic.
protobufMessageName: my.DefaultValType
# protobufMessageNameByTopic is a mapping of topic names to protobuf types.
# This mapping is required and is used to deserialize the Kafka message's value.
protobufMessageNameByTopic:
topic1: my.Type1
topic2: my.Type2
      # protobufMessageNameForKey is the default protobuf type that is used to deserialize
# the message's key if the topic is not found in protobufMessageNameForKeyByTopic.
protobufMessageNameForKey: my.DefaultKeyType
# protobufMessageNameForKeyByTopic is a mapping of topic names to protobuf types.
# This mapping is optional and is used to deserialize the Kafka message's key.
# If a protobuf type is not found for a topic's key, the key is deserialized as a string,
# unless protobufMessageNameForKey is specified.
protobufMessageNameForKeyByTopic:
topic1: my.KeyType1
```
The same config in flattened form (for docker-compose):
```text
kafka.clusters.0.protobufFiles.0: /path/to/my.proto
kafka.clusters.0.protobufFiles.1: /path/to/another.proto
kafka.clusters.0.protobufMessageName: my.DefaultValType
kafka.clusters.0.protobufMessageNameByTopic.topic1: my.Type1
kafka.clusters.0.protobufMessageNameByTopic.topic2: my.Type2
kafka.clusters.0.protobufMessageNameForKey: my.DefaultKeyType
kafka.clusters.0.protobufMessageNameForKeyByTopic.topic1: my.KeyType1
```

View file

@ -0,0 +1,58 @@
# How to configure SASL SCRAM Authentication
You can pass SASL configs in the `properties` section for each cluster.
## Examples:
Please replace
- <KAFKA_NAME> with cluster name
- <KAFKA_URL> with broker list
- <KAFKA_USERNAME> with username
- <KAFKA_PASSWORD> with password
### Running From Docker Image
```sh
docker run -p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME> \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
-e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
-e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512 \
-e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";' \
-d provectuslabs/kafka-ui:latest
```
### Running From Docker-compose file
```yaml
version: '3.4'
services:
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "888:8080"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME>
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL>
- KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
- KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512
- KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
- KAFKA_CLUSTERS_0_PROPERTIES_PROTOCOL=SASL
```
### Configuring by application.yaml
```yaml
kafka:
clusters:
- name: local
bootstrapServers: <KAFKA_URL>
properties:
security.protocol: SASL_SSL
sasl.mechanism: SCRAM-SHA-512
sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
```

View file

@ -0,0 +1,7 @@
## Connecting to a Secure Broker
The app supports TLS (SSL) and SASL connections for [encryption and authentication](http://kafka.apache.org/090/documentation.html#security). <br/>
### Running From Docker-compose file
See [this](/documentation/compose/kafka-ssl.yml) docker-compose file for an SSL-enabled Kafka reference; a rough sketch of the relevant settings follows below.
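A rough compose-style sketch of the SSL-related settings (the linked compose file remains the authoritative reference). The property names are standard Kafka client configs passed through the cluster's `properties`; the mount path, file names and passwords below are placeholders:
```yaml
services:
  kafka-ui:
    image: provectuslabs/kafka-ui
    ports:
      - "8080:8080"
    volumes:
      - ./secrets:/secrets:ro   # hypothetical location of the truststore/keystore files
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL>
      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SSL
      - KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/secrets/kafka.truststore.jks
      - KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD=<TRUSTSTORE_PASSWORD>
      # only needed if the broker requires client (mutual TLS) authentication:
      - KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION=/secrets/kafka.keystore.jks
      - KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD=<KEYSTORE_PASSWORD>
```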

View file

@ -0,0 +1,71 @@
# How to configure SSO
SSO additionally requires TLS to be configured for the application. In this example we will use a self-signed certificate; if you use a CA-signed certificate, skip step 1.
## Step 1
In this step we generate a self-signed PKCS12 keypair.
``` bash
mkdir cert
keytool -genkeypair -alias ui-for-apache-kafka -keyalg RSA -keysize 2048 \
-storetype PKCS12 -keystore cert/ui-for-apache-kafka.p12 -validity 3650
```
## Step 2
Create a new application in any SSO provider; we will continue with [Auth0](https://auth0.com).
<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-new-app.png" width="70%"/>
After that, you need to provide callback URLs; in our case we will use `https://127.0.0.1:8080/login/oauth2/code/auth0`
<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-configuration.png" width="70%"/>
These are the main parameters required for enabling SSO:
<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-parameters.png" width="70%"/>
## Step 3
To launch UI for Apache Kafka with TLS and SSO enabled, run the following:
``` bash
docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=LOGIN_FORM \
-e SECURITY_BASIC_ENABLED=true \
-e SERVER_SSL_KEY_STORE_TYPE=PKCS12 \
-e SERVER_SSL_KEY_STORE=/opt/cert/ui-for-apache-kafka.p12 \
-e SERVER_SSL_KEY_STORE_PASSWORD=123456 \
-e SERVER_SSL_KEY_ALIAS=ui-for-apache-kafka \
-e SERVER_SSL_ENABLED=true \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
-e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
-e TRUST_STORE=/opt/cert/ui-for-apache-kafka.p12 \
-e TRUST_STORE_PASSWORD=123456 \
provectuslabs/kafka-ui:latest
```
With a trusted CA-signed SSL certificate and SSL termination outside of the application, you can pass only the SSO-related environment variables:
``` bash
docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=OAUTH2 \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
-e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
provectuslabs/kafka-ui:latest
```
## Step 4 (Load Balancer HTTP) (optional)
If you're using a load balancer/proxy with HTTP between the proxy and the app, you might want to set `server.forward-headers-strategy` to `native` as well (`SERVER_FORWARDHEADERSSTRATEGY=native`, or via `application.yaml` as shown below); for more info refer to [this issue](https://github.com/provectus/kafka-ui/issues/1017).
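The same setting expressed in `application.yaml` (a standard Spring Boot property, equivalent to the env var above):
```yaml
server:
  forward-headers-strategy: native
```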
## Step 5 (Azure) (optional)
For Azure AD (Office365) OAUTH2 you'll want to add additional environment variables:
```bash
docker run -p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME="${cluster_name}" \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="${kafka_listeners}" \
-e KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS="${kafka_connect_servers}" \
-e AUTH_TYPE=OAUTH2 \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
-e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE="https://graph.microsoft.com/User.Read" \
-e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI="https://login.microsoftonline.com/{tenant-id}/v2.0" \
-d provectuslabs/kafka-ui:latest
```
Note that the scope is created by default when the application registration is done in the Azure portal.
You'll need to update the application registration manifest to include `"accessTokenAcceptedVersion": 2`, as shown in the excerpt below.
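A minimal excerpt of the app registration manifest with only that field shown (all other manifest fields omitted):
```json
{
  "accessTokenAcceptedVersion": 2
}
```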

View file

@ -0,0 +1,169 @@
## Serialization, deserialization and custom plugins
Kafka-ui supports multiple ways to serialize/deserialize data.
### Int32, Int64, UInt32, UInt64
Big-endian 4-/8-byte representation of signed/unsigned integers.
### Base64
Base64 (RFC 4648) binary data representation. Can be useful when the actual content is not important, but exactly the same (byte-wise) key/value needs to be sent.
### String
Treats binary data as a string in specified encoding. Default encoding is UTF-8.
Class name: `com.provectus.kafka.ui.serdes.builtin.StringSerde`
Sample configuration (if you want to override the default configuration):
```yaml
kafka:
clusters:
- name: Cluster1
# Other Cluster configuration omitted ...
serdes:
# registering String serde with custom config
- name: AsciiString
className: com.provectus.kafka.ui.serdes.builtin.StringSerde
properties:
encoding: "ASCII"
        # overriding the built-in String serde config
- name: String
properties:
encoding: "UTF-16"
```
### Protobuf
Class name: `com.provectus.kafka.ui.serdes.builtin.ProtobufFileSerde`
Sample configuration:
```yaml
kafka:
clusters:
- name: Cluster1
# Other Cluster configuration omitted ...
serdes:
- name: ProtobufFile
properties:
# path to the protobuf schema files
protobufFiles:
- path/to/my.proto
- path/to/another.proto
# default protobuf type that is used for KEY serialization/deserialization
# optional
protobufMessageNameForKey: my.Type1
# mapping of topic names to protobuf types, that will be used for KEYS serialization/deserialization
# optional
protobufMessageNameForKeyByTopic:
topic1: my.KeyType1
topic2: my.KeyType2
# default protobuf type that is used for VALUE serialization/deserialization
# optional, if not set - first type in file will be used as default
protobufMessageName: my.Type1
# mapping of topic names to protobuf types, that will be used for VALUES serialization/deserialization
# optional
protobufMessageNameByTopic:
topic1: my.Type1
"topic.2": my.Type2
```
Docker-compose sample for Protobuf serialization is [here](../compose/kafka-ui-serdes.yaml).
Legacy configuration for protobuf is [here](Protobuf.md).
### SchemaRegistry
The SchemaRegistry serde is automatically configured if schema registry properties are set at the cluster level.
But you can add new SchemaRegistry-typed serdes that connect to another schema registry instance.
Class name: `com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde`
Sample configuration:
```yaml
kafka:
clusters:
- name: Cluster1
# this url will be used by "SchemaRegistry" by default
schemaRegistry: http://main-schema-registry:8081
serdes:
- name: AnotherSchemaRegistry
className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
properties:
url: http://another-schema-registry:8081
# auth properties, optional
username: nameForAuth
password: P@ssW0RdForAuth
# and also add another SchemaRegistry serde
- name: ThirdSchemaRegistry
className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
properties:
url: http://another-yet-schema-registry:8081
```
## Setting serdes for specific topics
You can specify a preferred serde for a topic's key/value. This serde will be chosen by default in the UI on the topic's view/produce pages.
To do so, set the `topicKeysPattern`/`topicValuesPattern` properties for the selected serde. Kafka-ui will choose the first serde that matches the specified pattern.
Sample configuration:
```yaml
kafka:
clusters:
- name: Cluster1
serdes:
- name: String
topicKeysPattern: click-events|imp-events
- name: Int64
topicKeysPattern: ".*-events"
- name: SchemaRegistry
topicValuesPattern: click-events|imp-events
```
## Default serdes
You can specify which serde will be chosen in the UI by default if no other serde is selected via the `topicKeysPattern`/`topicValuesPattern` settings.
Sample configuration:
```yaml
kafka:
clusters:
- name: Cluster1
defaultKeySerde: Int32
defaultValueSerde: String
serdes:
- name: Int32
topicKeysPattern: click-events|imp-events
```
## Fallback
If the selected serde can't be applied (an exception was thrown), the fallback serde (String serde with UTF-8 encoding) is applied. Such messages are specially highlighted in the UI.
## Custom pluggable serde registration
You can implement your own serde and register it in the kafka-ui application.
To do so:
1. Add the `kafka-ui-serde-api` dependency (should be downloadable via Maven Central); a Maven sketch follows this list
2. Implement the `com.provectus.kafka.ui.serde.api.Serde` interface. See the javadoc for implementation requirements.
3. Pack your serde into an uber jar, or provide a directory with a no-dependency jar and its dependency jars
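A minimal Maven dependency sketch for step 1. The `com.provectus` group id is an assumption (it matches the project's organization); replace the version with the latest one published on Maven Central:
```xml
<!-- group id assumed; check Maven Central for the current coordinates and version -->
<dependency>
  <groupId>com.provectus</groupId>
  <artifactId>kafka-ui-serde-api</artifactId>
  <version>1.0.0</version> <!-- placeholder: use the latest release -->
</dependency>
```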
Example pluggable serdes:
https://github.com/provectus/kafkaui-smile-serde
https://github.com/provectus/kafkaui-glue-sr-serde
Sample configuration:
```yaml
kafka:
clusters:
- name: Cluster1
serdes:
- name: MyCustomSerde
className: my.lovely.org.KafkaUiSerde
filePath: /var/lib/kui-serde/my-kui-serde.jar
- name: MyCustomSerde2
className: my.lovely.org.KafkaUiSerde2
filePath: /var/lib/kui-serde2
properties:
prop1: v1
```

Some files were not shown because too many files have changed in this diff