# Compare commits

`master` ... `feature/me` (363 commits)
Commits (SHA1; author, message, and date did not survive extraction):

83b5a60cc0, 3dc4446321, 53a6553765, fc97dfa874, 68f08a0c9b, cc12814a95, 5d5358010b, de2f06ccf8,
ff106a2061, c00cb320cd, 8a1e9ad8e8, 39bb860f8e, f66d234d83, 68a7268f8b, aca3d25dc8, 0616883fee,
59584ed369, bbb739af92, 145bf07b5d, ceb821acdf, d2b0cc51e3, 9e7bc02c8a, 2836b2f5d2, a47848f809,
5c9fb994a4, 14efe9da1e, 6676747606, b0583a3ca7, 4ec7975b2e, c05abc1e0a, 729ca79581, 80024c8758,
0d6f293ab9, 8f2a29d15d, 552691fc5d, 342b534ac9, 2051f6f653, b2b02a5d60, d7eb3ba99e, 7de883d3ab,
4519d9a48c, cca2c96997, 844eb17d7a, 37a6e62684, 4f211b39ba, 8d35761b8d, b12a0634a0, 8d402798c5,
ed9f91fd8a, d2a5acc82d, 7a82079471, 9acbf2b681, 5f89e3b97e, 1df8625fc8, c8ad262d77, bdbbdcccbe,
3114509ebf, 6224b12ed3, 78e53d7d93, f9e89661d7, 2a61b97fab, b32ab01436, fa9547b95a, d915de4fd8,
150fc21fb8, ba18f3b042, ac09efcd34, 333eae2475, 69ebd3d52b, 6a40146fb1, 4515ecaf41, 92157bdd39,
8126607b91, 77f1ec9490, 3cde6c21ec, 15f4543402, c96a0c6be5, b2c3fcc321, 1cd303a90b, 895d27a306,
476cbfb691, 2db89593a7, 0b99f745b0, 7eaae31345, ca2d53f936, a32272d07e, 32cd55928a, d4001b5a39,
f124fa632d, 8ae8ae40a4, 5f231c7681, 17cde82dff, 9ab4580c47, d572e43b4f, ab58618d83, 216c87670d,
e57b0bac43, 0c732db436, d26490e82e, be2f9f0605, 50b9c56112, 401c9f12c1, b700ac3991, b9bbb1a823,
81805703c8, 6b67313d1a, 9549f68d7e, 8337c9c183, b1ac3482db, cdb4f84e23, 4134d68316, 742e6eed3e,
328d91de8b, c743067ffa, 7f7242eb8b, 593ef7ec9c, 55ed7f4821, e60fe062b6, 0a35038826, fa65ec2753,
f84bbb9ebb, d14b935765, f2ef0c2793, c998e17e83, d0088490a4, 6fe6165427, 2ac8646769, af2cff20b6,
2fb05ca947, fdd4947142, 5c59239456, 9a2f6bfc8e, 5d23f2a4ed, 20bb274f0e, 03b7d1bd60, 100bb1dac6,
c355955641, 4b724fd852, cd9bc43d2e, 73bd6ca3a5, 8a68ba0778, e118aaba3d, 5771c11316, 29d91bca4b,
7e47906d88, f19abb2036, 61bf71f9b7, 004de798e4, 80b748b02e, 71a7a1ec84, 0099169a2b, ab9d0e2b3f,
f22c910f5c, 1b9c189bfa, 63f71b8a05, 17ea464ec1, f7900ba478, f7d85d86e6, 62bee1ced8, baeb494f53,
ba6d6b2b1f, c7cb7a4027, 902f11a1d9, 0796bf0112, 6a50a8ecee, 78cc4dd981, fdd9ad94c1, 1c35ded909,
e7429ce6c6, 1d8c6197ac, 52a42e698e, aa7429eeba, 3ca417f64a, 43ec02ce30, 725c95f348, 3ef5a9f492,
cfcfb851c6, c813e74609, e31cd2e442, bc85924d7d, 9ac8549d7d, f6fe14cea5, a1e7a20887, 97a694b3f0,
61fb62276e, db86942e47, 5e539f1ba8, 147b539c37, 379d9926df, 86a7ba44fb, 727f38401b, 690dcd3f74,
7857bd5000, abfdf97a9f, c7a7921b82, 744bdb32a3, da3932e342, 4e25522078, 601bd6bbf5, 5efb380c42,
039f50273e, 0278700edb, eec9fcb5f1, ad9d7dec2c, 1b2827fb2f, fb515871cb, 8ecb719e9b, aed6c16496,
a33e7064ee, 0e1f4ddfcf, 7365cfe394, 734d4ccdf7, 8783da313f, 5dd690aa24, bd782213d1, c89953435a,
73a6d7cade, a1f955ab7c, 47c8f8eeb5, 838fb604d5, 814035e254, 40c198f0fc, 96a577a98c, a640a52fe6,
0f5a9d7a63, 39aca05fe3, 696cde7dcc, c148f112a4, 98f1f6ebcd, 89019dae19, 29f49b667d, 005e74f248,
dbdced5bab, 94da2f4e7f, b3240d9057, 83f9432569, a2741291bf, 5c357f94fd, ee1cd72dd5, 7a47e6e8ba,
a3daa45ccb, ca225440d8, e31580a16a, 87a8f08ae1, 1bcdec4acc, e3ee4c7fa7, 0ff7e63386, dd4b653b8e,
de21721e00, c9488422c2, ecc8db1948, 866966a638, d06f77ad53, bfb80f36b3, 15b78c0a2e, c79660b32a,
df8a16e8a2, ff759fa455, 8348241e3d, 4a1e987a1d, ef0dacb0c3, 4623f8d7b8, 20cc1e489b, 9f1a4df0a1,
7e040818a4, dc08701246, 58102faa16, 58eca230fc, deb3dba29e, acfe7a4afc, 8d3bac8834, 84d3b329ba,
d8289d2ee6, 75a6282a84, 5b726e84fa, bd6394cb14, e2dc12dc02, 270d52882e, d42e911379, 4d03802a5d,
98580551f6, 36112fa26b, 738136eed2, 5cdd44daee, b890dc34b6, 1117b296a7, 4c2d37dd52, 51f89cb900,
41dabf3858, 3d1e7a0979, f8083e25b7, f51da4bb61, 4a7893ff1b, c153d6f634, c5d6896ae1, cdb5590025,
76fbaa7ead, d5a5f66528, c5ac7fbe11, 1f2c3bcb73, b366d3a520, 8ac760119c, f2a2574ddc, e72f6d6d5d,
334ba3df99, 4d20cb6958, b3f74cbfea, e261143bb4, be151b4d82, 8889463f7b, f193e5fed7, 18c046af5b,
b5e3d1f928, 59837394fb, ffa49ebb3d, c8619268cd, 9b76d59513, ad5b0d44f0, ea348102c2, 526b2915f5,
741bbc1be1, e584b15d97, c9f0298000, fadd307564, 37e6f021b3, ba99c20ad9, f2ec4d76de, a87b31aca1,
cbb166026d, ebd25c61b1, eeef330fc0, 8663ef6e84, 45a6e73d29, 6ffcd845fa, 9e1e9b3799, 398181e0d2,
fdf8db98a2, 9f9bd36b0f, d1f04e05bb, 1b09251419, 640777dbda, 43fcf6dce1, 0e671a9396, ded4e26825,
9cfa184cea, 0ff8c0d4fb, ceb9c5dd85, 5d31189609, c2d7d70a8e, f55376b40b, 799c2c455a, 7c6d04bca0,
b4e7f3763b, 6096ad1d49, a03b6844e0, 578468d090, 57585891d1, aeda502b09, 566dab078f, 356be08fc7,
cf4571b964, 8ffd542a80, 7fc94ecdbf
847 changed files with 33693 additions and 20547 deletions
**.devcontainer/devcontainer.json** (new file, +36)

```diff
@@ -0,0 +1,36 @@
+{
+    "name": "Java",
+
+    "image": "mcr.microsoft.com/devcontainers/java:0-17",
+
+    "features": {
+        "ghcr.io/devcontainers/features/java:1": {
+            "version": "none",
+            "installMaven": "true",
+            "installGradle": "false"
+        },
+        "ghcr.io/devcontainers/features/docker-in-docker:2": {}
+    },
+
+    // Use 'forwardPorts' to make a list of ports inside the container available locally.
+    // "forwardPorts": [],
+
+    // Use 'postCreateCommand' to run commands after the container is created.
+    // "postCreateCommand": "java -version",
+
+    "customizations": {
+        "vscode": {
+            "extensions" : [
+                "vscjava.vscode-java-pack",
+                "vscjava.vscode-maven",
+                "vscjava.vscode-java-debug",
+                "EditorConfig.EditorConfig",
+                "ms-azuretools.vscode-docker",
+                "antfu.vite",
+                "ms-kubernetes-tools.vscode-kubernetes-tools",
+                "github.vscode-pull-request-github"
+            ]
+        }
+    }
+
+}
```
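The devcontainer.json added above is JSONC: it carries `//` line comments, which a strict JSON parser rejects. A minimal Python sketch of stripping such comments before parsing; the helper name and the sample snippet are illustrative, not part of the change:

```python
import json

def strip_jsonc_comments(text: str) -> str:
    """Remove // line comments while leaving '//' inside string literals intact."""
    out, in_string, escaped, i = [], False, False, 0
    while i < len(text):
        ch = text[i]
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
            out.append(ch)
        elif ch == "/" and text[i:i + 2] == "//":
            # skip to end of line; the newline itself is kept on the next pass
            while i < len(text) and text[i] != "\n":
                i += 1
            continue
        else:
            out.append(ch)
        i += 1
    return "".join(out)

jsonc = '''{
    "image": "mcr.microsoft.com/devcontainers/java:0-17",
    // "forwardPorts": [],
    "features": {"ghcr.io/devcontainers/features/docker-in-docker:2": {}}
}'''
config = json.loads(strip_jsonc_comments(jsonc))
print(config["image"])  # mcr.microsoft.com/devcontainers/java:0-17
```

Note this sketch handles only `//` comments; real JSONC also permits `/* */` blocks and trailing commas.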
**.github/CODEOWNERS** (4 changes)

```diff
@@ -14,5 +14,5 @@
 # TESTS
 /kafka-ui-e2e-checks/   @provectus/kafka-qa
 
-# HELM CHARTS
-/charts/                @provectus/kafka-devops
+# INFRA
+/.github/workflows/     @provectus/kafka-devops
```
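CODEOWNERS resolves ownership with last-match-wins semantics: every rule is checked and the final matching pattern determines the owners. A simplified Python sketch of that rule, using plain prefix matching (a small subset of the real gitignore-style patterns); the helper is hypothetical:

```python
# Rules mirror the entries visible in the hunk above.
RULES = [
    ("/kafka-ui-e2e-checks/", ["@provectus/kafka-qa"]),
    ("/.github/workflows/", ["@provectus/kafka-devops"]),
]

def owners_for(path: str, rules=RULES):
    """Return the owners of `path`; the LAST matching rule wins."""
    owners = []
    for prefix, team in rules:
        if ("/" + path).startswith(prefix):
            owners = team  # later rules override earlier ones
    return owners

print(owners_for(".github/workflows/backend.yml"))  # ['@provectus/kafka-devops']
print(owners_for("README.md"))                      # []
```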
**.github/ISSUE_TEMPLATE/bug.yml** (new file, +92)

```diff
@@ -0,0 +1,92 @@
+name: "\U0001F41E Bug report"
+description: File a bug report
+labels: ["status/triage", "type/bug"]
+assignees: []
+
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Hi, thanks for raising the issue(-s), all contributions really matter!
+        Please, note that we'll close the issue without further explanation if you don't follow
+        this template and don't provide the information requested within this template.
+
+  - type: checkboxes
+    id: terms
+    attributes:
+      label: Issue submitter TODO list
+      description: By you checking these checkboxes we can be sure you've done the essential things.
+      options:
+        - label: I've looked up my issue in [FAQ](https://docs.kafka-ui.provectus.io/faq/common-problems)
+          required: true
+        - label: I've searched for an already existing issues [here](https://github.com/provectus/kafka-ui/issues)
+          required: true
+        - label: I've tried running `master`-labeled docker image and the issue still persists there
+          required: true
+        - label: I'm running a supported version of the application which is listed [here](https://github.com/provectus/kafka-ui/blob/master/SECURITY.md)
+          required: true
+
+  - type: textarea
+    attributes:
+      label: Describe the bug (actual behavior)
+      description: A clear and concise description of what the bug is. Use a list, if there is more than one problem
+    validations:
+      required: true
+
+  - type: textarea
+    attributes:
+      label: Expected behavior
+      description: A clear and concise description of what you expected to happen
+    validations:
+      required: false
+
+  - type: textarea
+    attributes:
+      label: Your installation details
+      description: |
+        How do you run the app? Please provide as much info as possible:
+        1. App version (commit hash in the top left corner of the UI)
+        2. Helm chart version, if you use one
+        3. Your application config. Please remove the sensitive info like passwords or API keys.
+        4. Any IAAC configs
+    validations:
+      required: true
+
+  - type: textarea
+    attributes:
+      label: Steps to reproduce
+      description: |
+        Please write down the order of the actions required to reproduce the issue.
+        For the advanced setups/complicated issue, we might need you to provide
+        a minimal [reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).
+    validations:
+      required: true
+
+  - type: textarea
+    attributes:
+      label: Screenshots
+      description: |
+        If applicable, add screenshots to help explain your problem
+    validations:
+      required: false
+
+  - type: textarea
+    attributes:
+      label: Logs
+      description: |
+        If applicable, *upload* screenshots to help explain your problem
+    validations:
+      required: false
+
+  - type: textarea
+    attributes:
+      label: Additional context
+      description: |
+        Add any other context about the problem here. E.G.:
+        1. Are there any alternative scenarios (different data/methods/configuration/setup) you have tried?
+           Were they successful or the same issue occurred? Please provide steps as well.
+        2. Related issues (if there are any).
+        3. Logs (if available)
+        4. Is there any serious impact or behaviour on the end-user because of this issue, that can be overlooked?
+    validations:
+      required: false
```
**.github/ISSUE_TEMPLATE/bug_report.md** (deleted, -62)

```diff
@@ -1,62 +0,0 @@
----
-name: "\U0001F41E Bug report"
-about: Create a bug report
-title: ''
-labels: status/triage, type/bug
-assignees: ''
-
----
-
-<!--
-
-Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
-https://github.com/provectus/kafka-ui/issues
-https://github.com/provectus/kafka-ui/discussions
-
--->
-
-<!--
-Please follow the naming conventions for bugs:
-<Feature/Area/Scope> : <Compact, but specific problem summary>
-Avoid generic titles, like “Topics: incorrect layout of message sorting drop-down list”. Better use something like: “Topics: Message sorting drop-down list overlaps the "Submit" button”.
-
--->
-
-**Describe the bug** (Actual behavior)
-<!--(A clear and concise description of what the bug is.Use a list, if there is more than one problem)-->
-
-**Expected behavior**
-<!--(A clear and concise description of what you expected to happen.)-->
-
-**Set up**
-<!--
-WE MIGHT CLOSE THE ISSUE without further explanation IF YOU DON'T PROVIDE THIS INFORMATION.
-
-How do you run the app? Please provide as much info as possible:
-1. App version (docker image version or check commit hash in the top left corner in UI)
-2. Helm chart version, if you use one
-3. Any IAAC configs
--->
-
-
-**Steps to Reproduce**
-<!-- We'd like you to provide an example setup (via docker-compose, helm, etc.)
-to reproduce the problem, especially with a complex setups. -->
-
-1.
-
-**Screenshots**
-<!--
-(If applicable, add screenshots to help explain your problem)
--->
-
-
-**Additional context**
-<!--
-Add any other context about the problem here. E.g.:
-1. Are there any alternative scenarios (different data/methods/configuration/setup) you have tried?
-Were they successfull or same issue occured? Please provide steps as well.
-2. Related issues (if there are any).
-3. Logs (if available)
-4. Is there any serious impact or behaviour on the end-user because of this issue, that can be overlooked?
--->
```
**.github/ISSUE_TEMPLATE/config.yml** (new file, +14)

```diff
@@ -0,0 +1,14 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Report helm issue
+    url: https://github.com/provectus/kafka-ui-charts
+    about: Our helm charts are located in another repo. Please raise issues/PRs regarding charts in that repo.
+  - name: Official documentation
+    url: https://docs.kafka-ui.provectus.io/
+    about: Before reaching out for support, please refer to our documentation. Read "FAQ" and "Common problems", also try using search there.
+  - name: Community Discord
+    url: https://discord.gg/4DWzD7pGE5
+    about: Chat with other users, get some support or ask questions.
+  - name: GitHub Discussions
+    url: https://github.com/provectus/kafka-ui/discussions
+    about: An alternative place to ask questions or to get some support.
```
**.github/ISSUE_TEMPLATE/feature.yml** (new file, +66)

```diff
@@ -0,0 +1,66 @@
+name: "\U0001F680 Feature request"
+description: Propose a new feature
+labels: ["status/triage", "type/feature"]
+assignees: []
+
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Hi, thanks for raising the issue(-s), all contributions really matter!
+        Please, note that we'll close the issue without further explanation if you don't follow
+        this template and don't provide the information requested within this template.
+
+  - type: checkboxes
+    id: terms
+    attributes:
+      label: Issue submitter TODO list
+      description: By you checking these checkboxes we can be sure you've done the essential things.
+      options:
+        - label: I've searched for an already existing issues [here](https://github.com/provectus/kafka-ui/issues)
+          required: true
+        - label: I'm running a supported version of the application which is listed [here](https://github.com/provectus/kafka-ui/blob/master/SECURITY.md) and the feature is not present there
+          required: true
+
+  - type: textarea
+    attributes:
+      label: Is your proposal related to a problem?
+      description: |
+        Provide a clear and concise description of what the problem is.
+        For example, "I'm always frustrated when..."
+    validations:
+      required: false
+
+  - type: textarea
+    attributes:
+      label: Describe the feature you're interested in
+      description: |
+        Provide a clear and concise description of what you want to happen.
+    validations:
+      required: true
+
+  - type: textarea
+    attributes:
+      label: Describe alternatives you've considered
+      description: |
+        Let us know about other solutions you've tried or researched.
+    validations:
+      required: false
+
+  - type: input
+    attributes:
+      label: Version you're running
+      description: |
+        Please provide the app version you're currently running:
+        1. App version (commit hash in the top left corner of the UI)
+    validations:
+      required: true
+
+  - type: textarea
+    attributes:
+      label: Additional context
+      description: |
+        Is there anything else you can add about the proposal?
+        You might want to link to related issues here, if you haven't already.
+    validations:
+      required: false
```
**.github/ISSUE_TEMPLATE/feature_request.md** (deleted, -46)

```diff
@@ -1,46 +0,0 @@
----
-name: "\U0001F680 Feature request"
-about: Propose a new feature
-title: ''
-labels: status/triage, type/feature
-assignees: ''
-
----
-
-<!--
-
-Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
-https://github.com/provectus/kafka-ui/issues
-https://github.com/provectus/kafka-ui/discussions
-
--->
-
-### Which version of the app are you running?
-<!-- Please provide docker image version or check commit hash in the top left corner in UI) -->
-
-### Is your proposal related to a problem?
-
-<!--
-Provide a clear and concise description of what the problem is.
-For example, "I'm always frustrated when..."
--->
-
-### Describe the solution you'd like
-
-<!--
-Provide a clear and concise description of what you want to happen.
--->
-
-### Describe alternatives you've considered
-
-<!--
-Let us know about other solutions you've tried or researched.
--->
-
-### Additional context
-
-<!--
-Is there anything else you can add about the proposal?
-You might want to link to related issues here, if you haven't already.
--->
-
```
**.github/ISSUE_TEMPLATE/k8s.md** (deleted, -52)

```diff
@@ -1,52 +0,0 @@
----
-name: "⎈ K8s/Helm problem report"
-about: Report a problem with k8s/helm charts/etc
-title: ''
-labels: scope/k8s, status/triage
-assignees: azatsafin
-
----
-
-<!--
-
-Don't forget to check for existing issues/discussions regarding your proposal. We might already have it.
-https://github.com/provectus/kafka-ui/issues
-https://github.com/provectus/kafka-ui/discussions
-
--->
-
-**Describe the bug**
-<!--(A clear and concise description of what the bug is.)-->
-
-
-**Set up**
-<!--
-How do you run the app? Please provide as much info as possible:
-1. App version (docker image version or check commit hash in the top left corner in UI)
-2. Helm chart version, if you use one
-3. Any IAAC configs
-
-We might close the issue without further explanation if you don't provide such information.
--->
-
-
-**Steps to Reproduce**
-Steps to reproduce the behavior:
-
-1.
-
-**Expected behavior**
-<!--
-(A clear and concise description of what you expected to happen)
--->
-
-**Screenshots**
-<!--
-(If applicable, add screenshots to help explain your problem)
--->
-
-
-**Additional context**
-<!--
-(Add any other context about the problem here)
--->
```
**.github/ISSUE_TEMPLATE/question.md** (deleted, -16)

```diff
@@ -1,16 +0,0 @@
----
-name: "❓ Question"
-about: Ask a question
-title: ''
-
----
-
-<!--
-
-To ask a question, please either:
-1. Open up a discussion (https://github.com/provectus/kafka-ui/discussions)
-2. Join us on discord (https://discord.gg/4DWzD7pGE5) and ask there.
-
-Don't forget to check/search for existing issues/discussions.
-
--->
```
**.github/dependabot.yml** (4 changes)

```diff
@@ -8,8 +8,6 @@ updates:
       timezone: Europe/Moscow
-    reviewers:
-      - "Haarolean"
     assignees:
       - "Haarolean"
     labels:
       - "scope/backend"
       - "type/dependencies"
@@ -99,8 +97,6 @@ updates:
       timezone: Europe/Moscow
-    reviewers:
-      - "Haarolean"
     assignees:
       - "Haarolean"
     labels:
       - "scope/infrastructure"
       - "type/dependencies"
```
**.github/release_drafter.yaml** (8 changes)

```diff
@@ -16,18 +16,26 @@ exclude-labels:
   - 'type/refactoring'
 
 categories:
+  - title: '🚩 Breaking Changes'
+    labels:
+      - 'impact/changelog'
+
   - title: '⚙️Features'
     labels:
       - 'type/feature'
 
   - title: '🪛Enhancements'
     labels:
       - 'type/enhancement'
 
   - title: '🔨Bug Fixes'
     labels:
       - 'type/bug'
 
+  - title: 'Security'
+    labels:
+      - 'type/security'
+
   - title: '⎈ Helm/K8S Changes'
     labels:
       - 'scope/k8s'
```
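The categories above drive how release-drafter groups merged PRs into the release notes. A rough Python sketch of the bucketing semantics, assuming excluded labels drop the PR entirely and the first category whose labels intersect the PR's labels wins; this is an illustration, not the action's actual code:

```python
# Labels mirror the release_drafter.yaml hunk above.
EXCLUDE = {"type/refactoring"}
CATEGORIES = [
    ("🚩 Breaking Changes", {"impact/changelog"}),
    ("⚙️Features", {"type/feature"}),
    ("🪛Enhancements", {"type/enhancement"}),
    ("🔨Bug Fixes", {"type/bug"}),
    ("Security", {"type/security"}),
    ("⎈ Helm/K8S Changes", {"scope/k8s"}),
]

def categorize(pr_labels):
    """Return the changelog section title for a PR, or None if excluded."""
    labels = set(pr_labels)
    if labels & EXCLUDE:
        return None  # excluded from the draft entirely
    for title, wanted in CATEGORIES:
        if labels & wanted:
            return title
    return "Uncategorized"

print(categorize(["type/bug", "scope/backend"]))  # 🔨Bug Fixes
print(categorize(["type/refactoring"]))           # None
```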
**.github/workflows/aws_publisher.yaml** (6 changes)

```diff
@@ -1,4 +1,4 @@
-name: AWS Marketplace Publisher
+name: "Infra: Release: AWS Marketplace Publisher"
 on:
   workflow_dispatch:
     inputs:
@@ -24,14 +24,14 @@ jobs:
       - name: Clone infra repo
         run: |
           echo "Cloning repo..."
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git --branch ${{ github.event.inputs.KafkaUIInfraBranch }}
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch ${{ github.event.inputs.KafkaUIInfraBranch }}
           echo "Cd to packer DIR..."
           cd kafka-ui-infra/ami
           echo "WORK_DIR=$(pwd)" >> $GITHUB_ENV
           echo "Packer will be triggered in this dir $WORK_DIR"
 
       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_AMI_PUBLISH_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_AMI_PUBLISH_KEY_SECRET }}
```
**.github/workflows/backend.yml** (7 changes)

```diff
@@ -1,4 +1,4 @@
-name: backend
+name: "Backend: PR/master build & test"
 on:
   push:
     branches:
@@ -8,6 +8,9 @@ on:
     paths:
       - "kafka-ui-api/**"
       - "pom.xml"
+permissions:
+  checks: write
+  pull-requests: write
 jobs:
   build-and-test:
     runs-on: ubuntu-latest
@@ -29,7 +32,7 @@ jobs:
           key: ${{ runner.os }}-sonar
           restore-keys: ${{ runner.os }}-sonar
       - name: Build and analyze pull request target
-        if: ${{ github.event_name == 'pull_request_target' }}
+        if: ${{ github.event_name == 'pull_request' }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN_BACKEND }}
```
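The `paths` filter above means the backend workflow fires only when a changed file matches one of the listed globs. A rough sketch of that check in Python, assuming `**` crosses directory separators (as `fnmatch`'s `*` does); the helper name is illustrative:

```python
from fnmatch import fnmatch

# Globs mirror the workflow's `paths` filter above.
PATHS = ["kafka-ui-api/**", "pom.xml"]

def triggers(changed_files):
    """True if any changed file matches any of the path globs."""
    return any(fnmatch(f, pat) for f in changed_files for pat in PATHS)

print(triggers(["kafka-ui-api/src/Main.java"]))  # True
print(triggers(["README.md"]))                   # False
```

Note GitHub's own glob rules differ slightly from `fnmatch` (for example, a bare `*` does not cross `/` there), so this is an approximation of the semantics.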
**.github/workflows/block_merge.yml** (4 changes)

```diff
@@ -1,4 +1,4 @@
-name: Pull Request Labels
+name: "Infra: PR block merge"
 on:
   pull_request:
     types: [opened, labeled, unlabeled, synchronize]
@@ -6,7 +6,7 @@ jobs:
   block_merge:
     runs-on: ubuntu-latest
     steps:
      - uses: mheap/github-action-required-labels@v2
      - uses: mheap/github-action-required-labels@v5
       with:
         mode: exactly
         count: 0
```
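With `mode: exactly` and `count: 0`, the required-labels action passes only when the PR carries none of the configured labels, i.e. it blocks merging while any of them is present. A small Python sketch of that rule; the blocking label names are hypothetical, since the action's `labels:` input falls outside the shown hunk:

```python
# Hypothetical blocking labels; the real list is configured in the workflow.
BLOCKING = {"status/blocked", "status/triage"}

def merge_allowed(pr_labels, mode="exactly", count=0):
    """Mirror of required-labels: with mode=exactly, the number of
    matching labels must equal `count` (0 -> none may be present)."""
    matches = len(BLOCKING & set(pr_labels))
    if mode == "exactly":
        return matches == count
    return matches >= count  # "minimum" mode

print(merge_allowed(["type/bug"]))        # True
print(merge_allowed(["status/blocked"]))  # False
```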
**.github/workflows/branch-deploy.yml** (38 changes)

```diff
@@ -1,4 +1,4 @@
-name: DeployFromBranch
+name: "Infra: Feature Testing: Init env"
 on:
   workflow_dispatch:
 
@@ -10,6 +10,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
       - name: get branch name
         id: extract_branch
         run: |
@@ -43,7 +45,7 @@ jobs:
           restore-keys: |
             ${{ runner.os }}-buildx-
       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -53,7 +55,7 @@ jobs:
         uses: aws-actions/amazon-ecr-login@v1
       - name: Build and push
         id: docker_build_and_push
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
          builder: ${{ steps.buildx.outputs.name }}
          context: kafka-ui-api
@@ -71,29 +73,33 @@ jobs:
     steps:
       - name: clone
         run: |
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
       - name: create deployment
         run: |
           cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
           echo "Branch:${{ needs.build.outputs.tag }}"
           ./kafka-ui-deployment-from-branch.sh ${{ needs.build.outputs.tag }} ${{ github.event.label.name }} ${{ secrets.FEATURE_TESTING_UI_PASSWORD }}
-          git config --global user.email "kafka-ui-infra@provectus.com"
-          git config --global user.name "kafka-ui-infra"
+          git config --global user.email "infra-tech@provectus.com"
+          git config --global user.name "infra-tech"
           git add ../kafka-ui-from-branch/
           git commit -m "added env:${{ needs.build.outputs.deploy }}" && git push || true
 
-      - name: make comment with private deployment link
+      - name: update status check for private deployment
         if: ${{ github.event.label.name == 'status/feature_testing' }}
-        uses: peter-evans/create-or-update-comment@v2
+        uses: Sibz/github-status-action@v1.1.6
         with:
-          issue-number: ${{ github.event.pull_request.number }}
-          body: |
-            Custom deployment will be available at http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io
+          authToken: ${{secrets.GITHUB_TOKEN}}
+          context: "Click Details button to open custom deployment page"
+          state: "success"
+          sha: ${{ github.event.pull_request.head.sha || github.sha }}
+          target_url: "http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io"
 
-      - name: make comment with public deployment link
+      - name: update status check for public deployment
        if: ${{ github.event.label.name == 'status/feature_testing_public' }}
-        uses: peter-evans/create-or-update-comment@v2
+        uses: Sibz/github-status-action@v1.1.6
         with:
-          issue-number: ${{ github.event.pull_request.number }}
-          body: |
-            Custom deployment will be available at http://${{ needs.build.outputs.tag }}.kafka-ui.provectus.io in 5 minutes
+          authToken: ${{secrets.GITHUB_TOKEN}}
+          context: "Click Details button to open custom deployment page"
+          state: "success"
+          sha: ${{ github.event.pull_request.head.sha || github.sha }}
+          target_url: "http://${{ needs.build.outputs.tag }}.internal.kafka-ui.provectus.io"
```
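The Sibz/github-status-action steps above ultimately call GitHub's commit status endpoint, `POST /repos/{owner}/{repo}/statuses/{sha}`. A minimal Python sketch that builds (but does not send) such a request; the owner, repo, sha, and token values are placeholders:

```python
import json
import urllib.request

def build_status_request(owner, repo, sha, token, state, context, target_url):
    """Construct the POST request the status-check step would issue."""
    body = json.dumps(
        {"state": state, "context": context, "target_url": target_url}
    ).encode()
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# Placeholder values only; nothing is sent over the network here.
req = build_status_request(
    "provectus", "kafka-ui", "deadbeef", "TOKEN",
    state="success",
    context="Click Details button to open custom deployment page",
    target_url="http://pr0000.internal.kafka-ui.provectus.io",
)
print(req.get_method(), req.full_url)
```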
**.github/workflows/branch-remove.yml** (14 changes)

```diff
@@ -1,4 +1,4 @@
-name: RemoveCustomDeployment
+name: "Infra: Feature Testing: Destroy env"
 on:
   workflow_dispatch:
   pull_request:
@@ -11,18 +11,12 @@ jobs:
       - uses: actions/checkout@v3
       - name: clone
         run: |
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
       - name: remove env
         run: |
           cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
           ./delete-env.sh pr${{ github.event.pull_request.number }} || true
-          git config --global user.email "kafka-ui-infra@provectus.com"
-          git config --global user.name "kafka-ui-infra"
+          git config --global user.email "infra-tech@provectus.com"
+          git config --global user.name "infra-tech"
           git add ../kafka-ui-from-branch/
           git commit -m "removed env:${{ needs.build.outputs.deploy }}" && git push || true
-      - name: make comment with deployment link
-        uses: peter-evans/create-or-update-comment@v2
-        with:
-          issue-number: ${{ github.event.pull_request.number }}
-          body: |
-            Custom deployment removed
```
.github/workflows/build-public-image.yml (vendored, 11 changed lines)

@@ -1,4 +1,4 @@
-name: Build Docker image and push
+name: "Infra: Image Testing: Deploy"
 on:
   workflow_dispatch:
   pull_request:
@@ -9,6 +9,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
       - name: get branch name
         id: extract_branch
         run: |
@@ -40,7 +42,7 @@ jobs:
           restore-keys: |
             ${{ runner.os }}-buildx-
       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -52,7 +54,7 @@ jobs:
           registry-type: 'public'
       - name: Build and push
         id: docker_build_and_push
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           builder: ${{ steps.buildx.outputs.name }}
           context: kafka-ui-api
@@ -63,11 +65,10 @@ jobs:
           cache-from: type=local,src=/tmp/.buildx-cache
           cache-to: type=local,dest=/tmp/.buildx-cache
       - name: make comment with private deployment link
-        uses: peter-evans/create-or-update-comment@v2
+        uses: peter-evans/create-or-update-comment@v3
         with:
           issue-number: ${{ github.event.pull_request.number }}
           body: |
             Image published at public.ecr.aws/provectus/kafka-ui-custom-build:${{ steps.extract_branch.outputs.tag }}
-
     outputs:
       tag: ${{ steps.extract_branch.outputs.tag }}
.github/workflows/create-branch-for-helm.yaml (vendored, 28 changed lines)

@@ -1,28 +0,0 @@
-name: prepare-helm-release
-on:
-  repository_dispatch:
-    types: [prepare-helm-release]
-jobs:
-  change-app-version:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - run: |
-          git config user.name github-actions
-          git config user.email github-actions@github.com
-      - name: Change versions
-        run: |
-          git checkout -b release-${{ github.event.client_payload.appversion}}
-          version=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
-          version=${version%.*}.$((${version##*.}+1))
-          sed -i "s/version:.*/version: ${version}/" charts/kafka-ui/Chart.yaml
-          sed -i "s/appVersion:.*/appVersion: ${{ github.event.client_payload.appversion}}/" charts/kafka-ui/Chart.yaml
-          git add charts/kafka-ui/Chart.yaml
-          git commit -m "release ${version}"
-          git push --set-upstream origin release-${{ github.event.client_payload.appversion}}
-      - name: Slack Notification
-        uses: rtCamp/action-slack-notify@v2
-        env:
-          SLACK_TITLE: "release-${{ github.event.client_payload.appversion}}"
-          SLACK_MESSAGE: "A new release of the helm chart has been prepared. Branch name: release-${{ github.event.client_payload.appversion}}"
-          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
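The one-liner `version=${version%.*}.$((${version##*.}+1))` in the removed "Change versions" step bumps the patch component of the chart version via POSIX parameter expansion. A standalone sketch (the starting value "0.7.1" is a sample, not taken from the diff):

```shell
#!/usr/bin/env sh
# Patch-version bump via POSIX parameter expansion, as in the
# removed "Change versions" step. "0.7.1" is a sample value.
version="0.7.1"
major_minor=${version%.*}    # "0.7"  - drop the last dot-separated field
patch=${version##*.}         # "1"    - keep only the last field
version=${major_minor}.$((patch + 1))
echo "$version"              # prints 0.7.2
```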
.github/workflows/cve.yaml (vendored, 4 changed lines)

@@ -40,7 +40,7 @@ jobs:
             ${{ runner.os }}-buildx-

       - name: Build docker image
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           builder: ${{ steps.buildx.outputs.name }}
           context: kafka-ui-api
@@ -55,7 +55,7 @@ jobs:
           cache-to: type=local,dest=/tmp/.buildx-cache

       - name: Run CVE checks
-        uses: aquasecurity/trivy-action@0.8.0
+        uses: aquasecurity/trivy-action@0.12.0
         with:
           image-ref: "provectuslabs/kafka-ui:${{ steps.build.outputs.version }}"
           format: "table"
.github/workflows/delete-public-image.yml (vendored, 10 changed lines)

@@ -1,4 +1,4 @@
-name: Delete Public ECR Image
+name: "Infra: Image Testing: Delete"
 on:
   workflow_dispatch:
   pull_request:
@@ -15,7 +15,7 @@ jobs:
           tag='${{ github.event.pull_request.number }}'
           echo "tag=${tag}" >> $GITHUB_OUTPUT
       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -32,9 +32,3 @@ jobs:
             --repository-name kafka-ui-custom-build \
             --image-ids imageTag=${{ steps.extract_branch.outputs.tag }} \
             --region us-east-1
-      - name: make comment with private deployment link
-        uses: peter-evans/create-or-update-comment@v2
-        with:
-          issue-number: ${{ github.event.pull_request.number }}
-          body: |
-            Image tag public.ecr.aws/provectus/kafka-ui-custom-build:${{ steps.extract_branch.outputs.tag }} has been removed
.github/workflows/documentation.yaml (vendored, 2 changed lines)

@@ -1,4 +1,4 @@
-name: Documentation
+name: "Infra: Docs: URL linter"
 on:
   pull_request:
     types:
.github/workflows/e2e-automation.yml (vendored, new file, 88 lines)

@@ -0,0 +1,88 @@
+name: "E2E: Automation suite"
+on:
+  workflow_dispatch:
+    inputs:
+      test_suite:
+        description: 'Select test suite to run'
+        default: 'regression'
+        required: true
+        type: choice
+        options:
+          - regression
+          - sanity
+          - smoke
+      qase_token:
+        description: 'Set Qase token to enable integration'
+        required: false
+        type: string
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.sha }}
+      - name: Configure AWS credentials
+        uses: aws-actions/configure-aws-credentials@v3
+        with:
+          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+          aws-region: eu-central-1
+      - name: Set up environment
+        id: set_env_values
+        run: |
+          cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
+      - name: Pull with Docker
+        id: pull_chrome
+        run: |
+          docker pull selenoid/vnc_chrome:103.0
+      - name: Set up JDK
+        uses: actions/setup-java@v3
+        with:
+          java-version: '17'
+          distribution: 'zulu'
+          cache: 'maven'
+      - name: Build with Maven
+        id: build_app
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
+      - name: Compose with Docker
+        id: compose_app
+        # use the following command until #819 will be fixed
+        run: |
+          docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
+          docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
+      - name: Run test suite
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ github.event.inputs.qase_token }} -Dsurefire.suiteXmlFiles='src/test/resources/${{ github.event.inputs.test_suite }}.xml' -Dsuite=${{ github.event.inputs.test_suite }} -f 'kafka-ui-e2e-checks' test -Pprod
+      - name: Generate Allure report
+        uses: simple-elf/allure-report-action@master
+        if: always()
+        id: allure-report
+        with:
+          allure_results: ./kafka-ui-e2e-checks/allure-results
+          gh_pages: allure-results
+          allure_report: allure-report
+          subfolder: allure-results
+          report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
+      - uses: jakejarvis/s3-sync-action@master
+        if: always()
+        env:
+          AWS_S3_BUCKET: 'kafkaui-allure-reports'
+          AWS_REGION: 'eu-central-1'
+          SOURCE_DIR: 'allure-history/allure-results'
+      - name: Deploy report to Amazon S3
+        if: always()
+        uses: Sibz/github-status-action@v1.1.6
+        with:
+          authToken: ${{secrets.GITHUB_TOKEN}}
+          context: "Click Details button to open Allure report"
+          state: "success"
+          sha: ${{ github.sha }}
+          target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}
+      - name: Dump Docker logs on failure
+        if: failure()
+        uses: jwalton/gh-docker-logs@v2.2.1
.github/workflows/e2e-checks.yaml (vendored, 36 changed lines)

@@ -1,13 +1,15 @@
-name: e2e-checks
+name: "E2E: PR healthcheck"
 on:
   pull_request_target:
-    types: ["opened", "edited", "reopened", "synchronize"]
+    types: [ "opened", "edited", "reopened", "synchronize" ]
     paths:
       - "kafka-ui-api/**"
       - "kafka-ui-contract/**"
       - "kafka-ui-react-app/**"
      - "kafka-ui-e2e-checks/**"
      - "pom.xml"
+permissions:
+  statuses: write
 jobs:
   build-and-test:
     runs-on: ubuntu-latest
@@ -15,14 +17,20 @@ jobs:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Set the values
+      - name: Configure AWS credentials
+        uses: aws-actions/configure-aws-credentials@v3
+        with:
+          aws-access-key-id: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
+          aws-region: eu-central-1
+      - name: Set up environment
         id: set_env_values
         run: |
           cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
-      - name: pull docker
+      - name: Pull with Docker
         id: pull_chrome
         run: |
-          docker pull selenium/standalone-chrome:103.0
+          docker pull selenoid/vnc_chrome:103.0
       - name: Set up JDK
         uses: actions/setup-java@v3
         with:
@@ -33,16 +41,17 @@ jobs:
         id: build_app
         run: |
           ./mvnw -B -ntp versions:set -DnewVersion=${{ github.event.pull_request.head.sha }}
-          ./mvnw -B -V -ntp clean package -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
-      - name: compose app
+          ./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
+      - name: Compose with Docker
         id: compose_app
         # use the following command until #819 will be fixed
         run: |
-          docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
-      - name: e2e run
+          docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
+          docker-compose -f ./documentation/compose/e2e-tests.yaml up -d && until [ "$(docker exec kafka-ui wget --spider --server-response http://localhost:8080/actuator/health 2>&1 | grep -c 'HTTP/1.1 200 OK')" == "1" ]; do echo "Waiting for kafka-ui ..." && sleep 1; done
+      - name: Run test suite
         run: |
           ./mvnw -B -ntp versions:set -DnewVersion=${{ github.event.pull_request.head.sha }}
-          ./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ secrets.QASEIO_API_TOKEN }} -pl '!kafka-ui-api' test -Pprod
+          ./mvnw -B -V -ntp -Dsurefire.suiteXmlFiles='src/test/resources/smoke.xml' -f 'kafka-ui-e2e-checks' test -Pprod
       - name: Generate allure report
         uses: simple-elf/allure-report-action@master
         if: always()
@@ -52,20 +61,19 @@ jobs:
           gh_pages: allure-results
           allure_report: allure-report
           subfolder: allure-results
           report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
       - uses: jakejarvis/s3-sync-action@master
         if: always()
         env:
           AWS_S3_BUCKET: 'kafkaui-allure-reports'
-          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
           AWS_REGION: 'eu-central-1'
           SOURCE_DIR: 'allure-history/allure-results'
-      - name: Post the link to allure report
+      - name: Deploy report to Amazon S3
         if: always()
         uses: Sibz/github-status-action@v1.1.6
         with:
           authToken: ${{secrets.GITHUB_TOKEN}}
-          context: "Test report"
+          context: "Click Details button to open Allure report"
           state: "success"
           sha: ${{ github.event.pull_request.head.sha || github.sha }}
           target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}
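The new "Compose with Docker" step above busy-waits with an `until` loop until the kafka-ui actuator endpoint answers 200 OK. The same pattern, generalized and made self-contained by mocking the probe (names `wait_for` and `probe` are illustrative, not from the diff):

```shell
#!/usr/bin/env sh
# Generic wait-until-healthy loop in the spirit of the workflow's
# "until ... grep -c 'HTTP/1.1 200 OK'" check. The probe is mocked
# here so the sketch runs without Docker or a live service.
wait_for() {                 # retry "$@" until it succeeds, max 30 tries
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 0.1
  done
}

n=0
probe() { n=$((n + 1)); [ "$n" -ge 3 ]; }   # succeeds on the 3rd call
wait_for probe && echo "healthy after $n checks"
```

In the real workflow the probe is `docker exec kafka-ui wget --spider ... | grep -c 'HTTP/1.1 200 OK'`; the loop shape is identical.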
.github/workflows/e2e-manual.yml (vendored, new file, 43 lines)

@@ -0,0 +1,43 @@
+name: "E2E: Manual suite"
+on:
+  workflow_dispatch:
+    inputs:
+      test_suite:
+        description: 'Select test suite to run'
+        default: 'manual'
+        required: true
+        type: choice
+        options:
+          - manual
+          - qase
+      qase_token:
+        description: 'Set Qase token to enable integration'
+        required: true
+        type: string
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.sha }}
+      - name: Set up environment
+        id: set_env_values
+        run: |
+          cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
+      - name: Set up JDK
+        uses: actions/setup-java@v3
+        with:
+          java-version: '17'
+          distribution: 'zulu'
+          cache: 'maven'
+      - name: Build with Maven
+        id: build_app
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
+      - name: Run test suite
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ github.event.inputs.qase_token }} -Dsurefire.suiteXmlFiles='src/test/resources/${{ github.event.inputs.test_suite }}.xml' -Dsuite=${{ github.event.inputs.test_suite }} -f 'kafka-ui-e2e-checks' test -Pprod
.github/workflows/e2e-weekly.yml (vendored, new file, 75 lines)

@@ -0,0 +1,75 @@
+name: "E2E: Weekly suite"
+on:
+  schedule:
+    - cron: '0 1 * * 1'
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.sha }}
+      - name: Configure AWS credentials
+        uses: aws-actions/configure-aws-credentials@v3
+        with:
+          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+          aws-region: eu-central-1
+      - name: Set up environment
+        id: set_env_values
+        run: |
+          cat "./kafka-ui-e2e-checks/.env.ci" >> "./kafka-ui-e2e-checks/.env"
+      - name: Pull with Docker
+        id: pull_chrome
+        run: |
+          docker pull selenoid/vnc_chrome:103.0
+      - name: Set up JDK
+        uses: actions/setup-java@v3
+        with:
+          java-version: '17'
+          distribution: 'zulu'
+          cache: 'maven'
+      - name: Build with Maven
+        id: build_app
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp clean install -Pprod -Dmaven.test.skip=true ${{ github.event.inputs.extraMavenOptions }}
+      - name: Compose with Docker
+        id: compose_app
+        # use the following command until #819 will be fixed
+        run: |
+          docker-compose -f kafka-ui-e2e-checks/docker/selenoid-git.yaml up -d
+          docker-compose -f ./documentation/compose/e2e-tests.yaml up -d
+      - name: Run test suite
+        run: |
+          ./mvnw -B -ntp versions:set -DnewVersion=${{ github.sha }}
+          ./mvnw -B -V -ntp -DQASEIO_API_TOKEN=${{ secrets.QASEIO_API_TOKEN }} -Dsurefire.suiteXmlFiles='src/test/resources/sanity.xml' -Dsuite=weekly -f 'kafka-ui-e2e-checks' test -Pprod
+      - name: Generate Allure report
+        uses: simple-elf/allure-report-action@master
+        if: always()
+        id: allure-report
+        with:
+          allure_results: ./kafka-ui-e2e-checks/allure-results
+          gh_pages: allure-results
+          allure_report: allure-report
+          subfolder: allure-results
+          report_url: "http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com"
+      - uses: jakejarvis/s3-sync-action@master
+        if: always()
+        env:
+          AWS_S3_BUCKET: 'kafkaui-allure-reports'
+          AWS_REGION: 'eu-central-1'
+          SOURCE_DIR: 'allure-history/allure-results'
+      - name: Deploy report to Amazon S3
+        if: always()
+        uses: Sibz/github-status-action@v1.1.6
+        with:
+          authToken: ${{secrets.GITHUB_TOKEN}}
+          context: "Click Details button to open Allure report"
+          state: "success"
+          sha: ${{ github.sha }}
+          target_url: http://kafkaui-allure-reports.s3-website.eu-central-1.amazonaws.com/${{ github.run_number }}
+      - name: Dump Docker logs on failure
+        if: failure()
+        uses: jwalton/gh-docker-logs@v2.2.1
.github/workflows/frontend.yaml (vendored, 15 changed lines)

@@ -1,4 +1,4 @@
-name: frontend
+name: "Frontend: PR/master build & test"
 on:
   push:
     branches:
@@ -8,6 +8,9 @@ on:
     paths:
       - "kafka-ui-contract/**"
       - "kafka-ui-react-app/**"
+permissions:
+  checks: write
+  pull-requests: write
 jobs:
   build-and-test:
     env:
@@ -20,13 +23,13 @@ jobs:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
          ref: ${{ github.event.pull_request.head.sha }}
-      - uses: pnpm/action-setup@v2.2.4
+      - uses: pnpm/action-setup@v2.4.0
        with:
-          version: 7.4.0
+          version: 8.6.12
      - name: Install node
-        uses: actions/setup-node@v3.5.1
+        uses: actions/setup-node@v3.8.1
        with:
-          node-version: "16.15.0"
+          node-version: "18.17.1"
          cache: "pnpm"
          cache-dependency-path: "./kafka-ui-react-app/pnpm-lock.yaml"
      - name: Install Node dependencies
@@ -46,7 +49,7 @@ jobs:
          cd kafka-ui-react-app/
          pnpm test:CI
      - name: SonarCloud Scan
-        uses: workshur/sonarcloud-github-action@improved_basedir
+        uses: sonarsource/sonarcloud-github-action@master
        with:
          projectBaseDir: ./kafka-ui-react-app
          args: -Dsonar.pullrequest.key=${{ github.event.pull_request.number }} -Dsonar.pullrequest.branch=${{ github.head_ref }} -Dsonar.pullrequest.base=${{ github.base_ref }}
.github/workflows/helm.yaml (vendored, 38 changed lines)

@@ -1,38 +0,0 @@
-name: Helm
-on:
-  pull_request:
-    types: ["opened", "edited", "reopened", "synchronize"]
-    branches:
-      - 'master'
-    paths:
-      - "charts/**"
-jobs:
-  build-and-test:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Helm tool installer
-        uses: Azure/setup-helm@v3
-      - name: Setup Kubeval
-        uses: lra/setup-kubeval@v1.0.1
-      #check, was helm version increased in Chart.yaml?
-      - name: Check version
-        shell: bash
-        run: |
-          helm_version_new=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
-          helm_version_old=$(curl -s https://raw.githubusercontent.com/provectus/kafka-ui/master/charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}' )
-          echo $helm_version_old
-          echo $helm_version_new
-          if [[ "$helm_version_new" > "$helm_version_old" ]]; then exit 0 ; else exit 1 ; fi
-      - name: Run kubeval
-        shell: bash
-        run: |
-          sed -i "s@enabled: false@enabled: true@g" charts/kafka-ui/values.yaml
-          K8S_VERSIONS=$(git ls-remote --refs --tags https://github.com/kubernetes/kubernetes.git | cut -d/ -f3 | grep -e '^v1\.[0-9]\{2\}\.[0]\{1,2\}$' | grep -v -e '^v1\.1[0-7]\{1\}' | cut -c2-)
-          echo "NEXT K8S VERSIONS ARE GOING TO BE TESTED: $K8S_VERSIONS"
-          echo ""
-          for version in $K8S_VERSIONS
-          do
-            echo $version;
-            helm template --kube-version $version --set ingress.enabled=true charts/kafka-ui -f charts/kafka-ui/values.yaml | kubeval --additional-schema-locations https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master --strict -v $version;
-          done
.github/workflows/master.yaml (vendored, 13 changed lines)

@@ -1,4 +1,4 @@
-name: Master
+name: "Master: Build & deploy"
 on:
   workflow_dispatch:
   push:
@@ -9,6 +9,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}

       - name: Set up JDK
         uses: actions/setup-java@v3
@@ -51,11 +53,12 @@ jobs:

       - name: Build and push
         id: docker_build_and_push
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           builder: ${{ steps.buildx.outputs.name }}
           context: kafka-ui-api
           platforms: linux/amd64,linux/arm64
+          provenance: false
           push: true
           tags: |
             provectuslabs/kafka-ui:${{ steps.build.outputs.version }}
@@ -71,11 +74,11 @@ jobs:
       #################################
       - name: update-master-deployment
         run: |
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git --branch master
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch master
           cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
           echo "Image digest is:${{ steps.docker_build_and_push.outputs.digest }}"
           ./kafka-ui-update-master-digest.sh ${{ steps.docker_build_and_push.outputs.digest }}
-          git config --global user.email "kafka-ui-infra@provectus.com"
-          git config --global user.name "kafka-ui-infra"
+          git config --global user.email "infra-tech@provectus.com"
+          git config --global user.name "infra-tech"
           git add ../kafka-ui/*
           git commit -m "updated master image digest: ${{ steps.docker_build_and_push.outputs.digest }}" && git push
.github/workflows/pr-checks.yaml (vendored, 7 changed lines)

@@ -1,13 +1,14 @@
-name: "PR Checklist checked"
+name: "PR: Checklist linter"
 on:
   pull_request_target:
     types: [opened, edited, synchronize, reopened]

+permissions:
+  checks: write
 jobs:
   task-check:
     runs-on: ubuntu-latest
     steps:
-      - uses: kentaro-m/task-completed-checker-action@v0.1.0
+      - uses: kentaro-m/task-completed-checker-action@v0.1.2
         with:
           repo-token: "${{ secrets.GITHUB_TOKEN }}"
       - uses: dekinderfiets/pr-description-enforcer@0.0.1
.github/workflows/release-helm.yaml (vendored, 39 changed lines)

@@ -1,39 +0,0 @@
-name: Release helm
-on:
-  push:
-    branches:
-      - master
-    paths:
-      - "charts/**"
-
-jobs:
-  release-helm:
-    runs-on:
-      ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-        with:
-          fetch-depth: 1
-
-      - run: |
-          git config user.name github-actions
-          git config user.email github-actions@github.com
-
-      - uses: azure/setup-helm@v3
-
-      - name: add chart #realse helm with new version
-        run: |
-          VERSION=$(cat charts/kafka-ui/Chart.yaml | grep version | awk '{print $2}')
-          echo "HELM_VERSION=$(echo ${VERSION})" >> $GITHUB_ENV
-          MSG=$(helm package charts/kafka-ui)
-          git fetch origin
-          git stash
-          git checkout -b gh-pages origin/gh-pages
-          git pull
-          helm repo index .
-          git add -f ${MSG##*/} index.yaml
-          git commit -m "release ${VERSION}"
-          git push
-      - uses: rickstaa/action-create-tag@v1 #create new tag
-        with:
-          tag: "charts/kafka-ui-${{ env.HELM_VERSION }}"
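In the removed "add chart" step, `helm package` prints a message ending in the path of the generated `.tgz`, and `${MSG##*/}` trims everything up to the last `/` to get the bare filename for `git add`. A standalone sketch (the message text and path below are a sample of that output format, not from the diff):

```shell
#!/usr/bin/env sh
# Extract the chart filename from `helm package` output with ${MSG##*/},
# as the removed release-helm "add chart" step does. MSG is a sample of
# the message helm prints.
MSG="Successfully packaged chart and saved it to: /work/kafka-ui-0.7.1.tgz"
chart_file=${MSG##*/}     # strip everything through the last "/"
echo "$chart_file"        # prints kafka-ui-0.7.1.tgz
```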
.github/workflows/release-serde-api.yaml (vendored, 2 changed lines)

@@ -1,4 +1,4 @@
-name: Release-serde-api
+name: "Infra: Release: Serde API"
 on: workflow_dispatch

 jobs:
.github/workflows/release.yaml (vendored, 14 changed lines)

@@ -1,4 +1,4 @@
-name: Release
+name: "Infra: Release"
 on:
   release:
     types: [published]
@@ -12,6 +12,7 @@ jobs:
       - uses: actions/checkout@v3
         with:
           fetch-depth: 0
+          ref: ${{ github.event.pull_request.head.sha }}

       - run: |
           git config user.name github-actions
@@ -33,7 +34,7 @@ jobs:
           echo "version=${VERSION}" >> $GITHUB_OUTPUT

       - name: Upload files to a GitHub release
-        uses: svenstaro/upload-release-action@2.3.0
+        uses: svenstaro/upload-release-action@2.7.0
         with:
           repo_token: ${{ secrets.GITHUB_TOKEN }}
           file: kafka-ui-api/target/kafka-ui-api-${{ steps.build.outputs.version }}.jar
@@ -71,11 +72,12 @@ jobs:

       - name: Build and push
         id: docker_build_and_push
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           builder: ${{ steps.buildx.outputs.name }}
           context: kafka-ui-api
           platforms: linux/amd64,linux/arm64
+          provenance: false
           push: true
           tags: |
             provectuslabs/kafka-ui:${{ steps.build.outputs.version }}
@@ -87,14 +89,12 @@ jobs:

   charts:
     runs-on: ubuntu-latest
-    permissions:
-      contents: write
     needs: release
     steps:
       - name: Repository Dispatch
         uses: peter-evans/repository-dispatch@v2
         with:
-          token: ${{ secrets.GITHUB_TOKEN }}
-          repository: provectus/kafka-ui
+          token: ${{ secrets.CHARTS_ACTIONS_TOKEN }}
+          repository: provectus/kafka-ui-charts
           event-type: prepare-helm-release
           client-payload: '{"appversion": "${{ needs.release.outputs.version }}"}'
.github/workflows/release_drafter.yml (vendored, 19 changed lines)

@@ -1,19 +1,34 @@
-name: Release Drafter
+name: "Infra: Release Drafter run"

 on:
   push:
     # branches to consider in the event; optional, defaults to all
     branches:
       - master
+  workflow_dispatch:
+    inputs:
+      version:
+        description: 'Release version'
+        required: false
+      branch:
+        description: 'Target branch'
+        required: false
+        default: 'master'
+
+permissions:
+  contents: read

 jobs:
   update_release_draft:
     runs-on: ubuntu-latest
+    permissions:
+      contents: write
+      pull-requests: write
     steps:
       - uses: release-drafter/release-drafter@v5
+        with:
+          config-name: release_drafter.yaml
+          disable-autolabeler: true
+          version: ${{ github.event.inputs.version }}
+          commitish: ${{ github.event.inputs.branch }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/separate_env_public_create.yml (vendored, 14 changed lines)

@@ -1,4 +1,4 @@
-name: Separate environment create
+name: "Infra: Feature Testing Public: Init env"
 on:
   workflow_dispatch:
     inputs:
@@ -12,6 +12,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
       - name: get branch name
         id: extract_branch
         run: |
@@ -45,7 +47,7 @@ jobs:
           restore-keys: |
             ${{ runner.os }}-buildx-
       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -55,7 +57,7 @@ jobs:
         uses: aws-actions/amazon-ecr-login@v1
       - name: Build and push
         id: docker_build_and_push
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           builder: ${{ steps.buildx.outputs.name }}
           context: kafka-ui-api
@@ -74,14 +76,14 @@ jobs:
     steps:
       - name: clone
         run: |
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs

       - name: separate env create
         run: |
           cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
           bash separate_env_create.sh ${{ github.event.inputs.ENV_NAME }} ${{ secrets.FEATURE_TESTING_UI_PASSWORD }} ${{ needs.build.outputs.tag }}
-          git config --global user.email "kafka-ui-infra@provectus.com"
-          git config --global user.name "kafka-ui-infra"
+          git config --global user.email "infra-tech@provectus.com"
+          git config --global user.name "infra-tech"
           git add -A
           git commit -m "separate env added: ${{ github.event.inputs.ENV_NAME }}" && git push || true
@@ -1,4 +1,4 @@
-name: Separate environment remove
+name: "Infra: Feature Testing Public: Destroy env"
 on:
   workflow_dispatch:
     inputs:
@@ -13,12 +13,12 @@ jobs:
     steps:
       - name: clone
         run: |
-          git clone https://kafka-ui-infra:${{ secrets.KAFKA_UI_INFRA_TOKEN }}@gitlab.provectus.com/provectus-internals/kafka-ui-infra.git
+          git clone https://infra-tech:${{ secrets.INFRA_USER_ACCESS_TOKEN }}@github.com/provectus/kafka-ui-infra.git --branch envs
       - name: separate environment remove
         run: |
           cd kafka-ui-infra/aws-infrastructure4eks/argocd/scripts
           bash separate_env_remove.sh ${{ github.event.inputs.ENV_NAME }}
-          git config --global user.email "kafka-ui-infra@provectus.com"
-          git config --global user.name "kafka-ui-infra"
+          git config --global user.email "infra-tech@provectus.com"
+          git config --global user.name "infra-tech"
           git add -A
           git commit -m "separate env removed: ${{ github.event.inputs.ENV_NAME }}" && git push || true
.github/workflows/stale.yaml (vendored, 4 changed lines)

@@ -1,4 +1,4 @@
-name: 'Close stale issues'
+name: 'Infra: Close stale issues'
 on:
   schedule:
     - cron: '30 1 * * *'
@@ -7,7 +7,7 @@ jobs:
   stale:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@v6
+      - uses: actions/stale@v8
        with:
          days-before-issue-stale: 7
          days-before-issue-close: 3
.github/workflows/terraform-deploy.yml (vendored, 4 changed lines)

@@ -1,4 +1,4 @@
-name: terraform_deploy
+name: "Infra: Terraform deploy"
 on:
   workflow_dispatch:
     inputs:
@@ -26,7 +26,7 @@ jobs:
           echo "Terraform will be triggered in this dir $TF_DIR"

       - name: Configure AWS credentials for Kafka-UI account
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v3
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
.github/workflows/triage_issues.yml
@@ -1,4 +1,4 @@
-name: Add triage label to new issues
+name: "Infra: Triage: Apply triage label for issues"
 on:
   issues:
     types:
.github/workflows/triage_prs.yml
@@ -1,4 +1,4 @@
-name: Add triage label to new PRs
+name: "Infra: Triage: Apply triage label for PRs"
 on:
   pull_request:
     types:
@@ -7,7 +7,9 @@ on:
   issues:
     types:
       - opened

+permissions:
+  issues: write
+  pull-requests: write
 jobs:
   welcome:
     runs-on: ubuntu-latest
.github/workflows/workflow_linter.yaml
@@ -1,4 +1,4 @@
-name: "Workflow linter"
+name: "Infra: Workflow linter"
 on:
   pull_request:
     types:
.gitignore
@@ -31,6 +31,9 @@ build/
 .vscode/
 /kafka-ui-api/app/node

+### SDKMAN ###
+.sdkmanrc
+
 .DS_Store
 *.code-workspace
@@ -1,3 +1,5 @@
+This guide is an exact copy of the same document located [in our official docs](https://docs.kafka-ui.provectus.io/development/contributing). If there are any differences between the documents, the one located in our official docs should prevail.
+
 This guide aims to walk you through the process of working on issues and Pull Requests (PRs).

 Bear in mind that you will not be able to complete some steps on your own if you do not have a “write” permission. Feel free to reach out to the maintainers to help you unlock these activities.

@@ -20,7 +22,7 @@ You also need to consider labels. You can sort the issues by scope labels, such
 ## Grabbing the issue

 There is a bunch of criteria that make an issue feasible for development. <br/>
-The implementation of any features and/or their enhancements should be reasonable, must be backed by justified requirements (demanded by the community, [roadmap](documentation/project/ROADMAP.md) plans, etc.). The final decision is left for the maintainers' discretion.
+The implementation of any features and/or their enhancements should be reasonable, must be backed by justified requirements (demanded by the community, [roadmap](https://docs.kafka-ui.provectus.io/project/roadmap) plans, etc.). The final decision is left for the maintainers' discretion.

 All bugs should be confirmed as such (i.e. the behavior is unintended).

@@ -39,7 +41,7 @@ To keep the status of the issue clear to everyone, please keep the card's status
 ## Setting up a local development environment

-Please refer to [this guide](documentation/project/contributing/README.md).
+Please refer to [this guide](https://docs.kafka-ui.provectus.io/development/contributing).

 # Pull Requests
README.md
@@ -1,21 +1,35 @@
  UI for Apache Kafka
 ------------------
 #### Versatile, fast and lightweight web UI for managing Apache Kafka® clusters. Built by developers, for developers.
 <br/>

 [](https://github.com/provectus/kafka-ui/blob/master/LICENSE)

 [](https://github.com/provectus/kafka-ui/releases)
 [](https://discord.gg/4DWzD7pGE5)
 [](https://hub.docker.com/r/provectuslabs/kafka-ui)

-### DISCLAIMER
-<em>UI for Apache Kafka is a free tool built and supported by the open-source community. Curated by Provectus, it will remain free and open-source, without any paid features or subscription plans to be added in the future.
-Looking for the help of Kafka experts? Provectus can help you design, build, deploy, and manage Apache Kafka clusters and streaming applications. Discover [Professional Services for Apache Kafka](https://provectus.com/professional-services-apache-kafka/), to unlock the full potential of Kafka in your enterprise! </em>
+<p align="center">
+    <a href="https://docs.kafka-ui.provectus.io/">DOCS</a> •
+    <a href="https://docs.kafka-ui.provectus.io/configuration/quick-start">QUICK START</a> •
+    <a href="https://discord.gg/4DWzD7pGE5">COMMUNITY DISCORD</a>
+    <br/>
+    <a href="https://aws.amazon.com/marketplace/pp/prodview-ogtt5hfhzkq6a">AWS Marketplace</a> •
+    <a href="https://www.producthunt.com/products/ui-for-apache-kafka/reviews/new">ProductHunt</a>
+</p>
+
+<p align="center">
+    <img src="https://repobeats.axiom.co/api/embed/2e8a7c2d711af9daddd34f9791143e7554c35d0f.svg" />
+</p>

 #### UI for Apache Kafka is a free, open-source web UI to monitor and manage Apache Kafka clusters.

 UI for Apache Kafka is a simple tool that makes your data flows observable, helps find and troubleshoot issues faster and deliver optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.

+### DISCLAIMER
+<em>UI for Apache Kafka is a free tool built and supported by the open-source community. Curated by Provectus, it will remain free and open-source, without any paid features or subscription plans to be added in the future.
+Looking for the help of Kafka experts? Provectus can help you design, build, deploy, and manage Apache Kafka clusters and streaming applications. Discover [Professional Services for Apache Kafka](https://provectus.com/professional-services-apache-kafka/), to unlock the full potential of Kafka in your enterprise! </em>
+
 Set up UI for Apache Kafka with just a couple of easy commands to visualize your Kafka data in a comprehensible way. You can run the tool locally or in
 the cloud.

@@ -29,10 +43,10 @@ the cloud.
 * **View Consumer Groups** — view per-partition parked offsets, combined and per-partition lag
 * **Browse Messages** — browse messages with JSON, plain text, and Avro encoding
 * **Dynamic Topic Configuration** — create and configure new topics with dynamic configuration
-* **Configurable Authentification** — secure your installation with optional Github/Gitlab/Google OAuth 2.0
-* **Custom serialization/deserialization plugins** - use a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
-* **Role based access control** - [manage permissions](https://github.com/provectus/kafka-ui/wiki/RBAC-(role-based-access-control)) to access the UI with granular precision
-* **Data masking** - [obfuscate](https://github.com/provectus/kafka-ui/blob/master/documentation/guides/DataMasking.md) sensitive data in topic messages
+* **Configurable Authentification** — [secure](https://docs.kafka-ui.provectus.io/configuration/authentication) your installation with optional Github/Gitlab/Google OAuth 2.0
+* **Custom serialization/deserialization plugins** - [use](https://docs.kafka-ui.provectus.io/configuration/serialization-serde) a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
+* **Role based access control** - [manage permissions](https://docs.kafka-ui.provectus.io/configuration/rbac-role-based-access-control) to access the UI with granular precision
+* **Data masking** - [obfuscate](https://docs.kafka-ui.provectus.io/configuration/data-masking) sensitive data in topic messages

 # The Interface
 UI for Apache Kafka wraps major functions of Apache Kafka with an intuitive user interface.

@@ -60,157 +74,68 @@ There are 3 supported types of schemas: Avro®, JSON Schema, and Protobuf schema



-Before producing avro-encoded messages, you have to add an avro schema for the topic in Schema Registry. Now all these steps are easy to do
+Before producing avro/protobuf encoded messages, you have to add a schema for the topic in Schema Registry. Now all these steps are easy to do
 with a few clicks in a user-friendly interface.



 # Getting Started

-To run UI for Apache Kafka, you can use a pre-built Docker image or build it locally.
+To run UI for Apache Kafka, you can use either a pre-built Docker image or build it (or a jar file) yourself.

-## Configuration
-
-We have plenty of [docker-compose files](documentation/compose/DOCKER_COMPOSE.md) as examples. They're built for various configuration stacks.
-
-# Guides
-
-- [SSO configuration](documentation/guides/SSO.md)
-- [AWS IAM configuration](documentation/guides/AWS_IAM.md)
-- [Docker-compose files](documentation/compose/DOCKER_COMPOSE.md)
-- [Connection to a secure broker](documentation/guides/SECURE_BROKER.md)
-- [Configure seriliazation/deserialization plugins or code your own](documentation/guides/Serialization.md)
-
-### Configuration File
-Example of how to configure clusters in the [application-local.yml](https://github.com/provectus/kafka-ui/blob/master/kafka-ui-api/src/main/resources/application-local.yml) configuration file:
-
-```sh
-kafka:
-  clusters:
-    -
-      name: local
-      bootstrapServers: localhost:29091
-      schemaRegistry: http://localhost:8085
-      schemaRegistryAuth:
-        username: username
-        password: password
-#     schemaNameTemplate: "%s-value"
-      metrics:
-        port: 9997
-        type: JMX
-    -
-```
-
-* `name`: cluster name
-* `bootstrapServers`: where to connect
-* `schemaRegistry`: schemaRegistry's address
-* `schemaRegistryAuth.username`: schemaRegistry's basic authentication username
-* `schemaRegistryAuth.password`: schemaRegistry's basic authentication password
-* `schemaNameTemplate`: how keys are saved to schemaRegistry
-* `metrics.port`: open JMX port of a broker
-* `metrics.type`: Type of metrics, either JMX or PROMETHEUS. Defaulted to JMX.
-* `readOnly`: enable read only mode
-
-Configure as many clusters as you need by adding their configs below separated with `-`.
-
-## Running a Docker Image
-The official Docker image for UI for Apache Kafka is hosted here: [hub.docker.com/r/provectuslabs/kafka-ui](https://hub.docker.com/r/provectuslabs/kafka-ui).
-
-Launch Docker container in the background:
-```sh
-docker run -p 8080:8080 \
-    -e KAFKA_CLUSTERS_0_NAME=local \
-    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
-    -d provectuslabs/kafka-ui:latest
-```
-Then access the web UI at [http://localhost:8080](http://localhost:8080).
-Further configuration with environment variables - [see environment variables](#env_variables)
-
-### Docker Compose
-
-If you prefer to use `docker-compose` please refer to the [documentation](docker-compose.md).
-
-### Helm chart
-Helm chart could be found under [charts/kafka-ui](https://github.com/provectus/kafka-ui/tree/master/charts/kafka-ui) directory
-
-Quick-start instruction [here](helm_chart.md)
+## Quick start (Demo run)
+
+```
+docker run -it -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui
+```
+
+Then access the web UI at [http://localhost:8080](http://localhost:8080)
+
+The command is sufficient to try things out. When you're done trying things out, you can proceed with a [persistent installation](https://docs.kafka-ui.provectus.io/quick-start/persistent-start)
+
+## Persistent installation
+
+```
+services:
+  kafka-ui:
+    container_name: kafka-ui
+    image: provectuslabs/kafka-ui:latest
+    ports:
+      - 8080:8080
+    environment:
+      DYNAMIC_CONFIG_ENABLED: 'true'
+    volumes:
+      - ~/kui/config.yml:/etc/kafkaui/dynamic_config.yaml
+```
+
+Please refer to our [configuration](https://docs.kafka-ui.provectus.io/configuration/quick-start) page to proceed with further app configuration.
+
+## Some useful configuration related links
+
+[Web UI Cluster Configuration Wizard](https://docs.kafka-ui.provectus.io/configuration/configuration-wizard)
+
+[Configuration file explanation](https://docs.kafka-ui.provectus.io/configuration/configuration-file)
+
+[Docker Compose examples](https://docs.kafka-ui.provectus.io/configuration/compose-examples)
+
+[Misc configuration properties](https://docs.kafka-ui.provectus.io/configuration/misc-configuration-properties)
+
+## Helm charts
+
+[Quick start](https://docs.kafka-ui.provectus.io/configuration/helm-charts/quick-start)
+
+## Building from sources
+
+[Quick start](https://docs.kafka-ui.provectus.io/development/building/prerequisites) with building
-
-## Building With Docker
-
-### Prerequisites
-
-Check [prerequisites.md](documentation/project/contributing/prerequisites.md)
-
-### Building and Running
-
-Check [building.md](documentation/project/contributing/building.md)
-
-## Building Without Docker
-
-### Prerequisites
-
-[Prerequisites](documentation/project/contributing/prerequisites.md) will mostly remain the same with the exception of docker.
-
-### Running without Building
-
-[How to run quickly without building](documentation/project/contributing/building-and-running-without-docker.md#run_without_docker_quickly)
-
-### Building and Running
-
-[How to build and run](documentation/project/contributing/building-and-running-without-docker.md#build_and_run_without_docker)

 ## Liveliness and readiness probes
-Liveliness and readiness endpoint is at `/actuator/health`.
+Liveliness and readiness endpoint is at `/actuator/health`.<br/>
 Info endpoint (build info) is located at `/actuator/info`.

-## <a name="env_variables"></a> Environment Variables
+# Configuration options

-Alternatively, each variable of the .yml file can be set with an environment variable.
-For example, if you want to use an environment variable to set the `name` parameter, you can write it like this: `KAFKA_CLUSTERS_2_NAME`
+All of the environment variables/config properties could be found [here](https://docs.kafka-ui.provectus.io/configuration/misc-configuration-properties).

-|Name                   |Description
-|-----------------------|-------------------------------
-|`SERVER_SERVLET_CONTEXT_PATH` | URI basePath
-|`LOGGING_LEVEL_ROOT` | Setting log level (trace, debug, info, warn, error). Default: info
-|`LOGGING_LEVEL_COM_PROVECTUS` |Setting log level (trace, debug, info, warn, error). Default: debug
-|`SERVER_PORT` |Port for the embedded server. Default: `8080`
-|`KAFKA_ADMIN-CLIENT-TIMEOUT` | Kafka API timeout in ms. Default: `30000`
-|`KAFKA_CLUSTERS_0_NAME` | Cluster name
-|`KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS` |Address where to connect
-|`KAFKA_CLUSTERS_0_KSQLDBSERVER` | KSQL DB server address
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_USERNAME` | KSQL DB server's basic authentication username
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_PASSWORD` | KSQL DB server's basic authentication password
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION` |Path to the JKS keystore to communicate to KSQL DB
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD` |Password of the JKS keystore for KSQL DB
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTORELOCATION` |Path to the JKS truststore to communicate to KSQL DB
-|`KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTOREPASSWORD` |Password of the JKS truststore for KSQL DB
-|`KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL` |Security protocol to connect to the brokers. For SSL connection use "SSL", for plaintext connection don't set this environment variable
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRY` |SchemaRegistry's address
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME` |SchemaRegistry's basic authentication username
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD` |SchemaRegistry's basic authentication password
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION` |Path to the JKS keystore to communicate to SchemaRegistry
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD` |Password of the JKS keystore for SchemaRegistry
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTORELOCATION` |Path to the JKS truststore to communicate to SchemaRegistry
-|`KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTOREPASSWORD` |Password of the JKS truststore for SchemaRegistry
-|`KAFKA_CLUSTERS_0_SCHEMANAMETEMPLATE` |How keys are saved to schemaRegistry
-|`KAFKA_CLUSTERS_0_METRICS_PORT` |Open metrics port of a broker
-|`KAFKA_CLUSTERS_0_METRICS_TYPE` |Type of metrics retriever to use. Valid values are JMX (default) or PROMETHEUS. If Prometheus, then metrics are read from prometheus-jmx-exporter instead of jmx
-|`KAFKA_CLUSTERS_0_READONLY` |Enable read-only mode. Default: false
-|`KAFKA_CLUSTERS_0_DISABLELOGDIRSCOLLECTION` |Disable collecting segments information. It should be true for confluent cloud. Default: false
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME` |Given name for the Kafka Connect cluster
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS` |Address of the Kafka Connect service endpoint
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_USERNAME`| Kafka Connect cluster's basic authentication username
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_PASSWORD`| Kafka Connect cluster's basic authentication password
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION`| Path to the JKS keystore to communicate to Kafka Connect
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD`| Password of the JKS keystore for Kafka Connect
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTORELOCATION`| Path to the JKS truststore to communicate to Kafka Connect
-|`KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTOREPASSWORD`| Password of the JKS truststore for Kafka Connect
-|`KAFKA_CLUSTERS_0_METRICS_SSL` |Enable SSL for Metrics? `true` or `false`. For advanced setup, see `kafka-ui-jmx-secured.yml`
-|`KAFKA_CLUSTERS_0_METRICS_USERNAME` |Username for Metrics authentication
-|`KAFKA_CLUSTERS_0_METRICS_PASSWORD` |Password for Metrics authentication
-|`KAFKA_CLUSTERS_0_POLLING_THROTTLE_RATE` |Max traffic rate (bytes/sec) that kafka-ui allowed to reach when polling messages from the cluster. Default: 0 (not limited)
-|`TOPIC_RECREATE_DELAY_SECONDS` |Time delay between topic deletion and topic creation attempts for topic recreate functionality. Default: 1
-|`TOPIC_RECREATE_MAXRETRIES` |Number of attempts of topic creation after topic deletion for topic recreate functionality. Default: 15

 # Contributing

 Please refer to [contributing guide](https://docs.kafka-ui.provectus.io/development/contributing), we'll guide you from there.
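The environment-variable names in the table above all follow one rule (Spring Boot's relaxed binding): each YAML property path maps to an upper-cased variable with dots turned into underscores. A minimal sketch of that mapping — the function name is ours, not part of the project, and dash handling is simplified:

```python
def yaml_path_to_env_var(path: str) -> str:
    """Map a YAML property path to its Spring-style environment variable:
    dots (and list indices) become underscores, dashes are dropped,
    and the result is upper-cased. Simplified sketch of relaxed binding."""
    return path.replace(".", "_").replace("-", "").upper()

# The README's own example: setting a cluster `name` via the environment.
print(yaml_path_to_env_var("kafka.clusters.2.name"))  # KAFKA_CLUSTERS_2_NAME
```

The same rule explains rows like `KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME`, which is just `kafka.clusters.0.schemaRegistryAuth.username` flattened.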
@@ -6,8 +6,10 @@ Following versions of the project are currently being supported with security up
 | Version | Supported          |
 | ------- | ------------------ |
-| 0.5.x   | :white_check_mark: |
-| 0.4.x   | :x:                |
+| 0.7.x   | :white_check_mark: |
+| 0.6.x   | :x:                |
+| 0.5.x   | :x:                |
+| 0.4.x   | :x:                |
 | 0.3.x   | :x:                |
 | 0.2.x   | :x:                |
 | 0.1.x   | :x:                |
@@ -1,25 +0,0 @@
-# Patterns to ignore when building packages.
-# This supports shell glob matching, relative path matching, and
-# negation (prefixed with !). Only one pattern per line.
-.DS_Store
-# Common VCS dirs
-.git/
-.gitignore
-.bzr/
-.bzrignore
-.hg/
-.hgignore
-.svn/
-# Common backup files
-*.swp
-*.bak
-*.tmp
-*.orig
-*~
-# Various IDEs
-.project
-.idea/
-*.tmproj
-.vscode/
-example/
-README.md
@@ -1,7 +0,0 @@
-apiVersion: v2
-name: kafka-ui
-description: A Helm chart for kafka-UI
-type: application
-version: 0.5.1
-appVersion: v0.5.0
-icon: https://github.com/provectus/kafka-ui/raw/master/documentation/images/kafka-ui-logo.png
@@ -1,34 +0,0 @@
-# Kafka-UI Helm Chart
-
-## Configuration
-
-Most of the Helm charts parameters are common, follow table describe unique parameters related to application configuration.
-
-### Kafka-UI parameters
-
-| Parameter | Description | Default |
-| --------- | ----------- | ------- |
-| `existingConfigMap` | Name of the existing ConfigMap with Kafka-UI environment variables | `nil` |
-| `existingSecret` | Name of the existing Secret with Kafka-UI environment variables | `nil` |
-| `envs.secret` | Set of the sensitive environment variables to pass to Kafka-UI | `{}` |
-| `envs.config` | Set of the environment variables to pass to Kafka-UI | `{}` |
-| `yamlApplicationConfigConfigMap` | Map with name and keyName keys, name refers to the existing ConfigMap, keyName refers to the ConfigMap key with Kafka-UI config in Yaml format | `{}` |
-| `yamlApplicationConfig` | Kafka-UI config in Yaml format | `{}` |
-| `networkPolicy.enabled` | Enable network policies | `false` |
-| `networkPolicy.egressRules.customRules` | Custom network egress policy rules | `[]` |
-| `networkPolicy.ingressRules.customRules` | Custom network ingress policy rules | `[]` |
-| `podLabels` | Extra labels for Kafka-UI pod | `{}` |
-
-## Example
-
-To install Kafka-UI need to execute follow:
-``` bash
-helm repo add kafka-ui https://provectus.github.io/kafka-ui
-helm install kafka-ui kafka-ui/kafka-ui --set envs.config.KAFKA_CLUSTERS_0_NAME=local --set envs.config.KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
-```
-To connect to Kafka-UI web application need to execute:
-``` bash
-kubectl port-forward svc/kafka-ui 8080:80
-```
-Open the `http://127.0.0.1:8080` on the browser to access Kafka-UI.
@@ -1,3 +0,0 @@
-apiVersion: v1
-entries: {}
-generated: "2021-11-11T12:26:08.479581+03:00"
@@ -1,21 +0,0 @@
-1. Get the application URL by running these commands:
-{{- if .Values.ingress.enabled }}
-{{- range $host := .Values.ingress.hosts }}
-  {{- range .paths }}
-  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
-  {{- end }}
-{{- end }}
-{{- else if contains "NodePort" .Values.service.type }}
-  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "kafka-ui.fullname" . }})
-  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
-  echo http://$NODE_IP:$NODE_PORT
-{{- else if contains "LoadBalancer" .Values.service.type }}
-  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
-        You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "kafka-ui.fullname" . }}'
-  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "kafka-ui.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
-  echo http://$SERVICE_IP:{{ .Values.service.port }}
-{{- else if contains "ClusterIP" .Values.service.type }}
-  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "kafka-ui.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
-  echo "Visit http://127.0.0.1:8080 to use your application"
-  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:8080
-{{- end }}
@@ -1,79 +0,0 @@
-{{/* vim: set filetype=mustache: */}}
-{{/*
-Expand the name of the chart.
-*/}}
-{{- define "kafka-ui.name" -}}
-{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
-{{- end }}
-
-{{/*
-Create a default fully qualified app name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
-If release name contains chart name it will be used as a full name.
-*/}}
-{{- define "kafka-ui.fullname" -}}
-{{- if .Values.fullnameOverride }}
-{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
-{{- else }}
-{{- $name := default .Chart.Name .Values.nameOverride }}
-{{- if contains $name .Release.Name }}
-{{- .Release.Name | trunc 63 | trimSuffix "-" }}
-{{- else }}
-{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
-{{- end }}
-{{- end }}
-{{- end }}
-
-{{/*
-Create chart name and version as used by the chart label.
-*/}}
-{{- define "kafka-ui.chart" -}}
-{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
-{{- end }}
-
-{{/*
-Common labels
-*/}}
-{{- define "kafka-ui.labels" -}}
-helm.sh/chart: {{ include "kafka-ui.chart" . }}
-{{ include "kafka-ui.selectorLabels" . }}
-{{- if .Chart.AppVersion }}
-app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
-{{- end }}
-app.kubernetes.io/managed-by: {{ .Release.Service }}
-{{- end }}
-
-{{/*
-Selector labels
-*/}}
-{{- define "kafka-ui.selectorLabels" -}}
-app.kubernetes.io/name: {{ include "kafka-ui.name" . }}
-app.kubernetes.io/instance: {{ .Release.Name }}
-{{- end }}
-
-{{/*
-Create the name of the service account to use
-*/}}
-{{- define "kafka-ui.serviceAccountName" -}}
-{{- if .Values.serviceAccount.create }}
-{{- default (include "kafka-ui.fullname" .) .Values.serviceAccount.name }}
-{{- else }}
-{{- default "default" .Values.serviceAccount.name }}
-{{- end }}
-{{- end }}
-
-{{/*
-This allows us to check if the registry of the image is specified or not.
-*/}}
-{{- define "kafka-ui.imageName" -}}
-{{- $registryName := .Values.image.registry -}}
-{{- $repository := .Values.image.repository -}}
-{{- $tag := .Values.image.tag | default .Chart.AppVersion -}}
-{{- if $registryName }}
-{{- printf "%s/%s:%s" $registryName $repository $tag -}}
-{{- else }}
-{{- printf "%s:%s" $repository $tag -}}
-{{- end }}
-{{- end -}}
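The deleted `kafka-ui.fullname` helper encodes a common Helm naming rule: reuse the release name when it already contains the chart name, otherwise join release and chart names, then truncate to 63 characters (the Kubernetes DNS-label limit) and strip a trailing dash. A sketch of that rule outside Helm — the function name is ours, and `nameOverride` handling is omitted:

```python
def fullname(release_name: str, chart_name: str, fullname_override: str = "") -> str:
    """Mirror of the removed `kafka-ui.fullname` template (sketch only):
    trunc 63 then trimSuffix "-" on the chosen base name."""
    if fullname_override:
        name = fullname_override
    elif chart_name in release_name:
        name = release_name  # release already contains the chart name
    else:
        name = f"{release_name}-{chart_name}"
    return name[:63].rstrip("-")

print(fullname("kafka-ui", "kafka-ui"))  # kafka-ui
print(fullname("prod", "kafka-ui"))      # prod-kafka-ui
```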
@@ -1,10 +0,0 @@
-{{- if .Values.envs.config -}}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ include "kafka-ui.fullname" . }}
-  labels:
-    {{- include "kafka-ui.labels" . | nindent 4 }}
-data:
-  {{- toYaml .Values.envs.config | nindent 2 }}
-{{- end -}}
@@ -1,11 +0,0 @@
-{{- if .Values.yamlApplicationConfig -}}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ include "kafka-ui.fullname" . }}-fromvalues
-  labels:
-    {{- include "kafka-ui.labels" . | nindent 4 }}
-data:
-  config.yml: |-
-    {{- toYaml .Values.yamlApplicationConfig | nindent 4}}
-{{ end }}
@ -1,150 +0,0 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ include "kafka-ui.fullname" . }}
|
||||
labels:
|
||||
{{- include "kafka-ui.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if not .Values.autoscaling.enabled }}
|
||||
replicas: {{ .Values.replicaCount }}
|
||||
{{- end }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "kafka-ui.selectorLabels" . | nindent 6 }}
|
||||
template:
|
||||
metadata:
|
||||
annotations:
|
||||
{{- with .Values.podAnnotations }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
checksum/configFromValues: {{ include (print $.Template.BasePath "/configmap_fromValues.yaml") . | sha256sum }}
|
||||
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
|
||||
labels:
|
||||
{{- include "kafka-ui.selectorLabels" . | nindent 8 }}
|
||||
{{- if .Values.podLabels }}
|
||||
{{- toYaml .Values.podLabels | nindent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- with .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.initContainers }}
|
||||
initContainers:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
serviceAccountName: {{ include "kafka-ui.serviceAccountName" . }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.podSecurityContext | nindent 8 }}
|
||||
containers:
|
||||
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: {{ include "kafka-ui.imageName" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if or .Values.env .Values.yamlApplicationConfig .Values.yamlApplicationConfigConfigMap }}
          env:
            {{- with .Values.env }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
            {{- if or .Values.yamlApplicationConfig .Values.yamlApplicationConfigConfigMap }}
            - name: SPRING_CONFIG_ADDITIONAL-LOCATION
              {{- if .Values.yamlApplicationConfig }}
              value: /kafka-ui/config.yml
              {{- else if .Values.yamlApplicationConfigConfigMap }}
              value: /kafka-ui/{{ .Values.yamlApplicationConfigConfigMap.keyName | default "config.yml" }}
              {{- end }}
            {{- end }}
          {{- end }}
          envFrom:
            {{- if .Values.existingConfigMap }}
            - configMapRef:
                name: {{ .Values.existingConfigMap }}
            {{- end }}
            {{- if .Values.envs.config }}
            - configMapRef:
                name: {{ include "kafka-ui.fullname" . }}
            {{- end }}
            {{- if .Values.existingSecret }}
            - secretRef:
                name: {{ .Values.existingSecret }}
            {{- end }}
            {{- if .Values.envs.secret }}
            - secretRef:
                name: {{ include "kafka-ui.fullname" . }}
            {{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              {{- $contextPath := .Values.envs.config.SERVER_SERVLET_CONTEXT_PATH | default "" | printf "%s/actuator/health" | urlParse }}
              path: {{ get $contextPath "path" }}
              port: http
              {{- if .Values.probes.useHttpsScheme }}
              scheme: HTTPS
              {{- end }}
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              {{- $contextPath := .Values.envs.config.SERVER_SERVLET_CONTEXT_PATH | default "" | printf "%s/actuator/health" | urlParse }}
              path: {{ get $contextPath "path" }}
              port: http
              {{- if .Values.probes.useHttpsScheme }}
              scheme: HTTPS
              {{- end }}
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if or .Values.yamlApplicationConfig .Values.volumeMounts .Values.yamlApplicationConfigConfigMap }}
          volumeMounts:
            {{- with .Values.volumeMounts }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
            {{- if .Values.yamlApplicationConfig }}
            - name: kafka-ui-yaml-conf
              mountPath: /kafka-ui/
            {{- end }}
            {{- if .Values.yamlApplicationConfigConfigMap }}
            - name: kafka-ui-yaml-conf-configmap
              mountPath: /kafka-ui/
            {{- end }}
          {{- end }}
      {{- if or .Values.yamlApplicationConfig .Values.volumes .Values.yamlApplicationConfigConfigMap }}
      volumes:
        {{- with .Values.volumes }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- if .Values.yamlApplicationConfig }}
        - name: kafka-ui-yaml-conf
          configMap:
            name: {{ include "kafka-ui.fullname" . }}-fromvalues
        {{- end }}
        {{- if .Values.yamlApplicationConfigConfigMap }}
        - name: kafka-ui-yaml-conf-configmap
          configMap:
            name: {{ .Values.yamlApplicationConfigConfigMap.name }}
        {{- end }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
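The probe path above is built by appending `/actuator/health` to `SERVER_SERVLET_CONTEXT_PATH` and taking the `path` component of the parsed URL. A minimal Python sketch of the equivalent logic (function name is illustrative, not part of the chart):

```python
from urllib.parse import urlparse

def probe_path(context_path: str = "") -> str:
    # Mirrors: default "" | printf "%s/actuator/health" | urlParse, then get "path"
    return urlparse(context_path + "/actuator/health").path

print(probe_path())            # /actuator/health
print(probe_path("/kafka-ui")) # /kafka-ui/actuator/health
```

Going through `urlparse` rather than plain concatenation means a context path given as a full URL still yields only its path component.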
@ -1,46 +0,0 @@
{{- if .Values.autoscaling.enabled }}
{{- $kubeCapabilityVersion := semver .Capabilities.KubeVersion.Version -}}
{{- $isHigher1p25 := ge (semver "1.25" | $kubeCapabilityVersion.Compare) 0 -}}
{{- if and ($.Capabilities.APIVersions.Has "autoscaling/v2") $isHigher1p25 -}}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v2beta1
{{- end }}
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "kafka-ui.fullname" . }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "kafka-ui.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        {{- if $isHigher1p25 }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
        {{- else }}
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
        {{- end }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        {{- if $isHigher1p25 }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
        {{- else }}
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
        {{- end }}
    {{- end }}
{{- end }}
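The HPA template picks `autoscaling/v2` only when the API is actually served and the cluster is at least 1.25, otherwise it falls back to `autoscaling/v2beta1`. A Python sketch of that gate (names are illustrative):

```python
def hpa_api_version(kube_version: tuple, api_versions: set) -> str:
    # Mirrors: and (APIVersions.Has "autoscaling/v2") (cluster >= 1.25)
    is_higher_1_25 = kube_version >= (1, 25)
    if "autoscaling/v2" in api_versions and is_higher_1_25:
        return "autoscaling/v2"
    return "autoscaling/v2beta1"

print(hpa_api_version((1, 26), {"autoscaling/v2"}))  # autoscaling/v2
print(hpa_api_version((1, 24), {"autoscaling/v2"}))  # autoscaling/v2beta1
```

The same `$isHigher1p25` flag also switches the metric schema between `target.averageUtilization` (v2) and `targetAverageUtilization` (v2beta1).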
@ -1,89 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "kafka-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $kubeCapabilityVersion := semver .Capabilities.KubeVersion.Version -}}
{{- $isHigher1p19 := ge (semver "1.19" | $kubeCapabilityVersion.Compare) 0 -}}
{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") $isHigher1p19 -}}
apiVersion: networking.k8s.io/v1
{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls.enabled }}
  tls:
    - hosts:
        - {{ tpl .Values.ingress.host . }}
      secretName: {{ .Values.ingress.tls.secretName }}
  {{- end }}
  {{- if .Values.ingress.ingressClassName }}
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  {{- end }}
  rules:
    - http:
        paths:
          {{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") $isHigher1p19 -}}
          {{- range .Values.ingress.precedingPaths }}
          - path: {{ .path }}
            pathType: Prefix
            backend:
              service:
                name: {{ .serviceName }}
                port:
                  number: {{ .servicePort }}
          {{- end }}
          - backend:
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
            pathType: Prefix
            {{- if .Values.ingress.path }}
            path: {{ .Values.ingress.path }}
            {{- end }}
          {{- range .Values.ingress.succeedingPaths }}
          - path: {{ .path }}
            pathType: Prefix
            backend:
              service:
                name: {{ .serviceName }}
                port:
                  number: {{ .servicePort }}
          {{- end }}
      {{- if tpl .Values.ingress.host . }}
      host: {{ tpl .Values.ingress.host . }}
      {{- end }}
      {{- else -}}
          {{- range .Values.ingress.precedingPaths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ .serviceName }}
              servicePort: {{ .servicePort }}
          {{- end }}
          - backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
            {{- if .Values.ingress.path }}
            path: {{ .Values.ingress.path }}
            {{- end }}
          {{- range .Values.ingress.succeedingPaths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ .serviceName }}
              servicePort: {{ .servicePort }}
          {{- end }}
      {{- if tpl .Values.ingress.host . }}
      host: {{ tpl .Values.ingress.host . }}
      {{- end }}
{{- end }}
{{- end }}
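In both API branches the template emits `precedingPaths` first, then the chart's own backend, then `succeedingPaths`, so the position of the default backend in the rule list is fixed. A sketch of that ordering (hypothetical helper, not chart code):

```python
def ingress_paths(preceding: list, default_backend: dict, succeeding: list) -> list:
    # Mirrors the template: precedingPaths, chart backend, succeedingPaths
    return [*preceding, default_backend, *succeeding]

paths = ingress_paths(
    [{"path": "/auth", "service": "oauth2-proxy"}],   # precedingPaths
    {"path": "/", "service": "kafka-ui"},             # the chart's backend
    [{"path": "/static", "service": "cdn"}],          # succeedingPaths
)
```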
@ -1,18 +0,0 @@
{{- if and .Values.networkPolicy.enabled .Values.networkPolicy.egressRules.customRules }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ printf "%s-egress" (include "kafka-ui.fullname" .) }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
spec:
  podSelector:
    matchLabels:
      {{- include "kafka-ui.selectorLabels" . | nindent 6 }}
  policyTypes:
    - Egress
  egress:
    {{- if .Values.networkPolicy.egressRules.customRules }}
    {{- toYaml .Values.networkPolicy.egressRules.customRules | nindent 4 }}
    {{- end }}
{{- end }}
@ -1,18 +0,0 @@
{{- if and .Values.networkPolicy.enabled .Values.networkPolicy.ingressRules.customRules }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ printf "%s-ingress" (include "kafka-ui.fullname" .) }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
spec:
  podSelector:
    matchLabels:
      {{- include "kafka-ui.selectorLabels" . | nindent 6 }}
  policyTypes:
    - Ingress
  ingress:
    {{- if .Values.networkPolicy.ingressRules.customRules }}
    {{- toYaml .Values.networkPolicy.ingressRules.customRules | nindent 4 }}
    {{- end }}
{{- end }}
@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "kafka-ui.fullname" . }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
type: Opaque
data:
  {{- range $key, $val := .Values.envs.secret }}
  {{ $key }}: {{ $val | b64enc | quote }}
  {{- end -}}
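Each entry of `.Values.envs.secret` is base64-encoded with `b64enc` before landing in the Secret's `data` map. The same transformation in Python (helper name is illustrative):

```python
import base64

def secret_data(envs_secret: dict) -> dict:
    # Mirrors: {{ $key }}: {{ $val | b64enc | quote }} for each entry
    return {k: base64.b64encode(v.encode()).decode() for k, v in envs_secret.items()}

print(secret_data({"ADMIN_PASSWORD": "admin"}))  # {'ADMIN_PASSWORD': 'YWRtaW4='}
```

Kubernetes decodes these values again when injecting them via `secretRef`, so the pod sees the plain strings.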
@ -1,22 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "kafka-ui.fullname" . }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
  {{- if .Values.service.annotations }}
  annotations:
    {{ toYaml .Values.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
      {{- if (and (eq .Values.service.type "NodePort") .Values.service.nodePort) }}
      nodePort: {{ .Values.service.nodePort }}
      {{- end }}
  selector:
    {{- include "kafka-ui.selectorLabels" . | nindent 4 }}
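The `nodePort` field is only rendered when the service type is `NodePort` and an explicit port is set; otherwise Kubernetes assigns one (or none for ClusterIP). A sketch of the condition (hypothetical helper):

```python
def render_node_port(service: dict):
    # Mirrors: and (eq .Values.service.type "NodePort") .Values.service.nodePort
    if service.get("type") == "NodePort" and service.get("nodePort"):
        return service["nodePort"]
    return None

print(render_node_port({"type": "NodePort", "nodePort": 30080}))  # 30080
print(render_node_port({"type": "ClusterIP", "nodePort": 30080}))  # None
```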
@ -1,12 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "kafka-ui.serviceAccountName" . }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
@ -1,158 +0,0 @@
replicaCount: 1

image:
  registry: docker.io
  repository: provectuslabs/kafka-ui
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

existingConfigMap: ""
yamlApplicationConfig:
  {}
  # kafka:
  #   clusters:
  #     - name: yaml
  #       bootstrapServers: kafka-service:9092
  # spring:
  #   security:
  #     oauth2:
  # auth:
  #   type: disabled
  # management:
  #   health:
  #     ldap:
  #       enabled: false
yamlApplicationConfigConfigMap:
  {}
  # keyName: config.yml
  # name: configMapName
existingSecret: ""
envs:
  secret: {}
  config: {}

networkPolicy:
  enabled: false
  egressRules:
    ## Additional custom egress rules
    ## e.g:
    ## customRules:
    ##   - to:
    ##       - namespaceSelector:
    ##           matchLabels:
    ##             label: example
    customRules: []
  ingressRules:
    ## Additional custom ingress rules
    ## e.g:
    ## customRules:
    ##   - from:
    ##       - namespaceSelector:
    ##           matchLabels:
    ##             label: example
    customRules: []

podAnnotations: {}
podLabels: {}

## Annotations to be added to kafka-ui Deployment
##
annotations: {}

## Set field scheme to HTTPS for the readiness and liveness probes
##
probes:
  useHttpsScheme: false

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80
  # if you want to force a specific nodePort; must be used with service.type=NodePort
  # nodePort:

# Ingress configuration
ingress:
  # Enable ingress resource
  enabled: false

  # Annotations for the Ingress
  annotations: {}

  # ingressClassName for the Ingress
  ingressClassName: ""

  # The path for the Ingress
  path: "/"

  # The hostname for the Ingress
  host: ""

  # configs for Ingress TLS
  tls:
    # Enable TLS termination for the Ingress
    enabled: false
    # the name of a pre-created Secret containing a TLS private key and certificate
    secretName: ""

  # HTTP paths to add to the Ingress before the default path
  precedingPaths: []

  # HTTP paths to add to the Ingress after the default path
  succeedingPaths: []

resources:
  {}
  # limits:
  #   cpu: 200m
  #   memory: 512Mi
  # requests:
  #   cpu: 200m
  #   memory: 256Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

env: {}

initContainers: {}

volumeMounts: {}

volumes: {}
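Four of the values above (`existingConfigMap`, `envs.config`, `existingSecret`, `envs.secret`) feed the deployment's `envFrom` list in a fixed order. A Python sketch of how the references are assembled (helper and `fullname` default are illustrative):

```python
def env_from(values: dict, fullname: str = "kafka-ui") -> list:
    # Order mirrors the deployment template: existing ConfigMap, chart
    # ConfigMap (envs.config), existing Secret, chart Secret (envs.secret).
    refs = []
    if values.get("existingConfigMap"):
        refs.append({"configMapRef": {"name": values["existingConfigMap"]}})
    if values.get("envs", {}).get("config"):
        refs.append({"configMapRef": {"name": fullname}})
    if values.get("existingSecret"):
        refs.append({"secretRef": {"name": values["existingSecret"]}})
    if values.get("envs", {}).get("secret"):
        refs.append({"secretRef": {"name": fullname}})
    return refs
```

Because later `envFrom` sources win on key conflicts in Kubernetes, secrets listed after config maps here take precedence for overlapping variable names.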
@ -1,43 +0,0 @@
# Quick Start with docker-compose

Environment variables documentation - [see usage](README.md#env_variables).<br/>
We have plenty of example files with more complex configurations. Please check them out in the ``docker`` directory.

* Add a new service in docker-compose.yml

```yaml
version: '2'
services:
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
```

* If you prefer UI for Apache Kafka in read-only mode

```yaml
version: '2'
services:
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_READONLY=true
```

* Start the UI for Apache Kafka process

```bash
docker-compose up -d kafka-ui
```
@ -8,9 +8,9 @@
6. [kafka-ui-auth-context.yaml](./kafka-ui-auth-context.yaml) - Basic (username/password) authentication with a custom path (URL) (issue 861).
7. [e2e-tests.yaml](./e2e-tests.yaml) - Configuration with different connectors (github-source, s3, sink-activities, source-activities) and Ksql functionality.
8. [kafka-ui-jmx-secured.yml](./kafka-ui-jmx-secured.yml) - Kafka's JMX with SSL and authentication.
9. [kafka-ui-reverse-proxy.yaml](./nginx-proxy.yaml) - An example for using the app behind a proxy (like nginx).
10. [kafka-ui-sasl.yaml](./kafka-ui-sasl.yaml) - SASL auth for Kafka.
11. [kafka-ui-traefik-proxy.yaml](./traefik-proxy.yaml) - Traefik-specific proxy configuration.
12. [oauth-cognito.yaml](./oauth-cognito.yaml) - OAuth2 with Cognito.
13. [kafka-ui-with-jmx-exporter.yaml](./kafka-ui-with-jmx-exporter.yaml) - A configuration with 2 kafka clusters with enabled prometheus jmx exporters instead of jmx.
14. [kafka-with-zookeeper.yaml](./kafka-with-zookeeper.yaml) - An example for using kafka with zookeeper.
@ -11,14 +11,14 @@ services:
      test: wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health
      interval: 30s
      timeout: 10s
      retries: 10
    depends_on:
      kafka0:
        condition: service_healthy
      schemaregistry0:
        condition: service_healthy
      kafka-connect0:
        condition: service_healthy
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
@ -33,10 +33,10 @@ services:
    hostname: kafka0
    container_name: kafka0
    healthcheck:
      test: unset JMX_PORT && KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka0 -Dcom.sun.management.jmxremote.rmi.port=9999" && kafka-broker-api-versions --bootstrap-server=localhost:9092
      interval: 30s
      timeout: 10s
      retries: 10
    ports:
      - "9092:9092"
      - "9997:9997"
@ -68,12 +68,12 @@ services:
      - 8085:8085
    depends_on:
      kafka0:
        condition: service_healthy
    healthcheck:
      test: [ "CMD", "timeout", "1", "curl", "--silent", "--fail", "http://schemaregistry0:8085/subjects" ]
      interval: 30s
      timeout: 10s
      retries: 10
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
@ -93,11 +93,11 @@ services:
      - 8083:8083
    depends_on:
      kafka0:
        condition: service_healthy
      schemaregistry0:
        condition: service_healthy
    healthcheck:
      test: [ "CMD", "nc", "127.0.0.1", "8083" ]
      interval: 30s
      timeout: 10s
      retries: 10
@ -118,16 +118,16 @@ services:
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      # AWS_ACCESS_KEY_ID: ""
      # AWS_SECRET_ACCESS_KEY: ""

  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
      - ./data/message.json:/data/message.json
    depends_on:
      kafka0:
        condition: service_healthy
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      cub kafka-ready -b kafka0:29092 1 30 && \
      kafka-topics --create --topic users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
@ -142,10 +142,10 @@ services:
    ports:
      - 5432:5432
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U dev_user" ]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      POSTGRES_USER: 'dev_user'
      POSTGRES_PASSWORD: '12345'
@ -154,7 +154,7 @@ services:
    image: ellerbrock/alpine-bash-curl-ssl
    depends_on:
      postgres-db:
        condition: service_healthy
      kafka-connect0:
        condition: service_healthy
    volumes:
@ -164,7 +164,7 @@ services:
  ksqldb:
    image: confluentinc/ksqldb-server:0.18.0
    healthcheck:
      test: [ "CMD", "timeout", "1", "curl", "--silent", "--fail", "http://localhost:8088/info" ]
      interval: 30s
      timeout: 10s
      retries: 10
@ -174,7 +174,7 @@ services:
      kafka-connect0:
        condition: service_healthy
      schemaregistry0:
        condition: service_healthy
    ports:
      - 8088:8088
    environment:
@ -187,4 +187,4 @@ services:
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
      KSQL_KSQL_SERVICE_ID: my_ksql_1
      KSQL_KSQL_HIDDEN_TOPICS: '^_.*'
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
0  documentation/compose/jaas/client.properties  Normal file → Executable file
0  documentation/compose/jaas/kafka_connect.jaas  Normal file → Executable file
0  documentation/compose/jaas/kafka_connect.password  Normal file → Executable file
@ -11,4 +11,8 @@ KafkaClient {
  user_admin="admin-secret";
};

Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zkuser"
  password="zkuserpassword";
};
0  documentation/compose/jaas/schema_registry.jaas  Normal file → Executable file
0  documentation/compose/jaas/schema_registry.password  Normal file → Executable file
4  documentation/compose/jaas/zookeeper_jaas.conf  Normal file
@ -0,0 +1,4 @@
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_zkuser="zkuserpassword";
};
@ -1,2 +1,2 @@
rules:
  - pattern: ".*"
@ -57,7 +57,7 @@ services:
  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
      - ./data/message.json:/data/message.json
    depends_on:
      - kafka1
    command: "bash -c 'echo Waiting for Kafka to be ready... && \

@ -80,4 +80,4 @@ services:
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schemaregistry1:8085
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME: admin
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD: letmein
@ -1,84 +0,0 @@
---
version: "2"
services:
  kafka0:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka0
    container_name: kafka0
    ports:
      - "9092:9092"
      - "9997:9997"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka0:29092,PLAINTEXT_HOST://localhost:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1@kafka0:29093"
      KAFKA_LISTENERS: "PLAINTEXT://kafka0:29092,CONTROLLER://kafka0:29093,PLAINTEXT_HOST://0.0.0.0:9092"
      KAFKA_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
      KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_LOG_DIRS: "/tmp/kraft-combined-logs"
    volumes:
      - ./scripts/update_run_cluster.sh:/tmp/update_run.sh
      - ./scripts/clusterID:/tmp/clusterID
    command: 'bash -c ''if [ ! -f /tmp/update_run.sh ]; then echo "ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'''

  schemaregistry0:
    image: confluentinc/cp-schema-registry:7.2.1
    depends_on:
      - kafka0
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry0
      SCHEMA_REGISTRY_LISTENERS: http://schemaregistry0:8085

      SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
    ports:
      - 8085:8085

  kafka-connect0:
    image: confluentinc/cp-kafka-connect:7.2.1
    ports:
      - 8083:8083
    depends_on:
      - kafka0
      - schemaregistry0
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: _connect_status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"

  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
      - ./message.json:/data/message.json
    depends_on:
      - kafka0
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      cub kafka-ready -b kafka0:29092 1 30 && \
      kafka-topics --create --topic users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
      kafka-topics --create --topic messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
      kafka-console-producer --bootstrap-server kafka0:29092 --topic users < /data/message.json'"
@ -15,27 +15,25 @@ services:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092 # SSL LISTENER!
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: secret
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION: /kafka.keystore.jks
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: secret
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION

      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: https://schemaregistry0:8085
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION: /kafka.keystore.jks
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD: "secret"
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTORELOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_TRUSTSTOREPASSWORD: "secret"

      KAFKA_CLUSTERS_0_KSQLDBSERVER: https://ksqldb0:8088
      KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION: /kafka.keystore.jks
      KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD: "secret"
      KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTORELOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_TRUSTSTOREPASSWORD: "secret"

      KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: local
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: https://kafka-connect0:8083
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION: /kafka.keystore.jks
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD: "secret"
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTORELOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_TRUSTSTOREPASSWORD: "secret"

      KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD: "secret"
      DYNAMIC_CONFIG_ENABLED: 'true' # not necessary for ssl, added for tests

    volumes:
      - ./ssl/kafka.truststore.jks:/kafka.truststore.jks
      - ./ssl/kafka.keystore.jks:/kafka.keystore.jks
@ -11,11 +11,11 @@ services:
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: secret
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION: /kafka.keystore.jks
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: "secret"
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092 # SSL LISTENER!
      KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION: /kafka.truststore.jks
      KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD: "secret"
      KAFKA_CLUSTERS_0_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION
    volumes:
      - ./ssl/kafka.truststore.jks:/kafka.truststore.jks

@ -60,4 +60,4 @@ services:
      - ./ssl/creds:/etc/kafka/secrets/creds
      - ./ssl/kafka.truststore.jks:/etc/kafka/secrets/kafka.truststore.jks
      - ./ssl/kafka.keystore.jks:/etc/kafka/secrets/kafka.keystore.jks
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
59  documentation/compose/kafka-ui-acl-with-zk.yaml  Normal file
@ -0,0 +1,59 @@
---
version: '2'
services:

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - zookeeper
      - kafka
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
      KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_PLAINTEXT
      KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
      KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";'

  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    environment:
      JVMFLAGS: "-Djava.security.auth.login.config=/etc/zookeeper/zookeeper_jaas.conf"
    volumes:
      - ./jaas/zookeeper_jaas.conf:/etc/zookeeper/zookeeper_jaas.conf
    ports:
      - 2181:2181

  kafka:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "9997:9997"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'SASL_PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092'
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/jaas/kafka_server.conf"
      KAFKA_AUTHORIZER_CLASS_NAME: "kafka.security.authorizer.AclAuthorizer"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka:29093'
      KAFKA_LISTENERS: 'SASL_PLAINTEXT://kafka:29092,CONTROLLER://kafka:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SASL_PLAINTEXT'
      KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN'
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: 'PLAIN'
      KAFKA_SECURITY_PROTOCOL: 'SASL_PLAINTEXT'
      KAFKA_SUPER_USERS: 'User:admin'
    volumes:
      - ./scripts/update_run.sh:/tmp/update_run.sh
      - ./jaas:/etc/kafka/jaas
@@ -19,6 +19,9 @@ services:
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry0:8085
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect0:8083
+      DYNAMIC_CONFIG_ENABLED: 'true' # not necessary, added for tests
+      KAFKA_CLUSTERS_0_AUDIT_TOPICAUDITENABLED: 'true'
+      KAFKA_CLUSTERS_0_AUDIT_CONSOLEAUDITENABLED: 'true'

  kafka0:
    image: confluentinc/cp-kafka:7.2.1.arm64

@@ -92,7 +95,7 @@ services:
  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1.arm64
    volumes:
-      - ./message.json:/data/message.json
+      - ./data/message.json:/data/message.json
    depends_on:
      - kafka0
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
@@ -69,7 +69,7 @@ services:
    build:
      context: ./kafka-connect
      args:
-    image: confluentinc/cp-kafka-connect:6.0.1
+    image: confluentinc/cp-kafka-connect:7.2.1
    ports:
      - 8083:8083
    depends_on:

@@ -104,7 +104,7 @@ services:
  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
-      - ./message.json:/data/message.json
+      - ./data/message.json:/data/message.json
    depends_on:
      - kafka0
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
@@ -7,11 +7,8 @@ services:
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
      - 5005:5005
    depends_on:
      - kafka0
      - schemaregistry0
      - kafka-connect0
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092

@@ -19,15 +16,12 @@ services:
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect0:8083
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
      KAFKA_CLUSTERS_0_METRICS_SSL: 'true'
      KAFKA_CLUSTERS_0_METRICS_USERNAME: root
      KAFKA_CLUSTERS_0_METRICS_PASSWORD: password
      JAVA_OPTS: >-
        -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
        -Djavax.net.ssl.trustStore=/jmx/clienttruststore
        -Djavax.net.ssl.trustStorePassword=12345678
        -Djavax.net.ssl.keyStore=/jmx/clientkeystore
        -Djavax.net.ssl.keyStorePassword=12345678
      KAFKA_CLUSTERS_0_METRICS_KEYSTORE_LOCATION: /jmx/clientkeystore
      KAFKA_CLUSTERS_0_METRICS_KEYSTORE_PASSWORD: '12345678'
      KAFKA_CLUSTERS_0_SSL_TRUSTSTORE_LOCATION: /jmx/clienttruststore
      KAFKA_CLUSTERS_0_SSL_TRUSTSTORE_PASSWORD: '12345678'
    volumes:
      - ./jmx/clienttruststore:/jmx/clienttruststore
      - ./jmx/clientkeystore:/jmx/clientkeystore

@@ -70,8 +64,6 @@ services:
        -Dcom.sun.management.jmxremote.access.file=/jmx/jmxremote.access
        -Dcom.sun.management.jmxremote.rmi.port=9997
        -Djava.rmi.server.hostname=kafka0
        -Djava.rmi.server.logCalls=true
      # -Djavax.net.debug=ssl:handshake
    volumes:
      - ./jmx/serverkeystore:/jmx/serverkeystore
      - ./jmx/servertruststore:/jmx/servertruststore
@@ -79,56 +71,3 @@ services:
      - ./jmx/jmxremote.access:/jmx/jmxremote.access
      - ./scripts/update_run.sh:/tmp/update_run.sh
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

  schemaregistry0:
    image: confluentinc/cp-schema-registry:7.2.1
    ports:
      - 8085:8085
    depends_on:
      - kafka0
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry0
      SCHEMA_REGISTRY_LISTENERS: http://schemaregistry0:8085

      SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas

  kafka-connect0:
    image: confluentinc/cp-kafka-connect:7.2.1
    ports:
      - 8083:8083
    depends_on:
      - kafka0
      - schemaregistry0
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: _connect_status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"

  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
      - ./message.json:/data/message.json
    depends_on:
      - kafka0
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
               cub kafka-ready -b kafka0:29092 1 30 && \
               kafka-topics --create --topic second.users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
               kafka-topics --create --topic first.messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
               kafka-console-producer --bootstrap-server kafka0:29092 --topic second.users < /data/message.json'"
@@ -15,6 +15,7 @@ services:
      KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_PLAINTEXT
      KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
      KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";'
+      DYNAMIC_CONFIG_ENABLED: true # not necessary for sasl auth, added for tests

  kafka:
    image: confluentinc/cp-kafka:7.2.1

@@ -48,4 +49,4 @@ services:
    volumes:
      - ./scripts/update_run.sh:/tmp/update_run.sh
      - ./jaas:/etc/kafka/jaas
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
@@ -14,13 +14,16 @@ services:
      kafka.clusters.0.name: SerdeExampleCluster
      kafka.clusters.0.bootstrapServers: kafka0:29092
      kafka.clusters.0.schemaRegistry: http://schemaregistry0:8085

      # optional SSL settings for cluster (will be used by SchemaRegistry serde, if set)
      #kafka.clusters.0.ssl.keystoreLocation: /kafka.keystore.jks
      #kafka.clusters.0.ssl.keystorePassword: "secret"
      #kafka.clusters.0.ssl.truststoreLocation: /kafka.truststore.jks
      #kafka.clusters.0.ssl.truststorePassword: "secret"

      # optional auth properties for SR
      #kafka.clusters.0.schemaRegistryAuth.username: "use"
      #kafka.clusters.0.schemaRegistryAuth.password: "pswrd"
      #kafka.clusters.0.schemaRegistrySSL.keystoreLocation: /kafka.keystore.jks
      #kafka.clusters.0.schemaRegistrySSL.keystorePassword: "secret"
      #kafka.clusters.0.schemaRegistrySSL.truststoreLocation: /kafka.truststore.jks
      #kafka.clusters.0.schemaRegistrySSL.truststorePassword: "secret"

      kafka.clusters.0.defaultKeySerde: Int32 #optional
      kafka.clusters.0.defaultValueSerde: String #optional

@@ -28,8 +31,7 @@ services:
      kafka.clusters.0.serde.0.name: ProtobufFile
      kafka.clusters.0.serde.0.topicKeysPattern: "topic1"
      kafka.clusters.0.serde.0.topicValuesPattern: "topic1"
-      kafka.clusters.0.serde.0.properties.protobufFiles.0: /protofiles/key-types.proto
-      kafka.clusters.0.serde.0.properties.protobufFiles.1: /protofiles/values.proto
+      kafka.clusters.0.serde.0.properties.protobufFilesDir: /protofiles/
      kafka.clusters.0.serde.0.properties.protobufMessageNameForKey: test.MyKey # default type for keys
      kafka.clusters.0.serde.0.properties.protobufMessageName: test.MyValue # default type for values
      kafka.clusters.0.serde.0.properties.protobufMessageNameForKeyByTopic.topic1: test.MySpecificTopicKey # keys type for topic "topic1"

@@ -52,7 +54,7 @@ services:
      kafka.clusters.0.serde.4.properties.keySchemaNameTemplate: "%s-key"
      kafka.clusters.0.serde.4.properties.schemaNameTemplate: "%s-value"
      #kafka.clusters.0.serde.4.topicValuesPattern: "sr2-topic.*"
-      # optional auth and ssl properties for SR:
+      # optional auth and ssl properties for SR (overrides cluster-level):
      #kafka.clusters.0.serde.4.properties.username: "user"
      #kafka.clusters.0.serde.4.properties.password: "passw"
      #kafka.clusters.0.serde.4.properties.keystoreLocation: /kafka.keystore.jks
@@ -24,6 +24,7 @@ services:
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: kafka1:29092
      KAFKA_CLUSTERS_1_METRICS_PORT: 9998
      KAFKA_CLUSTERS_1_SCHEMAREGISTRY: http://schemaregistry1:8085
+      DYNAMIC_CONFIG_ENABLED: 'true'

  kafka0:
    image: confluentinc/cp-kafka:7.2.1

@@ -114,7 +115,7 @@ services:
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas

  kafka-connect0:
-    image: confluentinc/cp-kafka-connect:6.0.1
+    image: confluentinc/cp-kafka-connect:7.2.1
    ports:
      - 8083:8083
    depends_on:

@@ -141,7 +142,7 @@ services:
  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
-      - ./message.json:/data/message.json
+      - ./data/message.json:/data/message.json
    depends_on:
      - kafka1
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
@@ -38,7 +38,7 @@ services:
  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
-      - ./message.json:/data/message.json
+      - ./data/message.json:/data/message.json
    depends_on:
      - kafka
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
@@ -15,26 +15,23 @@ services:
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schemaregistry0:8085

      AUTH_TYPE: "LDAP"
      SPRING_LDAP_URLS: "ldap://ldap:10389"
      SPRING_LDAP_DN_PATTERN: "cn={0},ou=people,dc=planetexpress,dc=com"

      # ===== USER SEARCH FILTER INSTEAD OF DN =====

      # SPRING_LDAP_USERFILTER_SEARCHBASE: "dc=planetexpress,dc=com"
      # SPRING_LDAP_USERFILTER_SEARCHFILTER: "(&(uid={0})(objectClass=inetOrgPerson))"
      # LDAP ADMIN USER
      # SPRING_LDAP_ADMINUSER: "cn=admin,dc=planetexpress,dc=com"
      # SPRING_LDAP_ADMINPASSWORD: "GoodNewsEveryone"

      # ===== ACTIVE DIRECTORY =====

      # OAUTH2.LDAP.ACTIVEDIRECTORY: true
      # OAUTH2.LDAP.ACTIVEDIRECTORY.DOMAIN: "memelord.lol"
      SPRING_LDAP_BASE: "cn={0},ou=people,dc=planetexpress,dc=com"
      SPRING_LDAP_ADMIN_USER: "cn=admin,dc=planetexpress,dc=com"
      SPRING_LDAP_ADMIN_PASSWORD: "GoodNewsEveryone"
      SPRING_LDAP_USER_FILTER_SEARCH_BASE: "dc=planetexpress,dc=com"
      SPRING_LDAP_USER_FILTER_SEARCH_FILTER: "(&(uid={0})(objectClass=inetOrgPerson))"
      SPRING_LDAP_GROUP_FILTER_SEARCH_BASE: "ou=people,dc=planetexpress,dc=com"
      # OAUTH2.LDAP.ACTIVEDIRECTORY: true
      # OAUTH2.LDAP.ACTIVEDIRECTORY.DOMAIN: "memelord.lol"

  ldap:
    image: rroemhild/test-openldap:latest
    hostname: "ldap"
    ports:
      - 10389:10389

  kafka0:
    image: confluentinc/cp-kafka:7.2.1

@@ -79,4 +76,4 @@ services:

      SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
@@ -4,7 +4,7 @@ services:
  nginx:
    image: nginx:latest
    volumes:
-      - ./proxy.conf:/etc/nginx/conf.d/default.conf
+      - ./data/proxy.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
@@ -1,22 +0,0 @@
---
version: '3.4'
services:

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:local
    ports:
      - 8080:8080
    depends_on:
      - kafka0 # OMITTED, TAKE UP AN EXAMPLE FROM OTHER COMPOSE FILES
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SSL
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
      AUTH_TYPE: OAUTH2_COGNITO
      AUTH_COGNITO_ISSUER_URI: "https://cognito-idp.eu-central-1.amazonaws.com/eu-central-xxxxxx"
      AUTH_COGNITO_CLIENT_ID: ""
      AUTH_COGNITO_CLIENT_SECRET: ""
      AUTH_COGNITO_SCOPE: "openid"
      AUTH_COGNITO_USER_NAME_ATTRIBUTE: "username"
      AUTH_COGNITO_LOGOUT_URI: "https://<domain>.auth.eu-central-1.amazoncognito.com/logout"
@@ -1,11 +1,15 @@
syntax = "proto3";
package test;

import "google/protobuf/wrappers.proto";

message MyKey {
  string myKeyF1 = 1;
  google.protobuf.UInt64Value uint_64_wrapper = 2;
}

message MySpecificTopicKey {
  string special_field1 = 1;
  string special_field2 = 2;
  google.protobuf.FloatValue float_wrapper = 3;
}
@@ -9,4 +9,6 @@ message MySpecificTopicValue {
message MyValue {
  int32 version = 1;
  string payload = 2;
  map<int32, string> intToStringMap = 3;
  map<string, MyValue> strToObjMap = 4;
}
@@ -1,41 +0,0 @@
# How to configure AWS IAM Authentication

UI for Apache Kafka comes with the built-in [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth) library.

You can pass SASL configs in the properties section for each cluster.

More details can be found here: [aws-msk-iam-auth](https://github.com/aws/aws-msk-iam-auth)

## Examples:

Please replace
* <KAFKA_URL> with broker list
* <PROFILE_NAME> with your aws profile

### Running From Docker Image

```sh
docker run -p 8080:8080 \
    -e KAFKA_CLUSTERS_0_NAME=local \
    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>"; \
    -d provectuslabs/kafka-ui:latest
```

### Configuring by application.yaml

```yaml
kafka:
  clusters:
    - name: local
      bootstrapServers: <KAFKA_URL>
      properties:
        security.protocol: SASL_SSL
        sasl.mechanism: AWS_MSK_IAM
        sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
        sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";
```
@@ -1,123 +0,0 @@
# Topics data masking

You can configure kafka-ui to mask sensitive data shown on the Messages page.

Several masking policies are supported:

### REMOVE
For JSON objects, remove the target fields; otherwise, return the string "null".
```yaml
- type: REMOVE
  fields: [ "id", "name" ]
  ...
```

Apply examples:
```
{ "id": 1234, "name": { "first": "James" }, "age": 30 }
 ->
{ "age": 30 }
```
```
non-json string -> null
```
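The REMOVE transformation above can be sketched in a few lines of Python. This is a hedged illustration of the documented behavior, not kafka-ui's actual implementation, and it handles top-level fields only:

```python
import json

def remove_fields(raw: str, fields: set) -> str:
    """Drop the target fields from a JSON object; non-JSON input becomes "null"."""
    try:
        obj = json.loads(raw)
    except ValueError:
        return "null"
    if not isinstance(obj, dict):
        return "null"
    return json.dumps({k: v for k, v in obj.items() if k not in fields})

masked = remove_fields('{"id": 1234, "name": {"first": "James"}, "age": 30}',
                       {"id", "name"})
assert json.loads(masked) == {"age": 30}
assert remove_fields("non-json string", {"id"}) == "null"
```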

### REPLACE
For JSON objects, replace the target fields' values with the specified replacement string (by default `***DATA_MASKED***`). Note: if a target field's value is itself an object, the replacement is applied to all of its fields recursively (see example).

```yaml
- type: REPLACE
  fields: [ "id", "name" ]
  replacement: "***" #optional, "***DATA_MASKED***" by default
  ...
```

Apply examples:
```
{ "id": 1234, "name": { "first": "James", "last": "Bond" }, "age": 30 }
 ->
{ "id": "***", "name": { "first": "***", "last": "***" }, "age": 30 }
```
```
non-json string -> ***
```
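A minimal sketch of the REPLACE recursion described above (an illustration under the documented rules, not kafka-ui code):

```python
import json

def replace_fields(raw, fields, replacement="***DATA_MASKED***"):
    """Replace target fields' values; object values are replaced field-by-field, recursively."""
    def deep(value):
        if isinstance(value, dict):
            return {k: deep(v) for k, v in value.items()}
        return replacement

    try:
        obj = json.loads(raw)
    except ValueError:
        return replacement
    if not isinstance(obj, dict):
        return replacement
    return json.dumps({k: (deep(v) if k in fields else v) for k, v in obj.items()})

out = replace_fields('{"id": 1234, "name": {"first": "James", "last": "Bond"}, "age": 30}',
                     {"id", "name"}, replacement="***")
assert json.loads(out) == {"id": "***",
                           "name": {"first": "***", "last": "***"},
                           "age": 30}
```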

### MASK
Mask the target fields' values with the specified masking characters, recursively (spaces and line separators are kept as-is).
The `pattern` array specifies which symbols are used to replace upper-case chars, lower-case chars, digits, and other symbols, respectively.

```yaml
- type: MASK
  fields: [ "id", "name" ]
  pattern: ["A", "a", "N", "_"] # optional, default is ["X", "x", "n", "-"]
  ...
```

Apply examples:
```
{ "id": 1234, "name": { "first": "James", "last": "Bond!" }, "age": 30 }
 ->
{ "id": "NNNN", "name": { "first": "Aaaaa", "last": "Aaaa_" }, "age": 30 }
```
```
Some string! -> Aaaa aaaaaa_
```
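The per-character mapping of the MASK policy can be sketched like this (a plain-Python illustration of the documented rule, not kafka-ui's implementation):

```python
def mask(value: str, pattern=("X", "x", "n", "-")) -> str:
    """pattern[0] replaces upper-case letters, pattern[1] lower-case,
    pattern[2] digits, pattern[3] everything else; whitespace is kept as-is."""
    out = []
    for ch in value:
        if ch.isspace():
            out.append(ch)
        elif ch.isupper():
            out.append(pattern[0])
        elif ch.islower():
            out.append(pattern[1])
        elif ch.isdigit():
            out.append(pattern[2])
        else:
            out.append(pattern[3])
    return "".join(out)

# reproduces the document's example
assert mask("Some string!", ("A", "a", "N", "_")) == "Aaaa aaaaaa_"
assert mask("1234", ("A", "a", "N", "_")) == "NNNN"
```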

----

For each policy, if `fields` is not specified, the policy is applied to all of the object's fields, or to the whole string if the payload is not a JSON object.

You can specify which masks are applied to a topic's keys/values. Multiple policies are applied if the topic matches several policies' patterns.

Yaml configuration example:
```yaml
kafka:
  clusters:
    - name: ClusterName
      # Other Cluster configuration omitted ...
      masking:
        - type: REMOVE
          fields: [ "id" ]
          topicKeysPattern: "events-with-ids-.*"
          topicValuesPattern: "events-with-ids-.*"

        - type: REPLACE
          fields: [ "companyName", "organizationName" ]
          replacement: "***MASKED_ORG_NAME***" #optional
          topicValuesPattern: "org-events-.*"

        - type: MASK
          fields: [ "name", "surname" ]
          pattern: ["A", "a", "N", "_"] #optional
          topicValuesPattern: "user-states"

        - type: MASK
          topicValuesPattern: "very-secured-topic"
```

The same configuration in env-vars fashion:
```
...
KAFKA_CLUSTERS_0_MASKING_0_TYPE: REMOVE
KAFKA_CLUSTERS_0_MASKING_0_FIELDS_0: "id"
KAFKA_CLUSTERS_0_MASKING_0_TOPICKEYSPATTERN: "events-with-ids-.*"
KAFKA_CLUSTERS_0_MASKING_0_TOPICVALUESPATTERN: "events-with-ids-.*"

KAFKA_CLUSTERS_0_MASKING_1_TYPE: REPLACE
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_0: "companyName"
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_1: "organizationName"
KAFKA_CLUSTERS_0_MASKING_1_REPLACEMENT: "***MASKED_ORG_NAME***"
KAFKA_CLUSTERS_0_MASKING_1_TOPICVALUESPATTERN: "org-events-.*"

KAFKA_CLUSTERS_0_MASKING_2_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_0: "name"
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_1: "surname"
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_0: 'A'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_1: 'a'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_2: 'N'
KAFKA_CLUSTERS_0_MASKING_2_PATTERN_3: '_'
KAFKA_CLUSTERS_0_MASKING_2_TOPICVALUESPATTERN: "user-states"

KAFKA_CLUSTERS_0_MASKING_3_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_3_TOPICVALUESPATTERN: "very-secured-topic"
```
@@ -1,51 +0,0 @@
# Kafkaui Protobuf Support

### This document is deprecated, please see examples in the [Serialization document](Serialization.md).

Kafkaui supports deserializing protobuf messages in two ways:
1. Using Confluent Schema Registry's [protobuf support](https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-protobuf.html).
2. Supplying a protobuf file as well as a configuration that maps topic names to protobuf types.

## Configuring Kafkaui with a Protobuf File

To configure Kafkaui to deserialize protobuf messages using a supplied protobuf schema, add the following to the config:
```yaml
kafka:
  clusters:
    - # Cluster configuration omitted.
      # protobufFile is the path to the protobuf schema. (deprecated: please use "protobufFiles")
      protobufFile: path/to/my.proto
      # protobufFiles is the path to one or more protobuf schemas.
      protobufFiles:
        - /path/to/my.proto
        - /path/to/another.proto
      # protobufMessageName is the default protobuf type that is used to deserialize
      # the message's value if the topic is not found in protobufMessageNameByTopic.
      protobufMessageName: my.DefaultValType
      # protobufMessageNameByTopic is a mapping of topic names to protobuf types.
      # This mapping is required and is used to deserialize the Kafka message's value.
      protobufMessageNameByTopic:
        topic1: my.Type1
        topic2: my.Type2
      # protobufMessageNameForKey is the default protobuf type that is used to deserialize
      # the message's key if the topic is not found in protobufMessageNameForKeyByTopic.
      protobufMessageNameForKey: my.DefaultKeyType
      # protobufMessageNameForKeyByTopic is a mapping of topic names to protobuf types.
      # This mapping is optional and is used to deserialize the Kafka message's key.
      # If a protobuf type is not found for a topic's key, the key is deserialized as a string,
      # unless protobufMessageNameForKey is specified.
      protobufMessageNameForKeyByTopic:
        topic1: my.KeyType1
```

The same config flattened (for docker-compose):

```text
kafka.clusters.0.protobufFiles.0: /path/to/my.proto
kafka.clusters.0.protobufFiles.1: /path/to/another.proto
kafka.clusters.0.protobufMessageName: my.DefaultValType
kafka.clusters.0.protobufMessageNameByTopic.topic1: my.Type1
kafka.clusters.0.protobufMessageNameByTopic.topic2: my.Type2
kafka.clusters.0.protobufMessageNameForKey: my.DefaultKeyType
kafka.clusters.0.protobufMessageNameForKeyByTopic.topic1: my.KeyType1
```
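The lookup order the comments above describe (topic-specific mapping first, then the default type) can be sketched as a simple fallback, using the example config's hypothetical type names:

```python
# Hypothetical names taken from the example config above.
name_by_topic = {"topic1": "my.Type1", "topic2": "my.Type2"}
default_value_type = "my.DefaultValType"

def value_type_for(topic: str) -> str:
    # topic-specific mapping wins; otherwise fall back to the default type
    return name_by_topic.get(topic, default_value_type)

assert value_type_for("topic1") == "my.Type1"
assert value_type_for("some-other-topic") == "my.DefaultValType"
```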

@@ -1,58 +0,0 @@
# How to configure SASL SCRAM Authentication

You can pass SASL configs in the properties section for each cluster.

## Examples:

Please replace
- <KAFKA_NAME> with cluster name
- <KAFKA_URL> with broker list
- <KAFKA_USERNAME> with username
- <KAFKA_PASSWORD> with password

### Running From Docker Image

```sh
docker run -p 8080:8080 \
    -e KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME> \
    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512 \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>"; \
    -d provectuslabs/kafka-ui:latest
```

### Running From Docker-compose file

```yaml
version: '3.4'
services:

  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "888:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME>
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL>
      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
      - KAFKA_CLUSTERS_0_PROPERTIES_PROTOCOL=SASL
```

### Configuring by application.yaml

```yaml
kafka:
  clusters:
    - name: local
      bootstrapServers: <KAFKA_URL>
      properties:
        security.protocol: SASL_SSL
        sasl.mechanism: SCRAM-SHA-512
        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
```

@@ -1,7 +0,0 @@
## Connecting to a Secure Broker

The app supports TLS (SSL) and SASL connections for [encryption and authentication](http://kafka.apache.org/090/documentation.html#security).

### Running From Docker-compose file

See [this](/documentation/compose/kafka-ssl.yml) docker-compose file for an SSL-enabled Kafka reference.

@@ -1,71 +0,0 @@
# How to configure SSO
SSO additionally requires TLS to be configured for the application. In this example we will use a self-signed certificate; if you use a CA-signed certificate, please skip step 1.
## Step 1
In this step we generate a self-signed PKCS12 keypair.
``` bash
mkdir cert
keytool -genkeypair -alias ui-for-apache-kafka -keyalg RSA -keysize 2048 \
  -storetype PKCS12 -keystore cert/ui-for-apache-kafka.p12 -validity 3650
```
## Step 2
Create a new application in any SSO provider; we will continue with [Auth0](https://auth0.com).

<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-new-app.png" width="70%"/>

After that, provide the callback URLs; in our case we will use `https://127.0.0.1:8080/login/oauth2/code/auth0`

<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-configuration.png" width="70%"/>

These are the main parameters required for enabling SSO:

<img src="https://github.com/provectus/kafka-ui/raw/images/images/sso-parameters.png" width="70%"/>

## Step 3
To launch UI for Apache Kafka with TLS and SSO enabled, run the following:
``` bash
docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=LOGIN_FORM \
  -e SECURITY_BASIC_ENABLED=true \
  -e SERVER_SSL_KEY_STORE_TYPE=PKCS12 \
  -e SERVER_SSL_KEY_STORE=/opt/cert/ui-for-apache-kafka.p12 \
  -e SERVER_SSL_KEY_STORE_PASSWORD=123456 \
  -e SERVER_SSL_KEY_ALIAS=ui-for-apache-kafka \
  -e SERVER_SSL_ENABLED=true \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
  -e TRUST_STORE=/opt/cert/ui-for-apache-kafka.p12 \
  -e TRUST_STORE_PASSWORD=123456 \
  provectuslabs/kafka-ui:latest
```
In the case of a trusted CA-signed SSL certificate with SSL termination somewhere outside the application, we can pass only the SSO-related environment variables:
``` bash
docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=OAUTH2 \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
  provectuslabs/kafka-ui:latest
```

## Step 4 (Load Balancer HTTP) (optional)
If you're using a load balancer/proxy with HTTP between the proxy and the app, you might want to set `server_forward-headers-strategy` to `native` as well (`SERVER_FORWARDHEADERSSTRATEGY=native`); for more info refer to [this issue](https://github.com/provectus/kafka-ui/issues/1017).

## Step 5 (Azure) (optional)
For Azure AD (Office365) OAUTH2 you'll want to add additional environment variables:

```bash
docker run -p 8080:8080 \
  -e KAFKA_CLUSTERS_0_NAME="${cluster_name}" \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="${kafka_listeners}" \
  -e KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS="${kafka_connect_servers}" \
  -e AUTH_TYPE=OAUTH2 \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE="https://graph.microsoft.com/User.Read" \
  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI="https://login.microsoftonline.com/{tenant-id}/v2.0" \
  -d provectuslabs/kafka-ui:latest
```

Note that the scope is created by default when application registration is done in the Azure portal.
You'll need to update the application registration manifest to include `"accessTokenAcceptedVersion": 2`
@@ -1,169 +0,0 @@
## Serialization, deserialization and custom plugins

Kafka-ui supports multiple ways to serialize/deserialize data.

### Int32, Int64, UInt32, UInt64
Big-endian 4/8-byte representation of signed/unsigned integers.
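The byte layout these integer serdes use can be sketched with Python's `struct` module (an illustration of the stated encoding, not kafka-ui code):

```python
import struct

def serialize_int32(v: int) -> bytes:
    return struct.pack(">i", v)       # ">i": big-endian signed 32-bit, 4 bytes

def deserialize_uint64(b: bytes) -> int:
    return struct.unpack(">Q", b)[0]  # ">Q": big-endian unsigned 64-bit, 8 bytes

assert serialize_int32(1025) == b"\x00\x00\x04\x01"
assert deserialize_uint64(b"\x00" * 7 + b"\x2a") == 42
```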
|
||||
|
||||
### Base64
|
||||
Base64 (RFC4648) binary data representation. Can be useful in case if the actual data is not important, but exactly the same (byte-wise) key/value should be send.
|
||||
|
||||
### String
|
||||
Treats binary data as a string in specified encoding. Default encoding is UTF-8.
|
||||
|
||||
Class name: `com.provectus.kafka.ui.serdes.builtin.StringSerde`
|
||||
|
||||
Sample configuration (if you want to overwrite default configuration):
|
||||
```yaml
kafka:
  clusters:
    - name: Cluster1
      # Other Cluster configuration omitted ...
      serdes:
        # registering String serde with custom config
        - name: AsciiString
          className: com.provectus.kafka.ui.serdes.builtin.StringSerde
          properties:
            encoding: "ASCII"

        # overriding built-in String serde config
        - name: String
          properties:
            encoding: "UTF-16"
```
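To illustrate what the `encoding` property changes (a Python sketch; the StringSerde does the equivalent in Java), the same text maps to different bytes per charset:

```python
# Same text, different byte representations depending on the charset.
text = "hello"

print(text.encode("ascii").hex())      # 68656c6c6f
print(text.encode("utf-16-be").hex())  # 00680065006c006c006f
```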

### Protobuf

Class name: `com.provectus.kafka.ui.serdes.builtin.ProtobufFileSerde`

Sample configuration:
```yaml
kafka:
  clusters:
    - name: Cluster1
      # Other Cluster configuration omitted ...
      serdes:
        - name: ProtobufFile
          properties:
            # path to the protobuf schema files
            protobufFiles:
              - path/to/my.proto
              - path/to/another.proto
            # default protobuf type that is used for KEY serialization/deserialization
            # optional
            protobufMessageNameForKey: my.Type1
            # mapping of topic names to protobuf types that will be used for KEY serialization/deserialization
            # optional
            protobufMessageNameForKeyByTopic:
              topic1: my.KeyType1
              topic2: my.KeyType2
            # default protobuf type that is used for VALUE serialization/deserialization
            # optional; if not set, the first type in the file will be used as default
            protobufMessageName: my.Type1
            # mapping of topic names to protobuf types that will be used for VALUE serialization/deserialization
            # optional
            protobufMessageNameByTopic:
              topic1: my.Type1
              "topic.2": my.Type2
```
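For illustration, a hypothetical `path/to/my.proto` defining the `my.Type1` message referenced above might look like this (the field names are assumptions, not from the source):

```protobuf
syntax = "proto3";

package my;

// Hypothetical message; "my.Type1" in the config above refers to
// the fully-qualified name <package>.<message>.
message Type1 {
  string id = 1;
  int64 amount = 2;
}
```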

Docker-compose sample for Protobuf serialization is [here](../compose/kafka-ui-serdes.yaml).

Legacy configuration for protobuf is [here](Protobuf.md).

### SchemaRegistry

The SchemaRegistry serde is configured automatically if schema registry properties are set at the cluster level.
You can also add new SchemaRegistry-typed serdes that connect to another schema-registry instance.

Class name: `com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde`

Sample configuration:
```yaml
kafka:
  clusters:
    - name: Cluster1
      # this url will be used by "SchemaRegistry" by default
      schemaRegistry: http://main-schema-registry:8081
      serdes:
        - name: AnotherSchemaRegistry
          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
          properties:
            url: http://another-schema-registry:8081
            # auth properties, optional
            username: nameForAuth
            password: P@ssW0RdForAuth

        # and also add another SchemaRegistry serde
        - name: ThirdSchemaRegistry
          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
          properties:
            url: http://another-yet-schema-registry:8081
```

## Setting serdes for specific topics

You can specify a preferred serde for a topic's key/value. This serde will be chosen by default in the UI on the topic's view/produce pages.
To do so, set the `topicKeysPattern`/`topicValuesPattern` properties for the selected serde. Kafka-ui will choose the first serde that matches the specified pattern.

Sample configuration:
```yaml
kafka:
  clusters:
    - name: Cluster1
      serdes:
        - name: String
          topicKeysPattern: click-events|imp-events

        - name: Int64
          topicKeysPattern: ".*-events"

        - name: SchemaRegistry
          topicValuesPattern: click-events|imp-events
```
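The "first serde that matches" rule can be sketched in Python; the list mirrors the sample config above, and the full-match semantics are an assumption for illustration, not kafka-ui internals:

```python
import re

# (name, key pattern) pairs in declaration order, mirroring the sample config.
serdes = [
    ("String", "click-events|imp-events"),
    ("Int64", ".*-events"),
]

def pick_key_serde(topic: str):
    """Return the first serde whose topicKeysPattern matches the topic."""
    for name, pattern in serdes:
        if re.fullmatch(pattern, topic):
            return name
    return None

print(pick_key_serde("click-events"))     # String (first match wins, though both patterns match)
print(pick_key_serde("purchase-events"))  # Int64
```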

## Default serdes

You can specify which serde is chosen in the UI by default if no other serde is selected via the `topicKeysPattern`/`topicValuesPattern` settings.

Sample configuration:
```yaml
kafka:
  clusters:
    - name: Cluster1
      defaultKeySerde: Int32
      defaultValueSerde: String
      serdes:
        - name: Int32
          topicKeysPattern: click-events|imp-events
```

## Fallback

If the selected serde can't be applied (an exception is thrown), the fallback serde (String with UTF-8 encoding) is applied instead. Such messages are specially highlighted in the UI.
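The fallback behavior can be sketched as follows (illustrative Python; kafka-ui implements the equivalent internally in Java):

```python
import struct

def deserialize_with_fallback(raw: bytes, deserializer):
    """Try the selected deserializer; on failure, fall back to a UTF-8 string.

    Returns (value, used_fallback) so the caller can highlight fallback cases.
    """
    try:
        return deserializer(raw), False
    except Exception:
        return raw.decode("utf-8", errors="replace"), True

# Two bytes cannot be parsed as a big-endian Int32, so the fallback kicks in.
value, fell_back = deserialize_with_fallback(b"hi", lambda b: struct.unpack(">i", b)[0])
print(value, fell_back)  # hi True
```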

## Custom pluggable serde registration

You can implement your own serde and register it in the kafka-ui application.
To do so:
1. Add the `kafka-ui-serde-api` dependency (it should be downloadable via Maven Central).
2. Implement the `com.provectus.kafka.ui.serde.api.Serde` interface. See the javadoc for implementation requirements.
3. Pack your serde into an uber jar, or provide a directory with the no-dependency jar and its dependency jars.

Example pluggable serdes:
https://github.com/provectus/kafkaui-smile-serde
https://github.com/provectus/kafkaui-glue-sr-serde

Sample configuration:
```yaml
kafka:
  clusters:
    - name: Cluster1
      serdes:
        - name: MyCustomSerde
          className: my.lovely.org.KafkaUiSerde
          filePath: /var/lib/kui-serde/my-kui-serde.jar

        - name: MyCustomSerde2
          className: my.lovely.org.KafkaUiSerde2
          filePath: /var/lib/kui-serde2
          properties:
            prop1: v1
```